August 28, 2018

hackergotchi for Vasudev Kamath

Vasudev Kamath

SPAKE2 in Golang: ECDH, SPAKE2 and Curve Ed25519

In my previous post I talked about finite fields and how they help in Elliptic Curve Cryptography. In this post we will briefly see how Diffie-Hellman key exchange changes with the use of elliptic curve groups, then look at the original SPAKE2 variant followed by the elliptic curve version, and finally at curve Ed25519, which is used as the default group in the python-spake2 module.

Elliptic Curve Diffie-Hellman (ECDH)

In the previous post we defined the domain parameters of elliptic curve cryptography as \((p, a, b, G, n, h)\). Now we will see how these are used in Diffie-Hellman key exchange.

Diffie-Hellman key exchange is a way to securely establish a cryptographic key over a public channel. The original Diffie-Hellman protocol used the multiplicative group of integers modulo p, where p is a large prime and g is a generator of a subgroup (as we saw in an earlier post). The protocol can be explained as follows:

  1. Alice and Bob agree on p and g, defining the group G
  2. Alice selects a random integer a from \(\{1, \dots, p - 1\}\) and calculates \(A = g^a \bmod{p}\)
  3. Bob selects a random integer b from \(\{1, \dots, p - 1\}\) and calculates \(B = g^b \bmod{p}\)
  4. Alice sends Bob A
  5. Bob sends Alice B
  6. Now Alice calculates \(s = B^a \bmod{p}\)
  7. Now Bob calculates \(s = A^b \bmod{p}\)

Mathematically, the value s computed by Alice and Bob is the same, so both now hold a shared secret.

\begin{equation*} s = B^a \bmod{p} = g^{ba} \bmod{p} = A^b \bmod{p} = g^{ab} \bmod{p} \end{equation*}

Since the group is Abelian, \(g^{ba} \bmod{p} = g^{ab} \bmod{p}\), and hence both sides arrive at the same shared key.
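The steps above can be sketched in a few lines of Python. The tiny values of p, g and the secrets are illustrative assumptions only; real deployments use primes of 2048 bits or more.

```python
# Toy Diffie-Hellman over the multiplicative group of integers mod p.
p = 23          # small prime (toy value, not secure)
g = 5           # generator of a subgroup of (Z/pZ)*

a = 6           # Alice's secret integer
b = 15          # Bob's secret integer

A = pow(g, a, p)   # Alice sends A to Bob
B = pow(g, b, p)   # Bob sends B to Alice

s_alice = pow(B, a, p)   # B^a mod p
s_bob   = pow(A, b, p)   # A^b mod p

assert s_alice == s_bob  # both sides computed g^(a*b) mod p
```

An eavesdropper sees only p, g, A and B; recovering a or b from them is the discrete logarithm problem.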

Now in ECC,

  1. the private key \(d\) is a random integer chosen from \(\{1, \dots, n - 1\}\), where n is the order of the subgroup
  2. the public key is the point \(H = dG\), where G is the base point of the subgroup.

With the above, we can now write the Diffie-Hellman key exchange as

  1. Alice selects private key \(d_A\) and public key \(H_A = d_AG\) and sends it to Bob
  2. Bob selects private key \(d_B\) and public key \(H_B = d_BG\) and sends it to Alice
  3. Alice calculates \(S = d_AH_B\) and Bob calculates \(S = d_BH_A\); if you look carefully these are one and the same, so Alice and Bob now share a secret key!
\begin{equation*} S = d_A H_B = d_A (d_B G) = d_B (d_A G) = d_B H_A \end{equation*}
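To make the scalar-multiplication version concrete, here is a sketch of ECDH on a toy curve: \(y^2 = x^3 + 2x + 2\) over \(\mathbb F_{17}\) with base point \(G = (5, 1)\) of order 19. All values are illustrative assumptions; real ECDH uses curves like Curve25519 or P-256.

```python
p = 17   # field prime of the toy curve
a = 2    # curve coefficient in y^2 = x^3 + a*x + b

def add(P, Q):
    """Add two points on the curve (None is the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                       # P + (-P)
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):
    """Scalar multiplication by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (5, 1)
d_A, d_B = 3, 7                        # private keys in {1, ..., 18}
H_A, H_B = mul(d_A, G), mul(d_B, G)    # public keys, exchanged openly
S_A, S_B = mul(d_A, H_B), mul(d_B, H_A)
assert S_A == S_B                      # both equal d_A * d_B * G
```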

In both cases an observer sees only the public keys and cannot recover the private keys without computing a discrete logarithm, which is a hard problem when the parameters are large.

The advantage of ECDH is speed: the costly exponentiation operation is replaced with scalar multiplication, without reducing the hardness of the underlying problem.

SPAKE2 Protocol

Now that we understand the Diffie-Hellman exchange and have seen how to apply elliptic curves to it, let's see what the SPAKE2 protocol is. This paper by Abdalla and Pointcheval gives a full explanation of SPAKE2 and a proof of its security. I highly recommend reading the paper, as I can only summarize my understanding here.

SPAKE2 is a variation of the Diffie-Hellman exchange described above. The domain parameters for SPAKE2 are \((G, g, p, M, N, H)\):

  • G is the group
  • g is the generator of the group
  • p is a large prime, the order of the group
  • \(M, N \in G\) are fixed public elements associated with Alice's and Bob's sides respectively
  • H is the hash function used to derive the final shared key.

In addition, both sides of SPAKE2 share a common password \(pw \in Z_p\). The protocol is defined as follows:

  1. Alice selects a random scalar \(x \xleftarrow{R} Z_p\) and calculates \(X \leftarrow g^x\).
  2. Alice then computes \(X^* \leftarrow X \cdot M^{pw}\)
  3. Bob selects a random scalar \(y \xleftarrow{R} Z_p\) and calculates \(Y \leftarrow g^y\).
  4. Bob then computes \(Y^* \leftarrow Y \cdot N^{pw}\).
  5. \(X^*, Y^*\) are called PAKE messages and are sent to the other side, i.e. Alice sends \(X^*\) to Bob and Bob sends \(Y^*\) to Alice.
  6. Alice computes \(K_A \leftarrow (Y^*/N^{pw})^x\) and Bob computes \(K_B \leftarrow (X^*/M^{pw})^y\)
  7. The shared key is calculated by Alice as \(SK_A \leftarrow H(A,B,X^*,Y^*,pw,K_A)\) and by Bob as \(SK_B \leftarrow H(A,B,X^*,Y^*,pw,K_B)\)

\(SK_A = SK_B\) because mathematically \(K_A = K_B\) (expand \(X^*\) and \(Y^*\) in step 6 above and you will see that they are indeed the same).
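A toy run of the protocol over the multiplicative group mod p makes this concrete. All the numbers (p, g, M, N, pw, x, y) and the string identities are illustrative assumptions; a real implementation uses a large group and a carefully defined transcript encoding.

```python
from hashlib import sha256

p, g = 23, 5
M, N = 7, 11          # fixed public group elements for the two sides
pw = 9                # shared password, as a scalar
x, y = 6, 15          # random scalars picked by Alice and Bob

X_star = pow(g, x, p) * pow(M, pw, p) % p   # Alice -> Bob
Y_star = pow(g, y, p) * pow(N, pw, p) % p   # Bob -> Alice

# Each side divides out the other's password blinding factor, then
# raises the result to its own scalar.
K_A = pow(Y_star * pow(pow(N, pw, p), -1, p) % p, x, p)
K_B = pow(X_star * pow(pow(M, pw, p), -1, p) % p, y, p)
assert K_A == K_B   # both equal g^(x*y) mod p

def shared_key(K):
    # Hash of the transcript; the "alice"/"bob" identities and the
    # encoding are arbitrary choices for this sketch.
    transcript = "|".join(map(str, ("alice", "bob", X_star, Y_star, pw, K)))
    return sha256(transcript.encode()).hexdigest()

assert shared_key(K_A) == shared_key(K_B)
```

An attacker who does not know pw cannot strip the \(M^{pw}\) / \(N^{pw}\) blinding, which is what distinguishes SPAKE2 from plain Diffie-Hellman.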

In step 7 we hash the transcript, where A and B are the identities of Alice and Bob and the rest is computed during protocol execution.

One thing to note here is that the paper defines neither the identities nor the hash function to be used. This leaves some room for creativity on the implementer's side. For python-spake2 interoperability, Brian Warner has written a detailed blog post describing the decisions he made for all these points left open by the original paper.

Curve Ed25519 Group

Now that we have seen the SPAKE2 protocol, let us look at the use of elliptic curve groups in it and how the protocol changes.

SPAKE2 uses an Abelian group with a large number of elements. We know that elliptic curve groups are Abelian, so they fit into SPAKE2. Brian Warner chose the elliptic curve group Ed25519 (sometimes conflated with X25519, which is the Diffie-Hellman function over the related Montgomery curve) as the default group in the python-spake2 implementation. This is the same group that is used in the Ed25519 signature scheme. The difference between the multiplicative group of integers modulo p and an elliptic curve group is that an element of the integer group is just a number, whereas in an elliptic curve group it is a point (represented by 2 co-ordinates).

Curve Ed25519, properly called edwards25519, is a twisted Edwards curve, defined in affine form as \(ax^2 + y^2 = 1 + dx^2y^2\) where \(d \in k \setminus \{0, 1\}\).

  • \(q = 2^{255} - 19\) is the prime defining the field \(\mathbb F_q\) over which the curve is defined
  • \(l = 2^{252} + 27742317777372353535851937790883648493\) is the order of the curve subgroup.
  • \(a = -1\)
  • \(d = \frac{-121665}{121666}\)
  • The base point \(B\) is unique; it has \(y\)-co-ordinate \(4/5\) and positive \(x\)-co-ordinate.

The curve itself is given as \(E/\mathbb F_q\):

\begin{equation*} -x^2 + y^2 = 1 - \frac{121665}{121666} x^2y^2 \end{equation*}

This curve is birationally equivalent to the Montgomery curve known as Curve25519. If you are wondering where 25519 comes from: it is from the prime \(2^{255} - 19\) that defines the underlying field.

Until now we were working with elliptic curves in affine co-ordinates, i.e. each point represented as \((x,y)\). For faster operations, twisted Edwards curves have another representation called extended co-ordinates, where a point is represented by four values \(X, Y, Z, T\), and affine co-ordinates are recovered from the extended co-ordinates as follows:

\begin{align*} x &= X/Z \\ y &= Y/Z \\ x \cdot y &= T/Z \end{align*}

The initial base point is converted to extended co-ordinates by taking \(Z = 1\). In all the above cases the operations are \(\bmod q\). Additionally, all division operations are actually multiplication by the inverse of the element.
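A small sketch of the affine-to-extended round trip; the point values here are arbitrary illustrative numbers, since the conversion itself only uses the \(X/Z\), \(Y/Z\), \(T/Z\) relations.

```python
q = 2**255 - 19

def to_extended(x, y):
    # Z = 1, so X = x, Y = y, T = x*y
    return (x, y, 1, x * y % q)

def to_affine(X, Y, Z, T):
    # (T is carried along for the fast addition formulas; it is not
    # needed to recover x and y)
    zinv = pow(Z, q - 2, q)          # Z^-1 = Z^(q-2) mod q
    return (X * zinv % q, Y * zinv % q)

x, y = 1234, 5678
X, Y, Z, T = to_extended(x, y)
assert to_affine(X, Y, Z, T) == (x, y)

# Scaling all four values by any non-zero z gives another
# representation of the same affine point.
z = 99
assert to_affine(X * z % q, Y * z % q, Z * z % q, T * z % q) == (x, y)
```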

We also noted above that the base point is represented using only its y co-ordinate. This is because the x co-ordinate can be recovered from y using the twisted Edwards curve equation defined above. In most libraries you will see this compressed notation, representing a point as just its y co-ordinate. (It is called CompressedEdwardsY in Rust's curve25519-dalek crate.)

The inverse of an element is calculated as the element raised to the power q-2 modulo q. This works because of Fermat's little theorem: for prime \(q\) and \(x\) not divisible by \(q\), \(x^{q-1} \equiv 1 \pmod{q}\), so \(x^{q-2}\) is the multiplicative inverse of \(x\). The inverse operation can thus be defined as follows.

\begin{equation*} x^{-1} = x^{q - 2} \bmod{q} \end{equation*}
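A quick check of this in Python, also showing how the curve constant \(d = -121665/121666\) from the parameter list above is evaluated as a field element:

```python
q = 2**255 - 19

def inverse(x):
    # x^-1 = x^(q-2) mod q, by Fermat's little theorem
    return pow(x, q - 2, q)

assert 121666 * inverse(121666) % q == 1

# "Division" in the field is multiplication by the inverse:
d = -121665 * inverse(121666) % q

# d * 121666 should equal -121665 in the field:
assert (d * 121666 + 121665) % q == 0
```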

The addition and doubling operations follow the algorithms described on hyperelliptic.org. We saw scalar multiplication, which builds on the addition and doubling operations, in the second post of this series.

SPAKE2 using Ed25519 group

Here the domain parameters differ slightly from a generic elliptic curve. The Ed25519 domain parameters are \((q, d, B, l)\), where q is the prime defining the field, l is the order of the subgroup and B is the base point of the group.

Now let's rewrite the original SPAKE2 protocol using elliptic curve groups:

  1. Alice selects a random scalar \(x \xleftarrow{R} Z_l\) and calculates \(X \leftarrow B \cdot x\), then computes \(X^* \leftarrow X + M \cdot pw\). Alice sends \(X^*\) to Bob.
  2. Bob selects a random scalar \(y \xleftarrow{R} Z_l\) and calculates \(Y \leftarrow B \cdot y\), then computes \(Y^* \leftarrow Y + N \cdot pw\). Bob sends \(Y^*\) to Alice.
  3. Alice now calculates \(K_A \leftarrow (Y^* - N \cdot pw) \cdot x\)
  4. Bob now calculates \(K_B \leftarrow (X^* - M \cdot pw) \cdot y\)
  5. The shared key is calculated by Alice as \(SK_A \leftarrow H(A, B, X^*, Y^*, pw, K_A)\) and by Bob as \(SK_B \leftarrow H(A, B, X^*, Y^*, pw, K_B)\)

If you expand \(X^*\) and \(Y^*\) in steps 3 and 4, you will see that \(K_A = K_B\). And provided the password used by both sides is the same, both arrive at the same shared key.

As you can see, the SPAKE2 protocol itself remains the same; only the operations change: exponentiation becomes scalar multiplication and division becomes subtraction. Since we do not explicitly define subtraction, we negate the point being removed and do an addition instead.
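The elliptic curve version, including the subtraction-by-negation trick, can be sketched on the same kind of toy curve (\(y^2 = x^3 + 2x + 2\) over \(\mathbb F_{17}\), base point \(B = (5, 1)\), subgroup order 19). All concrete numbers are illustrative assumptions, not secure parameters.

```python
p, a, l = 17, 2, 19

def neg(P):
    # -(x, y) = (x, -y mod p); negation of infinity is infinity
    return None if P is None else (P[0], (-P[1]) % p)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        m = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:
        m = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x3 = (m * m - P[0] - Q[0]) % p
    return (x3, (m * (P[0] - x3) - P[1]) % p)

def mul(k, P):
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

B = (5, 1)
M, N = mul(2, B), mul(3, B)   # fixed public points (toy choices)
pw = 5                        # shared password scalar mod l
x, y = 4, 7                   # random scalars

X_star = add(mul(x, B), mul(pw, M))   # Alice -> Bob
Y_star = add(mul(y, B), mul(pw, N))   # Bob -> Alice

# "Division" becomes adding the negated point: P - Q = P + (-Q).
K_A = mul(x, add(Y_star, neg(mul(pw, N))))
K_B = mul(y, add(X_star, neg(mul(pw, M))))
assert K_A == K_B    # both equal x*y*B
```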

Conclusion

We have now seen all the basics needed to start writing the actual Go code to implement a SPAKE2 library. It was a bit long, I know, but once you know the basics, writing the code is a cakewalk! (quoting Ramakrishnan). In the next post I will start with implementation notes.

28 August, 2018 02:55PM by copyninja

hackergotchi for Guido Günther

Guido Günther

GTK+ and the application id

tl;dr: If you want to be sure your application will be displayed with the correct icon under different Wayland compositors make sure that your GApplication (or GtkApplication) uses

g_set_prgname(your_g_application_id);

on GTK+3. On GTK+4 this is handled for you.

Details: While working on touch-based window switching for the Librem 5 I noticed that lots of GNOME applications did not end up with a proper icon when using g_desktop_app_info_new (desktop_id). The desktop_id is determined from the Wayland xdg surface's app_id as specified in Wayland's xdg-shell protocol.

The protocol says:

The compositor shell will try to group application surfaces together
by their app ID. As a best practice, it is suggested to select app
ID's that match the basename of the application's .desktop file.
For example, "org.freedesktop.FooViewer" where the .desktop file is
"org.freedesktop.FooViewer.desktop".

It's even more explicit about the relation of the app_id to the D-Bus service name:

For D-Bus activatable applications, the app ID is used as the D-Bus
service name.

So why does this currently fail? It's because GTK+3 historically uses g_get_prgname() to set the app_id, and this defaults to the application binary's basename. But what we rather want is

g_application_id == D-Bus service name == $(basename desktop_file_path .desktop) == xdg app_id

There were patches by Jonas Ådahl to fix this but those were partially reverted since it broke existing applications. Now with GTK+4 around the corner we can fix this. See the migration docs.

This will also allow us to get rid of all the rename-desktop-file in the flatpak manifests too.

(The reason why this currently works in gnome-shell is that there's a private protocol between GTK+ and GNOME Shell that (among other things) works around this.)

28 August, 2018 01:36PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

VP9 live streaming from a webcam

I recently bought a new webcam, because my laptop screen broke down, and the replacement screen that the Fujitsu people had in stock did not have a webcam anymore -- and, well, I need one.

One thing the new webcam has which the old one did not is a sensor with a 1920x1080 resolution. Since I've been playing around with various video-related things, I wanted to see if it was possible to record something in VP9 at live encoding settings. A year or so ago I would have said "no, that takes waaay too much CPU time", but right now I know that this is not true, you can easily do so if you use the right ffmpeg settings.

After a bit of fiddling about, I came up with the following:

ffmpeg -f v4l2 -framerate 25 -video_size 1920x1080 -c:v mjpeg -i /dev/video0 -f alsa -ac 1 -i hw:CARD=C615 -c:a libopus -map 0:v -map 1:a -c:v libvpx-vp9 -r 25 -g 90 -s 1920x1080 -quality realtime -speed 6 -threads 8 -row-mt 1 -tile-columns 2 -frame-parallel 1 -qmin 4 -qmax 48 -b:v 4500 output.mkv

Things you might want to change in the above:

  • Set the hw:CARD=C615 bit to something that occurs in the output of arecord -L on your system rather than on mine.
  • Run v4l2-ctl --list-formats-ext and verify that your camera supports 1920x1080 resolution in motion JPEG at 25 fps. If not, change the values of the parameters -framerate and -video_size, and the -c:v that occurs before the -i /dev/video0 position (which sets the input video codec; the one after selects the output video codec and you don't want to touch that unless you don't want VP9).
  • If you don't have a quad-core CPU with hyperthreading, change the -threads setting.

If your CPU can't keep up with things, you might want to read the documentation on the subject and tweak the -qmin, -qmax, and/or -speed parameters.

This was done on a four-year-old Haswell Core i7; it should be easier on more modern hardware.

Next up: try to get a live stream into a DASH system. Or something.

28 August, 2018 12:55PM

hackergotchi for Timo Jyrinki

Timo Jyrinki

Repeated prompts for SSH key passphrase after upgrading to Ubuntu 18.04 LTS?

This was a tricky one (for me, anyway) so posting a note to help others.

The problem was that after upgrading to Ubuntu 18.04 LTS from 16.04 LTS, I had trouble with my SSH agent. I was always being asked for the passphrase again and again, even if I had just used the key. This wouldn't have been a showstopper otherwise, but it made using virt-manager over SSH impossible because it was asking for the passphrase tens of times.

I didn't find anything on the web, and I didn't find any legacy software or obsolete configs to remove to fix the problem. I only got a hint when I tried ssh-add -l, which gave the error message ”error fetching identities: Invalid key length”. This led me onto the right track, since after a while I started suspecting my old keys in .ssh that I hadn't used for years. And right on: after I removed one id_dsa (!) key and one old RSA key from the .ssh directory (with GNOME's Keyring app, to be exact), ssh-add -l started working, the familiar SSH agent behavior resumed, and I was able to use my remote VMs fine too!

Hope this helps.

ps. While at the topic, remember to upgrade your private keys' internal format to the new OpenSSH format from the ”worse than plaintext” format with the -o option: blog post – tl;dr: ssh-keygen -p -o -f id_rsa and retype your passphrase.

28 August, 2018 07:46AM by Timo Jyrinki (noreply@blogger.com)

Russ Allbery

Review: So Lucky

Review: So Lucky, by Nicola Griffith

Publisher: FSG Originals
Copyright: 2018
ISBN: 0-374-71834-2
Format: Kindle
Pages: 179

The first sign of trouble was easy to ignore. Mara tripped on the day her partner of fourteen years moved out, and thought nothing of it. But it was only a week and a half before the more serious fall in her kitchen, a doctor's visit, and a diagnosis: multiple sclerosis.

The next few days were a mess of numbness, shock, and anger: a fight at her job as the director of an HIV foundation over a wheelchair ramp, an unintended outburst in a spreadsheet, and then being fired. Well, a year of partial pay and medical coverage, "as gratitude for her service." But fired, for being disabled.

Mara is not the sort of person to take anything slow. Less time at the job means more time to research MS, time to refit her house for her upcoming disability, time to learn how to give herself injections, time to buy a cat. Time to bounce hard off of an MS support group while seeing an apparently imaginary dog. Time to get angry, like she had years ago when she was assaulted and threw herself obsessively into learning self-defense. Time to decide to fight back.

I so wanted to like this book. It's the first new Nicola Griffith novel since Hild, and I've loved everything of hers I've read. It's a book about disability, about finding one's people, about activism, about rights of people with disabilities, and about how people's reactions to others with disabilities are predictable and awful and condescending. Mara isn't a role model, isn't inspiration, isn't long-suffering. She's angry, scared, obsessive, scary, and horrible at communication. She spent her career helping people with a type of medical disability, and yet is entirely unprepared for having one herself.

I'm glad this book exists. I want more books like this to exist.

I mostly didn't enjoy reading it.

In part, this is because I personally bounced off some themes of the book. I have a low tolerance for horror, and there's a subplot involving Mara's vividly-imagined fear of a human predator working their way through her newly-discovered community that made me actively uncomfortable to read. (I realize that was part of the point, and I appreciate it as art, but I didn't enjoy it as a reader.) But I also think some of it is structural.

There is a character development arc here: Mara has to come to terms with what MS means to her, how she's going to live with it, and how she's going to define herself after loss of her job, without a long-term relationship, and with a disabling disease, all essentially at once. Pieces of that worked for me, such as Mara's interaction with Aiyana. But Griffith represents part of that arc with several hallucinatory encounters with a phantom embodiment of what Mara is fighting against, which plays a significant role in the climax of the book. And that climax didn't work for me. It felt off-tempo somehow, not quite supported by Mara's previous changes in attitude, too abrupt, too heavily metaphorical for me to follow.

It's just one scene, but So Lucky puts a lot of weight on that scene. This is a short novel full of furious energy, pushing towards some sort of conclusion or explosion. Mara is, frankly, a rather awful person for most of the book, for reasons that follow pre-existing fracture lines in her personality and are understandable and even forgivable but still unpleasant. I needed some sort of emotional catharsis, some dramatic turning point in her self-image and engagement with the world, and I think Griffith's intent was to provide that catharsis, and it didn't land for me, which left me off-balance and disturbed and unsatisfied. And frustrated, because I was rooting for the book and stuck with it through some rather nasty plot developments, hoping the payoff would be worth it.

This is all very individual; it doesn't surprise me at all that other people love this book. I'm also not disabled. I'm sure that would add additional layers, and it might have made the catharsis land for me. But I personally spent most of the book wanting to read about Aiyana instead of Mara.

Spending the book wishing I was reading about the non-disabled character, the one who isn't angry and isn't scary and isn't as scared, is partly the point. And it's a very good point; despite not enjoying this book, I'm glad I read it. It made me think. It made me question why I liked one character over another, what made me uncomfortable about Mara, and why I found her off-putting. As a work of activism, I think So Lucky lands its punches well. People like me wanting comfort instead of truth is part of how people with disabilities are treated in society, and not a very attractive part. But at the same time, I read books for pleasure. I'm not sure how to reconcile those conflicting goals.

So Lucky is a Griffith novel, so the descriptions are gorgeous and the quality of the writing is exceptional. Griffith gives each moment a heft and weight and physicality. The relationships in this book worked for me in all their complexity, even when I was furious at Mara for breaking hers. And Griffith's descriptions of physical bodies, touching and feeling and being in each other's spaces, remain the best of any author I've read. If the plot works better for you than it did for me, there's a lot here to enjoy.

I can't quite recommend it, or at least as much as I hoped I could. But I think some people will love it.

One final note: I keep seeing reviews and blurbs about this book that describe it as an autobiographical novel, and it irritates me every time. It's not autobiographical. Yes, Griffith and the protagonist both have MS, are both lesbians, and both taught self-defense. But Griffith has put lesbians, self-defense teachers, and people with MS in many of her books. Mara runs a charitable organization; Griffith is a writer. Mara's relationships are a mess; Griffith has been happily married for nearly 25 years. I'm sure Griffith drew heavily on her own reactions to MS to write this novel, as novelists do, but that doesn't make Mara a self-insert or make this fictional story an autobiography. Disabled authors can write disabled protagonists without making the story non-fiction. It's weirdly dismissive to cast the book this way, to take away Griffith's technique and imagination and ability to invent character and situation and instead classify the book as some sort of transcription of her own life. And I don't think it would happen if it weren't for the common disability.

This is identifying people as their disability, and it's lazy and wrong and exclusionary. Stop doing this.

Rating: 5 out of 10

28 August, 2018 03:47AM

August 27, 2018

hackergotchi for Sean Whitton

Sean Whitton

Debian Policy call for participation -- August 2018

Here’s a summary of some of the bugs against the Debian Policy Manual. Please consider getting involved, whether or not you’re an existing contributor.

Consensus has been reached and help is needed to write a patch

#228692 User/group creation/removal in package maintainer scripts

#685506 copyright-format: new Files-Excluded field

#759316 Document the use of /etc/default for cron jobs

#761219 document versioned Provides

#767839 Linking documentation of arch:any package to arch:all

#770440 policy should mention systemd timers

#793499 The Installed-Size algorithm is out-of-date

#823256 Update maintscript arguments with dpkg >= 1.18.5

#845715 Please document that packages are not allowed to write outside thei…

#874206 allow a trailing comma in package relationship fields

#902612 Packages should not touch users’ home directories

#905453 Policy does not include a section on NEWS.Debian files

#906286 repository-format sub-policy

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#845255 Include best practices for packaging database applications

#897217 Vcs-Hg should support -b too

#901437 Add footnote warning readers that sometimes shared libaries are not coinstallable

#904248 Add netbase to build-essential

27 August, 2018 09:38PM

Reproducible builds folks

Reproducible Builds: Weekly report #174

Here’s what happened in the Reproducible Builds effort between Sunday August 19 and Saturday August 25 2018:

Packages reviewed and fixed, and bugs filed

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

27 August, 2018 04:09PM

Sven Hoexter

FrOSCon 2018: Herding Docker Images

I gave my first public talk on Saturday at FrOSCon 13. In case you're interested in how we maintain Docker base images (based on Debian slim) at REWE Digital, the video is already online (German). The slides are also available, as is a tarball containing the slides and all files, so you do not have to copy the snippets from the slides. The relevant tool, container-diff, is provided by Google on GitHub. In case you're interested in our migration to microservices, you can find the referenced talk given by Paul Puschmann at OSDC 2018 on YouTube (English). If you have any questions regarding the talk, don't hesitate to write me a mail; details on how to reach me are here.

If you're interested in the topic, I highly recommend also watching Chris Jantz's talk Unboxing and Building Container Images (English). Chris not only talks about what a container image contains, but also about the rather new Google tool Kaniko, which can build container images from Dockerfiles without root permissions and without dockerd.

Besides that, two of my colleagues gave a talk about Kafka from a developer perspective: Apache Kafka: Lessons learned (German). Judging from the feedback it was well received.

All in all it was a great experience and a huge thank you to all the volunteers keeping this event alive, especially to those who helped to set me up for the talk. You're awesome!

27 August, 2018 12:47PM

hackergotchi for Keith Packard

Keith Packard

Window Scaling

One of the ideas we had in creating the compositing mechanism was to be able to scale window contents for the user -- having the window contents available as an image provides for lots of flexibility for presentation.

However, while we've seen things like “overview mode” (presenting all of the application windows scaled and tiled for easy selection), we haven't managed to interact with windows in scaled form. That is, until yesterday.

glxgears thinks the window is only 32x32 pixels in size. xfd is scaled by a factor of 2. xlogo is drawn at the normal size.

Two Window Sizes

The key idea for window scaling is to have the X server keep track of two different window sizes -- the area occupied by the window within its parent, and the area available for the window contents, including descendents. For now, at least, the origin of the window is the same between these two spaces, although I don't think there's any reason it would have to be.

  • Current Size. This is the size as seen from outside the window, and as viewed by all clients other than the owner of the window. It reflects the area within the parent occupied by the window, including the area which captures pointer events. This can probably use a better name.

  • Owner Size. This is the size of the window viewed from inside the window, and as viewed by the owner of the window. When composited, the composite pixmap gets allocated at this size. When automatically composited, the X server will scale the image of the window from this size to the current size.

Clip Lists

Normally, when computing the clip list for a composited window, the X server uses the current size of the window (aka the “borderSize” region) instead of just the portion of the window which is not clipped by the ancestor or sibling windows. This is how we capture output which is covered by those windows and can use it to generate translucent effects.

With an output size set, instead of using the current size, I use the owner size instead. All un-redirected descendents are thus clipped to this overall geometry.

Sub Windows

Descendent windows are left almost entirely alone; they keep their original geometry, both position and size. Because the output sized window retains its original position, all of the usual coordinate transformations 'just work'. Of course, the clipping computations will start with a scaled clip list for the output sized window, so the descendents will have different clipping. There's surprisingly little effect otherwise.

Output Handling

When an owner size is set, the window gets compositing enabled. The composite pixmap is allocated at the owner size instead of the current size. When no compositing manager is running, the automatic compositing painting code in the server now scales the output from the owner size to the current size.

Most X applications don't have borders, but I needed to figure out what to do in case one appeared. I decided that the border should be the same size in the output and current presentations. That's about the only thing that I could get to make sense; the border is 'outside' the window size, so if you want to make the window contents twice as big, you want to make the window size twice as big, not some function of the border width.

About the only trick was getting the transformation from output size to current size correct in the presence of borders. That took a few iterations, but I finally just wrote down a few equations and solved for the necessary values. Note that Render transforms take destination space coordinates and generate source space coordinates, so they appear “backwards”. While Render supports projective transforms, this one is just scaling and translation, so we just need:

x_output_size = A * x_current_size + B

Now, we want the border width for input and output to be the same, which means:

border_width + output_size = A * (border_width + current_size) + B
border_width               = A * border_width                   + B

Now we can solve for A:

output_size = A * current_size
A = output_size / current_size

And for B:

border_width = output_size / current_size * border_width + B
B = (1 - output_size / current_size) * border_width

With these, we can construct a suitable transformation matrix:

⎡ Ax  0 Bx ⎤
⎢  0 Ay By ⎥
⎣  0  0  1 ⎦
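The derivation can be checked numerically; the window and border sizes below are arbitrary illustrative values (a 100-pixel window scaled to 200 with a 3-pixel border):

```python
# Check the border-width derivation: with x_output = A * x_current + B,
# requiring the border width to map to itself gives
# A = output_size / current_size and B = (1 - A) * border_width.

def scale_params(output_size, current_size, border_width):
    A = output_size / current_size
    B = (1 - A) * border_width
    return A, B

output_size, current_size, border_width = 200, 100, 3
A, B = scale_params(output_size, current_size, border_width)

# The border edge maps to itself...
assert A * border_width + B == border_width
# ...and the far edge maps current_size to output_size past the border.
assert A * (border_width + current_size) + B == border_width + output_size
```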

Input Handling

Input device root coordinates need to be adjusted for owner sized windows. If you nest an owner sized window inside another owner sized window, then there are two transformations involved.

There are actually two places where these transformations need to be applied:

  1. To compute which window the pointer is in. If an output sized window has descendents, then the position of the pointer within the output window needs to be scaled so that the correct descendent is identified as containing the pointer.

  2. To compute the correct event coordinates when sending events to the window. I decided not to attempt to separate the window owner from other clients for event delivery; all clients see the same coordinates in events.

Both of these require the ability to transform the event coordinates relative to the root window. To do that, we translate from root coordinates to window coordinates, scale by the ratio of output to current size and then translate back:

void
OwnerScaleCoordinate(WindowPtr pWin, double *xd, double *yd)
{
    if (wOwnerSized(pWin)) {
        *xd = (*xd - pWin->drawable.x) * (double) wOwnerWidth(pWin) /
            (double) pWin->drawable.width + pWin->drawable.x;
        *yd = (*yd - pWin->drawable.y) * (double) wOwnerHeight(pWin) /
            (double) pWin->drawable.height + pWin->drawable.y;
    }
}

This moves the device to the scaled location within the output sized windows. Performing this transformation from the root window down to the target window adjusts the position correctly even when there is more than one output sized window among the window ancestry.

Case 1. is easy; XYToWindow, and the associated miSpriteTrace function, already traverse the window tree from the root for each event. Each time we descend through a window, we apply the transformation so that subsequent checks for descendents will check the correct coordinates. At each step, I use OwnerScaleCoordinate for the transformation.

Case 2. means taking an arbitrary window and walking up the window tree to the root and then performing each transformation on the way back down. Right now, I'm doing this recursively, but I'm reasonably sure it could be done iteratively instead:

void
ScaleRootCoordinate(WindowPtr pWin, double *xd, double *yd)
{
    if (pWin->parent)
        ScaleRootCoordinate(pWin->parent, xd, yd);
    OwnerScaleCoordinate(pWin, xd, yd);
}
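For illustration, here is one way that recursion might be unrolled, sketched against toy stand-ins for the server's window structure (the real WindowPtr carries far more state, so treat the field names here as assumptions):

```c
#include <assert.h>

/* Toy stand-ins for the server's window structure; field names are
 * simplified assumptions, not the real server types. */
typedef struct ToyWindow {
    struct ToyWindow *parent;
    int x, y, width, height;       /* current geometry */
    int ownerWidth, ownerHeight;   /* 0 means no owner size set */
} ToyWindow;

/* The same scaling step as OwnerScaleCoordinate above, on the toy struct. */
static void toyOwnerScale(ToyWindow *w, double *xd, double *yd)
{
    if (w->ownerWidth) {
        *xd = (*xd - w->x) * (double) w->ownerWidth
            / (double) w->width + w->x;
        *yd = (*yd - w->y) * (double) w->ownerHeight
            / (double) w->height + w->y;
    }
}

/* Iterative root-to-window pass: collect the ancestor chain, then
 * replay the per-window transform from the root downward. */
void toyScaleRootCoordinate(ToyWindow *w, double *xd, double *yd)
{
    ToyWindow *chain[64];          /* assumed deeper than any real tree */
    int n = 0;

    for (; w && n < 64; w = w->parent)
        chain[n++] = w;
    while (n--)
        toyOwnerScale(chain[n], xd, yd);
}
```

The traversal order (root first, target window last) is what matters; whether the chain lives on an explicit stack or the call stack is immaterial.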

Events and Replies

To make the illusion for the client work, everything the client hears about the window needs to be adjusted so that the window seems to be the owner size and not the current size.

  • Input events. The root coordinates are modified as described above, and then the window-relative coordinates are computed as usual—by subtracting the window origin from the root position. That's because the windows are all left in their original location.

  • ConfigureNotify events. These events are rewritten before being delivered to the owner so that the width and height reflect the owner size. Because window managers send synthetic configure notify events when moving windows, I also had to rewrite those events, or the client would get the wrong size information.

  • PresentConfigureNotify events. For these, I decided to rewrite the size values for all clients. As these are intended to be used to allocate window buffers for presentation, the right size is always the owner size.

  • OwnerWindowSizeNotify events. I created a new event so that the compositing manager could track the owner size of all child windows. That's necessary because the X server only performs the output size scaling operation for automatically redirected windows; if the window is manually redirected, then the compositing manager will have to perform the scaling operation instead.

  • GetGeometry replies. These are rewritten for the window owner to reflect the owner size value. Other clients see the current size instead.

  • GetImage replies. I haven't done this part yet, but I think I need to scale the window image for clients other than the owner. In particular, xwd currently fails with a Match error when it sees a window with a non-default visual that has an output size smaller than the window size. It tries to perform a GetImage operation using the current size, which fails when the server tries to fetch that rectangle from the owner-sized window pixmap.
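The missing GetImage piece is essentially a resampling step; a minimal nearest-neighbour sketch of the idea (names are made up, and the real server would go through the existing rendering code rather than anything like this):

```c
#include <assert.h>

/* Resample an owner-sized image (srcW x srcH) to the current window
 * size (dstW x dstH) using nearest-neighbour selection. Pixels are
 * treated as opaque 32-bit values. Illustrative only. */
void scaleImage(const unsigned int *src, int srcW, int srcH,
                unsigned int *dst, int dstW, int dstH)
{
    for (int y = 0; y < dstH; y++)
        for (int x = 0; x < dstW; x++)
            dst[y * dstW + x] =
                src[(y * srcH / dstH) * srcW + (x * srcW / dstW)];
}
```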

Composite Extension Changes

I've stuck all of this stuff into the Composite extension, mostly because you need to use Composite to capture the scaled window output anyway.

12. Composite Events (0.5 and later)

Version 0.5 of the extension defines an event selection mechanism and a couple of events.

COMPOSITEEVENTTYPE {
    CompositePixmapNotify = 0
    CompositeOwnerWindowSizeNotify = 1
}

Event type delivered in events

COMPOSITEEVENTMASK {
    CompositePixmapNotifyMask = 0x0001
    CompositeOwnerWindowSizeNotifyMask = 0x0002
}

Event select mask for CompositeSelectInput

⎡
⎢    CompositeSelectInput
⎢
⎢                window:                                Window
⎢                enable:                               SETofCOMPOSITEEVENTMASK
⎣

This request selects the set of events that will be delivered to the client from the specified window.

CompositePixmapNotify
    type:            CARD8          XGE event type (35)
    extension:       CARD8          Composite extension request number
    sequence-number: CARD16
    length:          CARD32         0
    evtype:          CARD16         CompositePixmapNotify
    window:          WINDOW
    windowWidth:     CARD16
    windowHeight:    CARD16
    pixmapWidth:     CARD16
    pixmapHeight:    CARD16

This event is delivered whenever the composite pixmap for a window is created, changed or deleted. When the composite pixmap is deleted, pixmapWidth and pixmapHeight will be zero. The client can call NameWindowPixmap to assign a resource ID for the new pixmap.

13. Output Window Size (0.5 and later)

⎡
⎢    CompositeSetOwnerWindowSize
⎢
⎢                window:                                Window
⎢                width:                                 CARD16
⎢                height:                                CARD16
⎣

This request specifies that the owner-visible window size will be set to the provided value, overriding the actual window size as seen by the owner. If composited, the composite pixmap will be created at this size. If automatically composited, the server will scale the output from the owner size to the current window size.

If the window is mapped, an UnmapWindow request is performed automatically first. Then the owner size is set. A CompositeOwnerWindowSizeNotify event is then generated. Finally, if the window was originally mapped, a MapWindow request is performed automatically.

Setting the width and height to zero will clear the owner size value and cause the window to resume normal behavior.

Input events will be scaled from the actual window size to the owner size for all clients.

A Match error is generated if:

  • The window is a root window
  • One, but not both, of width/height is zero

And, of course, you can retrieve the current size too:

⎡
⎢    CompositeGetOwnerWindowSize
⎢
⎢                window:                                Window
⎢
⎢                →
⎢
⎢                width:                                 CARD16
⎢                height:                                CARD16
⎣

This request returns the current owner window size, if set. Otherwise it returns 0,0, indicating that there is no owner window size set.

CompositeOwnerWindowSizeNotify
    type:            CARD8          XGE event type (35)
    extension:       CARD8          Composite extension request number
    sequence-number: CARD16
    length:          CARD32         0
    evtype:          CARD16         CompositeOwnerWindowSizeNotify
    window:          WINDOW
    windowWidth:     CARD16
    windowHeight:    CARD16
    ownerWidth:      CARD16
    ownerHeight:     CARD16

This event is generated whenever the owner size of the window is set. windowWidth and windowHeight report the current window size. ownerWidth and ownerHeight report the owner window size.

Git repositories

These changes are in various repositories at gitlab.freedesktop.org all using the “window-scaling” branch:

And here's a sample command line app which modifies the owner scaling value for an existing window:

Current Status

This stuff is all very new; I started writing code on Friday evening and got a simple test case working. I then spent Saturday making most of it work, and today finding a pile of additional cases that needed handling. I know that GetImage is broken; I'm sure lots of other stuff is also not quite right.

I'd love to get feedback on whether the API and feature set seem reasonable or not.

27 August, 2018 01:46AM

August 26, 2018

Iustin Pop

Disqus comments

I’ve enabled on-click Disqus comments on the post pages. I’ve used to run ikiwiki comments a long while ago, which ended up in interesting conversations, but the comment spam was atrocious, so I gave up on them long before I moved from ikiwiki.

Given that Hakyll is a purely static site generator, this means relying on an external commenting system. And since I don't want to deal with maintaining such a thing, I have to rely on a third party to do so.

So let’s give Disqus comments a try. I’ve seen them used with good results (speaking as an end-user) on many pages, but you also read about bad experiences with it. So, trial run, let’s see. It might live for less than a day, six months, or until I change things again (not date-ordered events).

The comments are not loaded automatically (Disqus is heavy and I don't want it to track all my visitors) but only on click. So people will have to click the not-very-obvious "Show/Post comments" link, but at least this seems nicer for feed aggregators/etc.

And yes, the style of the Disqus div does not match my site’s. A battle for another day.

26 August, 2018 11:58PM

Carl Chenet

FOSS: passive consumerism kills our community

TL;DR: Don’t be a passive consumer of FOSS. It’s going to kill the FOSS community or change it in bad ways. Contribute in any way described in this article, even really basic ones, but contribute daily or on a very regular basis.


I have been a system engineer for more than 10 years now, almost exclusively working with GNU/Linux systems. I have also been deeply involved in the Free and Open Source Software (FOSS) community for a long time, and I spend a lot of time on social networks (mostly Twitter and Mastodon these days). And some behaviours always piss me off.

The consumer thinks he’s smarter and more efficient than others

Many IT professionals using FOSS display a behaviour of pure consumerism in their relationship with FOSS. They often try to use a piece of software in a very specific environment (a specific version of a GNU/Linux distribution, a specific version of the software). They don't succeed using it in that environment? That software is obviously crap, it should work with the default settings, otherwise it's not user-friendly. The documentation is available? Who reads the doc? I need answers now, I don't have time to read the damn documentation! And who wrote this piece of crap anyway?

If the answer is not in the first StackOverflow link of the first Google search, I'm done with this shit. My time is precious, so I'm going to try another piece of software (and waste 2x the time) or, better, code it myself (a 100x waste of time) in an unreusable way.

Passive consumers never write a bug report. It's a waste of time, requiring effort. Who has time to write one, except fuckers? Not even a ping to the maintainer or the lead dev of the project (they should know, they wrote this crap!). Ok, I pinged him/her on Twitter 2 minutes ago. People don't reply in a minute? Fuck off, you bunch of time-wasting losers! I don't care if it's 2AM for him.

Ok, ok, FINE, I’ll write a bug report if you whiners insist: IT DOES NOT WORK YOU FUCKERS MOVE YOUR ASSES FIX IT NOW!

Rewards for the lead dev? What for?

Even with software they like and use every day and that works perfectly, upgrading just fine as needed, most IT professionals have the exact same behaviour of passive consumerism.

This software powered the whole IT for 5 years, helping the company make big money? True. The lead dev asks for money/recognition through social networks? What a bunch of beggars! He needs money? Me too! Does this person have a Patreon? Who cares! This guy should thank me for using his software, he loves coding for free, the sucker.

Helping him by subscribing to a professional license for this software? What for? My boss would laugh. Nobody pays for software (except suckers). That's free as in free beer, baby!

I’ll even ask him/her to modify the license because I can not rebrand the software and use it for my own proprietary software he maintains for free. He should thank me to help him spread his software, this wannabe Marc Zuckerberg. Pretty sure he gets tons of money. Not by me, no way.

And of course this behaviour of passive consumerism has negative impacts on the FOSS ecosystem. Really. Usually, after some years, the lead dev eventually gives up the project. At that point, you can usually read these kinds of furious comments: « You lazy fuck, you didn't upgrade your software in years, serious people use it, reply fast or I'll leave thousands of insulting comments! I bet my ass on you, you should thank me crawling. You lazy communist, I would remove my star on the GitHub/GitLab repo if I had starred it. But of course I didn't, I'm not going to star every project I use, what do you expect? Contributions in any way? Come on, grow up, deal with it. Life is hard. »

Promote and interact with the projects you use

Please help the projects you use. If your company earns money thanks to FOSS and you are the boss of this company, providing money or manpower for at least one project you use daily should be a reasonable goal and would show some understanding of the FOSS ecosystem.

If you are an employee of a company using FOSS, a very important step is to let your boss know that parts of your infrastructure will die in the short term (within a few years) if you don't help these projects in some way.

99.9% of FOSS projects are one-man projects: the small JavaScript library the frontend of your company website uses, or the small database backup script nobody cares about but which has already saved your ass twice.

If no money is involved, or you only provide a free service to others, let the world know you use FOSS and thank some of these projects from time to time. Just telling people through Mastodon or Twitter that you use their software will cheer them up BIG TIME. Star their projects on GitLab or GitHub to let them know the project is helpful.

Some ways to contribute

Here is a list of very great ways to contribute:

  • Tell the world through social networks that your latest upgrade of this software was smooth and easy. Spread the word.
  • Write a blog post describing your experiences and how much value this great FOSS project provided for your company or your projects.
  • Follow the lead devs of different projects on Mastodon or Twitter and retweet/like/boost/favorite their latest news from time to time.
  • Write a thankful comment on the project blog or on the lead dev's blog. Reading your comment will ensure the dev has a great day.

Star the Feed2toot project on Gitlab

Don’t be a passive consumer

Don’t be a passive consumer of FOSS. It’s going to kill the FOSS community or change it in bad ways. The required average level of contribution to a project and the expectations towards FOSS increase days after days in a world where complexity and interactions grow fast. The core of really fundamental FOSS projects is often only a very small team of people (1 to 5).

I talk daily about some FOSS latest news on my Twitter account

Contribute in any way described in this article, even really basic ones, but contribute daily or on a very regular basis. You’re powerful by providing good vibes and great contributions to FOSS projects. Your contributions WILL change things, encourage and (re)motivate people. It’s good for you, you will improve your skills, gain knowledge about the FOSS community and visibility for your company or your projects. And it’s good for the FOSS community, having more and more people contributing in ANY productive way.

About Me

Carl Chenet, Free Software Indie Hacker, Founder of LinuxJobs.fr, a job board for Free and Open Source Jobs in France.

Follow Me On Social Networks

26 August, 2018 10:00PM by Carl Chenet

August 25, 2018

Ingo Juergensmann

#Friendica vs #Hubzilla vs #Mastodon

I've been running a #Friendica node for several years now. Some months ago I also started to run a #Hubzilla hub, and some days ago I installed #Mastodon on a virtual machine as well, because there was so much hype about Mastodon recently due to some changes Twitter made with regard to 3rd-party clients.

All of those social networks do have their own focus:

Friendica: can basically connect to all other social networks, which is quite nice because historically there have been two different worlds: the Federation (Diaspora, Socialhome) and the Fediverse (GNU Social, Mastodon, postActiv, Pleroma). Only Friendica and Hubzilla can federate with both the Federation and the Fediverse.
Friendica's look & feel sometimes appears a little bit outdated and old, but it works very well and reliably.

Hubzilla: is the second player in the field of connecting both federations, but has a different focus. It is more of a one-size-fits-all approach. If you need a microblogging site, a wiki, a cloud service, a website, etc., then Hubzilla is the way to go. The look & feel is a little bit more modern, but there are some quirks that appear a little odd to me. A unique feature of Hubzilla seems to be the concept of "nomadic accounts": you can move to a different hub and take all your data with you. Read more about that in the Hubzilla documentation.

Mastodon: this aims to be a replacement for Twitter as a microblogging service. It looks nice and shiny, has a bunch of nice clients for smartphones and has the largest userbase by far (which is not that important because of federation).
But the web GUI is rather limited and weird, as far as I can tell after just a few days.

Technically speaking, these are the main differences:
- Friendica: MySQL/MariaDB, PHP on the server, Clients: some Android clients, no iOS client
- Hubzilla: MySQL/MariaDB or PostgreSQL, PHP on the server, Clients: don't know, didn't care so far.
- Mastodon: PostgreSQL, Ruby on the server, Clients: many iOS and Android clients available

I'm not that big a Ruby fan, and if I remember correctly the Ruby stuff is what turned me away from Diaspora years ago and made me switch to Friendica, because back then it was a pain to maintain Diaspora. Mastodon addresses this by offering Docker containers for ease of installation and maintenance. But as I'm no Docker fan either, I followed the guide to install Mastodon without Docker, which has worked so far as well (for the last 3 days).

So, after all, my Friendica node is still my favorite, because it just works and is reliable. Hubzilla has a different approach and offers a full set of web features and nomadic accounts. The best I can say about Mastodon at this moment is: it runs on PostgreSQL and has nice clients on mobile devices.

Here are my instances:
- Friendica: https://nerdica.net/
- Hubzilla: https://silverhaze.eu/
- Mastodon: https://nerdculture.de/

PS: "A quick guide to The Free Network" by Sean Tilley on https://medium.com/we-distribute/a-quick-guide-to-the-free-network-c0693...

PPS: this is a cross post from my Friendica node.


25 August, 2018 04:58PM by ij

Russ Allbery

Review: Overwhelmed

Review: Overwhelmed, by Brigid Schulte

Publisher: Farrar, Straus & Giroux
Copyright: 2014
ISBN: 1-4299-4587-7
Format: Kindle
Pages: 286

Subtitled Work, Love, and Play When No One Has the Time, Overwhelmed is part of the latest batch of reading I've been doing on time management and life organization. The focus of this book is particularly appealing: Why does life feel so busy? Why do we feel constantly overwhelmed with things we're supposed to be doing? Did something change? If so, what changed? And how can we fix it? Schulte avoids many of the pitfalls of both science popularization and self-help books by personalizing her questions in an appealing way. She is overwhelmed, she wants to escape that trap, and she goes looking for things that would help her personally, bringing the reader along for the ride.

The caveat to this approach, which I wish were more obvious from the marketing surrounding this book, is that Overwhelmed is focused on the type of overwhelm that the author herself is dealing with: being a working mother. Roughly two-thirds of this book is about parenting, gender balance in both parenting and household chores, time stress unique to working mothers, and the interaction between the demands of family and the demands of the workplace.

To be clear, there is nothing wrong with this focus. I'm delighted to see more time and attention management books and workplace policy investigations written for the working mother instead of the male executive. Just be aware that a lot of this book is not going to apply directly to people without partners or kids, although I still found it useful as a tool for building social empathy and thinking about work and government policy.

Schulte starts the book with a brilliant hook. Overwhelmed, fragmented, and exhausted, Schulte had kept a time diary for a year, and is turning it over to John Robinson, a well-known sociologist specializing in time use. Schulte memorably describes how her time diaries have become confessionals of panic attacks, unpaid bills, hours spent waiting on hold, and tarot readings telling her to take more quiet time for herself. But Robinson's conclusion is ruthless: she had 28 hours of leisure in the week they analyzed during the visit. A little less than average, but a marked contrast to Schulte's sense that she had no leisure at all. Based on his research with meticulous time diaries, Robinson is insistent that we have as much or more leisure than we had fifty years ago. (He has his own book on the topic, Time for Life.) Schulte's subjective impression of her time is wildly inconsistent with that analysis. What happened?

In the first part of the book, Schulte introduces two useful concepts: time confetti, to describe her subjective impression of the shredding of her schedule and attention, and role overload. The latter is used in academic work on time use to describe attempting to fulfill multiple roles simultaneously without the necessary resources for all of them, and has a strong correlation with depression and anxiety. Schulte immediately recognized the signs of role overload in her own conflicts between work and parenting, but even without the parenting component, I recognized role overload in the strain between work and volunteer commitments. Simplified, it's a more academic version of the common concept of "work-life balance," but it comes with additional research on the consequences: constant multitasking, a sense of accelerating pace, and a breakdown of clean divisions between blocks of time devoted to different activities.

The rest of the book looks at this problem in three distinct spheres: work, love (mostly family and child-rearing), and play. Schulte adds the additional concepts of the Ideal Worker, Ideal Mother, and Providing Father archetypes and their pressure towards both gender stereotypes and an unhealthy devotion to work availability and long work hours. I found the Ideal Worker concept and its framing of the standards against which we unconsciously measure ourselves particularly useful, even though I'm in an extremely relaxed and flexible work place by US standards. The Ideal Mother and Providing Father concepts in the section on love were more academic to me (since I don't have kids), but gave me new empathy for the struggles to apply an abstract ideal of equal partnership to the messy world of subconscious stereotypes and inflexible workplaces designed for providing fathers.

Schulte does offer a few tentative solutions, or at least pushes in a better direction, but mostly one comes away from this book wanting to move to Denmark or the Netherlands (both used here, as in so many other places these days, as examples of societies that have made far different choices about work and life than the US has). So many of the flaws are structural: jobs of at least forty hours a week, a culture of working late at the office or taking work home, inadequate child care, and deeply ingrained gender stereotypes that shape our behavior even when we don't want them to. Carving out a less overwhelmed life as an individual is an exhausting swim upstream, which is nigh-impossible when exhaustion and burnout is the starting point. If you're looking for a book to make you feel empowered and in control of eliminating the sense of overwhelm from your life, that's not this book, although that also makes it a more realistic study.

That said, Schulte herself sounds more optimistic at the end of the book than at the beginning, and seems to have found some techniques that helped without moving to Denmark. She summarizes them at the end of the book, and it's a solid list. Several will be familiar to any time management reader (stop multitasking, prioritize the important things first, make room for quiet moments, take advantage of human burst work cycles, be very clear about your objectives, and, seriously, stop multitasking), but for me they gained more weight from Schulte's personal attempts to understand and apply them. But I think this is more a book about the shape of the problem than about the shape of the solution.

Overwhelmed is going to have the most to say to women and to people with children, but I'm glad I read it. This is the good sort of summary of scientific and social research: personalized, embracing ambiguity and conflicting research and opinions, capturing the sense of muddling through and trying multiple things, and honest and heartfelt in presenting the author's personal take and personal challenges. It avoids both the guru certainty of the self-help book and the excessive generalization of Gladwell-style popularizations. More like this, please.

Rating: 7 out of 10

25 August, 2018 04:13AM

hackergotchi for Simon Richter

Simon Richter

Project Template for Bison and Flex

In case you need to add a parser to a C/C++ project, this should be a useful starting point with an empty lexer/parser combination that uses no global variables and accurately tracks location.

You can replace the string "project" in these files with the name of your parser and then rename the files accordingly. Obviously these defaults are not correct for every project.

Lexer Template

%option 8bit
%option never-interactive
%option noyywrap
%option nodefault
%option bison-bridge
%option bison-locations
%option reentrant
%option warn
%option yylineno
%option outfile="project_lex.cc"
%option header-file="project_lex.hh"

%{
#include "project_parse.hh"
#include <stdio.h>

#define YY_USER_ACTION \
    { \
        yylloc->first_line = yylloc->last_line; \
        yylloc->first_column = yylloc->last_column; \
        yylloc->last_line = yylineno; \
        yylloc->last_column = yycolumn; \
        yycolumn += yyleng; \
    }
%}

%%

\n+     yycolumn = 1;
.       fprintf(stderr, "%d:%d:Unhandled character %02x\n", yylineno, yycolumn, (unsigned int)(unsigned char)(yytext[0]));

Newlines need explicit handling in a rule, as the column counter needs to be reset to 1. If there is no return statement in this line, this also means that newlines do not appear as tokens in the token stream, so if these are significant in your language, you will need to adapt this line accordingly. The yylineno variable is silently incremented when newlines are matched, so no special handling is required here.

The catch-all rule at the bottom generates an error message with location data for otherwise unhandled input, then ignores these characters for further parsing.

The YY_USER_ACTION macro updates the standard YYLTYPE as defined by Bison and the internal yycolumn variable. If you define your own YYLTYPE, e.g. because you need to add a file name, this needs to be adjusted as well.
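A custom location type might look like the sketch below. In this template it would go in the parser's %code requires section so both the generated parser and lexer headers see it; the filename field is our own addition, not part of the standard type:

```c
/* Sketch of a user-defined location type carrying a file name, as
 * suggested above. Defining the YYLTYPE macro before the generated
 * declarations makes Bison (and thus YY_USER_ACTION) use this type
 * instead of its default. */
typedef struct project_location {
    int first_line;
    int first_column;
    int last_line;
    int last_column;
    const char *filename;   /* the extra field mentioned in the text */
} project_location;
#define YYLTYPE project_location
```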

The parser definition needs the lexer declarations (for the token types), so this is included here as well.

Parser Template

%define api.pure full
%define parse.error verbose
%param {yyscan_t scanner}
%parse-param {toplevel &top}
%locations

%output "project_parse.cc"
%defines "project_parse.hh"

%union {
}

%code requires {
#include "project_tree.h"
    typedef void *yyscan_t;
}

%code {
#include "project_lex.hh"

#include <stdio.h>

void yyerror(YYLTYPE *yylloc, yyscan_t, toplevel &, char const *msg)
{
    fprintf(stderr, "%d:%d: %s\n", yylloc->first_line, yylloc->first_column, msg);
}

}

%token end_of_file 0 "end of file"

%%

start:

The order of the include files here is tricky, as the parser definition needs the lexer declarations, which in turn needs the parser declarations.

Also, the user rules in the parser require the syntax tree declarations, which I normally keep in a separate file.

As the pure parser doesn't pass the value of the top production outside of yyparse, the top production needs to copy it somewhere. For this, a separate parameter is added to the parser, a reference to a toplevel object. This is available in all levels of yyparse as well as in yyerror, so an alternative error handling method could store errors in a list inside the toplevel.

The typedef void *yyscan_t; is knowledge we're not supposed to have here, but this definition needs to be available before the declaration of yyparse in the parser header file, which normally doesn't have the lexer definition visible. Including the AST definitions that early makes them available for use in the generated YYSTYPE declaration, so AST types can be used as value types for productions.

Invocation Template

To use the generated parser, you need to open a FILE * stream, attach it to the lexer and invoke the parser (which will call the lexer as required):

toplevel top;

FILE *in = fopen(input, "r");
if(!in) {
    perror(input);
    return 1;   /* or however your application reports errors */
}
yyscan_t scanner;
yylex_init(&scanner);
yyset_in(in, scanner);
int ret = yyparse(scanner, top);
yylex_destroy(scanner);
fclose(in);

The return code from yyparse indicates if the top production was matched successfully, the action in this production should then update the toplevel object.

Makefile Template

To compile the files, just invoke flex and bison with the respective file as a single argument. The output names are listed in the input files, and both outputs are generated at the same time, so make sure dependencies are declared correctly:

project_lex.cc: project_lex.hh
    @

project_lex.hh: project.ll
    flex $<

project_parse.cc: project_parse.hh
    @

project_parse.hh: project.yy
    bison $<

If you generate dummy dependency files somewhere, include the generated headers in them:

%.d:
    @echo >$@ "$*.o: project_lex.hh project_parse.hh"

This ensures that the headers are generated before any source is compiled on the first build — subsequent builds will have accurate dependency information anyway.

25 August, 2018 02:31AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

littler 0.3.4: More updated examples


The fifth release of littler as a CRAN package is now available, following in the now more than ten-year history as a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. And it is (in my very biased eyes) better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It has also always loaded the methods package, which Rscript only started doing rather recently.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers are welcome!).

A few examples as highlighted at the Github repo:

This release updates a few of the example scripts. One nice change is that the most-excellent docopt package for command-line parsing no longer needs stringr and hence stringi for its operations, making our command-line scripts really lightweight as they now need only the (small) docopt package---thanks to Edwin de Jonge for all his work on that package. Also, Brandon Bertelsen contributed a script (which we later reduced to an option of an existing script) to only install not-yet-installed packages.

The NEWS file entry is below.

Changes in littler version 0.3.4 (2018-08-24)

  • Changes in examples

    • The shebang line is now #!/usr/bin/env r to work with either /usr/local/bin/r or /usr/bin/r.

    • New example script to only install packages not yet installed (Brandon Bertelsen in #59); later added into install2.r.

    • Functions getRStudioDesktop.r and getRStudioServer.r updated their internal URLs.

    • Several minor enhancements were made to example scripts.

CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 August, 2018 12:23AM

August 24, 2018

Hideki Yamane

A content holder asks the government to filter DNS queries, Japan

The CEO of Kadokawa, one of the biggest publishers in Japan (and also CTO of Dwango, which runs "Niconico"), Nobuo Kawakami (川上 量生), claimed "To eliminate content piracy, the Japanese government should filter DNS queries except to ISPs' DNS servers (OP53B - Outbound Port 53 Blocking), and also block public DNS services, especially Cloudflare's 1.1.1.1 and Google's 8.8.8.8" at a Japanese government Intellectual Property taskforce meeting.


Oh, greed, shame and stupidity...

24 August, 2018 10:34PM by Hideki Yamane

hackergotchi for Lars Wirzenius

Lars Wirzenius

Software freedom for the modern era

I was watching the Matthew Garret "Heresies" talk from Debconf17 today. The following thought struck me:

True software freedom for this age: you can get the source code of a service you use, and can set it up on your own server. You can also get all your data from the service, and migrate it to another service (hosted by you or someone else). Further, all of this needs to be easy, fast, and cheap enough to be feasible, and there can't be "network effects" that lock you into a specific service instance.

I will need to think hard what this means for my future projects.

24 August, 2018 10:23PM

hackergotchi for Joey Hess

Joey Hess

camping Roan highlands

small tent overlooking a big view

My second time camping the Roan highlands on the AT.

24 August, 2018 07:49PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, July 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, about 224 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change.

The security tracker currently lists 51 packages with a known CVE and the dla-needed.txt file has 43 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


24 August, 2018 11:59AM by Raphaël Hertzog

hackergotchi for Norbert Preining

Norbert Preining

Debian/TeX Live updates 20180824

Exactly one month has passed since the last TeX Live for Debian update, so here we are with the usual bunch. Besides the usual updates to macros and font packages, this time I also uploaded a new set of binaries for TeX Live which incorporates several bug fixes to programs.

The version for the binary packages is 2018.20180824.48463-1 and is based on svn revision 48463, up from revision 48169 of the last set. That means we get: a new version of dvisvgm, fixes to synctex including the API version bump, new versions of opendetex, various fixes for dvipdfmx, and some fixes for the ptex family of programs.

The version of the macro/font packages is 2018.20180824-1 and contains the usual menu listed below.

Please enjoy.

New packages

beamertheme-npbt, businesscard-qrcode, clrstrip, inline-images, mathfont, plautopatch, returntogrid, tikz-network, ucsmonograph, worksheet, xfakebold.

Updated packages

abnt, alegreya, animate, apxproof, beamer, beamertheme-focus, bib2gls, biblatex-gb7714-2015, biblatex-ieee, bibleref, bidi, bxjscls, chemfig, context-filter, context-vim, datetime2-magyar, datetime2-norsk, datetime2-polish, datetime2-portuges, ducksay, dynkin-diagrams, etoolbox, factura, fetchbibpes, fira, fontawesome5, fontools, fontspec, gbt7714, gentombow, glossaries-extra, hyphen-base, hyphen-bulgarian, hyphen-indic, hyph-utf8, invoice, jlreq, knowledge, kpfonts, l3build, latexindent, latexmk, lettrine, libertinus-otf, lshort-chinese, lualatex-truncate, luatexja, luatexko, marginfit, marginnote, mathastext, multirow, musixtex, nicematrix, nidanfloat, pgf-blur, phonenumbers, platex, plautopatch, plex, proofread, pst-poker, ptex-base, pxrubrica, qcircuit, register, sapthesis, schule, schwalbe-chess, subfiles, tagpdf, tcolorbox, thesis-gwu, turabian-formatting, ucsmonograph, unicode-math, uplatex, witharrows, xepersian, xetexko, xfakebold.

24 August, 2018 07:28AM by Norbert Preining

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, July 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 3 hours from June. I worked 10 hours and therefore carried over 8 hours to August.

I uploaded an update to the linux package with fixes for a large number of security (and other) issues (DLA-1422-1). I had to make a second update to resolve a build failure on armhf (DLA-1422-2).

Since the "jessie-backports" suite is no longer accepting updates, and there are LTS users depending on the updated kernel (Linux 4.9) there, I added the linux-4.9 (DLA-1423-1) and linux-latest-4.9 (DLA-1424-1) packages to provide an upgrade path for these users. I also updated the linux-base package (DLA-1434-1) to satisfy the dependencies of the new linux-image binary packages.

24 August, 2018 04:28AM

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Heading to the Bay Area

On September 4th, I’ll be starting a fellowship at the Center for Advanced Studies in the Behavioral Sciences (CASBS), a wonderful social science research institute at Stanford that’s perched on a hill overlooking Palo Alto and the San Francisco Bay. The fellowship is a one-year gig and I’ll be back in Seattle next June.

A CASBS fellowship is an incredible gift in several senses. In the most basic sense, it will mean time to focus on research and writing. I’ll be using my time there to continue my research on the social scientific study of peer production and cooperation. More importantly though, the fellowship will give me access to a community of truly incredible social scientists who will be my “fellow fellows” next year.

Finally, being invited for a CASBS fellowship is a huge honor. I’ve been preparing by reading a list of Wikipedia articles I built about the previous occupants of the study that I’ll be working out of next year (the third fellow to work out of my study was Claude Shannon!). It’s rare for junior faculty like myself to be invited and I’m truly humbled.

The only real downside of the fellowship is that it means that I’ll be spending the academic year away from Seattle. I’m going to miss working out of UW, my department, and the Community Data Science Collective lab here enormously.

In a personal sense, it means I’ll be leaving a wonderful community in Seattle in and around my home at Extraordinary Least Squares. I’m going to miss folks deeply and I look forward to returning.

Of course, I’m also pretty excited about moving to Palo Alto. It will be the first time either Mika or I have lived in California and we hope to take advantage of the opportunity.

Please help us do so! If you’re at Stanford, in Silicon Valley, or anywhere in the Bay Area and want to meet up, please don’t hesitate to get in contact! We’ll be arriving with very little community, and I’m really interested in meeting people, making friends, and taking advantage of my nine months in the area to make connections!

24 August, 2018 02:39AM by Benjamin Mako Hill

August 23, 2018

hackergotchi for Joey Hess

Joey Hess

Dear Ad Networks

In 1 week, I plan to benchmark all your advertisement delivery systems from IP address block 184.20/16.

Please note attached Intel microcode license may apply to your servers. If you don't want me benchmarking your ad servers, simply blacklist my IP block now.

Love, Joey

PS The benchmarking will continue indefinitely.

23 August, 2018 01:48PM

hackergotchi for Norbert Preining

Norbert Preining

CafeOBJ 1.5.8 released

Some time ago we released CafeOBJ 1.5.8 with some new features and bugfixes for the inductive theorem prover CITP. We are still struggling with SBCL builds on Windows, which suddenly started to produce corrupt images, something that doesn’t happen on Linux or Mac.

cafeobj-logo

To quote from our README:

CafeOBJ is a new generation algebraic specification and programming language. As a direct successor of OBJ, it inherits all its features (flexible mix-fix syntax, powerful typing system with sub-types, and sophisticated module composition system featuring various kinds of imports, parameterised modules, views for instantiating the parameters, module expressions, etc.) but it also implements new paradigms such as rewriting logic and hidden algebra, as well as their combination.

Availability

Binary packages for Linux, MacOS, and Windows are already available, both in 32 and 64 bit and based on Allegro CL and SBCL (with some exceptions). All downloads can be found at the CafeOBJ download page. The source code can also be found on the download page, or directly from here: cafeobj-1.5.8.tar.gz.

The CafeOBJ Debian package is already updated.

The MacPorts file has also been updated; please see the above download/install page for details on how to add our sources to your MacPorts installation.

Bug reports

If you find a bug, or have suggestions or complaints, please open an issue at the Github issue page.

For other inquiries, please use info@cafeobj.org

23 August, 2018 01:14AM by Norbert Preining

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.16

digest version 0.6.16 arrived on CRAN earlier today, and was just prepared for Debian as well.

digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'sha-512', 'crc32', 'xxhash32', 'xxhash64' and 'murmur32' algorithms) permitting easy comparison of R language objects.

This release brings a few robustifications. Radford Neal pointed out that serialize() output should not be unit-tested, as it always reflects the R version and will change, so we no longer do that. Henrik Bengtsson pointed out missing leading padding for crc32 output, which we added, and corrected the minimal R version we should depend on. Thanks to both for the help in making the package better. We also added some more tests, now achieving 100% coverage.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.


23 August, 2018 12:49AM

August 22, 2018

hackergotchi for Charles Plessy

Charles Plessy

Print with Brother from Debian

I ran in circles for more than an hour before eventually understanding that one needs to install the lib32stdc++6 package in order to use the Brother drivers (HL-L2365DW printer) on an amd64 system, since they are provided as i386 packages. Only afterwards did I realise that there was more than a hint in the online instructions. The hardest part was that without lib32stdc++6 everything seemed to work fine, except that nothing was coming out of the printer.

22 August, 2018 12:41PM

Andrej Shadura

Linux Vacation Eastern Europe 2018

On Friday, I will be attending LVEE (Linux Vacation Eastern Europe) once again after a few years of missing it for various reasons. I will be presenting a talk on my experience of working with LAVA; the talk is based on a talk given by my colleague Guillaume Tucker, who helped me a lot when I was ramping up on LAVA.

Since the conference is not well known outside, well, a part of Eastern Europe, I decided I needed to write a bit about it. According to the organisers, they had the idea of holding a Linux conference after the newly reborn Minsk Linux User Group organised quite a successful celebration of the tenth anniversary of Debian, and they wanted to have an even bigger event. The first LVEE took place in 2005 in the middle of a forest near Hrodna.

LVEE 2005 group photo

As the name suggests, this conference is quite different from many others, and it is actually quite close in spirit to the Linux Bier Wanderung. The conference is very informal, it happens basically in the middle of nowhere (until 2010, the Internet connection was very slow and unreliable, or absent), and there’s a massive programme every evening with beer, shashlyk and a lot of chatting.

My first LVEE was in 2009, and it was, in fact, my first Linux conference. The venue for LVEE has traditionally been a tourist camp in a forest. For those unfamiliar with the concept, a tourist camp (at least in the post-Soviet countries) is an accommodation facility usually providing a bare minimum of comfort; people normally stay in huts or small houses with shared facilities, often located outside.

Houses part of the tourist camp Another house part of the tourist camp

When the weather permits (usually defined as: not raining), talks are held outside. When it starts raining, they move into one of the houses that is big enough to accommodate most of the people interested in the talks.

Grigory Zlobin in front of a house talks about FOSS in education in Ukraine

Some participants prefer to stay in tents:

Tents of some of the participants

People not interested in talks organise impromptu open-air hacklabs:

Impromptu open-air hacklab

Or take a swim in a lake:

Person standing on a pier by a lake

Of course, each conference day is followed by shashlyks and beer:

Shashlyks work in progress

And, on the final day of the conference, cake!

LVEE cake

This year, for the first time LVEE is being sponsored by Collabora and Red Hat.

The talks are usually in Russian (with slides usually in English), but even if you don’t speak Russian and want to attend, fear not: most of the participants speak English to some degree, so you are unlikely to feel isolated. If enough English-speaking participants sign up, it is possible that we can organise some aids (e.g. translated subtitles) to make both people not speaking English and people not speaking Russian feel at home.

I hope to see some of the readers at LVEE next time :)

22 August, 2018 11:24AM by Andrej Shadura

hackergotchi for Wouter Verhelst

Wouter Verhelst

DebConf18 video work

For personal reasons, I didn't make it to DebConf18 in Taiwan this year; but that didn't mean I wasn't interested in what was happening. Additionally, I remotely configured SReview, the video review and transcoding system which I originally wrote for FOSDEM.

I received a present for that today:

And yes, of course I'm happy about that :-)

On a side note, the videos for DebConf18 have all been transcoded now. There are actually three files per event:

  • The .webm file contains the high quality transcode of the video. It uses VP9 for video, and Opus for audio, and uses the Google-recommended settings for VP9 bitrate configuration. On modern hardware with decent bandwidth, you'll want to use this file.
  • The .lq.webm file contains the low-quality transcode of the video. It uses VP8 for video, and Vorbis for audio, and uses the builtin default settings of ffmpeg for VP8 bitrate configuration, as well as a scale-down to half the resolution; those usually end up being somewhere between a third and half of the size of the high quality transcodes.
  • The .ogg file contains the extracted vorbis audio from the .lq.webm file, and is useful only if you want to listen to the talk and not watch it. It's also the smallest download, for obvious reasons.

If you have comments, feel free to let us know on the mailinglist

22 August, 2018 07:27AM

hackergotchi for Norbert Preining

Norbert Preining

#!/usr/bin/env != /usr/bin/perl

I just say one thing … I hate irrelevant policies. Now I have to either patch 59 files in TeX Live, or write a script that goes through a few GB of data to search, lintian-style, for these kinds of things and replace them automatically at build time.

Now what was the argumentation for this change on the Debian Perl list:

No, for perl programs shipped by Debian, that can only be expected to work reliably if invoked with /usr/bin/perl – any other perl that happens to be in a user’s $PATH might not have the correct modules installed, or might have other behavioural differences that break things. Note that this change is about fixing a long-standing inconsistency between the main part of policy and the perl sub-policy; this requirement has been a must in the perl policy since 2001.

Speaking to your arguments, the user is of course free to use a different perl for applications installed locally, but this should not be the case for packages in Debian.

I cannot disagree more. I consider it our (= Debian Developers’) job to package stuff, but not to hold the sysadmin’s hand if he wants to shoot himself. I consider it a valid use case to replace/override a Perl installation by prepending it to the PATH (think testing of a new Perl, or Perl development). In this case, I expect Perl scripts to use the Perl that *I* as sysadmin provided, and it is then the sysadmin’s obligation to ensure proper availability of modules and support files.

Doing this kind of policing is not helpful at all, thus I filed a bug against debian-policy to get this requirement removed.

What a waste of time. Thanks a lot for a useless addition to the Policy!

22 August, 2018 02:28AM by Norbert Preining

August 20, 2018

Iustin Pop

A sick/recovery/photography/rant week

Plenty of recovery :/

My very optimistic post from last week claimed my cough was gone after a very long and difficult race. Which doesn’t make sense, and contradicts common sense.

So while Monday (13th) was OK, by Tuesday afternoon—one and a half days after the race—my cough was back in strength, and I slept very badly that night due to waking up a lot. Wednesday followed with a properly returned cold; the blood tests said “yes, you have a viral infection, again; but your lungs are fine, just wait”. Thursday was off, Friday somewhat better with a first small workout, Saturday again a bit off but with a tiny workout, and Sunday was finally OK-ish.

Too many pictures, again

So finally out of the house properly on Saturday and off to Walter Zoo where I took way too many pictures as usual, but at least some came out (in my subjective opinion) pretty well:

Crop from 200mm - f/4.0, 1/800s, ISO 320
200mm, uncropped - f/4.0, 1/800s, ISO 200
155mm, uncropped - f/7.1, 1/200s, ISO 12800
200mm, uncropped - f/2.8, 1/800s, ISO 720
200mm, aspect ratio changed but otherwise uncropped - f/2.8, 1/800s, ISO 9000
300mm, aspect ratio changed but otherwise uncropped - f/4.0, 1/1250s, ISO 110

Of course, after you take the pictures and get home, you realise how many mistakes are in each of them: composition, the ISO/exposure/aperture triad, etc. One day… one day I’ll take a picture and think “I don’t see how I could have done this better”. But that day is definitely not here yet.

In the meantime, having a good camera saves the day more often than not. Case in point, the chameleon picture (face only) was taken at 1/800s, ISO 9000. Given VR, one could have taken this picture two or even three stops slower, thus allowing ISO 2500 or even 1250. But the camera’s sensor allows nice ISO 9000 at full original resolution, especially in this light and these colours, and down-sampling for the web (45MP to 4MP here) makes it even better. I feel bad about having gear to compensate for bad skills, but…

One thing that is harder to compensate for with gear, however, is strong harsh light, the kind present in the first two pictures and somewhat in the last one. Here a flash would have helped (significantly, I think), but flash in a Zoo? The poor animals are probably scared enough by all the people. So the colours are bad in the first two (green does not play nice), while the last one has some nice matching of colours - all reddish/greyish. The interior pictures, despite high ISO, are much nicer in this respect. As always, “IMHO”.

APIs are hard

Also this weekend, while trying to upload pictures, I learned that Facebook has stopped photo upload access for all “non-browser desktop clients”, or at least that’s what the built-in Lightroom Facebook plugin said, and Jeffrey Friedl’s plugin says as well. In a way, that’s not surprising. It’s 2018, and in this stone age of technology, supporting a photo upload API is too difficult. Right?

But in a strange twist of fate, Google Photos now supports some API access. You can upload photos via an API, but not modify them, not even the ones you uploaded previously. This is basically the functionality that the FB plugin had before it went “poof” in the wind.

And that’s why I still use SmugMug: they understand the value of a simple, proper publish workflow (including republishes, metadata-only updates, and deletions).

20 August, 2018 09:17PM

Reproducible builds folks

Reproducible Builds: Weekly report #173

Here’s what happened in the Reproducible Builds effort between Sunday August 12 and Saturday August 18 2018:

Packages reviewed and fixed, and bugs filed

tests.reproducible-builds.org development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

20 August, 2018 08:04PM

hackergotchi for Kees Cook

Kees Cook

security things in Linux v4.18

Previously: v4.17.

Linux kernel v4.18 was released last week. Here are details on some of the security things I found interesting:

allocation overflow detection helpers
One of the many ways C can be dangerous to use is that it lacks strong primitives to deal with arithmetic overflow. A developer can’t just wrap a series of calculations in a try/catch block to trap any calculations that might overflow (or underflow). Instead, C will happily wrap values back around, causing all kinds of flaws. Some time ago GCC added a set of single-operation helpers that will efficiently detect overflow, so Rasmus Villemoes suggested implementing these (with fallbacks) in the kernel. While it still requires explicit use by developers, it’s much more fool-proof than doing open-coded type-sensitive bounds checking before every calculation. As a first-use of these routines, Matthew Wilcox created wrappers for common size calculations, mainly for use during memory allocations.
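The kernel's helpers wrap compiler builtins, so the pattern is easy to illustrate in plain userspace C. A rough sketch (the real helpers live in include/linux/overflow.h; `checked_mul` is my illustrative name, not the kernel's API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Userspace sketch of a single-operation overflow helper: returns true
 * if a * b overflowed, otherwise stores the product in *res.  Built on
 * the same GCC/Clang builtin the kernel wrappers use. */
static bool checked_mul(size_t a, size_t b, size_t *res)
{
    return __builtin_mul_overflow(a, b, res);
}
```

Instead of silently wrapping, a caller can bail out whenever `checked_mul(n, size, &bytes)` reports overflow.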

removing open-coded multiplication from memory allocation arguments
A common flaw in the kernel is integer overflow during memory allocation size calculations. As mentioned above, C doesn’t provide much in the way of protection, so it’s on the developer to get it right. In an effort to reduce the frequency of these bugs, and inspired by a couple flaws found by Silvio Cesare, I did a first-pass sweep of the kernel to move from open-coded multiplications during memory allocations into either their 2-factor API counterparts (e.g. kmalloc(a * b, GFP...) -> kmalloc_array(a, b, GFP...)), or to use the new overflow-checking helpers (e.g. vmalloc(a * b) -> vmalloc(array_size(a, b))). There’s still lots more work to be done here, since frequently an allocation size will be calculated earlier in a variable rather than in the allocation arguments, and overflows happen in way more places than just memory allocation. Better yet would be to have exceptions raised on overflows where no wrap-around was expected (e.g. Emese Revfy’s size_overflow GCC plugin).
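The shape of the 2-factor API is easy to mimic outside the kernel; `malloc_array` below is a hypothetical userspace analogue of `kmalloc_array`, not a real libc function (in userspace, `calloc` performs a similar check in most C libraries):

```c
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical userspace analogue of kmalloc_array(n, size, flags):
 * refuse the request when n * size would wrap around, instead of
 * allocating a too-small buffer that later gets overflowed. */
static void *malloc_array(size_t n, size_t size)
{
    size_t bytes;

    if (__builtin_mul_overflow(n, size, &bytes)) {
        errno = ENOMEM;
        return NULL;        /* n * size does not fit in size_t */
    }
    return malloc(bytes);
}
```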

Variable Length Array removals, part 2
As discussed previously, VLAs continue to get removed from the kernel. For v4.18, we continued to get help from a bunch of lovely folks: Andreas Christoforou, Antoine Tenart, Chris Wilson, Gustavo A. R. Silva, Kyle Spiers, Laura Abbott, Salvatore Mesoraca, Stephan Wahren, Thomas Gleixner, Tobin C. Harding, and Tycho Andersen. Almost all the rest of the VLA removals have been queued for v4.19, but it looks like the very last of them (deep in the crypto subsystem) won’t land until v4.20. I’m so looking forward to being able to add -Wvla globally to the kernel build so we can be free from the classes of flaws that VLAs enable, like stack exhaustion and stack guard page jumping. Eliminating VLAs also simplifies the porting work of the stackleak GCC plugin from grsecurity, since it no longer has to hook and check VLA creation.
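For readers who haven't followed the earlier posts, the class of change is largely mechanical; here is an illustrative sketch of a typical VLA removal (not taken from any particular kernel patch):

```c
#include <stddef.h>
#include <string.h>

#define BUF_MAX 64  /* compile-time bound replacing the runtime VLA length */

/* Before: `unsigned char buf[len];` -- stack usage decided at runtime.
 * After: a fixed-size array plus an explicit bounds check, so -Wvla can
 * eventually be enabled globally. */
static int process_bounded(const unsigned char *src, size_t len)
{
    unsigned char buf[BUF_MAX];

    if (len > sizeof(buf))
        return -1;      /* reject oversized input rather than grow the stack */
    memcpy(buf, src, len);
    /* ... operate on buf ... */
    return 0;
}
```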

Kconfig compiler detection
While not strictly a security thing, Masahiro Yamada made giant improvements to the kernel’s Kconfig subsystem so that kernel build configuration now knows what compiler you’re using (among other things) so that configuration is no longer separate from the compiler features. For example, in the past, one could select CONFIG_CC_STACKPROTECTOR_STRONG even if the compiler didn’t support it, and later the build would fail. Or in other cases, configurations would silently down-grade to what was available, potentially leading to confusing kernel images where the compiler would change the meaning of a configuration. Going forward now, configurations that aren’t available to the compiler will simply be unselectable in Kconfig. This makes configuration much more consistent, though in some cases, it makes it harder to discover why some configuration is missing (e.g. CONFIG_GCC_PLUGINS no longer gives you a hint about needing to install the plugin development packages).

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.19; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

20 August, 2018 06:29PM by kees

Jamie McClelland

Which is faster, rsync or rdiff-backup?

Surprise: rdiff-backup (given our particular constraints).

As our data grows (and some filesystems balloon to over 800GBs, with many small files) we have started seeing our night time backups continue through the morning, causing serious disk i/o problems as our users wake up and regular usage rises.

For years we have implemented a conservative backup policy - each server runs the backup twice: once via rdiff-backup to the onsite server with 10 days of increments kept. A second is an rsync to our offsite backup servers for disaster recovery.

Simple, I thought. I will change the rdiff-backup to the onsite server to use the ultra fast and simple rsync. Then, I'll use borgbackup to create an incremental backup from the onsite backup server to our off site backup servers. Piece of cake. And with each server only running one backup instead of two, they should complete in record time.

Except, somehow, the rsync backup to the onsite backup server was taking almost as long as the original rdiff-backup to the onsite server and the rsync backup to the offsite server combined. What? I thought nothing was faster than the awesome simplicity of rsync, especially compared to the ancient Python-based rdiff-backup, which hasn't had an upstream release since 2009.

Turns out that rsync is not faster if disk i/o on the target server is your bottleneck.

By default, rsync determines whether a file needs to be updated by comparing the timestamp and size of the file on both the source and the target server. That means rsync has to read the metadata of every single file on the source and every single file on the target. At first glance, this would seem faster than rdiff-backup, which compares sha1 checksums (it has to read the entire file, not just the metadata). And this is definitely the case the first time rdiff-backup runs. However, rdiff-backup has a trick up its sleeve: the rdiff-backup-data/mirror_metadata file.
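The default comparison can be sketched in a few lines of C; this is only an illustration of the size-plus-mtime "quick check", not rsync's actual code:

```c
#include <stdbool.h>
#include <sys/stat.h>

/* Illustration of rsync's default "quick check": a file is assumed
 * unchanged when size and modification time match on both ends.  Cheap
 * per file, but it still means stat()ing every file on BOTH the source
 * and the target -- the target-side disk i/o discussed above. */
static bool quick_check_unchanged(const struct stat *src,
                                  const struct stat *dst)
{
    return src->st_size == dst->st_size &&
           src->st_mtime == dst->st_mtime;
}
```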

As rdiff-backup runs, it keeps track of the sha1 checksum of every file it backs up in the mirror_metadata file on the target. It seems that the next time it runs, it simply compares the sha1 on the source with the sha1 in this file, meaning it doesn't have to read each file on the target. The result: significantly less disk i/o on the target for faster backups (there is more disk i/o on the source, though, since rdiff-backup has to calculate the sha1 checksum instead of just collecting the size and last-modified timestamp).

rdiff-backup also wins by saving all metadata (file ownership and permissions). Since we backup to a non-privileged user on the backup server, this data is lost with rsync. And, for reasons of simplicity, I appreciate having the backup files via a plain filesystem (unlike borgbackup which requires special commands just to get a listing of the files).

For the long term, filesystem-based backup tools seem like a losing proposition compared with block-based backups (like drbd). However, until we can re-organize our data to take advantage of drbd, we will be sticking with rdiff-backup.

20 August, 2018 03:51PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

rfoaas 2.0.0: Updated and extended

rfoaas greed example

FOAAS upstream recently went to release 2.0.0, so here we are catching up bringing you all the new accessors from FOAAS 2.0.0: bag(), equity(), fts(), ing(), particular(), ridiculous(), and shit(). We also added off_with() which was missing previously. Documentation and tests were updated. The screenshot shows an example of the new functions.

As usual, CRANberries provides a diff to the previous CRAN release. Questions, comments etc should go to the GitHub issue tracker. More background information is on the project page as well as on the github repo


20 August, 2018 12:24PM

Russ Allbery

Review: Riders of the Storm

Review: Riders of the Storm, by Julie E. Czerneda

Series: Stratification #2
Publisher: DAW
Copyright: 2008
ISBN: 1-101-21557-7
Format: Kindle
Pages: 452

Riders of the Storm is the second book in the Stratification sub-series in Czerneda's larger Trade Pact universe, and a direct sequel to Reap the Wild Wind. Czerneda is telling a larger story in multiple parts, so this isn't a series to read out of order.

Reap the Wild Wind broke apart Aryl's world view (along with everything else about her life) and gave her contact with a larger universe than she thought existed. Riders of the Storm builds on that, doing middle-book setup and stabilization and bringing the shape of the trilogy into clearer focus. But it takes its sweet time getting there. First, we get an interminable slog across snowy mountains during a winter storm, and then a maddeningly slow exploration of an oddly depopulated Om'ray settlement that none of Aryl's clan knew about (even though that shouldn't be possible).

This book does get somewhere eventually. Aryl can't avoid getting pulled into inter-species politics, including desperate attempts to understand the maddeningly opaque Oud and unpredictably malevolent Tiktik. There's less contact with varied off-worlders in this book than the last; Aryl instead gets a much deeper connection and conversation with one specific off-worlder. That, when it finally comes, does move past one of my complaints about the first book: Aryl finally realizes that she needs to understand this outside perspective and stop being so dismissive of the hints that this reader wished she'd follow up on. We're finally rewarded with a few glimpses of why the off-worlders are here and why Aryl's world might be significant. Just hints, though; all the payoff is saved for (hopefully) the next book.

We also get a glimpse of the distant Om'ray clan that no one knows anything about, although I found that part unsatisfyingly disconnected from the rest of the story. I think this is a middle-book setup problem, since the Tiktik are also interested and Czerneda lays some groundwork for bringing the pieces together.

If Riders of the Storm were just the second half of this book, with Tiktik and Oud politics, explorations of Om'ray powers, careful and confused maneuvering between the human off-worlder and Aryl, and Enris's explorations of unexpected corners of Om'ray technology, I would have called this a solid novel and a satisfying continuation of the better parts of the first book. But I thought the first half of this book was painfully slow, and it took a real effort of will to get through it. I think I'm still struggling with a deeper mismatch of what Czerneda finds interesting and what I'm reading this series for.

I liked the broader Trade Pact universe. I like the world-building here, but mostly for its mysteries. I want to find out the origins of this world, how it ties into the archaeological interests of the off-worlders, why one of the Om'ray clans is so very strange, and how the Oud, Tiktik, and Om'ray all fit together in the history of this strange planet. Some of this I might know if I remembered the first Trade Pact trilogy better, but the mystery is more satisfying for not having those clues. What I'm very much not interested in is the interpersonal politics of Aryl's small band, or their fears of having enough to eat, or their extended, miserable reaction to being in a harsh winter storm for the first time in their lives. All this slice-of-life stuff is so not why I'm reading this series, and for my taste there was rather too much of it. In retrospect, I think that was one of the complaints I had about the previous book as well.

If instead you more strongly identify with Aryl and thus care about the day-to-day perils of her life, rather than seeing them as a side-show and distraction from the larger mystery, I think your reaction to this book would be very different from mine. That would be in line with how Aryl sees her own world, so, unlike me, you won't be constantly wanting her to focus on one thing when she's focused on something else entirely. I think I'm reading this series a bit against the grain because I don't find Aryl's tribal politics, or her in-the-moment baffled reactions, interesting enough to hold my attention without revelations about the deeper world-building.

That frustration aside, I'm glad I got through the first part of the book to get to the meat because that world-building is satisfying. I'm thoroughly hooked: I want to know a lot more about the Oud and Tiktik, about the archaeological mission, and about the origins of Aryl's bizarre society. But I'm also very glad that there's only one more book so that this doesn't drag on much longer, and I hope that book delivers up revelations at a faster and more even pace.

Followed by Rift in the Sky.

Rating: 6 out of 10

20 August, 2018 03:03AM

Mostly Kindle haul

It's been a little while since I've posted one of these. The good news is that I've been able to increase my reading a lot (just not my reviewing, quite yet), so I've already read a couple of these!

Bella Bathurst — Sound (non-fiction)
Sarah Rees Brennan — In Other Lands (sff)
Jacqueline Carey — Starless (sff)
Becky Chambers — Record of a Spaceborn Few (sff)
William J. Cook — In Pursuit of the Traveling Salesman (non-fiction)
Mur Lafferty — Six Wakes (sff)
Fonda Lee — Jade City (sff)
Yoon Ha Lee — Revenant Gun (sff)
Bridget McGovern & Chris Lough (ed.) — Rocket Fuel (non-fiction anthology)
Naomi Novik — Spinning Silver (sff)
John Scalzi — The Collapsing Empire (sff)
Karl Schroeder — The Million (sff)
Tade Thompson — Rosewater (sff)
Jo Walton — An Informal History of the Hugos (non-fiction)
Walter Jon Williams — The Praxis (sff)
Walter Jon Williams — The Sundering (sff)
Walter Jon Williams — Conventions of War (sff)

Reviews of the Scalzi and Chambers are upcoming, and I'm reading In Pursuit of the Traveling Salesman at the moment. (I've been filling in some gaps in my understanding of algorithms recently, ran across that popular history of the traveling salesman problem, and couldn't resist.)

20 August, 2018 01:02AM

August 19, 2018

hackergotchi for Sune Vuorela

Sune Vuorela

Post Akademy

So, it has been a busy week of Qt and KDE hacking in the beautiful city of Vienna.
Besides getting plenty of the Viennese staple food, schnitzel, it was an interesting adventure in getting smarter.

  • Getting smarter about making sure what happens in North Korea doesn’t stay in North Korea
  • Getting smarter about what is up with this newfangled Wayland technology and how KDE uses it
  • Getting smarter about how to Konquer the world and welcoming new contributors
  • Getting smarter about opensource licensing compliance
  • Getting smarter about KItinerary, the opensource travel assistant
  • Getting smarter about TNEF, an invitation transport format that isn’t that neutral
  • Getting smarter about Yocto, automotive and what KDE can do

And lots of other stuff.

Besides getting smarter, talking to people about what they do and writing some patches were important parts of the event.
I also wrote some code. Here is a highlight:

And a lot of other minor things, including handling a couple of Debian bugs.

What I’m hoping to put on my own todo list, or preferably on others’, is

I felt productive, welcome and … ready to sleep for a week.

19 August, 2018 06:10PM by Sune Vuorela

August 18, 2018

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - August 2018

Summer is slowly coming to an end in Montreal and as much as I would like it to last another month, I'm also glad to fall back into my regular routine.

Part of that routine means the return of Montreal's Debian & Stuff - our informal gathering of the local Debian community!

If you are in Montreal on August 26th, come and say hi: everyone's welcome!

Some of us plan to work on specific stuff (I want to show people how nice the Tomu boards I got are) - but hanging out and having a drink is also a perfectly reasonable option.

Here's a link to the event's page.

18 August, 2018 04:00AM by Louis-Philippe Véronneau

August 17, 2018

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.9.100.5.0

armadillo image

A new RcppArmadillo release 0.9.100.5.0, based on the new Armadillo release 9.100.5 from earlier today, is now on CRAN and in Debian.

It once again follows our (and Conrad's) bi-monthly release schedule. Conrad started with a new 9.100.* series a few days ago. I ran reverse-depends checks and found an issue which he promptly addressed; CRAN found another which he also very promptly addressed. It remains a true pleasure to work with such experienced professionals as Conrad (with whom I finally had a beer around the recent useR! in his home town) and of course the CRAN team whose superb package repository truly is the bedrock of the R community.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 479 other packages on CRAN.

This release once again brings a number of improvements to the sparse matrix functionality. We also fixed one use case of the OpenMP compiler and linker flags which will likely hit a number of the by now 501 (!!) CRAN packages using RcppArmadillo.

Changes in RcppArmadillo version 0.9.100.5.0 (2018-08-16)

  • Upgraded to Armadillo release 9.100.4 (Armatus Ad Infinitum)

    • faster handling of symmetric/hermitian positive definite matrices by solve()

    • faster handling of inv_sympd() in compound expressions

    • added .is_symmetric()

    • added .is_hermitian()

    • expanded spsolve() to optionally allow keeping solutions of systems singular to working precision

    • new configuration options ARMA_OPTIMISE_SOLVE_BAND and ARMA_OPTIMISE_SOLVE_SYMPD

    • smarter use of the element cache in sparse matrices

  • Aligned OpenMP flags in the RcppArmadillo.package.skeleton used Makevars{,.win} to not use one C and C++ flag.

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

Edited on 2018-08-17 to correct one sentence (thanks, Barry!) and adjust the RcppArmadillo to 501 (!!) as we crossed the threshold of 500 packages overnight.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 August, 2018 12:00PM

hackergotchi for Sune Vuorela

Sune Vuorela

Invite me to your meetings

I was invited by my boss to a dinner. He uses exchange or outlook365 or something like that. The KMail TNEF parser didn’t succeed in parsing all the info, so I’m kind of trying to fix it.

But I need test data. From Exchange or outlook or outlook365. That I can add to the repoository for unit tests.

So if you can help me generate test data, please setup a meeting and invite me. publicinvites@sune.vuorela.dk

Just to repeat. The data will be made public.

17 August, 2018 08:39AM by Sune Vuorela

August 16, 2018

hackergotchi for Steve McIntyre

Steve McIntyre

25 years...

We had a small gathering in the Haymakers pub tonight to celebrate 25 years since Ian Murdock started the Debian project.

people in the pub!

We had 3 DPLs, a few other DDs and a few more users and community members! Good to natter with people and share some history. :-) The Raspberry Pi people even chipped in for some drinks. Cheers! The celebrations will continue at the big BBQ at my place next weekend.

16 August, 2018 09:42PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Solskogen 2018: Tireless wireless (a retrospective)

These days, Internet access is a bit like oxygen—hard to get excited about, but living without it can be profoundly annoying. With prevalent 4G coverage and free roaming within the EU, the need for wifi in the woods has diminished somewhat, but it's still important for computers (bleep bloop!), and even more importantly, streaming.

As Solskogen's stream wants 5 Mbit/sec out of the party place (we reflect it outside, where bandwidth is less scarce), we were a bit dismayed when we arrived a week before the party for pre-check and discovered that the Internet access from the venue was capped at 5/0.5. After some frenzied digging, we discovered the cause: Since Solskogen is the only event at Flateby that uses the Internet much, they have reverted to the cheapest option except in July—and that eventually caused us to be relegated to an ADSL line card in the DSLAM, as opposed to the VDSL we'd had earlier (which gave us 50/10). Even worse, with a full DSLAM, the change back would take weeks. We needed a plan B.

The obvious first choice would be 4G, but it's not a perfect match; just the stream alone would be 150+ GB (although it can be reduced or turned off when there's nothing happening on the big screen), and it's not the only thing that wants bandwidth. In other words, it would have a serious cost issue, and then there was the question to what degree it could deliver rock-stable streaming or not. There would be the option to use multiple providers and/or use the ADSL line for non-prioritized traffic (ie., participant access), but in the end, it didn't look so attractive, so we filed this as plan C and moved on to find another B.

Plan B eventually materialized in the form of the Ubiquiti Litebeam M5, a ridiculously cheap ($49 MSRP!) point-to-point link based on a somewhat tweaked Wi-Fi chipset. The idea was to get up on the roof (køb min fisk!), shoot to somewhere else with better networking and then use that link for everything. Øyafestivalen, by means of Daniel Husand, lent us a couple of M5s on short notice, and off we went to find trampolines on Google Maps. (For the uninitiated, trampolines = kids = Internet access.)

We considered the home of a fellow demoscener living nearby—at 1.4 km, it's well within the range of the M5 (we know of deployments running over 17 km). However, the local grocery store in Flateby, Spar, managed to come up with something even more interesting; it turns out that behind the store, more or less across the street, there's a volunteer organization called Frivillighetssentralen that was willing to lend out its 20/20 fiber Internet from Viken Fiber. Even better, after only a quick phone call, the ISP was more than willing to boost the line to 200/200 for the weekend. (The boost would happen Friday or so, so we'd run most of our testing with 20/20, but even that would be plenty.)

After a trip up on the roof of the party place, we decided approximately where to put the antenna, and put one of the M5s in the window of Frivillighetssentralen pointing roughly towards that spot. In a moment of hubris, we decided to try without going up on the roof again, just holding the other M5 out of the window, pointed it roughly in the right direction… and lo and behold, it synced on 150 Mbit/sec both ways, reporting a distance of 450 meters. (This was through another house that was in the way, ie., no clear path. Did we mention the M5s are impossibly good for the price?)

So, after mounting it on the wall, we started building the rest of the network. Having managed switches everywhere paid off; instead of having to pull a cable from the wireless to the central ARM machine (an ODROID XU4) running as a router, we could just plug it into the closest participant switch and configure the ports. I'm aware that most people would consider VLANs overkill for a 200-person network, but it really helps in flexibility when something unexpected happens—and also in terms of cable.

However, as the rigging progressed and we started getting to the point where we could run test streams, it became clear that something was wrong. The Internet link just wasn't pushing the amount of bandwidth we wanted it to; in particular, the 5 Mbit/sec stream just wouldn't go through. (In parallel, we also had some problems with access points refusing to join the wireless controller, which turned out to be a faulty battery that caused the clock on the WLC to revert to year 2000, which in turn caused its certificate to be invalid. If we'd had Internet at that stage, it would have had NTP and never seen the problem, but of course, we didn't because we were still busy trying to figure out the best place on the roof at the time!)

Of course, frantic debugging ensued. We looked through every setting we could find on the M5s, we moved them to a spot with clear path and pointed them properly at each other (bringing the estimated link up to 250 Mbit/sec) and upgraded their software to the latest version. Nothing helped at all.

Eventually, we started looking elsewhere in our network. We run a fairly elaborate shaping and tunneling setup; this allows us to be fully in control over relative bandwidth prioritization, both ways (the stream really gets dedicated 5 Mbit/sec, for example), but complexity can also be scary when you're trying to debug. TCP performance can also be affected by multiple factors, and then of course, there's the Internet on its way. We tried blasting UDP at the other end full speed, which the XU4 would police down to 13 Mbit/sec, accurate to two decimals, for us (20 Mbit uplink, minus 5 for the stream, minus some headroom)—but somehow, the other end only received 12. Hmm. We reduced the policer to 12 Mbit/sec, and only got 11… what the heck?
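The policer figures quoted above follow from simple arithmetic, which can be sketched as below; the 2 Mbit/sec headroom value is an assumption here, since the post only says "some headroom":

```python
def policer_rate_mbit(uplink_mbit: int, stream_mbit: int, headroom_mbit: int = 2) -> int:
    """Bandwidth left for participant traffic after reserving the
    stream and some headroom on the uplink."""
    return uplink_mbit - stream_mbit - headroom_mbit

# 20 Mbit uplink, minus 5 reserved for the stream, minus 2 headroom
print(policer_rate_mbit(20, 5))  # 13
```

With a healthy link, blasting UDP should thus arrive at exactly this rate at the far end; the consistent one-megabit shortfall was the clue that packets were being lost elsewhere.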

At this point, we understood we had a packet loss problem on our hands. It would either be the XU4s or the M5s; something dropped 10% or so of all packets, indiscriminately. Again, the VLANs helped; we could simply insert a laptop on the right VLAN and try to send traffic outside of the XU4. We did so, and after some confusion, we figured out it wasn't that. So what was wrong with the M5s?

It turns out the latest software version has iperf built-in; you can simply ssh to the box and run from there. We tried the one on the ISP side; it got great TCP speeds to the Internet. We tried the one on the local side; it got… still great speeds! What!?

So, after six hours of debugging, we found the issue; there was a faulty Cat5 cable between two switches in the hall, that happened to be on the path out to the inner M5. Somehow it got link at full gigabit, but it caused plenty of dropped packets—I've never seen this failure mode before, and I sincerely hope we'll never be seeing it again. We replaced the cable, and tada, Internet.

Next week, we'll talk about how the waffle irons started making only four hearts instead of five, and how we traced it to a poltergeist that we brought in a swimming pool when we moved from Ås to Flateby five years ago.

16 August, 2018 08:00PM

hackergotchi for Bdale Garbee

Bdale Garbee

Mixed Emotions On Debian Anniversary

When I woke up this morning, my first conscious thought was that today is the 25th anniversary of a project I myself have been dedicated to for nearly 24 years, the Debian GNU/Linux distribution. I knew it was coming, but beyond recognizing the day to family and friends, I hadn't really thought a lot about what I might do to mark the occasion.

Before I even got out of bed, however, I learned of the passing of Aretha Franklin, the Queen of Soul. I suspect it would be difficult to be a caring human being, born in my country in my generation, and not feel at least some impact from her mere existence. Such a strong woman, with amazing talent, whose name comes up in the context of civil rights and women's rights beyond the incredible impact of her music. I know it's a corny thing to write, but after talking to my wife about it over coffee, Aretha really has been part of "the soundtrack of our lives". Clearly, others feel the same, because in her half-century-plus professional career, "Ms Franklin" won something like 18 Grammy awards, the Presidential Medal of Freedom, and other honors too numerous to list. She will be missed.

What's the connection, if any, between these two? In 2002, in my platform for election as Debian Project Leader, I wrote that "working on Debian is my way of expressing my most strongly held beliefs about freedom, choice, quality, and utility." Over the years, I've come to think of software freedom as an obvious and important component of our broader freedom and equality. And that idea was strongly reinforced by the excellent talk Karen Sandler and Molly de Blanc gave at Debconf18 in Taiwan recently, in which they pointed out that in our modern world where software is part of everything, everything can be thought of as a free software issue!

So how am I going to acknowledge and celebrate Debian's 25th anniversary today? By putting some of my favorite Aretha tracks on our whole house audio system built entirely using libre hardware and software, and work to find and fix at least one more bug in one of my Debian packages. Because expressing my beliefs through actions in this way is, I think, the most effective way I can personally contribute in some small way to freedom and equality in the world, and thus also the finest tribute I can pay to Debian... and to Aretha Franklin.

16 August, 2018 05:26PM

hackergotchi for Bits from Debian

Bits from Debian

25 years and counting

Debian is 25 years old by Angelo Rosa

When the late Ian Murdock announced 25 years ago in comp.os.linux.development, "the imminent completion of a brand-new Linux release, [...] the Debian Linux Release", nobody would have expected the "Debian Linux Release" to become what's nowadays known as the Debian Project, one of the largest and most influential free software projects. Its primary product is Debian, a free operating system (OS) for your computer, as well as for plenty of other systems which enhance your life. From the inner workings of your nearby airport to your car entertainment system, and from cloud servers hosting your favorite websites to the IoT devices that communicate with them, Debian can power it all.

Today, the Debian project is a large and thriving organization with countless self-organized teams comprised of volunteers. While it often looks chaotic from the outside, the project is sustained by its two main organizational documents: the Debian Social Contract, which provides a vision of improving society, and the Debian Free Software Guidelines, which provide an indication of what software is considered usable. They are supplemented by the project's Constitution which lays down the project structure, and the Code of Conduct, which sets the tone for interactions within the project.

Every day over the last 25 years, people have sent bug reports and patches, uploaded packages, updated translations, created artwork, organized events about Debian, updated the website, taught others how to use Debian, and created hundreds of derivatives.

Here's to another 25 years - and hopefully many, many more!

16 August, 2018 06:50AM by Ana Guerrero Lopez

hackergotchi for Norbert Preining

Norbert Preining

DebConf 18 – Day 3

Most of Japan is on summer vacation now, only a small village in the north resists the siege, so I am continuing my reports on DebConf. See DebConf 18 – Day 1 and DebConf 18 – Day 2 for the previous ones.

With only a few talks of interest for me in the morning, I spent the time preparing my second presentation Status of Japanese (and CJK) typesetting (with TeX in Debian) during the morning, and joined for lunch and the afternoon session.

First to attend was the Deep Learning BoF by Mo Zou. Mo reported on the problems of getting deep learning tools into Debian: not only the software itself, for which proprietary drivers for GPU acceleration are often highly advisable, but also the data sets (pre-trained models), which often fall under non-free licenses, pose problems for integration into Debian. With several deep learning practitioners around, we had a lively discussion about how to deal with all this.

Next up was Markus Koschany with Debian Java, where he gave an overview on the packaging tools for Java programs and libraries, and their interaction with the Java build tools like Maven, Ant, and Gradle.

After the coffee break I gave my talk about Status of Japanese (and CJK) typesetting (with TeX in Debian), and I must say I was quite nervous. As a non-CJK-native foreigner, speaking about the intricacies of typesetting with Kanji was a bit of a challenge. In the end I think it worked out quite well, and I got some interesting questions after the talk.

Last for today was Nathan Willis’ presentation Rethinking font packages—from the document level down. With design, layout, and fonts being close to my personal interests, too, this talk was one of the highlights for me. Starting from a typical user’s workflow in selecting a font set for a specific project, Nathan discussed the current situation of fonts in Linux environment and Debian, and suggested improvements. Unfortunately what would be actually needed is a complete rewrite of the font stack, management, system organization etc, a rather big task at hand.

After the group photo shot by Aigars Mahinovs (who also provided several more photos) and a relaxed dinner, I went climbing with Paul Wise at a nearby gym. It was – not surprisingly – quite humid and warm in the gym, so the amount of sweat I lost was considerable, but we had some great boulders and a fun time. In addition to that, I found a very nice book, nice for two reasons: first, it was about one of my (and my daughters’ – seems to be connected) favorite movies, Totoro by Miyazaki Hayao, and second, it was written in Taiwanese Mandarin with a kind of Furigana to aid reading for kids – something that is very common in Japan (even in books for adults in the case of rare readings), but which I had never seen before with Chinese. The proper name is Zhùyīn Zìmǔ 註音字母 or (more popularly) Bopomofo.

This interesting and long day finished in my hotel with a cold beer to compensate for the loss of minerals during climbing.

16 August, 2018 12:46AM by Norbert Preining

August 14, 2018

Enrico Zini

DebConf 18

This is a quick recap of what happened during my DebConf 18.

24 July:

  • after buying a new laptop I didn't set up a build system for Debian on it. I finally did it, with cowbuilder. It was straightforward to set up and works quite fast.
  • shopping for electronics. Among other things, I bought myself a new USB-C power supply that I can use for laptop and phone, and now I can have a power supply for home and one always in my backpack for traveling. I also bought a new pair of headphones+microphone, since I cannot wear in-ear, and I only had the in-ear ones that came with my phone.
  • while trying out the new headphones, I unexpectedly started playing loud music in the hacklab. I then debugged audio pin mapping on my new laptop and reported #904437
  • fixed debtags.debian.org nightly maintenance scripts, which have been mailing me errors for a while.

25 July:

26 July:

  • I needed to debug a wreport FTBFS on a porterbox, and since the procedure to set up a build system on a porterbox was long and boring, I wrote debug-on-porterbox
  • Fixed a wreport FTBFS and replaced it with another FTBFS, that I still haven't managed to track down.

27 July:

  • worked on multiple people talk notes, alone and with Rhonda
  • informal FD/DAM brainstorming with jmw
  • local antiharassment coordination with Tassia and Taowa
  • talked to ansgar about how to have debtags tags reach ftp-master automatically, without my manual intervention
  • watched a wonderful lunar eclipse

28 July:

  • implemented automatic export of debtags data for ftp-master
  • local anti-harassment team work

29 July:

30 July:

31 July:

  • Implemented F-Droid antifeatures as privacy:: Debtags tags

01 August:

  • Day trip and barbecue

02 August:

03 August:

  • Multiple People talk
  • Debug Boot of my laptop with UEFI with Steve, and found out that HP firmware updates for it can only be installed using Windows. I am really disappointed with HP for this, given it's a rather high-end business laptop.

04 August:

14 August, 2018 01:08PM

Reproducible builds folks

Reproducible Builds: Weekly report #172

Here’s what happened in the Reproducible Builds effort between Sunday August 5 and Saturday August 11 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

There were a handful of updates to diffoscope, our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages:

jenkins.debian.net development

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

14 August, 2018 06:17AM

Minkush Jain

Google Summer of Code 2018 Final Report

This is the summary of my work done during Google Summer of Code 2018 with Debian.

Project Title: Wizard/GUI helping new interns/students get started

Final Work Product: https://wiki.debian.org/MinkushJain/WorkProduct

Mentor: Daniel Pocock

Codebase: gsoc-2018-experiments

CardBook debian/sid

What is Google Summer of Code?

Google Summer of Code is a global program focused on introducing students to open source software development. Students work on a 3-month programming project with an open source organization during their break from university.

As you can probably guess, there is a high demand for its selection as thousands of students apply for it every year. The program offers students real-world experience to build software along with collaboration with the community and other student developers.

Project Overview

This project aims at developing tools and packages which would simplify the process for new applicants in the open source community to get the required setup. It consists of a GUI/wizard with integrated scripts to set up various communication and development tools like PGP and SSH keys, DNS, IRC, XMPP, and mail filters, along with Jekyll blog creation, mailing list subscription, a project planner, searching for developer meet-ups, a source code scanner, and much more! The project is free and open source, hosted on Salsa (Debian's GitLab instance).

I created various scripts and packages that automate tasks and help a user get started: managing contacts and email, subscribing to developers' lists, getting started with GitHub, IRC, and more.

Mailing Lists Subscription

I made a script for fully automating the subscription to various Debian mailing lists. The script also automates the confirmation reply, completing the whole procedure for the user.

It works for all ten important Debian mailing lists for a newcomer like ‘debian-outreach’, ‘debian-announce’, ‘debian-news’, ‘debian-devel-announce’ and more.

I also spent time refactoring the code with my mentors to make it work as a stand-alone script by adding utility functions and fixing the syntax.

The video demo of the script had also been added in my blog.

It takes the user's email address and the automated reply code received from @lists.debian.org, and subscribes them to the mailing list. The script uses the requests library to submit the form data to the lists' web server.
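As a sketch of the confirmation-reply half: the `<list>-request@lists.debian.org` address is the real Debian lists convention, but the exact subject format carrying the confirmation code below is an illustrative assumption, not necessarily what the script sends.

```python
from email.message import EmailMessage

def build_confirmation_reply(user_email: str, list_name: str, code: str) -> EmailMessage:
    """Build the reply that confirms a Debian mailing list subscription.

    The -request address is the real convention; putting the
    confirmation code in the subject is an assumption for illustration.
    """
    msg = EmailMessage()
    msg["From"] = user_email
    msg["To"] = f"{list_name}-request@lists.debian.org"
    msg["Subject"] = f"CONFIRM {code}"
    msg.set_content("")  # the confirmation needs no body
    return msg

reply = build_confirmation_reply("alice@example.org", "debian-news", "abc123")
print(reply["To"])  # debian-news-request@lists.debian.org
```

Actually sending it would then be one call to `smtplib.SMTP(...).send_message(reply)`.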

For the application task, I also created a basic GUI for the program using PyQt.

Libraries used:

  • requests
  • smtplib
  • PyQt
  • MIME handlers

This is a working demo of the script. The user can enter any Debian mailing lists to subscribe to it. They have to enter the unique code received by email to confirm their subscription:


Thunderbird Setup

This task involved writing a program to simplify the setup procedure of Thunderbird for a new user.

I made a script which kills the Thunderbird process if it is running and then edits the ‘prefs.js’ configuration file to modify configuration settings of the software.

The program overrides the existing settings by creating a ‘user.js’ file with custom settings, which take effect as soon as Thunderbird is reopened.

I also added the option to apply the changes to all profiles or to a specific one, at the user’s choice.
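The ‘user.js’ generation step can be sketched as follows; the preference names in the example are assumptions for illustration, not verified Thunderbird settings:

```python
import os
import subprocess

def format_pref(name, value):
    """Render one user.js line; booleans are written as true/false,
    integers bare, and everything else as a quoted string."""
    if isinstance(value, bool):          # check bool before int: True is an int
        rendered = "true" if value else "false"
    elif isinstance(value, int):
        rendered = str(value)
    else:
        rendered = '"%s"' % value
    return 'user_pref("%s", %s);' % (name, rendered)

def write_user_js(profile_dir, prefs):
    """Overwrite user.js in a profile; Thunderbird applies it on next start."""
    path = os.path.join(profile_dir, "user.js")
    with open(path, "w") as f:
        for name, value in prefs.items():
            f.write(format_pref(name, value) + "\n")

if __name__ == "__main__":
    # Kill a running Thunderbird first so it does not overwrite our changes.
    subprocess.call(["pkill", "thunderbird"])
    # Profile path and pref names here are illustrative assumptions.
    write_user_js(os.path.expanduser("~/.thunderbird/abc123.default"),
                  {"mail.identity.default.attach_vcard": True,
                   "mail.biff.play_sound": False})
```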

Features:

  • Examines system process to find if Thunderbird is running in background and kills it.

  • Searches dynamically in user’s system to find the configuration file’s path.

  • Users can choose which profiles they want to change.

  • Modifies the default settings to accomplish the following:

    • User’s v-card is automatically appended in mails and posts.
    • Top-posting configuration has been setup by default.
    • Reply heading format is changed.
    • Plain-text mode made default for new mails.
    • No sound and alerts for incoming mails.

and many more…

Libraries used:

  • Psutil
  • Os
  • Subprocess


Source Code Scanner

I created a program to analyse a user’s project directory to find which programming languages they are proficient in.

The script helps them realise which languages and skills they prefer by finding the percentage of each language present.

It scans through all the file extensions (like .py, .java, .cpp) which are stored in a separate file, and examines them to display the total number of lines and the percentage of each language present in the directory.

The script uses the pygount library to scan all folders for source code files. Pygount builds on the Pygments syntax-highlighting package to analyse source code, so it can examine any language Pygments supports.
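The percentage calculation itself can be sketched with the standard library alone (the real script delegates language detection to pygount/Pygments); the extension table here is an assumed subset of the one the script keeps in a separate file:

```python
import os

# Assumed subset of the extension table the script stores separately.
EXTENSIONS = {".py": "Python", ".java": "Java", ".cpp": "C++", ".c": "C"}

def count_lines(root):
    """Count source lines per language under a directory tree."""
    totals = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            lang = EXTENSIONS.get(os.path.splitext(name)[1])
            if lang is None:
                continue  # not a known source file extension
            with open(os.path.join(dirpath, name), errors="ignore") as f:
                totals[lang] = totals.get(lang, 0) + sum(1 for _ in f)
    return totals

def percentages(totals):
    """Convert raw line counts into per-language percentages."""
    grand = sum(totals.values()) or 1
    return {lang: 100.0 * n / grand for lang, n in totals.items()}
```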

Libraries used:

  • os (operating system interfaces)
  • pygount

I added a Python script with all common file extensions included in it.

The script can be executed easily by entering the directory’s path.

Research:

  • Searched Python’s glob library to iterate through home directory.

  • Using Github Linguists library to analyse code.

  • Pygments library to search languages through syntax highlighter.

This is a working demo of the script. The user can enter their project’s directory and the script will analyse it to publish the result:


CardBook Debian Package

For managing contacts/calendars, Thunderbird extensions need to be installed and set up.

I created a Debian package for CardBook, a Thunderbird add-on for managing contacts using the vCard and CardDAV standards.

I have written a blog here explaining the entire development process, as well as the tools used to make it comply with Debian standards.

Creating a Debian package from scratch involved a lot of learning from resources and wiki pages.

I created the package using debhelper commands, and included the CardBook extension inside the package. I modified the binary package files like changes, control, rules, copyright for its installation.

I also created a Local Debian Repository for testing the package.

I created four updated versions of the package, which are present in the changelog.

I used Lintian tool to check for bugs, packaging errors and policy violations. I spent some time to remove all the Lintian errors in 1.3.0 version of the package.

I took help from mentors on IRC (#debian-mentors) and mailing lists during the packaging process. Finally, I added mozilla-devscripts to build the package using xul-ext architecture.

I updated the ‘watch’ file to automatically pull tags from upstream.

I mailed Carsten Schoenert, Debian Maintainer of Thunderbird and Lightning package, who helped me a lot along with my mentor, Daniel during the packaging process.

CardBook Debian Package: https://salsa.debian.org/minkush-guest/CardBook/tree/debian-package

Blog: http://minkush.me/cardbook-debian-package/

I created and setup my public and private GPG key using GnuPg and added them on mentors.debian.net.

I signed the package files including ‘.changes’, ‘.dsc’, ‘.deb’ using ‘dpkg-sig’ and ‘debsign’ and then verified them with my keys.

Finally, the package has been uploaded on mentors.debian.net using dput HTTPS method.

Link: https://mentors.debian.net/package/cardbook

This is a video demo showing the package’s installation inside Thunderbird. As can be clearly observed, CardBook was successfully installed as a Thunderbird add-on:


IRC Setup

One of the most challenging tasks for a new contributor is getting started with Internet Relay Chat (IRC) and its setup.

I made an IRC Python bot to simplify the initial setup required. The script uses socket programming to connect to the Freenode server and exchange data.

Features:

  • It registers a new nickname for the user on the Freenode server by sending the user’s credentials to NickServ. An email is received on successful registration of the nickname.

  • The script checks whether the entered email is invalid or the nickname chosen by the user is already registered on the server. If this is the case, the server disconnects and the user is prompted to re-enter the details.

  • It identifies the nickname with the server before joining any channel by messaging NickServ, provided the nick registration was successful.

  • It displays the list of all available ‘#debian’ channels live on the server with a minimum of 30 members.

  • The script connects to and joins any IRC channel entered by the user and displays the live chat occurring on the channel.

  • Implements the ping-pong protocol to keep the connection alive. This makes sure that the connection is not lost during operation, and simulates human interaction with the server by responding to its pings.

  • It continuously prints all data received from the server after decoding it with UTF-8, and closes the connection after the operation is done.

Libraries:

Socket library
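The core of such a bot (connecting, registering a nick, and answering server pings) can be sketched like this; the nick, channel and server details are placeholders, and a real client would wait for the server welcome before joining:

```python
import socket

def handle_line(line):
    """Answer server PINGs to keep the connection alive; no reply otherwise."""
    if line.startswith("PING"):
        return "PONG" + line[len("PING"):]
    return None

def run(nick, channel, host="chat.freenode.net", port=6667):
    sock = socket.create_connection((host, port))
    # Register the connection, then join the requested channel.
    sock.sendall(("NICK %s\r\nUSER %s 0 * :%s\r\n" % (nick, nick, nick)).encode())
    sock.sendall(("JOIN %s\r\n" % channel).encode())
    buf = ""
    while True:
        buf += sock.recv(4096).decode("utf-8", errors="replace")
        while "\r\n" in buf:
            line, buf = buf.split("\r\n", 1)
            print(line)                      # live chat from the channel
            reply = handle_line(line)
            if reply:
                sock.sendall((reply + "\r\n").encode())
```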

This is a working video demo for the IRC script.

To display one of its features, I entered an already-registered nickname (Mjain) to test it. The script analyses the server’s response and asks the user to enter a different one.


Salsa and Github Registration

I created scripts using Selenium Web Driver to automate new account creation on Salsa and Github.

This task provides a quick start for users to begin contributing to open source by registering accounts on code-hosting platforms for version control.

I learned Selenium automation techniques in Python to accomplish it. Selenium controls the browser through a web driver using automated scripts. (Tested with geckodriver for Firefox.)

I used Pytest to write test scripts for both programs, which check whether the account was successfully created.

Libraries used:

  • Selenium Web driver
  • Geckodriver
  • Pytest
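A sketch of the approach: the element ids below are assumptions for illustration (the real locators come from the live sign-up page), and a small pure validation helper is kept separate so it can be exercised by Pytest without a browser:

```python
def validate_signup(username, password, confirm):
    """Pre-flight checks before the form is submitted."""
    if not username or " " in username:
        return False
    return password == confirm and len(password) >= 8

def register_on_salsa(username, email, password):
    # Imported lazily so the pure helper above works without a browser.
    from selenium import webdriver

    driver = webdriver.Firefox()  # requires geckodriver on PATH
    try:
        driver.get("https://salsa.debian.org/users/sign_up")
        # Element ids below are illustrative assumptions.
        driver.find_element("id", "new_user_username").send_keys(username)
        driver.find_element("id", "new_user_email").send_keys(email)
        driver.find_element("id", "new_user_password").send_keys(password)
        driver.find_element("id", "new_user_password").submit()
    finally:
        driver.quit()
```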

Extract Mail Data

The aim for this task was to extract data from user’s email for ease of managing contacts.

I created a script to analyse a user’s email and extract all phone numbers present in it. The program fetches all mails from the server using IMAP and decodes them using UTF-8 to obtain a readable format.

Features:

  • Easy login on mail server through user’s credentials

  • Obtains the date and time for all mails

  • Option to iterate through all or unseen mails

  • Extracts the Sender, Receiver, Subject and body of the email.

It scans the body of each message to look for phone numbers using python-phonenumbers, and stores all of them along with their details in a text file on the user’s system.

Features:

  • Converts all the telephone numbers to the standard international format E.164 (adds the country code if not already present)

  • Using geocoder to find the location of the phone numbers

  • Also extracts the Carrier name and Timezone details for all the phone numbers.

  • Saves all this data along with sender’s details in a file and also displays it on the terminal.

Libraries used:

  • Imaplib
  • IMAPClient
  • Python port of Google’s libphonenumber (phonenumbers)

The original libphonenumber is Google’s popular library for parsing, formatting, and validating international phone numbers.

I also researched Telify Mozilla plugin for a similar algorithm to have click-to-save phone numbers.

This is a working video demo for the script:


HTTP Post Salsa Registration

I have created another script to automate the process of new account creation on Salsa using HTTP Post.

The script uses requests library to send HTTP requests on the website and send data in forms.

I used Beautiful Soup 4 library to parse and navigate HTML and XML data inside the URL and get tokens and form fields within the website.

The script checks for password mismatch and duplicate usernames and creates a new account instantly.

Libraries used:

  • Requests
  • Beautiful Soup
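The token-scraping step can be sketched like this; the visible form field names are assumptions modelled on a GitLab sign-up form, while the hidden-field extraction is pure and easy to test:

```python
import requests
from bs4 import BeautifulSoup

def hidden_fields(html):
    """Collect hidden <input> fields (e.g. the CSRF authenticity_token)."""
    soup = BeautifulSoup(html, "html.parser")
    return {tag["name"]: tag.get("value", "")
            for tag in soup.find_all("input", type="hidden")
            if tag.has_attr("name")}

def register(url, username, email, password):
    """Fetch the sign-up page, carry its hidden tokens over, and POST the form."""
    session = requests.Session()
    form = hidden_fields(session.get(url).text)
    # Field names below are illustrative assumptions.
    form.update({"new_user[username]": username,
                 "new_user[email]": email,
                 "new_user[password]": password})
    return session.post(url, data=form)
```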

This is a working demo for the script. An email is received from Salsa which confirms that new account has been created:


Mail Filters Setup

One of the problems faced by a developer is filtering hundreds of unnecessary mails incoming from mailing lists, promotion websites, and spam.

The email client does the job to a certain extent, but many emails are still left which need to be sorted into categories.

For this purpose, I created a script which examines user’s mailbox and filters mails into labels and folders in Gmail, by creating them. The script uses IMAP to fetch mails from the server.
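The fetch-and-label loop can be sketched with imaplib; the sender-to-label rules below are illustrative assumptions (the real script derives its categories from the user’s mailbox):

```python
import imaplib

# Illustrative sender -> Gmail label rules; assumptions for this sketch.
RULES = {"lists.debian.org": "Debian-Lists", "github.com": "Notifications"}

def label_for(sender):
    """Pick a label for a message based on the sender's domain."""
    for domain, label in RULES.items():
        if sender.endswith(domain) or sender.endswith(domain + ">"):
            return label
    return None

def sort_inbox(user, password):
    conn = imaplib.IMAP4_SSL("imap.gmail.com")
    conn.login(user, password)
    conn.select("INBOX")
    _status, data = conn.search(None, "ALL")
    for num in data[0].split():
        _st, msg = conn.fetch(num, "(BODY[HEADER.FIELDS (FROM)])")
        sender = msg[0][1].decode("utf-8", errors="replace").strip()
        label = label_for(sender)
        if label:
            conn.copy(num, label)  # copying files the message under the label
    conn.logout()
```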

Libraries used:

Acknowledgment:

I would like to thank Debian and Google for giving me this opportunity to work on this project.

I am grateful to my mentors Daniel Pocock, Urvika Gola, Jaminy Prabharan and Sanyam Khurana for their constant help throughout GSoC.

Finally, this journey wouldn’t have been possible without my friends and family who supported me.

Special Mention

I would like to thank Carsten Schönert and Andrey Rahmatullin for their help with Debian packaging.

14 August, 2018 04:00AM by Minkush Jain

Athos Ribeiro

Google Summer of Code 2018 Final Report: Automatic Builds with Clang using Open Build Service

Project Overview

Debian package builds with Clang were performed from time to time through massive rebuilds of the Debian archive on AWS. The results of these builds are published on clang.debian.net. This summer project aimed to automate Debian archive Clang rebuilds by substituting the current builds on clang.debian.net with Open Build Service (OBS) builds.

Our final product consists of a repository with salt states to deploy an OBS instance which triggers Clang builds of Debian Unstable packages as soon as they get uploaded by their maintainers.

An instance of our clang builder is hosted at irill8.siege.inria.fr and the Clang builds triggered so far can be seen here.

My Google Summer of Code project can be seen at summerofcode.withgoogle.com/projects/#6144149196111872.

My contributions

The major contribution for the summer is our running OBS instance at irill8.siege.inria.fr.

Salt states to deploy our OBS instance

We created a series of Salt states to deploy and configure our OBS instance. The states for local deploy and development are available at github.com/athos-ribeiro/salt-obs.

Commits

The commits above were condensed and submitted as a Pull Request to the project’s mentor github account, with production deployment configurations.

OBS Source Service to make gcc/clang binary substitutions

To perform Clang builds of deb packages, we substitute the GCC binaries with the Clang binaries in the builder’s chroot at build time. To do that, we use the OBS Source Services feature, which requires a package (performing the desired task) to be available to the target OBS project.

Our obs-service-clang-build package is hosted at github.com/athos-ribeiro/obs-service-clang-build.

Commits

Monitor Debian Unstable archive and trigger clang builds for newly uploaded packages

We also use two scripts to monitor the debian-devel-changes mailing list, watching for new package uploads in Debian Unstable, and trigger Clang builds in our OBS instance whenever a new upload is accepted.

Our scripts to monitor the debian-devel-changes mailing list and trigger Clang builds in our OBS instance are available at github.com/athos-ribeiro/obs-trigger-sid-builds.

Commits

OBS documentation contributions

During the summer, most of my work was to read OBS documentation and code to understand how to trigger Debian Unstable builds in OBS and how to perform customized Clang builds (replacing GCC).

My contributions

Pending PRs

We want to change the Clang build links at tracker.debian.org/pkg/firefox. To do so, we must change Debian distro-tracker to point to our OBS instance. As of the time this post was written, we have an open PR in distro-tracker to change the URLs:

Reports written through the summer

Adding new workers to the OBS instance

To add new workers to our current OBS instance, hosted at irill8.siege.inria.fr, set up new Salt minions and provision them with obs-common and obs-worker from github.com/opencollab/llvm-slave-salt. The assignment is done in the top.sls file.
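A corresponding top.sls entry might look like the following sketch; the minion id is a placeholder:

```yaml
# top.sls -- assign the OBS worker states to a new Salt minion
base:
  'obs-worker-2':      # hypothetical minion id
    - obs-common
    - obs-worker
```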

Future work

  • We want to extend our OBS instance with more projects to provide Upstream LLVM packages to Debian and derived distributions.
  • More automation is needed in our salt states. For instance, we may want to automate SSL certificates generation using Let’s encrypt.
  • During the summer, several issues were detected in the Debian Stable OBS packages. We want to work more closely with the OBS packagers to help improve the OBS packages and OBS itself.

Google Summer of Code experience

Working with Debian during the summer was an interesting experience. I did not expect to have as many problems as I did (see reports) with the OBS packages. These problems turned into hours of debugging and reading Perl code in order to understand how OBS processes communicate and trigger new builds. I also learned more about Debian packaging, salt and vagrant. I do expect to keep working with OBS and help maintain the service we deployed during the summer. There’s still a lot of room for improvement and it is easy to see how the project benefits FLOSS communities.

14 August, 2018 03:20AM

August 13, 2018

Iustin Pop

Eiger Bike Challenge 2018

So… another “fun” ride. Probably the most fun ever, both subjectively and in terms of Strava’s relative effort level. And that despite it being the “short” version of the race (55km/2’500m ascent vs. 88km/3’900m).

It all started very nicely. About five weeks ago, I started the Sufferfest climbing plan, and together with some extra cross-training, I was going very strong, feeling great and seeing my fitness increasing constantly. I was quite looking forward to my first time at this race.

Then, two weeks ago, after already having registered, family gets sick, then I get sick—just a cold, but with a persistent cough that has not gone away even after two weeks. The week I got sick my training plan went haywire (it was supposed to be the last heavy week), and the week of the race itself I was only half-recovered so I only did a couple of workouts.

With two days before the race, I was still undecided whether to actually try to do it or not. Weather was quite cold, which was on the good side (I was even a bit worried about too cold in the morning), then it turned to the better.

So, what did I have to lose? I went to the start of the 55km version. As to length, this is on the easy side. But it does have 2’500m of ascent, which is a lot for me for such a short ride. I’ve done this amount of ascent before—2017 BerGiBike, long route—but that was “spread” over 88km of distance, in lower temperatures and with quite a few kilograms fewer (on my body, not on the bike), and it still killed me.

The race starts. Ten minutes in, 100m gained; by 18 minutes, 200m already. By 1h45m I’m done with the first 1’000m of ascent, and at this time I’m still on the bike. But I was also near the end of my endurance reserve, and even worse, at around 1h30m in, the sun was finally high enough in the sky to start shining on me, and the temperature went from 7-8°C to 16°. I pass Grosse Scheidegg on the bike; a somewhat flat 5k segment follows to the First station, but this flat segment still has around 300m of ascent, with one portion that VeloViewer says is around 18% grade. After pedalling one minute at this grade, I give up, get off the bike, and start pushing.

And once this mental barrier of “I can bike the whole race” is gone, it’s so much easier to think “yeah, this looks steep, let’s get off and push” even though one might still have enough reserves to bike uphill. In the end, what’s the difference between biking at 5km/h and pushing at 4.0-4.3km/h? Not much, and heart rate data confirms it.

So, after biking all the way through the first 1’100m of ascent, the remaining 1’400m were probably half biking, half pushing. And that might still be a bit generous. Temperatures went all the way up to 32.9°C at one point, but came back down a bit and stabilised at around 25°. Min/Avg/Max overall were 7°/19°/33° - this is not my ideal weather, for sure.

Other fun things:

  • Average (virtual) power over time as computed by VeloViewer went from 258W at 30m, to 230W at the end of first hour, 207W at 2h, 164W at 4h, and all the way down to 148W at the end of the race.
  • The brakes faded enough on the first long descent that in one corner I had to half-way jump off the bike and stop it against the hill; I was much more careful later to avoid this, which led to very slow going down gravel roads (25-30km/h, not more); I need to fix this ASAP.
  • By the last third of the race, I was tired enough that even taking a 2-minute break didn’t relax my heart rate, and I was only able to push the bike uphill at ~3km/h.
  • The steepest part of the race (a couple of hundred meters at 22-24%) was also in the hottest temperature (33°).
  • At one point, there was a sign saying “Warning, ahead 2.5km uphill with 300m altitude gain”; I read that as “slowly pushing the bike for 2.5km”, and that was true enough.
  • In the last third of the race, there was a person going around the same speed as me (in the sense that we were passing each other again and again, neither gaining significantly). But he was biking uphill! Not much faster than my push, but still biking! Hat off, sir.
  • My coughing bothered me a lot (painful coughing) in the first two thirds, by the end of the race it was gone (now it’s back, just much better than before the race).
  • I met someone while pushing and we went together for close to two hours (on and off the bike), I think; lots of interesting conversation, especially as pushing is very monotonous…
  • At the end of the race (really, after the finish point), I was “ok, now what?” Brain was very confused that more pushing is not needed, especially as the race finishes with 77m of ascent.
  • BerGiBike 2017 (which I didn’t write about, apparently) was exactly the same recorded ascent to the meter: 2’506, which is a fun coincidence ☺

The route itself is not the nicest one I’ve done at a race. Or rather, the views are spectacular, but a lot of the descent is on gravel or even asphalt roads, and the single-trails are rare and on the short side. And parts of the descents are difficult enough that I skipped them, which in many other races didn’t happen to me. On the plus side, they had very good placement of the official photographers, I think one of the best setups I’ve seen (as to the number of spots and their positioning).

And final fun thing: I was not the last! Neither overall nor in my age category:

  • In my age category, I was placed 129 out of 131 finishers, and there were another six DNFs.
  • Overall (55km men), I was 391 out of 396 finishers, plus 17 DNF.

So, given my expectations for the race—I only wanted to finish—this was a good result. Grand questions:

  • How much did my sickness affect me? Especially as lung capacity is involved, and this being between 1’000 and 2’000m altitude, when I do my training below 500m?
  • How much more could I have pushed the bike? E.g. could I push all above 10%, but bike the rest? What’s the strategy when some short bits are 20%? Or when there’s a long one at ~12%?
  • If I had an actual power meter, could I do much better by staying below my FTP, or below 90% FTP at all times? I tried to be careful with heart rate, but coupled with temperature increase this didn’t go as well as I thought it would.
  • My average overall speed was 8.5km/h. First in 55km category was 19.72km/h. In my age category and non-licensed, first one was 18.5km/h. How, as in how much training/how much willpower does that take?
  • Even better, in the 88km and my age category, first placed speed was 16.87km/h, finishing this longer route more than one hour faster than me. Fun! But how?

In any case, at my current weight/fitness level, I know what my next race profile will be. I know I can bike more than one thousand meters of altitude in a single long (10km) uphill, so that’s where I should aim at. Or not?

Closing with one picture to show how the views on the route are:

Yeah, that’s me ☺

And with that, looking forward to the next trial, whatever it will be!

13 August, 2018 09:50PM


Thomas Goirand

Official Debian testing OpenStack image news

A few things happened to the testing image, thanks to Steve McIntyre, myself, and … some DebConf18 foo!

  • The buster/testing image wasn’t generated since last April, this is now fixed. Thanks to Steve for it.
  • The datasource_list is now correct, in both the Stretch and Testing images (previously, cloudstack was set too early in the list, which made the image wait 120 seconds for a data source which wasn’t available when booting on OpenStack).
  • The buster/testing image is now using the new package linux-image-cloud-amd64. This made the qcow file shrink from 614 MB to 493 MB. Unfortunately, we don’t have a matching arm64 cloud kernel image yet, but it’s still nice to have this for the amd64 arch.

Please use the new images, and report any issue or suggestion against the openstack-debian-images package.

13 August, 2018 10:46AM by Goirand Thomas

Petter Reinholdtsen

A bit more on privacy respecting health monitor / fitness tracker

A few days ago, I wondered if there are any privacy respecting health monitors and/or fitness trackers available for sale these days. I would like to buy one, but do not want to share my personal data with strangers, nor be forced to have a mobile phone to get data out of the unit. I've received some ideas, and would like to share them with you. One interesting data point was a pointer to a Free Software app for Android named Gadgetbridge. It provides cloudless collection and storage of data from a variety of trackers. Its list of supported devices is a good indicator for units where the protocol is fairly open, as it is obviously being handled by Free Software. Other units are reportedly encrypting the collected information with their own public key, making sure only the vendor cloud service is able to extract data from the unit. The people contacting me about Gadgetbridge said they were using the Amazfit Bip and Xiaomi Band 3.

I also got a suggestion to look at some of the units from Garmin. I was told their GPS watches can be connected via USB and show up as a USB storage device with Garmin FIT files containing the collected measurements. While proprietary, FIT files apparently can be read at least by GPSBabel and the GpxPod Nextcloud app. It is unclear to me if they can read step count and heart rate data. The person I talked to was using a Garmin Forerunner 935, which is a fairly expensive unit. I doubt it is worth it for a unit where the vendor clearly is trying its best to move from open to closed systems. I still remember when Garmin dropped NMEA support in its GPSes.

A final idea was to build one's own unit, perhaps by basing it on a wearable hardware platform like the Flora Geo Watch. Sounds like fun, but I had more money than time to spend on the topic, so I suspect it will have to wait for another time.

While I was working on tracking down links, I came across an inspiring TED talk by Dave deBronkart about being an e-patient, and discovered the web site Participatory Medicine. If you too want to track your own health and fitness without having information about your private life floating around on computers owned by others, I recommend checking it out.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

13 August, 2018 07:00AM

August 12, 2018

Shashank Kumar

Google Summer of Code 2018 with Debian - Final Report

Three months of Google Summer of Code turned out to be life-changing for me. This is the summary of my work, which also serves as my final report for Google Summer of Code 2018.

GSoC and Debian

Preparations

My project is Wizard/GUI helping students/interns apply and get started and the final application is named New Contributor Wizard. It originated as the brainchild and Project Idea of Daniel Pocock for GSoC 2018 under Debian. I prepared the application task for the same and shared my journey through Open Source till GSoC 2018 in two of my blogs, From Preparations to Debian to Proposal and The Application Task and Results.

Project Overview

Sign Up Screen

New Contributor Wizard is a GUI application built to help new contributors get started with Open Source. The idea was to bring together all the Tools and Tutorials necessary for a person to learn and start contributing to Open Source. The application contains different courseware sections like Communication, Version Control System etc., and within each section there are respective Tools and Tutorials.

A Tool is an up-and-running service right inside the application which can perform tasks to help the user understand the concepts. For example, encrypting a message using the public key, decrypting the encrypted message using the private key, and so on; these tools can help the user better understand the concepts of encryption.

A Tutorial is comprised of lessons which contain text, images, questions and code snippets. It is a comprehensive guide to a particular concept. For example, Encryption 101, How to use git?, What is a mailing list? and so on.

In addition to providing the Tools and Tutorials, this application is built to be extended. One can easily contribute new Tutorials by just creating a JSON file, the process of which is documented in the project repository itself. Similarly, documentation for contributing Tools is present as well.

Project Details

Programming Language and Tools

For Development

For Testing

Environment

  • Pipenv for Python Virtual Environment
  • Debian 9 for Project Development and testing

Version Control System

For pinned dependencies and sub-dependencies one can have a look at the Pipfile and Pipfile.lock

My Contributions

The project was just an idea before GSoC, and I had to make all the implementation decisions, whether design or architecture, with the help of mentors. Below is the list of my contributions in the shape of merge requests; every merge request contains UI, application logic, tests, and documentation. My contributions can also be seen in the Changelog and Contribution Graph of the application.

Sign Up

Sign Up is the first screen a user is shown and asks for all the information required to create an account. It then takes the user to the Dashboard with all the courseware sections.

Merge request - Adds SignUp feature

Redmine Issue - Create SignUp Feature

Feature In Action (updated working of the feature)

Sign In

Alternate to Sign Up, the user has the option to select Sign In to use an existing account in order to access the application.

Merge Request - Adds SignIn feature

Redmine Issue - Create SignIn Feature

Feature In Action (updated working of the feature)

Dashboard

The Dashboard is the protagonist screen of the application. It contains all the courseware sections and their respective Tools and Tutorials.

Merge Request - Adds Dashboard feature

Redmine Issue - Implementing Dashboard

Feature In Action (updated working of the feature)

Adding Tool Architecture

Every courseware section can have respective Tools and Tutorials. To add Tools to a section, I devised an architecture and implemented it on Encryption to add 4 different Tools. They are:

  • Create Key Pair
  • Display and manage Key Pair
  • Encrypt a message
  • Decrypt a message

Merge Request - Adding encryption tools

Redmine Issue - Adding Encryption Tools

Feature In Action (updated working of the feature)

Adding Tutorial Architecture

Similar to Tools, Tutorials can be found in any courseware section. I have created a Tutorial Parser which can take a JSON file and build the GUI for the Tutorial without any coding required. This way, folks can easily contribute Tutorials to the project. I added an Encryption 101 Tutorial to showcase the use of the Tutorial Parser.
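A hypothetical sketch of the idea (the authoritative JSON schema is documented in the project repository): a parser in this style loads the file and builds one widget per lesson:

```python
import json

# Hypothetical tutorial file shape; the real schema is documented
# in the new-contributor-wizard repository.
SAMPLE = """
{
  "title": "Encryption 101",
  "lessons": [
    {"type": "text", "content": "Public keys encrypt, private keys decrypt."},
    {"type": "question", "content": "Which key decrypts?", "answer": "private"}
  ]
}
"""

def lesson_titles(raw):
    """Walk the lesson list the way a parser would before building widgets."""
    data = json.loads(raw)
    return [(lesson["type"], lesson["content"]) for lesson in data["lessons"]]
```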

Merge Request - Adding encryption tutorials

Redmine Issue - Adding Encryption Tutorials

Feature In Action (updated working of the feature)

Adding 'Invite Contributor' block to Tools and Tutorials

In order to invite contributors to New Contributor Wizard, every Tools and Tutorials menu displays an additional block linking to the project repository.

Merge Request - Inviting contributors

Redmine Issue - Inviting contributors to the project

Feature In Action (updated working of the feature)

Adding How To Use

One of the courseware sections, How To Use, helps the user understand the different sections of the application in order to get the best out of it.

Merge Request - Updating How To Use

Redmine Issue - Adding How To Use in the application

Feature In Action (updated working of the feature)

Adding description to all the modules

All the courseware sections or modules need a simple description of what the user will learn using their Tutorials and Tools.

Merge Request - Description added to all the modules

Redmine Issue - Add a introduction/description to all the modules

Feature In Action (updated working of the feature)

Adding Generic Tools and Tutorials Menu

This feature abstracts the Tools and Tutorials architecture I mentioned earlier so that the menu architecture can be used by any of the courseware sections, following the DRY approach.

Merge Request - Adding Generic Menu

Redmine Issue - Adding Tutorial and Tools menu to all the modules

Tutorial Contribution Doc

A Tutorial in the application can be added using just a JSON file. As mentioned earlier, this is made possible by the Tutorial Parser. Comprehensive documentation has been added to help users understand how they can contribute Tutorials to the application for the world to take advantage of.

Merge Request - Tutorial contribution docs

Redmine Issue - Add documentation for Tutorial development

Tools Contribution Doc

A Tool in the application is built using the Kivy language and Python. Comprehensive documentation has been added to the project so that folks can contribute Tools for the world to take advantage of.

Merge Request - Tools contribution docs

Redmine Issue - Add documentation for Tools development

Adding a License to project

After having discussions with the mentors and a bit of research, GNU GPLv3 was finalized as the license for the project and has been added to the repository.

Merge Request - Adds License to project

Redmine Issue - Add a license to Project Repository

Allowing different timezones during Sign Up

The Sign Up feature was refactored to support different timezones for the user.

Merge Request - Allowing different timezones during signup

Redmine Issue - Allow different timezones

All other contributions

Here's a list of all the merge request I raised to develop a feature or fix an issue with the application - All merge request by Shashank Kumar

Here are all the issues/bugs/features I created, resolved or was associated with on Redmine - All the Redmine issues associated with Shashank Kumar

Packaging

The application has been packaged for PyPi and can be installed using either pip or pipenv.

Package - new-contributor-wizard

Packaging Tool - setuptools

To Do List

Weekly Updates And Reports

These reports were sent daily to a private mentors’ mail thread and weekly to the Debian Outreach mailing list.

Talk Delivered On My GSoC Project

On 12th August 2018, I gave a talk on How my Google Summer of Code project can help bring new contributors to Open Source during a meetup in Hacker Space, Noida, India. Here are the slides I prepared for my talk and a collection of photographs of the event.

Summary

New Contributor Wizard is ready for users who would like to get started with Open Source, as well as for folks who would like to contribute Tools and Tutorials to the application.

Acknowledgment

I would like to thank Google Summer of Code for giving me the opportunity of giving back to the community and Debian for selecting me for the project.

I would like to thank Daniel Pocock for his amazing blogs and the ideas he comes up with, which end up inspiring students and result in projects like this one.

I would like to thank Sanyam Khurana for constantly motivating me by reviewing every single line of code which I wrote to come up with the best solution to put in front of the community.

Thanks to all the loved ones who always believed in me and kept me motivated.

12 August, 2018 06:30PM by Shashank Kumar

hackergotchi for Vasudev Kamath

Vasudev Kamath

SPAKE2 In Golang: Finite fields of Elliptic Curve

In my previous post I talked about elliptic curve basics and how the operations are done on elliptic curves, including the algebraic representation which is needed for computers. For use in cryptography we need an elliptic curve group with a specified number of elements; that is what we call a finite field. We restrict the elliptic curve group using some big prime number p. In this post I will try to briefly explain finite fields over elliptic curves.

Finite Fields

A finite field, also called a Galois field, is a set with a finite number of elements. An example is the integers modulo p, where p is prime. Finite fields are denoted \(\mathbb Z/p\), \(GF(p)\) or \(\mathbb F_p\).

Finite fields have two operations: addition and multiplication. These operations are closed, associative and commutative. There exists a unique identity element, and an inverse element for every element of the set.

Division in a finite field is defined as \(x / y = x \cdot y^{-1}\), that is, x multiplied by the inverse of y. Subtraction \(x - y\) is defined in terms of addition as \(x + (-y)\), that is, x added to the negation of y. The multiplicative inverse can be calculated using the extended Euclidean algorithm, which I've not studied in detail myself, since there are readily available library functions which do it for us. But I hear from Ramakrishnan that it's an easy one.
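As a small illustrative sketch (my own, not from the post), here is field division via the multiplicative inverse in Python; the built-in pow computes the inverse using the extended Euclidean algorithm under the hood:

```python
p = 97  # a small prime; we work in GF(97)

def fdiv(x, y, p):
    """Compute x / y in GF(p) as x * y^{-1} mod p."""
    # pow(y, -1, p) computes the modular inverse of y
    return (x * pow(y, -1, p)) % p

inv = pow(13, -1, 97)
print(inv)              # → 15, since 13 * 15 = 195 = 2*97 + 1
print(fdiv(3, 13, 97))  # → 45, i.e. (3 * 15) % 97
```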

Elliptic Curve in \(\mathbb F_p\)

Now that we understand finite fields, we need to restrict our elliptic curves to a finite field. So our original definition of an elliptic curve becomes slightly different: we take the coordinates and the curve equation modulo p.

\begin{equation*} \begin{array}{rcl} \left\{(x, y) \in (\mathbb{F}_p)^2 \right. & \left. | \right. & \left. y^2 \equiv x^3 + ax + b \pmod{p}, \right. \\ & & \left. 4a^3 + 27b^2 \not\equiv 0 \pmod{p}\right\}\ \cup\ \left\{0\right\} \end{array} \end{equation*}

All our previous operations can now be written as follows, keeping in mind that the point computed here is the third intersection with the line, so the sum is \(P + Q = (x_R, -y_R)\)

\begin{equation*} \begin{array}{rcl} x_R & = & (m^2 - x_P - x_Q) \bmod{p} \\ y_R & = & [y_P + m(x_R - x_P)] \bmod{p} \\ & = & [y_Q + m(x_R - x_Q)] \bmod{p} \end{array} \end{equation*}

Where slope, when \(P \neq Q\)

\begin{equation*} m = (y_P - y_Q)(x_P - x_Q)^{-1} \bmod{p} \end{equation*}

and when \(P = Q\)

\begin{equation*} m = (3 x_P^2 + a)(2 y_P)^{-1} \bmod{p} \end{equation*}
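Putting the slope and addition formulas together, here is a minimal Python sketch (my own illustration, not code from the post) of point addition on the curve \(y^2 \equiv x^3 + 2x + 3 \pmod{97}\) used in the example below, with None standing for the point at infinity 0:

```python
p, a = 97, 2  # the curve y^2 = x^3 + 2x + 3 over GF(97)

def ec_add(P, Q):
    """Add two curve points; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    xP, yP = P
    xQ, yQ = Q
    if xP == xQ and (yP + yQ) % p == 0:
        return None  # P + (-P) = 0
    if P == Q:
        m = (3 * xP * xP + a) * pow(2 * yP, -1, p) % p   # tangent slope
    else:
        m = (yP - yQ) * pow(xP - xQ, -1, p) % p          # chord slope
    xR = (m * m - xP - xQ) % p
    # reflect the third intersection point over the x axis to get the sum
    yR = (m * (xP - xR) - yP) % p
    return (xR, yR)

print(ec_add((3, 6), (3, 6)))  # doubling: 2P = (80, 10)
```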

So now we need to know the order of this finite group. The order of an elliptic curve group can be defined as the number of points in it. Unlike the integers modulo p, where the elements are simply 0 to p-1, for an elliptic curve you need to count the points for every x from 0 to p-1. This counting is \(O(p)\), and given a large p this becomes a hard problem. But there are faster algorithms to compute the order of the group, which even I don't know much about in detail :). From my reference, one such is Schoof's algorithm.

Scalar Multiplication and Cyclic Group

When we consider scalar multiplication over elliptic curve finite fields, we discover a special property. Taking an example from Andrea Corbellini's post, consider the curve \(y^2 \equiv x^3 + 2x + 3 \pmod{97}\) and the point \(P = (3,6)\). If we try calculating multiples of P

\begin{align*} 0P = 0 \\ 1P = (3,6) \\ 2P = (80,10) \\ 3P = (80,87) \\ 4P = (3, 91) \\ 5P = 0 \\ 6P = (3,6) \\ 7P = (80, 10) \\ 8P = (80, 87) \\ 9P = (3, 91) \\ ... \end{align*}

If you are wondering how to calculate the above (I did at first), you need to use the point addition formula from the earlier post, with P = Q and mod 97. So we observe that there are only 5 multiples of P and they repeat cyclically. We can write the above points as

  • \(5kP = 0P\)
  • \((5k + 1)P = 1P\)
  • \((5k + 2)P = 2P\)
  • \((5k + 3)P = 3P\)
  • \((5k + 4)P = 4P\)

Or simply, we can write these as \(kP = (k \bmod 5)P\). We also note that these 5 points are closed under addition. This means that adding two multiples of P, we obtain a multiple of P, and the set of multiples of P forms a cyclic subgroup

\begin{equation*} nP + mP = \underbrace{P + \cdots + P}_{n\ \text{times}} + \underbrace{P + \cdots + P}_{m\ \text{times}} = (n + m)P \end{equation*}
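The repeating pattern above can be reproduced with a short self-contained Python sketch (again my own illustration), which applies the addition formulas to accumulate multiples of P:

```python
p, a = 97, 2  # the curve y^2 = x^3 + 2x + 3 over GF(97)

def ec_add(P, Q):
    # point addition; None is the point at infinity
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        m = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:
        m = (P[1] - Q[1]) * pow(P[0] - Q[0], -1, p) % p
    x = (m * m - P[0] - Q[0]) % p
    return (x, (m * (P[0] - x) - P[1]) % p)

P = (3, 6)
R = None  # 0P
for k in range(10):
    print(k, R)  # the multiples repeat with period 5
    R = ec_add(R, P)
```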

Cyclic subgroups are foundation of Elliptic Curve Cryptography (ECC).

Subgroup Order

The subgroup order tells how many points are really in the subgroup. We can redefine the order of a group in the subgroup context: the order of P is the smallest positive integer n such that nP = 0. In the above case the smallest such n is 5, since 5P = 0. So the order of the subgroup above is 5: it contains 5 elements.

The order of a subgroup is linked to the order of the elliptic curve group by Lagrange's theorem, which says the order of a subgroup is a divisor of the order of the parent group. Lagrange is another name I had read about in college, but the algorithms were different.

From this we have the following steps to find the order of a subgroup with base point P

  1. Calculate the elliptic curve's order N using Schoof's algorithm.
  2. Find all the divisors of N.
  3. For every divisor n of N, compute nP.
  4. The smallest n such that nP = 0 is the order of the subgroup.

Note that it's important to choose the smallest such n, not a random one. In the example above, 5P, 10P and 15P all satisfy the condition, but the order of the subgroup is 5.

Finding Base Point

For all of the above, which is used in ECC, i.e. group, subgroup and order, we need a base point P to work with. So the base point calculation is not done at the beginning but at the end: first choose an order which looks good, then look for a subgroup order, and finally find a suitable base point.

We learnt above that the subgroup order is a divisor of the group order, which follows from Lagrange's theorem. The term \(h = N/n\) is called the co-factor of the subgroup. Now why is this co-factor important? Without going into details, the co-factor is used to find a generator for the subgroup as \(G = hP\).
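For the toy curve above, the group order is small enough to count by brute force (real curves need Schoof's algorithm), which lets us check Lagrange's theorem and compute the co-factor. This is my own sketch, not from the post; the subgroup order n = 5 comes from the example above:

```python
p, a, b = 97, 2, 3

# count the points of y^2 = x^3 + 2x + 3 over GF(97), plus infinity
N = 1 + sum(1 for x in range(p) for y in range(p)
            if (y * y - (x ** 3 + a * x + b)) % p == 0)

n = 5       # order of the subgroup generated by P = (3, 6)
h = N // n  # the co-factor h = N/n
print(N % n == 0)  # → True: Lagrange's theorem, n divides N
```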

Conclusion

So, are you wondering why I went to such length to describe all this? Well, for one thing I wanted to make some notes for myself, because you can't find all this information in a single place. For another, the topics we discussed in my previous post and up to this point form the domain parameters of Elliptic Curve Cryptography.

Domain parameters in ECC are the parameters which are known publicly to everyone. Following are the 6 parameters

  • Prime p which is order of Finite field
  • Co-efficients of curve a and b
  • Base point \(G\), the generator, which is the base point of the curve that generates the subgroup
  • Order of subgroup n
  • Co-factor h

So, in short, the domain parameters of ECC are \((p, a, b, G, n, h)\)

In my next post I will try to talk about the specific curve group used in the SPAKE2 implementation, called the twisted Edwards curve, and give a brief overview of the SPAKE2 protocol.

12 August, 2018 05:21PM by copyninja

hackergotchi for Steve McIntyre

Steve McIntyre

DebConf in Taiwan!

DebConf 18 logo

So I'm slowly recovering from my yearly dose of full-on Debian! :-) DebConf is always fun, and this year in Hsinchu was no different. After so many years in the project, and so many DebConfs (13, I think!) it has become unmissable for me. It's more like a family gathering than a work meeting. In amongst the great talks and the fun hacking sessions, I love catching up with people. Whether it's Bdale telling me about his fun on-track exploits or Stuart sharing stories of life in an Australian university, it's awesome to meet up with good friends every year, old and new.

DC18 venue

For once, I even managed to find time to work on items from my own TODO list during DebCamp and DebConf. Of course, I also got totally distracted helping people hacking on other things too! In no particular order, stuff I did included:

  • Working with Holger and Wolfgang to get debian-edu netinst/USB images building using normal debian-cd infrastructure;
  • Debugging build issues with our buster OpenStack images, fixing them and also pushing some fixes to Thomas for build-openstack-debian-image;
  • Reviewing secure boot patches for Debian's GRUB packages;
  • As an AM, helping two DD candidates working their way through NM;
  • Monitoring and tweaking an archive rebuild I'm doing, testing building all of our packages for armhf using arm64 machines;
  • Releasing new upstream and Debian versions of abcde, the CD ripping and encoding package;
  • Helping to debug UEFI boot problems with Helen and Enrico;
  • Hacking on MoinMoin, the wiki engine we use for wiki.debian.org;
  • Engaging in lots of discussions about varying things: Arm ports, UEFI Secure Boot, Cloud images and more

I was involved in a lot of sessions this year, as normal. Lots of useful discussion about Ignoring Negativity in Debian, and of course lots of updates from various of the teams I'm working in: Arm porters, web team, Secure Boot. And even an impromptu debian-cd workshop.

Taipei 101 - day trip venue

I loved my time at the first DebConf in Asia (yay!), and I was yet again amazed at how well the DebConf volunteers made this big event work. I loved the genius idea of having a bar in the noisy hacklab, meaning that lubricated hacking continued into the evenings too. And (of course!) just about all of the conference was captured on video by our intrepid video team. That gives me a chance to catch up on the sessions I couldn't make it to, which is priceless.

So, despite all the stuff I got done in the 2 weeks my TODO list has still grown. But I'm continuing to work on stuff, energised again. See you in Curitiba next year!

12 August, 2018 03:11PM

Sam Hartman

Dreaming of a Job to Promote Love, Empathy and Sexual Freedom

Debian has always been filled with people who want to make the world a better place. We consider the social implications of our actions. Many are involved in work that focuses on changing the world. I've been hesitant to think too closely about how that applies to me: I fear being powerless to bring about the world in which I would like to live.

Recently though, I've been taking the time to dream. One day my wife came home and told another story of how she'd helped a client reduce their pain and regain mobility. I was envious. Every day she advances her calling and brings happiness into the world, typically by reducing physical suffering. What would it be like for me to find a job where I helped advance my calling and create a world where love could be more celebrated? That seems such a far cry from writing code and working on software design every day. But if I don't articulate what I want, I'll never find it.

I’ve been working to start this journey by acknowledging the ways in which I already bring love into the world. One of the most important lessons of Venus’s path is that to bring love into the world, you have to start by leading a life of love. At work I do this by being part of a strong team. We’re there helping each other grow, whether it is people trying entirely new jobs or struggling to challenge each other and do the best work we can. We have each other’s back when things outside of work mean we're not at our best. We pitch in together when the big deadlines approach.

I do not shove my personal life or my love and spirituality work in people’s faces, but I do not hide it. I'm there as a symbol and reminder that different is OK. Because I am open people have turned to me in some unusual situations and I have been able to invite compassion and connection into how people thought about challenges they faced.

This is the most basic—most critical love work. In doing this I’m already succeeding at bringing love into the world. Sometimes it is hard to believe that. Recently I have been daring to dream of a job in which the technology I created also helped bring love into the world.

I'd love to find a company that's approaching the world in a love-positive, sex-positive manner. And of course they need to have IT challenges big enough to hire someone who is world class at networking, security and cloud architecture. While I'd be willing to take a pay cut for the right job, I'd still need to be making a US senior engineer's salary.

Actually saying that is really hard. I feel vulnerable because I’m being honest about what I want. Also, it feels like I’m asking for the impossible.

Yet, the day after I started talking about this on Facebook, OkCupid posted a job for a senior engineer. That particular job would require moving to New York, something I want to avoid. Still, it was reassuring as a reminder that asking for what you want is the first step.

I doubt that will be the only such job. It's reasonable to assume that as we embrace new technologies like blockchains and continue to appreciate what the evolving web platform standards have to offer, there will be new opportunities. Yes, a lot of the adult-focused industries are filled with corruption and companies that use those who they touch. However, there's also room for approaching intimacy in a way that celebrates desire, connection, and all the facets of love.

And yes, I do think sexuality and desire are an important part of how I’d like to promote love. With platforms like Facebook, Amazon and Google, it's easier than ever for people to express themselves, to connect, and if they are willing to give up privacy, to try and reach out and create. Yet all of these platforms have increasingly restrictive rules about adult content. Sometimes it’s not even intentional censorship. My first post about this topic on Facebook was marked as spam probably because some friends suggested some businesses that I might want to look at. Those businesses were adult-focused and apparently even positive discussion of such businesses is now enough to trigger a presumption of spam.

If we aren't careful, we're going to push sex further out of our view and add to an ever-higher wall of shame and fear. Those who wish to abuse and hurt will find their spaces, but if we aren't careful to create spaces where sex can be celebrated alongside love, those seedier corners of the Internet will be all that explores sexuality. Because I'm willing to face the challenge of exploring sexuality in a positive, open way, I think I should: few enough people are.

I have no idea what this sort of work might look like. Perhaps someone will take on the real challenge of creating content platforms that are more decentralized and that let people choose how they want content filtered. Perhaps technology can be used to improve the safety of sex workers or eventually to fight shame associated with sex work. Several people have pointed out the value of cloud platforms in allowing people to host whatever service they would choose. Right now I’m at the stage of asking for what I want. I know I will learn from the exploration and grow stronger by understanding what is possible. And if it turns out that filling my every day life with love is the answer I get, then I’ll take joy in that. Another one of the important Venus lessons is celebrating desires even when they cannot be achieved.

12 August, 2018 02:05PM

Sven Hoexter

iptables with random-fully support in stretch-backports

I've just uploaded iptables 1.6.2 to stretch-backports (thanks Arturo for the swift ACK). The relevant new feature here is the --random-fully support for the MASQUERADE target. This release could be relevant to you if you have to deal with a rather large number of NATed outbound connections, which is likely if you have to deal with the whale. The engineering team at Xing published a great writeup about this issue in February. So the lesson to learn here is that the nf_conntrack layer probably got a bit more robust during the Bittorrent heydays, but NAT is still evil shit we should get rid of.
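For reference, using the new flag looks roughly like this (a sketch: the interface name eth0 is illustrative and must match your setup):

```shell
# NAT outbound traffic, picking masqueraded source ports fully at random
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE --random-fully
```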

12 August, 2018 12:45PM

Mike Hommey

Announcing git-cinnabar 0.5.0

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull from and push to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0?

  • git-cinnabar-helper is now mandatory. You can either download one with git cinnabar download on supported platforms or build one with make.
  • Performance and memory consumption improvements.
  • Metadata changes require running git cinnabar upgrade.
  • Mercurial tags are consolidated in a separate (fake) repository. See the README file.
  • Updated git to 2.18.0 for the helper.
  • Improved memory consumption and performance.
  • Improved experimental support for pushing merges.
  • Support for clonebundles for faster clones when the server provides them.
  • Removed support for the .git/hgrc file for mercurial specific configuration.
  • Support any version of Git (was previously limited to 1.8.5 minimum)
  • Git packs created by git-cinnabar are now smaller.
  • Fixed incompatibilities with Mercurial 3.4 and >= 4.4.
  • Fixed tag cache, which could lead to missing tags.
  • The prebuilt helper for Linux now works across more distributions (as long as libcurl.so.4 is present, it should work)
  • Properly support the pack.packsizelimit setting.
  • Experimental support for initial clone from a git repository containing git-cinnabar metadata.
  • Now can successfully clone the pypy and GNU octave mercurial repositories.
  • More user-friendly errors.

Development process changes

It took about 6 months between version 0.3 and 0.4. It took more than 18 months to reach version 0.5 after that. That’s a long time to wait for a new version, considering all the improvements that have happened under the hood.

From now on, the release branch will point to the last tagged release, which is roughly the same as before, but won’t be the default branch when cloning anymore.

The default branch when cloning will now be master, which will receive changes that are acceptable for dot releases (0.5.x). These include:

  • Changes in behavior that are backwards compatible (e.g. adding new options which default to the current behavior).
  • Changes that improve error handling.
  • Changes to existing experimental features, and additions of new experimental features (that require knobs to be enabled).
  • Changes to Continuous Integration/Tests.
  • Git version upgrades for the helper.

The next branch will receive changes for the next “major” release, which as of writing is planned to be 0.6.0. These include:

  • Changes in behavior.
  • Changes in metadata.
  • Stabilizing experimental features.
  • Remove backwards compatibility with older metadata (< 0.5.0).

12 August, 2018 01:57AM by glandium

August 11, 2018

hackergotchi for Shirish Agarwal

Shirish Agarwal

Journeys

This would be a long blog post as I would be sharing a lot of journeys, so have your favorite beverage in your hand and prepare for an evening of musing.

Before starting the blog post: I have been surprised that, over the last couple of weeks, a lot of people have been liking my Debconf 2016 blog post on Diaspora, which is almost two years old. Almost all the names mean nothing to me, and I was left unsure as to the reason for the spike. Were they DebConf newcomers who saw my blog post and had an experience similar to mine, or something else? I don't know.

About a month and a half back, I started reading Gandhiji's 'My Experiments with Truth'. To be truthful, a good friend had gifted me this book back in 2015 but I had been afraid to touch it. I have read a few autobiographies and my experience has been less than stellar. Some exceptions are there, but those are and will remain exceptions. Just as everybody else, I had held Gandhiji in high regard and was afraid that reading the autobiography would lower him in my eyes. As it is, he is lovingly regarded as the 'Father of the Nation' and given the honorific title of 'Mahatma' (Great Soul), so there was quite a bit of resistance within me to read the book, as it's generally felt to be impossible to be like him or even emulate him in any way.

So, with some hesitancy, I started reading his autobiography about a month and a half back. It is a shortish book, topping out at 470-odd pages, and I have read around 350 pages or so. While reading it I could identify with a lot of what was written, and in so many ways it also represents a lot of faults which are still prevalent in Indian society today.

The book is heavy with layered meanings. I do feel in parts there have been brushes of the RSS. I dunno, maybe that's the paranoia in me, but I would probably benefit from an older edition (perhaps the 1993 version), which may be somewhat more accurate, if I can find it somewhere. I don't dare to review it unless I have read and re-read it at least 3-4 times or more. I can however share what I have experienced so far. He writes quite a bit about religion, and shares his experience and understanding from reading the various commentaries on the Gita and on different religious books like the Koran and the Bible, and so on and so forth. When I was reading it, I felt almost like an unlettered person. I know that at some time in the near future I will have to start reading and listening to various commentaries on Hinduism as well as other religions to have at least some basic understanding.

The book makes him feel more human, as he had the same struggles that we all do: with temptations of flesh, food, medicine, public speaking. The only difference between him and me is that he was able to articulate them probably far better than people even today.

Many passages in the book are still highly relevant, or even more so, in today's life. It really is a pity it isn't an essential book for teenagers and young adults to read. At the very least they would start with their own inquiries at a young age.

The other part which was interesting to me is his description of life on Indian Railways. He traveled a lot by Indian Railways, in both third and first class. I have had the pleasure of traveling in first, second and general (third) class, a military cabin, a guard cabin, a luggage cabin, as well as the cabin for people with disabilities, and once, by mistake, even in a ladies' cabin. The only one I haven't tried is the loco pilot's cabin, and that's more out of fear than anything else. While I know the layout of the cabin more or less and am somewhat familiar with the job they do, I still fear, as I know the enormous responsibilities the loco pilots have, each train carrying anywhere between 2300 and 2800 passengers or more depending on the locomotive(s), rake, terrain, platform length and number of passengers.

The experiences which Gandhiji shared about his travels then, and my own limited traveling experience, seem to indicate that little has changed on Indian Railways as far as the traveling experience goes.

A few days back my mum narrated one of her journeys on the Indian Railways when she was a kid, about five decades or so back. Her experience was similar to what one can experience even today, and probably will be for decades from now until things improve, which I don't think will happen at least in the short term; in the medium to long term, who knows.

Anyways, my grandfather (my mother's father, now no more 😦) had a bunch of kids. In those days, having 5-6 kids was considered normal. My mother, her elder sister (who is not with us anymore, god rest her soul) and my grandpa took a train from Delhi/Meerut to Pune. As at that time there was no direct train to Pune, the idea was to travel from Delhi to Bombay (today's Mumbai), take a break in Bombay, and then take a train to Pune. The journey was supposed to take only a couple of days or a bit more. My grandma had made puris and masala bhaji (basically boiled potatoes mixed with onions, fried a bit).

Puri bhaji image taken from https://www.spiceupthecurry.com/hi/poori-bhaji-recipe-hindi/

You can try making it with a recipe shared by Sanjeev Kapoor, a celebrity chef from India. This is not the only way to make it, Indian cooking is all about improvisation and experimentation but that’s a story for another day.

This is/was a staple diet for most North Indians traveling by train, and you can still find the same today. She had made enough for 2 days, with some to spare, as she didn't want my mum or her sister taking any outside food (food hygiene, health concerns etc.). My mum and her sister didn't have much to do, and they loved my grandma's cooking, so what was made for 2 days didn't even last a day. What my mother, her sister and grandpa didn't know was that it was one of those ill-fated journeys. Because of some accident which happened down the line, the train was stopped in Bhopal for an indefinite time. This was in the dead of night and there was nothing to eat there. Unfortunately for them, the accident or whatever happened down the line meant that all food made for travelers had either been purchased by travelers on trains before my mother's, or was being diverted to help the injured. The end result was that reaching Mumbai took another one and a half days, which means around 4 days instead of two were spent traveling. My grandpa also tried to get something for the children to eat, but was still unable to find any food for them.

Unfortunately when they reached Bombay (today’s Mumbai) it was also dead at night so grandpa knew that he wouldn’t be able to get anything to eat as all shops were shut at night, those were very different times.

Fortunately for them, one of my grandfather's cousins had got a trunk call (a nomenclature from a time in history when calling long-distance was pretty expensive) at his office from Delhi, from one of our relatives on some unrelated matter. Land-lines were incredibly rare, and it was just sheer coincidence that he came to know that my grandpa would be coming to Bombay (Mumbai), and that he could, if possible, receive him. My grandpa's cousin made inquiries, came to know of the accident, and knew that the train would arrive late, although he had no foreknowledge of how late it would be. Even then he got meals for the extended family on both days, as he knew that they probably would not be getting meals.

On the second night, my grandpa was surprised and relieved to see his cousin, and both my mum and her sister, who had gone without food, finished whatever food was brought within 10 minutes.

The toilets on Indian Railways in Gandhiji's traveling days (the book was written in 1918 while he resided in Pune's Yerwada Jail [yup, my city], and the accounts he shared were of 1908 and even before), in the days my mother traveled, and even today are the same: they stink, irrespective of whichever class you travel in. After reading the book, I read and came to know that Yerwada held a lot of political prisoners.

The only changes which have happened are in terms of ICT, but even that only if you know the specific tools and sites. There is the IRCTC train enquiry site and the map train tracker. For food you have sites like RailRestro, but all of these amenities are for the well-heeled, or those who can pay for them and know how to use the services. I say this for the reason below.

India is going to have elections next year. To win those elections round the corner, the Government started the largest online recruitment drive for exams for loco pilot, junior loco pilot and other sundry posts. For around 90k jobs, something like 0.28 billion people applied, out of which around 0.17 billion were selected to sit the 'online' exam, with 80-85% of the selected students given centers in excess of 1000 km away. At the last moment some special trains were laid on for people who wanted to travel for the exams.

Sadly, due to the nature of conservatism in India, the majority of the women who were selected chose not to travel that far, as travel was time-consuming and expensive (about INR 5k or a bit more just for traveling and a bit for eating, excluding any lodging). Most train journeys are and would be in excess of 24 hours or more, as the tracks are more or less the same (some small improvement has happened, e.g. from wooden tracks to concrete tracks) while the traffic has quadrupled, and they can't just build new lines without encroaching on people's land or wildlife sanctuaries (both are happening, but that's a different story altogether).

The exams are being held in batches and will continue for another 2-3 days. Most of the boys/gentlemen are from rural areas, for whom INR 5k is a huge sum. There are already reports of collusion, cheating etc. After the exams are over, the students fear that some people might go to a court of law alleging cheating, and the court might declare the whole exam null and void, cheating students of their hard-earned money and of the suffering of the long journeys they had to take. The dates of the exams were shared just a few days in advance and clashed with some other government exams, so many will miss those exams, while some wouldn't have had time to prepare for them.

It’s difficult to imagine the helplessness and stress the students might be feeling.

I just hope that somehow people’s hopes are fulfilled.

Ending the blog post on a hopeful and yet romantic ballad

12/08/18 – Update – CAG: Failure to invest in infra behind delay in train services

11 August, 2018 07:27PM by shirishag75