March 18, 2019

hackergotchi for Jonathan Dowland

Jonathan Dowland

WadC 3.0

blockmap.wl being reloaded (click for animation)

A couple of weeks ago I released version 3.0 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

3.0 introduces more flexible randomness with rand; two new test maps (blockmap and bsp) that demonstrate approaches to random dungeon generation; some useful data structures in the library; better Hexen support and a bunch of other improvements.

Check the release notes for the full details.

Version 3.0 of WadC is dedicated to Lu (1972-2019). RIP.

18 March, 2019 03:12PM

March 17, 2019

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RQuantLib 0.4.8: Small updates

A new version 0.4.8 of RQuantLib reached CRAN and Debian. This release was triggered by a CRAN request for an update to the script which was easy enough (and which, as it happens, did not result in changes in the configure script produced). I also belatedly updated the internals of RQuantLib to follow suit to an upstream change in QuantLib. We now seamlessly switch between shared_ptr<> from Boost and from C++11 – Luigi wrote about the how and why in an excellent blog post that is part of a larger (and also excellent) series of posts on QuantLib internals.

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

In other news, we finally have a macOS binary package on CRAN. After several rather frustrating months of inaction on the pull request put together to enable this, it finally happened last week. Yay. So CRAN currently has an 0.4.7 macOS binary and should get one based on this release shortly. With Windows restored with the 0.4.7 release, we are in the best shape we have been in years. Yay and three cheers for Open Source and open collaboration models!

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.8 (2019-03-17)

  • Changes in RQuantLib code:

    • Source code supports Boost shared_ptr and C++11 shared_ptr via QuantLib::ext namespace like upstream.
  • Changes in RQuantLib build system:

    • The file no longer upsets R CMD check; the change does not actually change configure.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 March, 2019 07:12PM

Rcpp 1.0.1: Updates

Following up on the 10th anniversary, when the package turned ten and we used the opportunity to mark the then-current version as 1.0.0, we are excited to share the news of the first update release 1.0.1 of Rcpp. It arrived at CRAN overnight; Windows binaries have already been built and I will follow up shortly with the Debian binary.

We had four years of regular bi-monthly releases leading up to 1.0.0, and have now taken four months since the big 1.0.0 one. Maybe three (or even just two) releases a year will establish itself as a natural cadence. Time will tell.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1598 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 152 in BioConductor release 3.8. Per the (partial) logs of CRAN downloads, we currently average 921,000 downloads a month.

This release features a number of different pull requests, detailed below.

Changes in Rcpp version 1.0.1 (2019-03-17)

  • Changes in Rcpp API:

    • Subsetting is no longer limited by an integer range (William Nolan in #920 fixing #919).

    • Error messages from subsetting are now more informative (Qiang and Dirk).

    • Shelter increases count only on non-null objects (Dirk in #940 as suggested by Stepan Sindelar in #935).

    • AttributeProxy::set() and a few related setters get Shield<> to ensure rchk is happy (Romain in #947 fixing #946).

  • Changes in Rcpp Attributes:

    • A new plugin was added for C++20 (Dirk in #927)

    • Fixed an issue where 'stale' symbols could become registered in RcppExports.cpp, leading to linker errors and other related issues (Kevin in #939 fixing #733 and #934).

    • The wrapper macro gets an UNPROTECT to ensure rchk is happy (Romain in #949 fixing #948).

  • Changes in Rcpp Documentation:

    • Three small corrections were added in the 'Rcpp Quickref' vignette (Zhuoer Dong in #933 fixing #932).

    • The Rcpp-modules vignette now has documentation for .factory (Ralf Stubner in #938 fixing #937).

  • Changes in Rcpp Deployment:

    • Travis CI again reports to (Dirk and Ralf Stubner in #942 fixing #941).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 March, 2019 01:49PM

March 16, 2019

hackergotchi for Andy Simpkins

Andy Simpkins

Race for science: A Prostate Cancer Research Centre charity challenge

This post may feel like an advert – to an extent it is.  I won’t plug events very often; however, as this is a charity event in aid of prostate cancer research and I genuinely think it will be great fun for anybody able to take part, hence the unashamed plug.  The Race for Science will happen on 30th March in and around Cambridge. There is still time to enter, have fun and raise some money for a deserving charity at the same time.

A few weeks ago friends of ours, Jo (Randombird) and Josh, were celebrating their birthdays; we split into a couple of groups and had a go at a couple of escape room challenges at Lock House Games.  Four of us, Josh, Isy, Jane and myself, had a go at the Egyptian Tomb.  Whilst this was the 2nd time Isy had tried an escape room, the rest of us were n00bs.  I won’t describe the puzzles inside because that would spoil the enjoyment of anyone who later goes on to play the game.  We did make it out of the room inside our allotted time with only a couple of hints.  It was great fun, and it is suitable for all ages: our group ranged from 12 to 46, and it would work well for all-adult teams as well as more mature children.

We escaped the tomb…

Anyway, having had a good time, and whilst we waited for the other group to finish their challenge, I spotted a request for people to beta test some puzzles for the Prostate Cancer Research Centre‘s 2019 Race For Science, due to happen a couple of days later.

I volunteered my time and re-arranged my Monday so that I could spend the afternoon trialling the escape room elements of the challenge.  There were some great people involved and the puzzles were very good indeed.  I visited 3 different locations in Cambridge, each with very different puzzles to solve.  One challenge stood head and shoulders above the others; not that the others were poor – they are great, at least as good as a commercial escape room.  What made this challenge so outstanding wasn’t the challenge itself; it was that it had been set specifically for the venue that will host it, using the venue as part of the puzzle (The Race For Science is a scavenger hunt and contains several challenges at different locations within the city).

Beta testing one of the puzzles

Last Thursday I went back into town in the afternoon to beta test yet another location – once again this was outstanding, and the challenge differed from those I had trialled the previous week.  This was another fantastic puzzle to solve and took full advantage of its location, both for its ‘back story’ and for the puzzle itself.  After this we moved on to testing the scavenger hunt part of the event.  This is played on the streets of the city on foot; following clues will take you in and around some of the city’s museums and landmarks – and will unlock access to the escape room challenges I had been testing earlier.  My only concern is that it is played using a browser on a mobile device (i.e. a phone).  I had to move around a bit in some locations to ensure that I had adequate signal.  You may want to make sure that you have a fully charged battery!

The event is open to teams of up to 6 people and will take the form of an “immersive scavenger hunt adventure”.  Unfortunately I cannot take part as I have already played the game, but there is still time for you to register and take part.  Anyway, if you are able to get to Cambridge at the end of the month, please enter the Race for Science.

16 March, 2019 04:18PM by andy

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

littler 0.3.7: Small tweaks

max-heap image

The eighth release of littler as a CRAN package is now available, following in the thirteen-ish year history of the package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. And it is (in my very biased eyes) better as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package, something Rscript only converted to rather recently.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!).

A few examples as highlighted at the Github repo, as well as in the examples vignette.

This release brings a small update (thanks to Gergely) to the scripts install2.r and installGithub.r, allowing more flexible setting of repositories, and fixes a minor nag from CRAN concerning autoconf programming style.

The NEWS file entry is below.

Changes in littler version 0.3.7 (2019-03-15)

  • Changes in examples

    • The scripts installGithub.r and install2.r get a new option -r | --repos (Gergely Daroczi in #67)
  • Changes in build system

    • The AC_DEFINE macro use rewritten to please R CMD check.

CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 March, 2019 12:15AM

Arnaud Rebillout

Building your Pelican website with schroot

Lately I moved my blog to Pelican. I really like how simple and flexible it is. So in this post I'd like to highlight one particular aspect of my Pelican workflow: how to set up a Debian-based environment to build your Pelican website, and how to leverage Pelican's Makefile to transparently use this build environment. Overall, this post has more to do with Debian tooling, and little to do with Pelican.


First things first, why would you set up a build environment for your project?

Imagine that you run Debian stable on your machine, and you want to build your website with a fancy theme that requires the latest bleeding-edge features from Pelican. But hey, in Debian stable you don't have these shiny new things, and the version of Pelican you need is only available in Debian unstable. How do you handle that? Will you start messing around with apt configuration and pinning, and try to install an unstable package on your stable system? Wrong, please stop.

Another scenario, the opposite: you run Debian unstable on your system. You have all the new shiny things, but sometimes an update of your system might break things. What if you update, and then can't build your website because of this or that? Will you wait a few days until another update comes and fixes everything? How many days before you can build your blog again? Or will you dive into the issues and debug, which is nice and fun, but can also keep you busy all night, and is not exactly what you wanted to do in the first place, right?

So, for both of these issues, there's one simple answer: setup a build environment for your project. The most simple way is to use a chroot, which is roughly another filesystem hierarchy that you create and install somewhere, and in which you will run your build process. A more elaborate build environment is a container, which brings more isolation from the host system, and many more features, but for something as simple as building your website on your own machine, it's kind of overkill.

So that's what I want to detail here, I'll show you the way to setup and use a chroot. There are many tools for the job, and for example Pelican's official documentation recommends virtualenv, which is kind of the standard Python solution for that. However, I'm not too much of a Python developer, and I'm more familiar with the Debian tools, so I'll show you the Debian way instead.

Version-wise, it's 2019, we're talking about Pelican 4.x, if ever it matters.

Create the chroot

To create a basic, minimal Debian system, the usual command is debootstrap. Then in order to actually use this new system, we'll use schroot. So be sure to have these two packages installed on your machine.

sudo apt install debootstrap schroot

It seems that the standard location for chroots is /srv/chroot, so let's create our chroot there. It also seems that the traditional naming scheme for these chroots is something like SUITE-ARCH-APPLICATION, at least that's what other tools like sbuild do. While you're free to do whatever you want, in this tutorial we'll try to stick to the conventions.

Let's go and create a buster chroot:

SYSROOT=/srv/chroot/buster-amd64-pelican
sudo mkdir -p ${SYSROOT:?}
sudo debootstrap --variant=minbase buster ${SYSROOT:?}

And there we are, we just installed a minimal Debian system in $SYSROOT, how easy and neat is that! Just run a quick ls there and see for yourself:

ls ${SYSROOT:?}

Now let's set up schroot to be able to use it. schroot will require a configuration file that tells it how to use this chroot. This is where things might get a bit complicated and cryptic for the newcomer.

So for now, stick with me, and create the schroot config file as follows:

cat << EOF | sudo tee /etc/schroot/chroot.d/buster-amd64-pelican.conf
[buster-amd64-pelican]
users=$LOGNAME
root-users=$LOGNAME
source-users=$LOGNAME
source-root-users=$LOGNAME
type=directory
directory=/srv/chroot/buster-amd64-pelican
union-type=overlay
EOF

Here, we tell schroot who can use this chroot ($LOGNAME, that's you), as normal user and as root user. We also say where the chroot directory is located, and that we want an overlay, which means that the chroot will actually be read-only, and during operation a writable overlay will be stacked up on top of it, so that modifications are possible, but are not saved when you exit the chroot.

In our case, it makes sense because we have no intention to modify the build environment. The basic idea with a build environment is that it's identical for every build, we don't want anything to change, we hate surprises. So we make it read-only, but we also need a writable overlay on top of it, in case some process might want to write things in /var, for example. We don't care about these changes, so we're fine discarding this data after each build, when we leave the chroot.

And now, for the last step, let's install Pelican in our chroot:

schroot -c source:buster-amd64-pelican -u root -- \
  bash -c "apt update && apt install --yes make pelican && apt clean"

In this command, we log into the source chroot as root, and we install the two packages make and pelican. We also clean up after ourselves, to save a bit of space on the disk.

At this point, our chroot is ready to be used. If you're new to all of this, then read the next section, I'll try to explain a bit more how it works.

A quick introduction to schroot

In this part, let me try to explain a bit more how schroot works. If you're already acquainted, you can skip this part.

So now that the chroot is ready, let's experiment a bit. For example, you might want to start by listing the chroots available:

$ schroot -l
chroot:buster-amd64-pelican
source:buster-amd64-pelican

Interestingly, there are two of them... So, this is due to the overlay thing that I mentioned just above. Using the regular chroot (chroot:) gives you the read-only version, for daily use, while the source chroot (source:) allows you to make persistent modifications to the filesystem, for install and maintenance basically. In effect, the source chroot has no overlay mounted on top of it, and is writable.

So you can experiment some more. For example, to have a shell into your regular chroot, run:

$ schroot -c chroot:buster-amd64-pelican

Notice that the namespace (e.g. chroot: or source:) is optional; if you omit it, schroot will be smart and choose the right namespace. So the command above is equivalent to:

$ schroot -c buster-amd64-pelican

Let's try to see the overlay thing in action. For example, once inside the chroot, you could create a file in some writable place of the filesystem.

(chroot)$ touch /var/tmp/this-is-an-empty-file
(chroot)$ ls /var/tmp

Then log out with <Ctrl-D>, and log in again. Have a look in /var/tmp: the file is gone. The overlay in action.

Now, there's a bit more to that. If you look into the current directory, you will see that you're not within any isolated environment, you can still see all your files, for example:

(chroot)$ pwd
(chroot)$ ls
content  Makefile  ...

Not only are all your files available in the chroot, you can also create new files, delete existing ones, and so on. It doesn't even matter if you're inside or outside the chroot, and the reason is simple: by default, schroot will mount the /home directory inside the chroot, so that you can access all your files transparently. For more details, just type mount inside the chroot, and see what's listed.

So, this default of schroot is actually what makes it super convenient to use, because you don't have to bother about bind-mounting every directory you care about inside the chroot, which is actually quite annoying. Having /home directly available saves time, because what you want to isolate are the tools you need for the job (so basically /usr), but what you need is the data you work with (which is supposedly in /home). And schroot gives you just that, out of the box, without having to fiddle too much with the configuration.

If you're not familiar with chroots, containers, VMs, or more generally bind mounts, maybe it's still very confusing. But you'd better get used to it, as virtual environments are very standard in software development nowadays.

But anyway, let's get back to the topic. How to make use of this chroot to build our Pelican website?

Chroot usage with Pelican

Pelican provides two helpers to build and manage your project: one is a Makefile, and the other is a Python script. As I said before, I'm not really a seasoned Pythonista, but it happens that I'm quite a fan of make, hence I will focus on the Makefile for this part.

So, here's how your daily blogging workflow might look, now that everything is in place.

Open a first terminal, and edit your blog posts with your favorite editor:

$ nano content/

Then open a second terminal, enter the chroot, build your blog and serve it:

$ schroot -c buster-amd64-pelican
(chroot)$ make html
(chroot)$ make serve

And finally, open your web browser at http://localhost:8000 and enjoy yourself.

This is easy and neat, but guess what, we can even do better. Open the Makefile and have a look at the very first lines:

PY?=python3
PELICAN?=pelican

It turns out that the Pelican developers know how to write Makefiles, and they were kind enough to allow their users to easily override the default commands. In our case, it means that we can just replace these two lines with these ones:

PY?=schroot -c buster-amd64-pelican -- python3
PELICAN?=schroot -c buster-amd64-pelican -- pelican

And after these changes, we can now completely forget about the chroot, and simply type make html and make serve. The chroot invocation is now handled automatically in the Makefile. How neat!


So you might want to update your chroot from time to time, and you do that with apt, like for any Debian system. Remember the distinction between regular chroot and source chroot due to the overlay? If you want to actually modify your chroot, what you want is the source chroot. And here's the one-liner:

schroot -c source:buster-amd64-pelican -u root -- \
  bash -c "apt update && apt --yes dist-upgrade && apt clean"

If one day you stop using it, just delete the chroot directory, and the schroot configuration file:

sudo rm /etc/schroot/chroot.d/buster-amd64-pelican.conf
sudo rm -fr /srv/chroot/buster-amd64-pelican

And that's about it.

Last words

The general idea of keeping your build environments separated from your host environment is a very important one if you're a software developer, and especially if you're doing consulting and working on several projects at the same time. Installing all the build tools and dependencies directly on your system can work at the beginning, but it won't get you very far.

schroot is only one of the many tools that exist to address this, and I think it's Debian specific. Maybe you have never heard of it, as chroots in general are far from the container hype, even though they have some common use-cases. schroot has been around for a while, it works great, it's simple, flexible, what else? Just give it a try!

It's also well integrated with other Debian tools, for example you might use it through sbuild to build Debian packages (another daily task that is better done in a dedicated build environment), so I think it's a tool worth knowing if you're doing some Debian work.

That's about it, in the end it was mostly a post about schroot, hope you liked it.

16 March, 2019 12:00AM by Arnaud Rebillout

March 15, 2019

Romain Perier

Hello planet !

Introducing myself

My name is Romain, and I have recently been granted the status of Debian Maintainer. I have been part of the debian-kernel team (still a padawan) for a few months, and, as a DM, I will co-maintain the package raspi3-firmware with Gunnar Wolf.

Current contributions

As a kernel and Linux engineer, I focus on embedded work and kernel development. This is a summary of what I have done in the previous months.

Kernel team

As a contributor, I work on various things; I try to work where help is most needed. I wrote a Python script for generating the Debian changelog in firmware-nonfree, and I have bumped the package for new releases. I bump the Linux kernel for new upstream releases, I help to close and resolve bugs, I backport new features when it makes sense to do so, I enable new hardware, and recently I have added a new flavour for the RPI 1 and RPI Zero in armel! (spoiler)


I have recently added a new mode in the configuration file of the package that lets you decide what you would like to boot from the firmware. You can either boot a Linux kernel directly, passing the address of the initramfs to use, a baremetal application, or a second level bootloader like u-boot or barebox (personally I prefer u-boot). From u-boot you can then use extlinux and get a nice menu generated by u-boot-menu. I have also added support for using the devicetree blob of the RPI 1 and the RPI Zero W when the firmware boots the kernel directly. I am also participating in reducing lintian warnings, new upstream releases and improvements in general.


I have recently sent an MR enabling support for the RPI Zero W in u-boot for armel and it was accepted (thanks to Vagrant). As I use U-Boot every day on my boards, I will probably send other MRs ;)

Raspberry Pi Zero

As described above, I have added a flavour enabling support for the RPI 1 and RPI Zero in armel for Linux 4.19.x. Like the Raspberry PI 3, there are no official images for this, but you can use debos or vmdb2 for building a buster image for your PI Zero. I have personally tried it at home. I was able to run an LXDE session with llvmpipe (I am still investigating whether vc4 in gallium works for this SoC or not; while it's working perfectly fine for the PI3, it falls back to llvmpipe on the PI Zero).

Raspberry Pi 3

As posted on planet recently by Gunnar, you can find an unofficial image for the PI 3 if you want to try it. On buster you will be able to run a 4.19.x LTS kernel with excellent DRM/KMS support and Gallium support in mesa. I was able to run an LXDE session with VC4 gallium here!

Future work

I will try my best to get excellent support for all Raspberry PIs in Debian (with unofficial images at the beginning), including kernel support, kernel bug fixes and improvements, debos and/or vmdb2 recipes for generating buster images easily, and even graphical stack hacks :) . I will continue my work in the kernel team, because there are tons of things to do, and of course, as co-maintainer, maintain raspi3-firmware (which will probably be renamed to something more generic, *spoiler*).

15 March, 2019 07:50PM by Romain Perier

John Goerzen

The Rightward, Establishment Bias of Lazy Journalism

Note: I also posted this post on medium.

I remember clearly the moment I’d had enough of NPR for the day. It was early morning January 25 of this year, still pretty dark outside. An NPR anchor was interviewing an NPR reporter — they seem to do that a lot these days — and asked the following simple but important question:

“So if we know that Roger Stone was in communications with WikiLeaks and we know U.S. intelligence agencies have said WikiLeaks was operating at the behest of Russia, does that mean that Roger Stone has been now connected directly to Russia’s efforts to interfere in the U.S. election?”

The factual answer, based on both data and logic, would have been “yes”. NPR, in fact, had spent much airtime covering this; for instance, a June 2018 story goes into detail about Stone’s interactions with WikiLeaks, and less than a week before Stone’s arrest, NPR referred to “internal emails stolen by Russian hackers and posted to Wikileaks.” In November of 2018, The Atlantic wrote, “Russia used WikiLeaks as a conduit — witting or unwitting — and WikiLeaks, in turn, appears to have been in touch with Trump allies.”

Why, then, did the NPR reporter begin her answer with “well,” proceed to hedge, repeat denials from Stone and WikiLeaks, and then wind up saying “authorities seem to have some evidence” without directly answering the question? And what does this mean for bias in the media?

Let us begin with a simple principle: facts do not have a political bias. Telling me that “the sky is blue” no more reflects a Democratic bias than saying “3+3=6” reflects a Republican bias. In an ideal world, politics would shape themselves around facts; ideas most in agreement with the data would win. There are not two equally-legitimate sides to questions of fact. There is no credible argument against “the earth is round”, “climate change is real,” or “Donald Trump is an unindicted co-conspirator in crimes for which jail sentences have been given.” These are factual, not political, statements. If you feel, as I do, a bit of a quickening pulse and escalating tension as you read through these examples, then you too have felt the forces that wish you to be uncomfortable with unarguable reality.

That we perceive some factual questions as political is a sign of a deep dysfunction in our society. It’s a sign that our policies are not always guided by fact, but that a sustained effort exists to cause our facts to be guided by policy.

Facts do not have a political bias. There are not two equally-legitimate sides to questions of fact. “Climate change is real” is a factual, not a political, statement. Our policies are not always guided by fact; a sustained effort exists to cause our facts to be guided by policy.

Why did I say right-wing bias, then? Because at this particular moment in time, it is the political right that is more engaged in the effort to shape facts to policy. Whether it is denying the obvious lies of the President, the clear consensus on climate change, or the contours of various investigations, it is clear that they desire to confuse and mislead in order to shape facts to their whim.

It’s not always so consequential when the media gets it wrong. When CNN breathlessly highlights its developing story — that an airplane “will struggle to maintain altitude once the fuel tanks are empty” —it gives us room to critique the utility of 24/7 media, but not necessarily a political angle.

But ask yourself: who benefits when the media is afraid to report a simple fact about an investigation with political connotations? The obvious answer, in the NPR example I gave, is that Republicans benefit. They want the President to appear innocent, so every hedge on known facts about illegal activities of those in Trump’s orbit is a gift to the right. Every time a reporter gives equal time to climate change deniers is a gift to the right and a blow to informed discussion in a democracy.

Not only is there a rightward bias, but there is also an establishment bias that goes hand-in-hand. Consider this CNN report about Facebook’s “pivot to privacy”, in which CEO Zuckerberg is credited with “changing his tune somewhat”. To the extent to which that article highlights “problems” with this, they take Zuckerberg at face-value and start to wonder if it will be harder to clamp down on fake news in the news feed if there’s more privacy. That is a total misunderstanding of what was being proposed; a more careful reading of the situation was done by numerous outlets, resulting in headlines such as this one in The Intercept: “Mark Zuckerberg Is Trying to Play You — Again.” They correctly point out the only change actually mentioned pertained only to instant messages, not to the news feed that CNN was talking about, and even that had a vague promise to happen “over the next few years.” Who benefited from CNN’s failure to read a press release closely? The established powers — Facebook.

Pay attention to the media and you’ll notice that journalists trip all over themselves to report a new dot in a story, but they run away scared from being the first to connect the dots. Much has been written about the “media narrative,” often critical, with good reason. Back in November of 2018, an excellent article on “The Unbearable Rightness of Seth Abramson” covered one particular case in delightful detail.

Journalists trip all over themselves to report a new dot in a story, but they run away scared from being the first to connect the dots.

Seth Abramson himself wrote, “Trump-Russia is too complex to report. We need a new kind of journalism.” He argues the culprit is not laziness, but rather that “archive of prior relevant reporting that any reporter could review before they publish their own research is now so large and far-flung that more and more articles are frustratingly incomplete or even accidentally erroneous than was the case when there were fewer media outlets, a smaller and more readily navigable archive of past reporting for reporters to sift through, and a less internationalized media landscape.” Whether laziness or not, the effect is the same: a failure to properly contextualize facts leading to misrepresented or outright wrong outcomes that, at present, have a distinct bias towards right-wing and establishment interests.

Yes, the many scandals in Trumpland are extraordinarily complex, and in this age of shrinking newsroom budgets, it’s no wonder that reporters have trouble keeping up. Highly-paid executives like Zuckerberg and politicians in Congress have years of practice with obfuscation, and it takes skill to find the truth (if there even is any) behind a corporate press release or political talking point. One would hope, though, that reporters would be less quick to opine if they lack those skills or the necessary time to dig in.

There’s not just laziness; there’s also, no doubt, a confusion about what it means to be a balanced journalist. It is clear that there are two sides to a debate over, say, whether to give a state’s lottery money to the elementary schools or the universities. When there is the appearance of a political debate over facts, shouldn’t that also receive equal time for each side? I argue no. In fact, politicians making claims that contradict established fact should be exposed by journalists, not covered by them.

And some of it is, no doubt, fear. Fear that if they come out and say “yes, this implicates Stone with Russian hacking” that the Fox News crowd will attack them as biased. Of course this will happen, but that attack will be wrong. The right has done an excellent job of convincing both reporters and the public that there’s a big left-leaning bias that needs to be corrected, by yelling about it every time a fact is mentioned that they don’t like. The unfortunate result is that the fact-leaning bias in the media is being whittled away.

Politicians making claims that contradict established fact should be exposed by journalists, not covered by them. The fact-leaning bias in the media is being whittled away.

Regardless of the cause, media organizations and their reporters need to be cognizant of the biases actors of all stripes wish them to display, and refuse to play along. They need to be cognizant of the demands they put on their own reporters, and give them space to understand the context of a story before explaining it. They need to stand up to those that try to diminish facts, to those that would like them to be uninformed.

A world in which reporters know the context of their stories and boldly state facts as facts, come what may, is a world in which reporters strengthen the earth’s democracies. And, by extension, its people.

15 March, 2019 03:21PM by John Goerzen

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Linux Desktop Usage 2019

If I look back now, it must be more than 20 years since I got fascinated with the GNU/Linux ecosystem and started using it.

Back then, it was more the curiosity of a young teenager and the excitement to learn something. There’s one thing that I have always admired/respected about Free Software’s values: access for everyone to learn. This is something I never forget and still try to do my bit towards.

It was perfect timing and I was lucky to be part of it. Free Software was (and still is) a great platform to learn upon, if you have the willingness and desire for it.

Over the years, a lot, lot, lot has changed, evolved and improved. Going from the days of writing the XF86Config configuration file by hand to get the X server running, to a new world where now almost everything is dynamic, is a great milestone that we have achieved.

All through these years, I always used GNU/Linux platform as my primary computing platform. The CLI, Shell and Tools, have all been a great source of learning. Most of the stuff was (and to an extent, still is) standardized and focus was usually on a single project.

There was less competition on that front; rather, there was more collaboration. For example, standard tools like sed, awk, grep etc. were single tools; you didn’t have 2 variants of each. So enhancements to these tools were timely and consistent, and learning these tools was an incremental task.

On the other hand, the Desktop side of things started out, and stood for a very long time, doing things its own way. But eventually, quite a lot of those things have standardized, thankfully.

For the larger part of my desktop usage, I have mostly been a KDE user. I have used other environments like IceWM and Enlightenment briefly, but always felt the need to fall back to KDE, as it provided a full and uniform solution. For quite some time, I was more of a user preferring to only use the K* tools, as in if it wasn’t written with kdelibs, I’d try to avoid it. But in the last 5 years, I took a detour and tried to unlearn and re-learn the other major desktop environment, GNOME.

GNOME is an equally beautiful and elegant desktop environment with a minimalistic user interface (which at many times ends up plaguing its applications’ feature sets too, making them “minimalistic feature set applications”). I realized that quite a lot of time and money is invested into the GNOME project, especially by the leading Linux distribution vendors.

But the fact is that GNU/Linux is still not a major player in the Desktop market. Some believe that the Desktop market itself has faded and been replaced by the Mobile market. I think Desktop computing still has a critical place in the foreseeable future and the Mobile platform is more of an extension shell to it. For example, for quickies, the Mobile platform is perfect. But for a substantial amount of work to be done, we still fall back to using our workstations. The Mobile platform is good for a quick chat or email, but if you need to write a review report or a blog post or prepare a presentation or update an Excel sheet, you’d still prefer to use your workstation.

So…. After using the GNOME platform for a couple of years, I realized that there’s a lot of work and thought put into this platform too, just like the KDE platform. BUT to really be able to dream about the “Year of the dominance of the GNU/Linux desktop platform”, all these projects need to work together and synergise their efforts.

Pain points:

  • Multiple tools, multiple efforts wasted. Could be synergised.
  • Data accessibility
  • Integration and uniformity

Multiple tools

Kmail used to be an awesome email client. Evolution today is an awesome email client. Thunderbird was an awesome email client, which, from what I last remember, Mozilla lacked the funds to continue maintaining. And then there’s the never-ending stream of new/old projects that come and go. Thankfully, email is pretty standardized in its data format. Otherwise, it would be a nightmare to switch between these clients. But still, GNU/Linux platforms have the potential to provide a strong and viable offering if they could synergise their work together. Today, a lot of resource is just wasted and nobody wins. Definitely not the GNU/Linux platform. Who wins are: GMail, Hotmail etc.

If you even look at the browser side of things, Google realized the potential of the Web platform for its business. So they do have a Web client for GNU/Linux. But you’ll never see an equivalent for Email/PIM. Not because it is obsolete. But more because it would hurt their business instead.

Data accessibility

My biggest gripe is data accessibility. Thankfully, for most of the stuff that we rely upon (email, documents etc), things are standardized. But there still are annoyances. For example, when the KDE 4.x debacle occurred, kwallet could not export its password database to the newer one. When I moved to GNOME, I had another very very very hard time extracting passwords from kwallet and feeding them to SeaHorse. Then, when I recently switched back to KDE, I had to similarly struggle exporting my data back out of SeaHorse (no, not back to KWallet). Over the years, I realized that critical data should be kept in its simplest format. And let the front-ends do all the bling they want to. I realized this more with email. Maildir is a good clean format to store my email in, irrespective of how I access my email. Whether it is dovecot, Evolution, Akonadi, Kmail etc, I still have my bare data intact.

I had burnt myself on the password front quite a bit, so on this migration back to KDE, I wanted an email-like solution. So there’s pass, a password store, which fits the bill just like the email use case. It would make a lot more sense for all desktop password managers to instead just be a frontend interface to pass and let it keep the crucial data in a bare minimal format, accessible at all times, irrespective of the overhauling that the desktop projects tend to do every couple of years or so.

Data is critical. Without retaining its compatibility (both backward and forward), there is no battle you can win.

I honestly feel the Linux Desktop Architects from the different projects should sit together and agree on a set of interfaces/tools (yes yes there is fd.o) and stick to it. Too much time and energy is wasted otherwise.

Integration and Uniformity

This is something I have always desired, and I was quite impressed (and delighted) to see some progress on the KDE desktop in the UI department. On GNOME, I developed a liking for the Evolution email client. In fact, it is currently my client for email, NNTP and other PIM. And I still get to use it nicely in a KDE environment. Thank you. Evolution KDE/GNOME Integration

15 March, 2019 02:35PM by Ritesh Raj Sarraf

Enrico Zini

gitpython: list all files in a git commit

A little gitpython recipe to list the paths of all files in a commit:


import git
from pathlib import Path
import sys

def list_paths(root_tree, path=Path(".")):
    for blob in root_tree.blobs:
        yield path / blob.name
    for tree in root_tree.trees:
        yield from list_paths(tree, path / tree.name)

repo = git.Repo(".", search_parent_directories=True)
commit = repo.commit(sys.argv[1])
for path in list_paths(commit.tree):
    print(path)

It can be a good base, for example, for writing a script that, given two git branches, shows which django migrations are in one and not in the other, without doing any git checkout of the code.
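
For instance, here is a minimal sketch of that idea, reusing repo and the list_paths() function from the snippet above. The branch names "master" and "feature" and the migrations/ directory filter are only placeholders for illustration:

def migration_paths(repo, branch):
    # all paths under any migrations/ directory in the given branch
    return {p for p in list_paths(repo.commit(branch).tree) if "migrations" in p.parts}

# migrations present in "master" but missing from "feature"
for path in sorted(migration_paths(repo, "master") - migration_paths(repo, "feature")):
    print(path)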

15 March, 2019 10:41AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#20: Dependencies. Now with badges!

Welcome to post number twenty in the randomly redundant R rant series of posts, or R4 for short. It has been a little quiet since the previous post last June as we’ve been busy with other things but a few posts (or ideas at least) are queued.

Dependencies. We wrote about this a good year ago in post #17 which was (in part) tickled by the experience of installing one package … and getting a boatload of others pulled in. The topic and question of dependencies has seen a few posts over the year, and I won’t be able to do them all justice. Josh and I have added a few links to the page. The (currently) last one by Russ Cox titled Our Software Dependency Problem is particularly trenchant.

And just this week the topic came up in two different, and unrelated, posts. First, in What I don’t like in your repo, Oleg Kovalov lists a brief but decent number of items by which a repository can be evaluated. And one is about [b]loated dependencies where he nails it with a quick When I see dozens of deps in the lock file, the first question which comes to my mind is: so, am I ready to fix any failures inside any of them? This is pretty close to what we have been saying around the tinyverse.

Second, in Beware the data science pin factory, Eric Colson brings an equation. Quoting from footnote 2: […] the number of relationships (r) grows as a function number of members (n) per this equation: r = (n^2-n) / 2. Granted, this was about human coordination and ideal team size. But let’s just run with it: For n=10, we get r=45 which is not so bad. For n=20, it is r=190. And for n=30 we are at r=435. You get the idea. “Big-Oh-N-squared”.
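
As a quick sanity check on those numbers, here is the formula plugged into a Python one-liner:

>>> [(n, (n**2 - n) // 2) for n in (10, 20, 30)]
[(10, 45), (20, 190), (30, 435)]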

More dependencies means more edges between more nodes. Which eventually means more breakage.

Which gets us to announcement embedded in this post. A few months ago, in what still seems like a genuinely extra-clever weekend hack in an initial 100 or so lines, Edwin de Jonge put together a remarkable repo on GitLab. It combines Docker / Rocker via hourly cron jobs with deployment at netlify … giving us badges which visualize the direct as well as recursive dependencies of a package. All in about 100 lines, fully automated, autonomously running and deployed via CDN. Amazing work, for which we really need to praise him! So a big thanks to Edwin.

With these CRAN Dependency Badges being available, I have been adding them to my repos at GitHub over the last few months. As two quick examples you can see

  • Rcpp
  • RcppArmadillo

to get the idea. RcppArmadillo (or RcppEigen or many other packages) will always have one: Rcpp. But many widely-used packages such as data.table also get by with a count of zero. It is worth showing this – and the badge does just that! And I even sent a PR to the badger package: if you’re into this, you can have a badge made for your package via badger::badge_dependencies(pkgname).

Otherwise, more details at Edwin’s repo and of course his actual site hosting the badges. It’s easy as all other badges: reference the CRAN package, get a badge.

So if you buy into the idea that lightweight is the right weight then join us and show it via the dependency badges!

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 March, 2019 03:41AM

Hideki Yamane

pbuilder hack with new debootstrap option

Suddenly I noticed that maybe I can use the --cache-dir option that I added to debootstrap some time ago for pbuilder, too. Then I hacked it.

> original
real    3m34.811s
user    1m6.676s
sys     0m33.051s

> use aptcache for debootstrap
real    2m52.397s
user    0m59.660s
sys     0m28.631s

It cuts 40s off creating base.tgz. Nice, isn't it? :) I hope the pbuilder team will accept this Merge Request and push it to buster, since it's worthwhile for the stable release, IMHO.

15 March, 2019 02:17AM by Hideki Yamane

March 14, 2019

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, February 2019

I was assigned 19.5 hours of work by Freexian's Debian LTS initiative and carried over 1 hour from January. I worked only 4 hours and so will carry over 16.5 hours.

I backported various security fixes to Linux 3.16, but did not upload a new release yet.

14 March, 2019 11:46PM

Craig Small

WordPress 5.1.1

The Debian packages for WordPress version 5.1.1 are being updated as I write this. This is a security fix for WordPress that stops comments causing a cross-site scripting bug. It’s an important one to update.

The backports should happen soon so even if you are using Debian stable you’ll be covered.

14 March, 2019 11:09AM by Craig

March 13, 2019

hackergotchi for Shirish Agarwal

Shirish Agarwal

The road to hell is paved with good intentions

First of all I would like to share a video which I should have talked about or added to the ‘Celebrating National Science Day at GMRT‘ blog post. It’s a documentary called ‘The Most Unknown‘. It’s a great documentary as it gives you a glimpse of how much there is yet to discover. The reason I shared this is that I have seen a lot of money being removed from Government research and put god knows where. Just a fair warning, this will be somewhat of a long conversation.

Almost all of the IITs are in bad shape; in fact IIT Mumbai, which I know and have often had the privilege to associate myself with, has been going through tough times. This is when institutes such as IIT Mumbai, NCRA, GMRT, FTII and all such institutions have made loads of contributions in creating awareness and have given the public the ability to question rather than just ‘believe’. For any innovation to happen, you have to question, investigate, prove and share the findings and the way you have done things, so they can be reproduced.

Even the Social Sciences, as shared in the documentary, and my brief learnings and takeaways from TISS, have been the same. The reason is even they are in somewhat dire straits. I was just having a conversation a few days back with another friend who is into higher education: IISER Pune, where the recent WordCamp happened, had to commercialize and open its doors to events in order to sustain itself. While I, and perhaps all WordCampers, would forever be grateful for sharing with us such a great place as well as a studious vibe which also influenced how the WordCamp was held, I did feel sad that we intruded into their study areas, which should be meant for IISER students only.

Before I get too carried away, I should point people to Ian Cheney’s older documentaries as well (he is the one who just did The Most Unknown); I have found his previous work compelling too. The City Dark is a beautiful masterpiece and shares lots of insights about light pollution, which India could use well to improve our lighting as well as reduce light pollution in the atmosphere.

Meeting with Bhakts and ‘Good intentions’

The reason I shared the above was also keeping in mind the conversations I have whenever I meet bhakts. The term bhakt comes from bhakti in Sanskrit, which at one time meant spirituality and purity, although in politics it now means one who chooses to believe in the leader and the party absolutely. Whenever a bhakt starts losing a logical argument, one of the arguments often meted out is that whatever you say, you cannot doubt Mr. Narendra Modi’s intentions, which is why I took the often-used proverb to prove the same point; it is the heading of this blog post. The problem with the whole ‘good intentions’ part is that it’s pretty much a strawman argument. The problem with intentions is that everybody can either state or mask their intentions. Even ISIS says that they want to bring back the golden phase of Islam. We have seen their actions; should we believe what they say? Or even Hitler, who said ‘One people, one empire, one leader’ and claimed that the Aryans were superior to the Jewish people, while history has gone on to show the exact opposite. Israel today is the eighth-biggest arms supplier in the world, our military is the second-biggest buyer of arms from them, and it is far more prosperous than us and many other countries. Their work on drip irrigation and water retention, agricultural techniques – there is much we could learn from them. The same goes for manning borders and such. While I could give many such examples, the easiest example to share in the context of good intentions gone wrong is demonetisation in India, which deserves its own paragraph.


Demonetisation was heralded by Mr. Modi with great fanfare. It was supposed to take out black money. We learned later that black money didn’t get wiped out but has become even more in your face. The idea had been debunked by the earlier R.B.I. Governor Raghuram Rajan even before Mr. Narendra Modi announced demonetisation, and it has now been debunked by the R.B.I. itself. Sharing below an excerpt from the Freakonomics Radio show which has Mr. Rajan’s interview. It makes for interesting reading or listening, as the case may be.

DUBNER: Now, shortly after your departure as governor of the R.B.I., Prime Minister Modi executed a sudden, controversial plan to abolish 500- and 1,000-rupee banknotes, hoping to crack down on the shadow economy and tax evasion. I understand you had not been in favor of that idea, correct?

RAJAN: Absolutely. It didn’t make sense. I was asked for my opinion, and I said, “Look, it is taking away money that people use in transactions. It’s going to create enormous disruption unless we replace it overnight with freshly printed money.” And it’s very important that we have all that in place, difficult to maintain secrecy, and then the fundamental sort of objective of this, which was to get people to bring out the money that they hoarded in their basements and pay taxes on them — I said, “That’s probably not going to work out, because they’re going to find ways to infuse the money back into the system without paying those taxes.”

DUBNER: It’s been roughly two years now. What have been the effects of this demonetization?

RAJAN: Well, I think more than the numbers suggest, because India was growing at that time. And we had numbers which were in the 7.5 percent growth range at that point, at a time, in 2016, when the world was actually growing quite slowly. When growth picked up in 2017, instead of going along with the world, which we typically do and we exceed world growth significantly, we went down. That suggests it had a tremendous effect on growth, but that, the numbers don’t capture it all, because what actually got killed was the informal sector — the people who were doing work with notes rather than with checks, who didn’t have formal bank accounts. And when you look at the job numbers that some private-sector people estimate 10, 12 million jobs were lost in that episode. And, of course, we haven’t recovered them yet. It was one of those places where more economic thinking would have helped.

DUBNER: Was it a coincidence that Prime Minister Modi went ahead with the plan only after you’d left?

RAJAN: Well, I can’t speak on that. I can only say that I made my objections very, very clear.

Freakonomics Podcast Stephen J. Dubner interviewing Mr. Raghuram Rajan, RBI Governor 5th September 2013 – 5th September 2017. Aired on 6th February 2019 .

I would urge people to listen to Freakonomics Radio as there are lots of pearls of wisdom in there. There is also the ‘Good ideas are not enough’ episode, which is very relevant to the topic at hand, but I won’t digress further about Freakonomics Radio for now.

The interesting questions to ask, from the details known from the R.B.I., are –

a. Why did Mr. Narendra Modi feel the need to have the permission of R.B.I. after 38 days ?

b. If Mr. Modi were confident of the end result, then shouldn’t he, instead of asking permission, have had the PMO take all the responsibility?

In any case, as was seen from the R.B.I. counting, only 0.3% of the money did not come back, even though many people’s valid claims were thrown out, and the expense of the whole exercise was much more than the shortfall. The R.B.I. didn’t get INR 10k crore back, while it spent INR 13k crore on the new currency. Does anybody see any saving here?

The bhakts’ counter-argument is that the bankers were bad; if everybody had done their work, then it would have all worked out as Mr. Modi wanted. The statement itself implies that they didn’t know the reality. Even if we take the statement at face value that all the bankers were cheaters (which I don’t agree with at all), didn’t they know it when they were in the opposition? Where was the party’s economic intelligence; didn’t it tell them anything in all those years they were in the opposition? This is what the Opposition should be doing: knowing about the state of the economy and its workings, to say the least.

There is also this

These are minutes obtained by Venkatesh Nayak under the RTI tool.

To rub salt into the wounds, the IPP is now at a low of 1.7 percent as well 😦 . As always, I’m sure the BJP will say these are not the final numbers.

I am curious to know what RSS people would think or make of this video –

The other terminology people use when they are unable to win an argument is: ‘I don’t know in detail, why don’t you come to the shakha and meet our pracharak.’ Pracharak, while a Hindi word that used to mean a wise man disseminating knowledge, in RSS-speak is a political spinner. In style and mannerisms they are very close to born-again Jesuits or missionaries, the ones who are forceful and don’t want to have a meaningful conversation. The most interesting video on this topic can be seen on twitter

Important Legislations which need Public Scrutiny

The other interesting development, or rather regression, which I and probably many tech-savvy Indians would have noted, is the total lack of comments (apart from Mozilla’s) on any of the new regulations about the Internet, whether it is the News and Draft Intermediary rules, the Aadhar Amendment Bill, 2019 which was passed recently, the Draft E-commerce Bill, or the Data Protection Bill, all of which are important pieces of legislation which need and needed careful study. While the Government of India isn’t going to do anything apart from asking for comments, people should have come forward and made better systems. One of the things that any social group could do is set up either a Stet or a co-ment instance so it could capture people’s comments and also mail them from there to MeitY.

The BJP Site hack

Last but not least was the BJP site hack, which is now in its ninth day, with the site still under maintenance. It was a hack: the defacement meme went viral on the web. Elliot, in his inimitable style, also shared how they should have backed up their site. In an unrelated event I was attending a devops meetup where how web apps and websites should be done was shared. It’s not rocket science; if anyone had looked at ‘high availability’ they would have found loads of web pages and links which tell you how to be secure and still serve content. Apparently, the attackers did not just take BJP site data but probably also donor data (people who have donated wealth to the BJP). There is a high possibility, if it’s a real hack, that crackers or foreign agents could hold the BJP to ransom and have dominion over India if the BJP comes back to power. While I do hope such a scenario doesn’t play out, you never know. I would probably share about the devops event some other time, as there was much happening there and it deserves its own blog post.

On the other hand, it could be a ploy to tell the EC (Election Commission) ‘we don’t have the data, our website got hacked’ when the EC asks where they got the funding for the elections.

Either way, it doesn’t seem a good time for either the BJP or India as a whole if its government has such weak I.T. 😦

Frankly speaking, when I first heard it, I was thinking that hopefully they wouldn’t have put their donor details on the same server and that they probably would be taking backups. I chided myself for thinking such stupid thoughts. A guy like me could be screwy with his backups due to time and budget constraints, but a political party like the BJP, which has unbridled wealth, wouldn’t make such rookie mistakes and would have the best technical talent available. Many BJP well-wishers were thinking they would be able to punish the culprit and I had to tell them that he could use any number of tools to hide his real identity. You could use a VPN, Tor or even plain old IP spoofing. The possibilities are just endless. And this is coming from me, an infosec rookie or baby.

The other interesting part of this hack is that to date the BJP has neither acknowledged the hack nor shared what went wrong. I think we have become too comfortable with, and used to, hacks on reddit, gmail and twitter, where the developers feel that the users should know the extent of the hack, what was lost and not lost, what they are doing to recover and by when services can be expected to come back, followed later by a full disclosure report on what they could determine. Of course, there could be disinformation in such disclosures as well, but that would have been a better way to respond than how the BJP IT Cell has responded. Not something I would expect from a well-oiled structure.

Update – 14/03/19 – It seems fb, instagram and whatsapp are down. Also see how fb responded on twitter. Indian Express also ran an article on the recent BJP hack. We will hopefully know what happens in the next few days.

13 March, 2019 11:41PM by shirishag75

hackergotchi for Jo Shields

Jo Shields

Too many cores

Arming yourself

ARM is important for us. It’s important for IOT scenarios, and it provides a reasonable proxy for phone platforms when it comes to developing runtime features.

We have big beefy ARM systems on-site at Microsoft labs, for building and testing Mono – previously 16 Softiron Overdrive 3000 systems with 8-core AMD Opteron A1170 CPUs, and our newest system in provisional production, 4 Huawei Taishan XR320 blades with 2×32-core HiSilicon Hi1616 CPUs.

The HiSilicon chips are, in our testing, a fair bit faster per-core than the AMD chips – a good 25-50%. Which raised the question: “why are our Raspbian builds so much slower?”

Blowing a raspberry

Raspbian is the de-facto main OS for the Raspberry Pi. It’s basically Debian hard-float ARM, rebuilt with compiler flags better suited to the ARM1176JZF-S (more precisely, the ARMv6 architecture, whereas Debian targets ARMv7). The Raspberry Pi is hugely popular, and it is important for us to be able to offer packages optimized for use on it.

But the Pi hardware is also slow and horrible to use for continuous integration (especially the SD-card storage, which can be burned through very quickly, causing maintenance headaches), so we do our Raspbian builds on our big beefy ARM64 rack-mount servers, in chroots. You can easily do this yourself – just grab the raspbian-archive-keyring package from the Raspbian archive, and pass the Raspbian mirror to debootstrap/pbuilder/cowbuilder instead of the Debian mirror.
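
A rough sketch of that setup (the mirror URL and keyring path are assumptions based on the current Raspbian archive layout, so verify them before use):

# on the arm64 build host, with raspbian-archive-keyring already installed
sudo pbuilder create --architecture armhf --distribution stretch \
    --mirror http://raspbian.raspberrypi.org/raspbian/ \
    --debootstrapopts --keyring=/usr/share/keyrings/raspbian-archive-keyring.gpg \
    --basetgz /var/cache/pbuilder/raspbian-stretch.tgz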

These builds have always been much slower than all our Debian/Ubuntu ARM builds (v5 soft float, v7 hard float, aarch64), but on the new Huawei machines, the difference became much more stark – the same commit, on the same server, took 1h17 to build .debs for Ubuntu 16.04 armhf, and 9h24 for Raspbian 9. On the old Softiron hardware, Raspbian builds would rarely exceed 6h (which is still outrageously slow, but less so). Why would the new servers be worse, but only for Raspbian? Something to do with handwavey optimizations in Raspbian? No, actually.

When is a superset not a superset

Common wisdom says ARM architecture versions add new instructions, but can still run code for older versions. This is, broadly, true. However, there are a few cases where deprecated instructions become missing instructions, and continuity demands those instructions be caught by the kernel, and emulated. Specifically, three things are missing in ARMv8 hardware – SWP (swap data between registers and memory), SETEND (set the endianness bit in the CPSR), and CP15 memory barriers (a feature of a long-gone control co-processor). You can turn these features on via abi.cp15_barrier, abi.setend, and abi.swp sysctl flags, whereupon the kernel fakes those instructions as required (rather than throwing SIGILL).
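
For reference, a sketch of how those flags are set (a value of 1 selects emulation; see the kernel's arm64 legacy instruction documentation for the exact semantics on your kernel version):

# emulate the removed legacy instructions instead of delivering SIGILL
sysctl abi.swp=1
sysctl abi.setend=1
sysctl abi.cp15_barrier=1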

CP15 memory barrier emulation is slow. My friend Vince Sanders, who helped with some of this analysis, suggested a cost of order 1000 cycles per emulated call. How many was I looking at? According to dmesg, about a million per second.

But it’s worse than that – CP15 memory barriers affect the whole system. Vince’s proposal was that the HiSilicon chips were performing so much worse than the AMD ones, because I had 64 cores not 8 – and that I could improve performance by running a VM, with only one core in it (so CP15 calls inside that environment would only affect the entire VM, not the rest of the computer).

Escape from the Pie Folk

I already had libvirtd running on all my ARM machines, from a previous fit of “hey one day this might be useful” – and as it happened, it was. I had to grab a qemu-efi-aarch64 package, containing a firmware, but otherwise I was easily able to connect to the system via virt-manager on my desktop, and get to work setting up a VM. virt-manager has vastly improved its support for non-x86 since I last used it (once upon a time it just wouldn’t boot systems without a graphics card), but I was easily able to boot an Ubuntu 18.04 arm64 install CD and interact with it over serial just as easily as via emulated GPU.

Because I’m an idiot, I then wasted my time making a Raspbian stock image bootable in this environment (Debian kernel, grub-efi-arm64, battling file-size constraints with the tiny /boot, etc) – stuff I would not repeat. Since in the end I just wanted to be as near to our “real” environment as possible, meaning using pbuilder, this simply wasn’t a needed step. The VM’s host OS didn’t need to be Raspbian.

Point is, though, I got my 1-core VM going, and fed a Mono source package to it.

Time taken? 3h40 – whereas the same commit on the 64-core host took over 9 hours. The “use a single core” hypothesis more than proven.

Next steps

The gains here are obvious enough that I need to look at deploying the solution non-experimentally as soon as possible. The best approach to doing so is the bit I haven’t worked out yet. Raspbian workloads are probably at the pivot point between “I should find some amazing way to automate this” and “automation is a waste of time, it’s quicker to set it up by hand”

Many thanks to the #debian-uk community for their curiosity and suggestions with this experiment!

13 March, 2019 02:48PM by directhex

Reproducible builds folks

Reproducible Builds: Weekly report #202

Here’s what happened in the Reproducible Builds effort between Sunday March 3 and Saturday March 9 2019:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week:

Chris Lamb uploaded version 113 to Debian unstable fixing a long list of issues. It included contributions already covered in previous weeks as well as new ones by Chris, including:

  • Provide explicit help when the libarchive system package is missing or “incomplete”. (#50)
  • Explicitly mention when the guestfs module is missing at runtime and we are falling back to a binary diff. (#45)

Vagrant Cascadian made the corresponding update to GNU Guix. []

Packages reviewed and fixed, and bugs filed

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers our continuous reproducibility testing. This week, Holger Levsen made the following improvements:

  • Analyse node maintenance job runs to determine whether to mark nodes offline. []
  • Detect hanging health check runs, not just failed ones. []
  • Allow members of the jenkins UNIX group to sudo(8) to the jenkins user [] and simplify adding users to said group [].
  • Improve the “SHA1 checker” script to deal with packages with more than one version [] and to re-download the files if they are older than two weeks. []
  • Node maintenance. [][][][]
  • In the version checker, correctly deal with a rare situation when several, say, diffoscope versions are available in one Debian suite at the same time. []

In addition, Alexander “lynxis” Couzens, made a number of changes to our OpenWrt support, including:

  • Add OpenWrt support to our database. []
  • Adding a script. []
  • Strip unreproducible certificates from images. []


Don’t forget that Reproducible Builds is part of the May/August 2019 round of Outreachy. Outreachy provides internships to work on free software. Internships are open to applicants around the world; interns work remotely and are not required to relocate. Interns are paid a stipend of $5,500 for the three-month internship and have an additional $500 travel stipend to attend conferences/events.

So far, we have received more than ten initial requests from candidates. The closing date for applications is April 2nd. More information is available on the application page.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

13 March, 2019 02:24PM

March 12, 2019

hackergotchi for Kees Cook

Kees Cook

security things in Linux v5.0

Previously: v4.20.

Linux kernel v5.0 was released last week! Looking through the changes, here are some security-related things I found interesting:

read-only linear mapping, arm64
While x86 has had a read-only linear mapping (or “Low Kernel Mapping” as shown in /sys/kernel/debug/page_tables/kernel under CONFIG_X86_PTDUMP=y) for a while, Ard Biesheuvel has added them to arm64 now. This means that ranges in the linear mapping that contain executable code (e.g. modules, JIT, etc), are not directly writable any more by attackers. On arm64, this is visible as “Linear mapping” in /sys/kernel/debug/kernel_page_tables under CONFIG_ARM64_PTDUMP=y, where you can now see the page-level granularity:

---[ Linear mapping ]---
0xffffb07cfc402000-0xffffb07cfc403000    4K PTE   ro NX SHD AF NG    UXN MEM/NORMAL
0xffffb07cfc403000-0xffffb07cfc4d0000  820K PTE   RW NX SHD AF NG    UXN MEM/NORMAL
0xffffb07cfc4d0000-0xffffb07cfc4d1000    4K PTE   ro NX SHD AF NG    UXN MEM/NORMAL
0xffffb07cfc4d1000-0xffffb07cfc79d000 2864K PTE   RW NX SHD AF NG    UXN MEM/NORMAL

per-task stack canary, arm
ARM has supported stack buffer overflow protection for a long time (currently via the compiler’s -fstack-protector-strong option). However, on ARM, the compiler uses a global variable for comparing the canary value, __stack_chk_guard. This meant that everywhere in the kernel needed to use the same canary value. If an attacker could expose a canary value in one task, it could be spoofed during a buffer overflow in another task. On x86, the canary is in Thread Local Storage (TLS, defined as %gs:20 on 32-bit and %gs:40 on 64-bit), which means it’s possible to have a different canary for every task since the %gs segment points to per-task structures. To solve this for ARM, Ard Biesheuvel built a GCC plugin to replace the global canary checking code with a per-task relative reference to a new canary in struct thread_info. As he describes in his blog post, the plugin results in replacing:

8010fad8:       e30c4488        movw    r4, #50312      ; 0xc488
8010fadc:       e34840d0        movt    r4, #32976      ; 0x80d0
8010fb1c:       e51b2030        ldr     r2, [fp, #-48]  ; 0xffffffd0
8010fb20:       e5943000        ldr     r3, [r4]
8010fb24:       e1520003        cmp     r2, r3
8010fb28:       1a000020        bne     8010fbb0
8010fbb0:       eb006738        bl      80129898 <__stack_chk_fail>


8010fc18:       e1a0300d        mov     r3, sp
8010fc1c:       e3c34d7f        bic     r4, r3, #8128   ; 0x1fc0
8010fc60:       e51b2030        ldr     r2, [fp, #-48]  ; 0xffffffd0
8010fc64:       e5943018        ldr     r3, [r4, #24]
8010fc68:       e1520003        cmp     r2, r3
8010fc6c:       1a000020        bne     8010fcf4
8010fcf4:       eb006757        bl      80129a58 <__stack_chk_fail>

r2 holds the canary saved on the stack and r3 the known-good canary to check against. In the former, r3 is loaded through r4 at a fixed address (0x80d0c488, which “readelf -s vmlinux” confirms is the global __stack_chk_guard). In the latter, it’s coming from offset 0x24 in struct thread_info (which “pahole -C thread_info vmlinux” confirms is the “stack_canary” field).

per-task stack canary, arm64
The lack of per-task canary existed on arm64 too. Ard Biesheuvel solved this differently by coordinating with GCC developer Ramana Radhakrishnan to add support for a register-based offset option (specifically “-mstack-protector-guard=sysreg -mstack-protector-guard-reg=sp_el0 -mstack-protector-guard-offset=...“). With this feature, the canary can be found relative to sp_el0, since that register holds the pointer to the struct task_struct, which contains the canary. I’m hoping there will be a workable Clang solution soon too (for this and 32-bit ARM). (And it’s also worth noting that, unfortunately, this support isn’t yet in a released version of GCC. It’s expected for 9.0, likely this coming May.)

top-byte-ignore, arm64
Andrey Konovalov has been laying the groundwork with his Top Byte Ignore (TBI) series which will also help support ARMv8.3’s Pointer Authentication (PAC) and ARMv8.5’s Memory Tagging (MTE). While TBI technically conflicts with PAC, both rely on using “non-VA-space” (Virtual Address) bits in memory addresses, and getting the kernel ready to deal with ignoring non-VA bits. PAC stores signatures for checking things like return addresses on the stack or stored function pointers on heap, both to stop overwrites of control flow information. MTE stores a “tag” (or, depending on your dialect, a “color” or “version”) to mark separate memory allocation regions to stop use-after-free and linear overflows. For either of these to work, the CPU has to be put into some form of the TBI addressing mode (though for MTE, it’ll be a “check the tag” mode), otherwise the addresses would resolve into totally the wrong place in memory. Even without PAC and MTE, this byte can be used to store bits that can be checked by software (which is what the rest of Andrey’s series does: adding this logic to speed up KASan).

ongoing: implicit fall-through removal
An area of active work in the kernel is the removal of all implicit fall-through in switch statements. While the C language has a statement to indicate the end of a switch case (“break“), it doesn’t have a statement to indicate that execution should fall through to the next case statement (just the lack of a “break” is used to indicate it should fall through — but this is not always the case), and such “implicit fall-through” may lead to bugs. Gustavo Silva has been the driving force behind fixing these since at least v4.14, with well over 300 patches on the topic alone (and over 20 missing break statements found and fixed as a result of the work). The goal is to be able to add -Wimplicit-fallthrough to the build so that the kernel will stay entirely free of this class of bug going forward. From roughly 2300 warnings, the kernel is now down to about 200. It’s also worth noting that with Stephen Rothwell’s help, this bug has been kept out of linux-next by him sending warning emails to any tree maintainers where a new instance is introduced (for example, here’s a bug introduced on Feb 20th and fixed on Feb 21st).
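
If you want to see the current state of a tree yourself, something like this should work (a sketch; KCFLAGS simply appends extra flags to the kernel build, and the exact warning text depends on the GCC version):

# rebuild with the warning enabled and count the remaining instances
make -j$(nproc) KCFLAGS=-Wimplicit-fallthrough 2>&1 | grep -c "fall through"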

ongoing: refcount_t conversions
There also continues to be work converting reference counters from atomic_t to refcount_t so they can gain overflow protections. There have been 18 more conversions since v4.15 from Elena Reshetova, Trond Myklebust, Kirill Tkhai, Eric Biggers, and Björn Töpel. While there are more complex cases, the minimum goal is to reduce the Coccinelle warnings from scripts/coccinelle/api/atomic_as_refcounter.cocci to zero. As of v5.0, there are 131 warnings, with the bulk of the remaining areas in fs/ (49), drivers/ (41), and kernel/ (21).
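
To reproduce that count on your own tree (a sketch, assuming Coccinelle is installed; coccicheck and its MODE/COCCI variables are the kernel's standard way to run a single semantic patch):

make coccicheck MODE=report \
     COCCI=scripts/coccinelle/api/atomic_as_refcounter.cocci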

userspace PAC, arm64
Mark Rutland and Kristina Martsenko enabled kernel support for ARMv8.3 PAC in userspace. As mentioned earlier about PAC, this will give userspace the ability to block a wide variety of function pointer overwrites by “signing” function pointers before storing them to memory. The kernel manages the keys (i.e. selects random keys and sets them up), but it’s up to userspace to detect and use the new CPU instructions. The “paca” and “pacg” flags will be visible in /proc/cpuinfo for CPUs that support it.

platform keyring
Nayna Jain introduced the trusted platform keyring, which cannot be updated by userspace. This can be used to verify platform or boot-time things like firmware, initramfs, or kexec kernel signatures, etc.

Edit: added userspace PAC and platform keyring, suggested by Alexander Popov
Edit: tried to clarify TBI vs PAC vs MTE

That’s it for now; please let me know if I missed anything. The v5.1 merge window is open, so off we go! :)

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

12 March, 2019 11:04PM by kees

hackergotchi for Daniel Lange

Daniel Lange

Wiping harddisks in 2019

Wiping hard disks is part of my company's policy when returning servers. No exceptions.

Good providers will wipe what they have received back from a customer, but we don't trust that as the hosting / cloud business is under constant budget-pressure and cutting corners (wipefs) is a likely consequence.

With modern SSDs there is "security erase" (man hdparm or see the - as always well maintained - Arch wiki) which is useful if the device is encrypt-by-default. These devices basically "forget" the encryption key, but relying on that alone also means trusting the device's implementation of security, which doesn't seem warranted. Still, after wiping and trimming, a secure erase can't be a bad idea :-).
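
For reference, the ATA secure erase sequence looks roughly like this (a sketch following the Arch wiki; the device name is a placeholder and the drive must not be in the "frozen" state):

hdparm -I /dev/sdX        # check "not frozen" and the estimated erase time
hdparm --user-master u --security-set-pass Eins /dev/sdX
hdparm --user-master u --security-erase Eins /dev/sdX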

Still there are a few things to be aware of when wiping modern hard disks:

  1. Don't forget to add bs=4096 (blocksize) to dd as it will still default to 512 bytes and that makes writing even zeros less than half the maximum possible speed. SSDs may benefit from larger block sizes matched to their flash page structure. These are usually 128kB, 256kB, 512kB, 1MB, 2MB and 4MB these days.1
  2. All disks can usually be written to in parallel. screen is your friend.
  3. The write speed varies greatly by disk region, so use 2 hours per TB and wipe pass as a conservative estimate. This is better than extrapolating what you see initially in the fastest region of a spinning disk.
  4. The disks have become huge (we run 12TB disks in production now) but the write speed is still somewhere 100 MB/s ... 300 MB/s. So wiping servers on the last day before returning is not possible anymore with disks larger than 4 TB each (and three passes). Or 12 TB and one pass (where e.g. fully encrypted content allows to just do a final zero-wipe).
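
A minimal sketch of such a wipe run (device names are placeholders; one detached screen session per disk lets the passes run in parallel, and screen -r wipe-sda reattaches to check progress):

# one pass of zeros per disk, 4 KiB blocks, with progress reporting
screen -dmS wipe-sda dd if=/dev/zero of=/dev/sda bs=4096 status=progress
screen -dmS wipe-sdb dd if=/dev/zero of=/dev/sdb bs=4096 status=progress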

hard disk size    one pass    three passes
 1 TB              2 h          6 h
 2 TB              4 h         12 h
 3 TB              6 h         18 h
 4 TB              8 h         24 h (one day)
 5 TB             10 h         30 h
 6 TB             12 h         36 h
 8 TB             16 h         48 h (two days)
10 TB             20 h         60 h
12 TB             24 h         72 h (three days)
14 TB             28 h         84 h
16 TB             32 h         96 h (four days)
18 TB             36 h        108 h
20 TB             40 h        120 h (five days)

Hard disk wipe animation

  1. As Douglas pointed out correctly in the comment below, these are IT Kilobytes and Megabytes, so 2^10 Bytes and 2^20 Bytes. So Kibibytes and Mebibytes for those firmly in SI territory.

12 March, 2019 06:53PM by Daniel Lange

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (January and February 2019)

The following contributors got their Debian Developer accounts in the last two months:

  • Paulo Henrique de Lima Santana (phls)
  • Unit 193 (unit193)
  • Marcio de Souza Oliveira (marciosouza)
  • Ross Vandegrift (rvandegrift)

The following contributors were added as Debian Maintainers in the last two months:

  • Romain Perier
  • Felix Yan


12 March, 2019 12:00PM by Jean-Pierre Giraud

John Goerzen

Goodbye to a 15-year-old Debian server

It was October of 2003 that the server I’ve called “glockenspiel” was born. It was the early days of Linux-based VM hosting, using a VPS provider called memset, running under, of all things, User Mode Linux. Over the years, it has been migrated around, sometimes running on the metal and sometimes in a VM. The operating system has been upgraded in-place using standard Debian upgrades over the years, and is now happily current on stretch (albeit with a 32-bit userland). But it has never been reinstalled. When I’d migrate hosting providers, I’d use tar or rsync to stream glockenspiel across the Internet to its new home.

A lot of people reinstall an OS when a new version comes out. I’ve been doing Debian upgrades with apt for ages, and this one is a case in point. It lingers.

Root’s .profile was last modified in November 2004, and its .bashrc was last modified in December 2004. My own home directory still has a .pinerc, .gopherrc, and .arch-params file. I last edited my .vimrc in 2003 and my .emacs dates back to 2002 (having been copied over from a pre-glockenspiel FreeBSD server).

drwxr-xr-x  3 jgoerzen jgoerzen      4096 Dec  3  2003 irclogs
-rw-r--r--  1 jgoerzen jgoerzen       373 Dec  3  2003 .vimrc
-rw-r--r--  1 jgoerzen jgoerzen       651 Nov 27  2003 .reportbugrc
drwx------  3 jgoerzen jgoerzen      4096 Sep  2  2003 .arch-params
-rw-r--r--  1 jgoerzen jgoerzen      1115 Aug 23  2003 .gopherrc
drwxr-xr-x  3 jgoerzen jgoerzen      4096 Jul 18  2003 .subversion
-rw-r--r--  1 jgoerzen jgoerzen     15317 Jun 21  2003 .pinerc

Poking around /etc on glockenspiel is like a trip back in time. Various apache sites still have configuration files around, but have long since been disabled. Over the years, glockenspiel has hosted source code repositories using Subversion, arch, tla, darcs, mercurial and git. It’s hosted websites using Drupal, WordPress, Serendipity, and so forth. It’s hosted gopher sites, websites or mailing lists for various Free Software projects (such as Freeciv), and any number of local charitable organizations. Remnants of an FTP configuration still exist, from when people used web design software to build websites for those organizations on their PCs and then upload them to glockenspiel.

-rw-r--r--   1 root  root                      268 Dec 25  2005 libnet.cfg
-rw-r-----   1 root  root                     1305 Nov 11  2004 mrtg.cfg
-rw-r--r--   1 root  root                      552 Jul 31  2004 pam.conf

All this has been replaced by a set of Docker containers running my docker-debian-base software. They’re all in git, I can rebuild one of the containers in a few seconds or a few minutes by typing “make”, and there is no cruft from 2002. There are a lot of benefits to this.

And yet, there is a part of me that feels it’s all so… cold. Servers having “personalities” was always a distinctly dubious thing, but these days as we work through more and more layers of virtualization and indirection and become more distant from the hardware, we lose an appreciation for what we have and the many shoulders of giants upon which we stand.

And, so with that, the final farewell to this server that’s been running since 2003:

glockenspiel:/etc# shutdown -P now
Shared connection to closed.

12 March, 2019 09:18AM by John Goerzen

hackergotchi for Daniel Lange

Daniel Lange

Openssh taking minutes to become available, booting takes half an hour ... because your server waits for a few bytes of randomness

So, your machine now needs minutes to boot before you can ssh in where it used to be seconds before the Debian Buster update?


Linux 3.17 (2014-10-05) learnt a new syscall getrandom() that, well, gets bytes from the entropy pool. Glibc learnt about this with 2.25 (2017-02-05) and two tries and four years after the kernel, OpenSSL used that functionality from release 1.1.1 (2018-09-11). OpenSSH implemented this natively for the 7.8 release (2018-08-24) as well.

Now the getrandom() syscall will block1 if the kernel can't provide enough entropy. And that's frequently the case during boot. Especially with VMs that have no input devices or I/O jitter to source the pseudo random number generator from.

First seen in the wild January 2017

I vividly remember not seeing my Alpine Linux VMs back on the net after the Alpine 3.5 upgrade. That was basically the same issue.

Systemd. Yeah.

Systemd makes this behaviour worse, see issue #4271, #4513 and #10621.
Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times2 when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted.

#4167 is a re-opened discussion about systemd eating randomness early at boot (hashmaps in PID 0...). Some Debian folks participate in the recent discussion and it is worth reading if you want to learn about the mess that booting a Linux system has become.

While we're talking systemd ... #10676 also means systems will use RDRAND in the future despite Ted Ts'o's warning on RDRAND [ mirror and mirrored locally as 130905_Ted_Tso_on_RDRAND.pdf, 205kB as Google+ will be discontinued in April 2019].


Debian is seeing the same issue working up towards the Buster release, e.g. Bug #912087.

The typical issue is:

[    4.428797] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: data=ordered
[ 130.970863] random: crng init done

with delays up to tens of minutes on systems with very little external random sources.

This is what it should look like:

[    1.616819] random: fast init done
[    2.299314] random: crng init done

Check dmesg | grep -E "(rng|random)" to see how your systems are doing.

If this is not fully solved before the Buster release, I hope some of the below can end up in the release notes3.


You need to get entropy into the random pool earlier at boot. There are many ways to achieve this and - currently - all require action by the system administrator.

Kernel boot parameter

From kernel 4.19 (Debian Buster currently runs 4.18 [Update: but will be getting 4.19 before release according to Ben via Mika]) you can set RANDOM_TRUST_CPU at compile time or random.trust_cpu=on on the kernel command line. This will make recent Intel / AMD systems trust RDRAND and fill the entropy pool with it. See the warning from Ted Ts'o linked above.
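
On a typical Debian system with GRUB, the flag can be made persistent roughly like this (a sketch; adjust to your bootloader setup):

# append random.trust_cpu=on to the default kernel command line
sed -i 's/^GRUB_CMDLINE_LINUX="/&random.trust_cpu=on /' /etc/default/grub
update-grub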

Update: Since Linux kernel build 4.19.20-1 CONFIG_RANDOM_TRUST_CPU has been enabled by default in Debian.

Using a TPM

The Trusted Platform Module has an embedded random number generator that can be used. Of course you need to have one on your board for this to be useful. It's a hardware device.

Load the tpm-rng module (ideally from initrd) or compile it into the kernel (config HW_RANDOM_TPM). Now, the kernel does not "trust" the TPM RNG by default, so you need to add

rng_core.default_quality=1000

to the kernel command line. 1000 means "trust", 0 means "don't use". So you can choose any value in between that works for you depending on how much you consider your TPM to be unbugged.


For Virtual Machines (VMs) you can forward entropy from the host (that should be running longer than the VMs and have enough entropy) via virtio_rng.

So on the host, you do:

kvm ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,bus=pci.0,addr=0x7

and within the VM newer kernels should automatically load virtio_rng and use that.

You can confirm with dmesg as per above.

Or check:

# cat /sys/devices/virtual/misc/hw_random/rng_available
# cat /sys/devices/virtual/misc/hw_random/rng_current

Patching systemd

The Fedora bugtracker has a bash / python script that replaces the systemd rnd seeding with a (better) working one. The script can also serve as a good starting point if you need to script your own solution, e.g. for reading from an entropy provider available within your (secure) network.


The wonderful Keith Packard and Bdale Garbee have developed a USB dongle, ChaosKey, that supplies entropy to the kernel. Hard- and software are open source.


Kernel 4.2 introduced jitterentropy_rng which will use the jitter in CPU timings to generate randomness.

modprobe jitterentropy_rng

This apparently needs a userspace daemon though (read: design mistake) so

apt install jitterentropy-rngd (available from Buster/testing).

The current version 1.0.8-3 installs nicely on Stretch. dpkg -i is your friend.

But - drumroll - that daemon doesn't seem to use the kernel module at all.

That's where I stopped looking at that solution. At least for now. There are extensive docs if you want to dig into this yourself.


apt install haveged

Haveged is a user-space daemon that gathers entropy through the timing jitter any CPU has. It will only run "late" in boot but may still get your openssh back online within seconds and not minutes.

It is also - to the best of my knowledge - not verified at all regarding the quality of randomness it generates. The haveged design and history page provides an interesting read and I wouldn't recommend haveged if you have alternatives. If you have none, haveged is a wonderful solution though as it works reliably. And unverified entropy is better than no entropy. Just forget this is 2019 :-).


Thorsten Glaser has posted newly developed early-rng-init-tools in a debian-devel thread. He also provides pre-built packages.

First, he deserves kudos for naming a tool after what it does. This makes it much more easily discoverable than the trend of naming things after girlfriends, pets or anime characters. The implementation hooks into early boot via initrd integration and carries over a seed generated during the previous shutdown. This and some other implementation details are not ideal and there has been quite extensive scrutiny, but none that discovered serious issues. Early-rng-init-tools look like a good option for platforms that are not RDRAND (~CONFIG_RANDOM_TRUST_CPU) capable.



Stefan Fritsch, the Apache2 maintainer in Debian, OpenBSD developer and a former Debian security team member stumbled over the systemd issue preventing Apache libssl to initialize at boot in a Debian bug #916690 - apache2: getrandom call blocks on first startup, systemd kills with timeout.

The bug has been retitled "document getrandom changes causing entropy starvation" hinting at not fixing the underlying issue but documenting it in the Debian Buster release notes.

Unhappy with this "minimal compromise" Stefan wrote a comprehensive summary of the current situation to the Debian-devel mailing list. The discussion spans over December 2018 and January 2019 and mostly iterated what had been written above already. The discussion has - so far - not reached any consensus. There is still the "systemd stance" (not our problem, fix the daemons and the "ssh/apache stance" (fix systemd, credit entropy).

The "document in release notes" minimal compromise was brought up again and Stefan warned of the problems this would create for Buster users:

> I'd prefer having this documented in the release notes:
> with possible solutions like installing haveged, configuring virtio-rng,
> etc. depending on the situation.

That would be an extremely user-unfriendly "solution" and would lead to 
countless hours of debugging and useless bug reports.

This is exactly why I wrote this blog entry and keep it updated. We need to either fix this or tell everybody we can reach before upgrading to Buster. Otherwise this will lead to huge amounts of systems dead on the network after what looked like a successful upgrade.

Some interesting tidbits were mentioned within the thread:

Raphael Hertzog fixed the issue for Kali Linux by installing haveged by default. Michael Prokop did the same for the grml distribution within its December 2018 release.

Ben Hutchings pointed to an interesting thread on the debian-release mailing list he kicked off in May 2018. Multiple people summarized the options and the fact that there is no "general solution that is both correct and easy" at the time.

Sam Hartman identified Debian Buster VMs running under VMware as an issue, because that hypervisor does not provide virtio-rng. So Debian VMs wouldn't boot into ssh availability within a reasonable time. This is an issue for real world use cases, albeit ones running a proprietary product as the hypervisor.


Daniel Kahn Gillmor wrote in to explain a risk for VMs starting right after the boot of the host OS:

If that pool is used by the guest to generate long-term secrets because it appears to be well-initialized, that could be a serious problem.
(e.g. "Mining your P's and Q's" by Heninger et al --
I've just opened to report a way to improve that situation in qemu by default.

So ... make sure that your host OS has access to a hardware random number generator or at least carries over its random seed properly across reboots. You could also delay VM starts until the crng on the host Linux is fully initialized (random: crng init done).
Otherwise your VMs may get insufficiently generated pseudo-random numbers and won't even know.


Stefan Fritsch revived the thread on debian-devel again and got a few more interesting titbits out of the developer community:

Ben Hutchings has enabled CONFIG_RANDOM_TRUST_CPU for Debian kernels from 4.19.20-1 so the problem is somewhat contained for recent CPU AMD64 systems (RDRAND capable) in Buster.

Thorsten Glaser developed early-rng-init-tools which combine a few options to try and get entropy carried across boot and generated early during boot. He received some scrutiny as can be expected but none that would discourage me from using it. He explains that this is for early boot and thus has initrd integration. It complements safer randomness sources or haveged.

  1. it will return with EAGAIN in the GRND_NONBLOCK use case. The blocking behaviour when lacking entropy is a security measure as per Bug #1559 of Google's Project Zero

  2. Update 18.12.2018: "SysVinit times" ::= "The times when most Linux distros used SysVinit over other init systems." So Wheezy and previous for Debian. Some people objected to the statement, so I added this footnote as a clarification. See the discussion in the comments below. 

  3. there is no Buster branch in the release notes repository yet (2018-12-17) 

12 March, 2019 07:03AM by Daniel Lange

hackergotchi for Lucas Nussbaum

Lucas Nussbaum

On Debian frustrations

Michael Stapelberg writes about his frustrations with Debian, resulting in him reducing his involvement in the project. That’s sad: over the years, Michael has made a lot of great contributions to Debian, addressing hard problems in interesting, disruptive ways.

He makes a lot of good points about Debian, with which I’m generally in agreement. An interesting exercise would be to rank those issues: what are, today, the biggest issues to solve in Debian? I’m nowadays not following Debian closely enough to be able to do that exercise, but I would love to read others’ thoughts (bonus points if it’s in a DPL platform, given that it seems that we have a pretty quiet DPL election this year!)

Most of Michael’s points are about the need for modernization of Debian’s infrastructure and workflows, and I agree that it’s sad that we have made little progress in that area over the last decade. And I think that it’s important to realize that providing alternatives to developers has a cost, and that when a large proportion of developers or packages have switched to doing something (using git, using dh, not using 1.0-based patch systems such as dpatch, …), there are huge advantages in standardizing and pushing this on everybody.

There are a few reasons why this is harder than it sounds, though.

First, there’s Debian culture of stability and technical excellence. “Above all, do not harm” could also apply to the mindset of many Debian Developers. On one hand, that’s great, because this focus on not breaking things probably contributes a lot to our ability to produce something that works as well as Debian. But on the other hand, it means that we often seek solutions that limit short-term damage or disruption, but are far from optimal on the long term.
An example is our packaging software stack. I wrote most of the introduction to Debian packaging found in the packaging-tutorial package (which is translated into six languages now), but am still amazed by all the unjustified complexity. We tend to fix problems by adding additional layers of software on top of existing layers, rather than by fixing/refactoring the existing layers. For example, the standard way to package software today is using dh. However, dh stands on dh_* commands (even if it does not call them directly, contrary to what CDBS did), and all the documentation on dh is still structured around those commands: if you want to install an additional file in a package, probably the simplest way to do that is to add it to debian/packagename.install, but this is documented in the manpage for dh_install, which you are not actually going to call because dh abstracts that away for you! I realize that this could be better explained in packaging-tutorial… (patches welcome)

There’s also the fact that Debian is very large, very diverse, and hard to test. It’s very easy to break things silently in Debian, because many  of our packages are niche packages, or don’t have proper test suites (because not everything can be easily tested automatically). I don’t see how the workflows for large-scale changes that Michael describes could work in Debian without first getting much better at detecting regressions.

Still, there’s a lot of innovation going on inside packaging teams, with the development of language-specific packaging helpers (listed on the AutomaticPackagingTools wiki page). However, this silo-ed organization tends to fragment the expertise of the project about what works and what doesn’t: because packaging teams don’t talk much together, they often solve the same problems in slightly different ways. We probably need more ways to discuss interesting stuff going on in teams, and consolidating what can be shared between teams. The fact that many people have stopped following debian-devel@ nowadays is probably not helping…

The addition of a GitLab instance is probably the best thing that happened to Debian recently. How much this ends up being used for improving our workflows remains to be seen:

  • We could use Gitlab merge requests to track patches, rather than attachments in the BTS. Some tooling to provide an overview of open MRs in various dashboards is probably needed (and unfortunately GitLab’s API is very slow when dealing with large number of projects).
  • We could probably have a way to move the package upload to a gitlab-ci job (for example, by committing the signed changes file in a specific branch, similar to what pristine-tar does, but there might be a better way)
  • I would love to see a team experiment with a monorepo approach (instead of the “one git repo per package + mr to track them all” approach). For teams with lots of small packages there are probably a lot of things to win with such an organization.


12 March, 2019 05:51AM by lucas

hackergotchi for Shirish Agarwal

Shirish Agarwal

Processing Insanity

This blog post starts from where the last one ended a few days ago. I am fortunate, and to an extent even blessed, that I have usually had honest advice, but sometimes even advice falls short when you see some harsh realities. There were three people who replied; you can read Mark’s and Frode’s replies as they have been shared in that blog post.

I even shared it with a newly-found acquaintance in the hope that there may be some ways, some techniques or something which would make more sense, as this is something I had never heard of within the social circles I have been part of, so I was feeling more than a bit ill-prepared. When I shared it with Paramitra (a gentleman whom I engaged with as part of another socio-techno probable intervention meetup and whom I am hoping to meet soon), he also shared some sound advice which helped me mentally prepare –

So, if you’re serious about what you can do with/for this friend of yours and his family, I do have several suggestions. 

1. To the best of my knowledge, and I have some exposure, no one goes ‘insane’ just like that. There has to be a diagnosis. Please find out from his family if he’s been taken to a psychiatrist. If not, that’s the first thing you can convince his family to do. Be with them, help them with that task.

2. If he’s been diagnosed, find out what that is. Most psychiatric disorders can be brought to a manageable level with proper medications and care. But any suggestions I can offer on that depends on the diagnosis.

3. However, definitely inform his family that tying him up, keeping him locked etc will only worsen the situation. He needs medical and family care – not incarceration, unless the doctor prescribes institutionalized treatment.

Hope this helps. Please be a friend to him and his family at this hour of crisis. As a nation, our understanding of mental health per se is poor, to say the least.

Paramita Banerjee

So armed with what I thought was sufficient knowledge, I went to my friend’s home. The person whom I met could not be the same person whom I knew as a friend in college. During college, he had a reputation of a toughie and he looked and acted the part. So, in many ways it was the unlikeliest of friendships. I shared with him tips on Accountancy, Economics etc. and he had my back. He was also very quick with repartees so we used to have quite a fun time exchanging those. For the remainder of this post I will call my friend ‘Amar’ and his sister ‘Kritika’ as they are names I like.

The person whom I met was a mere shadow of the person I knew. Amar had no memory of who I was. He had trouble comprehending written words and mostly mumbled. Amar did say something interesting and fresh once in a while but it was like talking mostly to a statue. He stank and was mostly asleep even when he was awake. Amar couldn’t look straight at me, and he had this fear that if I touched him or he touched me he would infect me. He had long nails as well. Kritika told me that he does have a bath once every few days but takes 3-4 hours over it, and sleeps in there as well. The same happens when he goes for shitting as well. The saving grace is they have their own toilet and bathroom within the house. I have no comprehension how they might be adjusting, all in that small space.

I learned from Kritika what I hadn’t known about him and the family over the last ten-odd years. His mum died in the same room where he was, and he had no comprehension that she had died; this had happened just a few weeks back. He was the middle of three children. The elder sister, now a widow, has three daughters who are living with them, and then there are Amar, his father and the youngest sister, who is trying desperately to keep it all together, but I don’t know how and what she will be able to do. Seven mouths to feed and six people who all have their own needs and wants apart from basic existence. They are from a low-income group. The elder sister does have a lot of body pains, although I was not able to ask what from. I do know nursing is a demanding profession and, from my hospital stay, at times they have to work around the clock, 24×7, doing things no normal person can do.

Two of the nieces are nearing their teenage years and I was told of sexually suggestive remarks made towards them by one of the neighbors. The father is a drunk, the brother-in-law who died was a drunk, and the brother, Amar, had consumed lots of cannabis seeds. Apparently, during the final-year exams, where we were given different centers, he went to Bombay/Mumbai to try his hand at movies, then went to Delhi and was into selling some sort of chemicals from one company to another.

Maybe it was ‘bad company’ as their mother had told me on the phone, maybe it was work he was not happy with which led him to cannabis addiction. I have no way of knowing anything of his past. I did ask Kritika if she can dig out any visiting cards or something. I do have enough friends in Delhi, so it’s possible I can come to know how things came to be this bad.

There was a small incident which also left me a bit shaken. The place where they live is called Pavana Nagar. This is at the back of the Pimpri-Chinchwad industrial township, so most of the water that the town/village people consume has a lot of chemical effluents in it, and this the local councillor (called nagar sevak) knows, but he either can’t or won’t do anything about it. There are a lot of diseases due to the pollutants in the water. Kritika suspects, or even knows, that the grains they buy are grown with the same water, but she is helpless to do anything about it.

The incident is a small one but I wanted to share a fuller picture before sharing it. I had left my bag, a sort of sling bag, in the room where I was sitting. Kritika took me to another building to show me the surrounding areas (as I was new here and had evinced interest in knowing the area), and when we came back, my bag was not to be found. After searching for a while I got the bag back, but there was no money in it (I usually keep INR 100-200 in case money gets stolen from me). I also keep some goodies (sweet and sour both) just in case I feel hungry and there’s nothing around. Both were missing. The father pretended he had put the bag away by mistake. I didn’t say anything because it would have been a loss of face for the younger sister, although it’s possible that she knows or had some suspicions. With the younger kids around, it would have been awkward to say anything and I didn’t really wanna make a scene. It wasn’t much, just something I didn’t expect.

Also later I came to know that whenever the father drinks, he creates lot of drama and says whatever comes to his mind. It is usually dirty, nasty and hurtful from what I could gather.

Due to my extended stay in hospital because of epilepsy, I had come to know of a couple of medical schemes which are meant for weaker sections of society. I did share what I knew of the schemes. While I hope to talk with Kritika more, I don’t see a way out of the current mess they are in. The sense I got from her is that she is fighting too many battles and I don’t know how she can win them all. I also told her about NMRI. I have no clue where to go from here. Also, I don’t want to generalize, but there might be many Amars and Kritikas in our midst or around us whose stories we don’t know. If they could just have some decent water and no mosquitoes, it probably would enhance their lives quite a bit and maybe give them a bit more agency over themselves. There is one other thing that Kritika shared which was interesting. She had experience of working back-office for some IT company, but now, looking after the family, she just couldn’t do the same thing.

Note and disclaimer – The names ‘Amar’ and ‘Kritika’ are just some names I chose. The names have been given to –

a. Give privacy to the people involved.
b. To give substance to the people and the experience so they are not nameless people.

12 March, 2019 05:08AM by shirishag75

hackergotchi for Steve McIntyre

Steve McIntyre

Debian BSP in Cambridge, 08 - 10 March 2019

Lots of snacks, lots of discussion, lots of bugs fixed! YA BSP at my place.


12 March, 2019 02:08AM

March 11, 2019

John Goerzen

A (Partial) Defense of Debian

I was sad to read on his blog that Michael Stapelberg is winding down his Debian involvement. In his post, he outlined some critiques of Debian. In this post, I want to acknowledge that he is on point with some of them, but also push back on others. Some of this is also a response to some of the comments on Hacker News.

I’d first like to discuss some of the assumptions I believe his post rests on: namely that “we’ve always done it this way” isn’t a good reason to keep doing something. I completely agree. However, I would also say that “this thing is newer, so it’s better and we should use it” is also poor reasoning. Newer is not always better. Sometimes it is, sometimes it’s not, but deeper thought is generally required.

Also, when thinking about why things are a certain way or why people prefer certain approaches, we must often ask “why does that make sense to them?” So let’s dive in.

Debian’s Perspective: Stability

Stability, of course, can mean software that tends not to crash. That’s important, but there’s another aspect of it that is also important: software continuing to act the same over time. For instance, if you wrote a C program in 1985, will that program still compile and run today? Granted, that’s a bit of an extreme example, but the point is: to what extent can you count on software you need continuing to operate without forced change?

People that have been sysadmins for a long period of time will instantly recognize the value of this kind of stability. Change is expensive and difficult, and often causes outages and incidents as bugs are discovered when software is adapted to a new environment. Being able to keep up-to-date with security patches while also expecting little or no breaking changes is a huge win. Maintaining backwards compatibility for old software is also important.

Even from a developer’s perspective, lack of this kind of stability is why I have handed over maintainership of most of my Haskell software to others. Some of my Haskell projects were basically “done”, and every so often I’d get bug reports that they no longer compiled due to some change in the base library. Occasionally I’d have patches with those bug reports, but they would inevitably break compatibility with older versions (even though the language has plenty of good support for something akin to a better version of #ifdefs to easily deal with this). The culture of stability was not there.

This is not to say that this kind of stability is always good or always bad. In the Haskell case, there is value to be had in fixing designs that are realized to be poor and removing cruft. Some would say that strcpy() should be removed from libc for this reason. People that want the latest versions of gimp or whatever are probably not going to be running Debian stable. People that want to install a machine and not really be burdened by it for a couple of years are.

Debian has, for pretty much its entire life, had a large proportion of veteran sysadmins and programmers as part of the organization. Many of us have learned the value of this kind of stability from the school of hard knocks – over and over again. We recognize the value of something that just works, that is so stable that things like unattended-upgrades are safe and reliable. With many other distros, something like this isn’t even possible; when your answer to a security bug is to “just upgrade to the latest version”, just trusting a cron job to do it isn’t going to work because of the higher risk.

Recognizing Personal Preference

Writing about Debian’s bug-tracking tool, Michael says “It is great to have a paper-trail and artifacts of the process in the form of a bug report, but the primary interface should be more convenient (e.g. a web form).” This is representative of a personal preference. A web form might be more convenient for Michael — I have no reason to doubt this — but is it more convenient for everyone? I’d say no.

In his linked post, Michael also writes: “Recently, I was wondering why I was pushing off accepting contributions in Debian for longer than in other projects. It occurred to me that the effort to accept a contribution in Debian is way higher than in other FOSS projects. My remaining FOSS projects are on GitHub, where I can just click the “Merge” button after deciding a contribution looks good. In Debian, merging is actually a lot of work: I need to clone the repository, configure it, merge the patch, update the changelog, build and upload. “

I think that’s fair for someone that wants a web-based workflow. Allow me to present the opposite: for me, I tend to push off contributions that only come through Github, and the reason is that, for me, they’re less convenient. It’s also harder for me to contribute to Github projects than Debian ones. Let’s look at this – say I want to send in a small patch for something. If it’s Github, it’s going to look like this:

  1. Go to the website for the thing, click fork
  2. Now clone that fork or add it to my .git/config, hack, and commit
  3. Push the commit, go back to the website, and submit a PR
  4. Github’s email integration is so poor that I basically have to go back to the website for most parts of the conversation. I can do little from the comfort of mu4e.
  5. Remember to clean up my fork after the patch is accepted or rejected.

Compare that to how I’d contribute with Debian:

  1. Hack (and commit if I feel like it)
  2. Type “reportbug foo”, attach my patch
  3. Followup conversation happens directly in email where it’s convenient to reply

How about as the developer? Github constantly forces me to their website. I can’t very well work on bug reports, etc. without a strong Internet connection. And it’s designed to push people into using their tools and their interface, which is inferior in a lot of ways to a local interface – but then the process to pull down someone else’s set of patches involves a lot of typing and clicking, much more than would be involved with a simple git format-patch. In short, I don’t have my shortcut keys, my environment, etc. for reviewing things – the roadblocks are there to make me use theirs.

If I get a contribution from someone in debbugs, it’s so much easier. It’s usually just git apply or patch -p1 and boom, I can see exactly what’s changed and review it. A review comment is just a reply to an email. I don’t have to ever fire up a web browser. So much more convenient.

I don’t write this to say Michael is wrong about what’s more convenient for him. I write it to say he’s wrong about what’s more convenient for me (or others). It may well be the case that debbugs is so inconvenient that it pushes him to leave while github is so inconvenient for others that it pushes them to avoid it.

I will note before leaving this conversation that there are some command-line tools available for Github and a web interface to debbugs, but it is still clear that debbugs is a lot easier to work with from within my own mail reader and tooling, and Github is a lot easier to work with from within a web browser.

The case for reportbug

I remember the days before we had reportbug. Over and over and over again, I would get bug reports from users that wouldn’t have the basic information needed to investigate. reportbug gathers information from the system: package versions, configurations, versions of dependencies, etc. A simple web form can’t do this because it doesn’t have a local agent. From a developer’s perspective, trying to educate users on how to do this over and over is an unending, frustrating, and counter-productive task. Even if it’s clearly documented, the battle will be fought over and over. From a user’s perspective, having your bug report ignored or being told you’re doing it wrong is frustrating too.

So I think reportbug is much nicer than having some github-esque web-based submission form. Could it be better? Sure. I think a mode to submit the reportbug report via HTTPS instead of email would make sense, since a lot of machines no longer have local email configured.

Where Debian Should Improve

I agree that there are areas where Debian should improve.

Michael rightly identifies the “strong maintainer” concept as a source of trouble. I agree. Though we’ve been making slow progress over time with things like low-threshold NMU and maintainer teams, the core assumption that a maintainer has a lot of power over particular packages is one that needs to be thrown out.

Michael, and commentators on HN, both identify things that boil down to documentation problems. I have heard so many times that it’s terribly hard to package something up for Debian. That’s really not the case for most things; dh_make and similar tools will do the right thing for many packages, and all you have to do is add some package descriptions and such (a rough sketch follows below). I wrote a “concise guide” to packaging for my workplace that ran to only about 2 pages. But it is true that the documentation doesn’t clearly offer this simple path, so people are put off and not aware of it. Then there were the comments about how hard it is to become a Debian developer, and how easy it is to submit a patch to NixOS or some such. The fact is, these are different things; one does not need to be a Debian Developer to contribute to Debian. A DD is effectively the same as a patch approver elsewhere; these are the people that can ultimately approve software for insertion into the OS, and you DO want an element of trust there. Debian could do more to offer concise guides for drive-by contributions and the building of packages that follow standard language community patterns, both of which can be done without much knowledge of packaging tools and inner workings of the project.
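
For illustration, a minimal drive-by packaging session might look roughly like this (package name and version are placeholders, and the dh_make options shown are one plausible combination, not the only way to do it):

apt install build-essential devscripts debhelper dh-make
tar xf hello-1.0.tar.gz && cd hello-1.0
dh_make --single --createorig --yes     # generate the debian/ skeleton
# edit debian/control and debian/changelog, then build an unsigned test package
dpkg-buildpackage -us -uc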

Finally, I have distanced myself from conversations in Debian for some time, due to lack of time to participate in what I would call excessive bikeshedding. This is hardly unique to Debian, but I am glad to see the project putting more effort into expecting good behavior from conversations of late.

11 March, 2019 11:44AM by John Goerzen

March 10, 2019

Noah Meyerhans

Further Discussion for DPL!

Further Discussion builds consensus within Debian!

Further Discussion gets things done!

Further Discussion welcomes diverse perspectives in Debian!

We'll grow the community with Further Discussion!

Further Discussion has been with Debian from the very beginning! Don't you think it's time we gave Further Discussion its due, after all the things Further Discussion has accomplished for the project?

Somewhat more seriously, have we really exhausted the community of people interested in serving as Debian Project Leader? That seems unfortunate. I'm not worried about it from a technical point of view, as Debian has established ways of operating without a DPL. But the lack of interest suggests some kind of stagnation within the community. Or maybe this is just the cabal trying to wrest power from the community by stifling the vote. Is there still a cabal?

10 March, 2019 10:18PM

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in February 2019

Welcome to my monthly report covering what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • February was the last month to package new upstream releases before the full freeze, if the changes were not too invasive of course :-). Atomix, gamine, simutrans, simutrans-pak64, simutrans-pak128.britain and hitori qualified.
  • I sponsored a new version of mgba, a Game Boy Advance emulator, for Reiner Herrmann and worked together with Bret Curtis on wildmidi and openmw. The latest upstream version resolved a long-standing bug and made it possible for the game engine, a reimplementation of The Elder Scrolls III: Morrowind, to be part of a Debian stable release for the first time.
  • Johann Suhter reported a bug in one of brainparty’s minigames and also provided the patch. All I had to do was upload it. Thanks. (#922485)
  • I corrected a minor cross-build FTBFS in openssn. Patch by Helmut Grohne. (#914724)
  • I released a new version of debian-games and updated the dependency list of our games metapackages. This is almost the final version but expect another release in one or two months.

Debian Java


Debian LTS

This was my thirty-sixth month as a paid contributor and I have been paid to work 19.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 25.02.2019 until 03.03.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in sox, collabtive, libkohana2-php, ldb, libpodofo, libvirt, openssl, wordpress, twitter-bootstrap, ceph, ikiwiki, edk2, advancecomp, glibc, spice-xpi and zabbix.
  • DLA-1675-1. Issued a security update for python-gnupg fixing 1 CVE.
  • DLA-1676-1. Issued a security update for unbound fixing 1 CVE.
  • DLA-1696-1. Issued a security update for ceph fixing 2 CVE.
  • DLA-1701-1. Issued a security update for openssl fixing 1 CVE.
  • DLA-1702-1. Issued a security update for advancecomp fixing 2 CVE.
  • DLA-1703-1. Issued a security update for jackson-databind fixing 10 CVE.
  • DLA-1706-1. Issued a security update for poppler fixing 5 CVE.


Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my ninth month and I have been paid to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 25.02.2019 until 03.03.2019 and I triaged CVEs in file, gnutls26, nettle, libvirt, busybox and eglibc.
  • ELA-84-1. Issued a security update for gnutls26 fixing 4 CVE. I also investigated CVE-2018-16869 in nettle and CVE-2018-16868 in gnutls26. After some consideration I decided to mark these issues as ignored because the changes were invasive and would have required intensive testing. The benefits appeared small in comparison.
  • ELA-88-1. Issued a security update for openssl fixing 1 CVE.
  • ELA-90-1. Issued a security update for libsdl1.2 fixing 11 CVE.
  • I started to work on sqlalchemy which requires a complex backport to fix a possible SQL injection vulnerability.

Thanks for reading and see you next time.

10 March, 2019 08:48PM by Apo

Michael Stapelberg

Winding down my Debian involvement

This post is hard to write, both in the emotional sense but also in the “I would have written a shorter letter, but I didn’t have the time” sense. Hence, please assume the best of intentions when reading it—it is not my intention to make anyone feel bad about their contributions, but rather to provide some insight into why my frustration level ultimately exceeded the threshold.

Debian has been in my life for well over 10 years at this point.

A few weeks ago, I visited some old friends at the Zürich Debian meetup after a multi-year period of absence. On my bike ride home, it occurred to me that the topics of our discussions had remarkable overlap with my last visit. We had a discussion about the merits of systemd, which took a detour to respect in open source communities, returned to processes in Debian and eventually culminated in democracies and their theoretical/practical failings. Admittedly, that last one might be a Swiss thing.

I say this not to knock on the Debian meetup, but because it prompted me to reflect on what feelings Debian is invoking lately and whether it’s still a good fit for me.

So I’m finally making a decision that I should have made a long time ago: I am winding down my involvement in Debian to a minimum.

What does this mean?

Over the coming weeks, I will:

  • transition packages to be team-maintained where it makes sense
  • remove myself from the Uploaders field on packages with other maintainers
  • orphan packages where I am the sole maintainer

I will try to keep up best-effort maintenance of the service and the service, but any help would be much appreciated.

For all intents and purposes, please treat me as permanently on vacation. I will try to be around for administrative issues (e.g. permission transfers) and questions addressed directly to me, provided they are easy enough to answer.


When I joined Debian, I was still studying, i.e. I had luxurious amounts of spare time. Now, over 5 years of full time work later, my day job taught me a lot, both about what works in large software engineering projects and how I personally like my computer systems. I am very conscious of how I spend the little spare time that I have these days.

The following sections each deal with what I consider a major pain point, in no particular order. Some of them influence each other—for example, if changes worked better, we could have a chance at transitioning packages to be more easily machine readable.

Change process in Debian

The last few years, my current team at work conducted various smaller and larger refactorings across the entire code base (touching thousands of projects), so we have learnt a lot of valuable lessons about how to effectively do these changes. It irks me that Debian works almost the opposite way in every regard. I appreciate that every organization is different, but I think a lot of my points do actually apply to Debian.

In Debian, packages are nudged in the right direction by a document called the Debian Policy, or its programmatic embodiment, lintian.

While it is great to have a lint tool (for quick, local/offline feedback), it is even better to not require a lint tool at all. The team conducting the change (e.g. the C++ team introduces a new hardening flag for all packages) should be able to do their work in a way that is transparent to me.

Instead, currently, all packages become lint-unclean, all maintainers need to read up on what the new thing is, how it might break, whether/how it affects them, manually run some tests, and finally decide to opt in. This causes a lot of overhead and manually executed mechanical changes across packages.

Notably, the cost of each change is distributed onto the package maintainers in the Debian model. At work, we have found that the opposite works better: if the team behind the change is put in power to do the change for as many users as possible, they can be significantly more efficient at it, which reduces the total cost and time a lot. Of course, exceptions (e.g. a large project abusing a language feature) should still be taken care of by the respective owners, but the important bit is that the default should be the other way around.

Debian is lacking tooling for large changes: it is hard to programmatically deal with packages and repositories (see the section below). The closest to “sending out a change for review” is to open a bug report with an attached patch. I thought the workflow for accepting a change from a bug report was too complicated and started mergebot, but only Guido ever signaled interest in the project.

Culturally, reviews and reactions are slow. There are no deadlines. I literally sometimes get emails notifying me that a patch I sent out a few years ago (!!) is now merged. This turns projects from a small number of weeks into many years, which is a huge demotivator for me.

Interestingly enough, you can see artifacts of the slow online activity manifest itself in the offline culture as well: I don’t want to be discussing systemd’s merits 10 years after I first heard about it.

Lastly, changes can easily be slowed down significantly by holdouts who refuse to collaborate. My canonical example for this is rsync, whose maintainer refused my patches to make the package use debhelper purely out of personal preference.

Granting so much personal freedom to individual maintainers prevents us as a project from raising the abstraction level for building Debian packages, which in turn makes tooling harder.

How would things look in a better world?

  1. As a project, we should strive towards more unification. Uniformity still does not rule out experimentation, it just changes the trade-off from easier experimentation and harder automation to harder experimentation and easier automation.
  2. Our culture needs to shift from “this package is my domain, how dare you touch it” to a shared sense of ownership, where anyone in the project can easily contribute (reviewed) changes without necessarily even involving individual maintainers.

To learn more about what successful large changes can look like, I recommend my colleague Hyrum Wright’s talk “Large-Scale Changes at Google: Lessons Learned From 5 Yrs of Mass Migrations”.

Fragmented workflow and infrastructure

Debian generally seems to prefer decentralized approaches over centralized ones. For example, individual packages are maintained in separate repositories (as opposed to in one repository), each repository can use any SCM (git and svn are common ones) or no SCM at all, and each repository can be hosted on a different site. Of course, what you do in such a repository also varies subtly from team to team, and even within teams.

In practice, non-standard hosting options are used rarely enough to not justify their cost, but frequently enough to be a huge pain when trying to automate changes to packages. Instead of using GitLab’s API to create a merge request, you have to design an entirely different, more complex system, which deals with intermittently (or permanently!) unreachable repositories and abstracts away differences in patch delivery (bug reports, merge requests, pull requests, email, …).

Wildly diverging workflows are not just a temporary problem either. I participated in long discussions about different git workflows during DebConf 13, and I gather that there have been similar discussions in the meantime.

Personally, I cannot keep enough details of the different workflows in my head. Every time I touch a package that works differently than mine, it frustrates me immensely to re-learn aspects of my day-to-day.

After noticing workflow fragmentation in the Go packaging team (which I started), I tried fixing this with the workflow changes proposal, but did not succeed in implementing it. The lack of effective automation and slow pace of changes in the surrounding tooling despite my willingness to contribute time and energy killed any motivation I had.

Old infrastructure: package uploads

When you want to make a package available in Debian, you upload GPG-signed files via anonymous FTP. There are several batch jobs (the queue daemon, unchecked, dinstall, possibly others) which run on fixed schedules (e.g. dinstall runs at 01:52 UTC, 07:52 UTC, 13:52 UTC and 19:52 UTC).

Depending on timing, I estimated that you might wait for over 7 hours (!!) before your package is actually installable.

What’s worse for me is that feedback to your upload is asynchronous. I like to do one thing, be done with it, move to the next thing. The current setup requires a many-minute wait and costly task switch for no good technical reason. You might think a few minutes aren’t a big deal, but when all the time I can spend on Debian per day is measured in minutes, this makes a huge difference in perceived productivity and fun.

The last communication I can find about speeding up this process is ganneff’s post from 2008.

How would things look in a better world?

  1. Anonymous FTP would be replaced by a web service which ingests my package and returns an authoritative accept or reject decision in its response.
  2. For accepted packages, there would be a status page displaying the build status and when the package will be available via the mirror network.
  3. Packages should be available within a few minutes after the build completed.

Old infrastructure: bug tracker

I dread interacting with the Debian bug tracker. debbugs is a piece of software (from 1994) which is only used by Debian and the GNU project these days.

Debbugs processes emails, which is to say it is asynchronous and cumbersome to deal with. Despite running on the fastest machines we have available in Debian (or so I was told when the subject last came up), its web interface loads very slowly.

Notably, the web interface at is read-only. Setting up a working email setup for reportbug(1) or manually dealing with attachments is a rather big hurdle.

For reasons I don’t understand, every interaction with debbugs results in many different email threads.

Aside from the technical implementation, I also can never remember the different ways that Debian uses pseudo-packages for bugs and processes. I need them too rarely to establish a mental model of how they are set up, or working memory of how they are used, but frequently enough to be annoyed by this.

How would things look in a better world?

  1. Debian would switch from a custom bug tracker to a (any) well-established one.
  2. Debian would offer automation around processes. It is great to have a paper-trail and artifacts of the process in the form of a bug report, but the primary interface should be more convenient (e.g. a web form).

Old infrastructure: mailing list archives

It baffles me that in 2019, we still don’t have a conveniently browsable threaded archive of mailing list discussions. Email and threading is more widely used in Debian than anywhere else, so this is somewhat ironic. Gmane used to paper over this issue, but Gmane’s availability over the last few years has been spotty, to say the least (it is down as I write this).

I tried to contribute a threaded list archive, but our listmasters didn’t seem to care or want to support the project.

Debian is hard to machine-read

While it is obviously possible to deal with Debian packages programmatically, the experience is far from pleasant. Everything seems slow and cumbersome. I have picked just 3 quick examples to illustrate my point.

debiman needs help from piuparts in analyzing the alternatives mechanism of each package to display the manpages of e.g. psql(1). This is because maintainer scripts modify the alternatives database by calling shell scripts. Without actually installing a package, you cannot know which changes it makes to the alternatives database.

pk4 needs to maintain its own cache to look up package metadata based on the package name. Other tools parse the apt database from scratch on every invocation. A proper database format, or at least a binary interchange format, would go a long way.
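
For illustration, here is a minimal sketch of what programmatic package-metadata lookup looks like today, assuming the python3-apt bindings are installed; note that simply opening the cache re-parses the package lists, which is exactly the per-invocation cost described above:

# Minimal sketch, assuming the python3-apt bindings are available.
import apt

cache = apt.Cache()        # opening the cache parses the apt package lists (the slow part)
pkg = cache["coreutils"]   # example package name

print(pkg.name)
if pkg.installed:
    print("installed:", pkg.installed.version)
if pkg.candidate:
    print("candidate:", pkg.candidate.version)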

Debian Code Search wants to ingest new packages as quickly as possible. There used to be a fedmsg instance for Debian, but it no longer seems to exist. It is unclear where to get notifications from for new packages, and where best to fetch those packages.

Complicated build stack

See my “Debian package build tools” post. It really bugs me that the sprawl of tools is not seen as a problem by others.

Developer experience pretty painful

Most of the points discussed so far deal with the experience in developing Debian, but as I recently described in my post “Debugging experience in Debian”, the experience when developing using Debian leaves a lot to be desired, too.

I have more ideas

At this point, the article is getting pretty long, and hopefully you got a rough idea of my motivation.

While I described a number of specific shortcomings above, the final nail in the coffin is actually the lack of a positive outlook. I have more ideas that seem really compelling to me, but, based on how my previous projects have been going, I don’t think I can make any of these ideas happen within the Debian project.

I intend to publish a few more posts about specific ideas for improving operating systems here. Stay tuned.

Lastly, I hope this post inspires someone, ideally a group of people, to improve the developer experience within Debian.

10 March, 2019 08:43PM

hackergotchi for Andy Simpkins

Andy Simpkins

Debian BSP: Cambridge continued

I am slowly making progress.  I am quite pleased with myself for slowly moving beyond triage, test, verify to now beginning to understand what is going on with some bugs and being able to suggest fixes :-)  That said, my C++ foo is poor, and with QT added in as well, #917711 is beyond me.

Not only does quite a lot of work get done at a BSP; it is also very good to catch up with people, especially those who traveled to Cambridge from out of the area.  Thank you for taking your weekend to contribute to making Buster.

I must also take the opportunity to thank Sledge and Randombird for opening up their home to host the BSP and provide overnight accommodation as well.

More hacking is still going on…  Some different people from yesterday.

Differing people ++smcv –andrewsh  ++cjwatson –lamby

10 March, 2019 03:06PM by andy

hackergotchi for Michal &#268;iha&#345;

Michal Čihař

Weblate 3.5.1

Weblate 3.5.1 has been released today. Compared to the 3.5 release it brings several bug fixes and performance improvements.

Full list of changes:

  • Fixed Celery systemd unit example.
  • Fixed notifications from http repositories with login.
  • Fixed race condition in editing source string for monolingual translations.
  • Include output of failed addon execution in the logs.
  • Improved validation of choices for adding new language.
  • Allow to edit file format in component settings.
  • Update installation instructions to prefer Python 3.
  • Performance and consistency improvements for loading translations.
  • Make Microsoft Terminology service compatible with current zeep releases.
  • Localization updates.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on as the official translation service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either by commenting or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

10 March, 2019 03:00PM

Andrew Cater

Debian BSP Cambridge March 10th 2019 - post 2

Lots of very busy people chasing down bugs. A couple of folk have left. It's a good day and very productive: thanks to Steve and Jo, as ever, for food, coffee, coffee, beds and coffee.

10 March, 2019 02:49PM by Andrew Cater (

Debian BSP Cambridge 10th March 2019

Folks are starting to turn up this morning. Kitchen full of people talking and cooking sausages. A quiet room except for Pepper the dog chasing squeaky toys and people chasing into the kitchen for food. Folk are now gradually settling down to code and bug fix. All is good.

10 March, 2019 11:06AM by Andrew Cater (

March 09, 2019

Debian BSP Cambridge March 9th 2019 - post 3

Pub meal with lots of people in a crowded pub and lots of chatting. Various folk have come back to Steve's to carry on to be met with a very, very bouncy friendly dog. And on we go :)

09 March, 2019 09:40PM by Andrew Cater (

hackergotchi for Andy Simpkins

Andy Simpkins

Debian BSP: Cambridge

I didn’t get a huge amount done today – a bit of email config for Andy followed by many installation tests to reproduce #911036, then to test and confirm the patch has fixed it.  I can’t read a word of Japanese, but thankfully Hideki provided the appropriate runes I needed to ‘match’, so I was able to follow the steps to reproduce, coupled with running a separate qemu instance in English so I could shadow my actions to keep track of menu state…


Andy Cater – Multi tasking (and trying to look like Andy Hamilton)

main table space hard at work…

Overflow hack space…  stress relief service being provided by Pepper

09 March, 2019 06:38PM by andy

Andrew Cater

Debian BSP Cambridge March 9th 2019 - post 2

Large amounts of conversation: on chat and simultaneously in real life. Eight people sat round a dining table, two on a sofa, me on a chair. ThinkPad quotient has increased. Lots of bugs being closed: Buster steadily becoming less RC-buggy. The coffee machine is getting a hammering and Debian is improving steadily. All is good.

09 March, 2019 05:14PM by Andrew Cater (

Ian Jackson

Nailing Cargo (the Rust build tool)


I quite like the programming language Rust, although it's not without flaws and annoyances.

One of those annoyances is Cargo, the language-specific package manager. Like all modern programming languages, Rust has a combined build tool and curlbashware package manager. Apparently people today expect their computers to download and run code all the time and get annoyed when that's not sufficiently automatic.

I don't want anything on my computer that automatically downloads and executes code from minimally-curated repositories like So this is a bit of a problem.

Dependencies available in Debian

Luckily I can get nearly all what I have needed so far from Debian, at least if I'm prepared to use Debian testing (buster, now in freeze). Debian's approach to curation is not perfect, but it's mostly good enough for me.

But I still need to arrange to use the packages from Debian instead of downloading things.

Of course anything in Debian written in Rust faces the same problem: Debian source packages are quite rightly Not Allowed to download random stuff from the Internet during the package build. So when I tripped across this I reasoned that the Debian Rust team must already have fixed this problem somehow. And, indeed they have. The result is not too awful:

$ egrep '^[^#]' ~/.cargo/config
[source.debian-packages]
directory = "/usr/share/cargo/registry"
[source.crates-io]
replace-with = "debian-packages"
I cloned and hacked this from the cargo docs after reading the Debian Rust Team packaging policy.

The effect is to cause my copy of cargo to think that is actually located (only) on my local machine in /usr/share/cargo. If I mention a dependency, cargo will look in /usr/share/cargo for it. If it's not there I get an error, which I fix by installing the appropriate Debian rust package using apt.

So far so good.

Edited 2019-03-07, to add: To publish things on I then needed another workaround too (scroll down to "Further annoyance from Cargo").

Dependencies not in Debian

A recent project of mine involved some dependencies which were not in Debian, notably Rust bindings to the GNU Scientific Library and to the NLopt nonlinear optimisation suite. A quick web search found me the libraries rust-GSL and rust-nlopt. They are on, of course. I thought I would check them out so that I could check them out. Ie, decide if they looked reasonable, and do a bit of a check on the author, and so on.

Digression - a rant about Javascript

The first problem is of course that itself does not work at all without enabling Javascript in my browser. I have always been reluctant about JS. Its security properties have always been suboptimal and much JS is used for purposes which are against the interests of the user.

But I really don't like the way that everyone is still pretending that it is safe to run untrusted code on one's computer, despite Spectre. Most people seem to think that if you are up to date with your patches, Spectre isn't a problem. This is not true at all. Spectre is basically unfixable on affected CPUs and it is not even possible to make a comparably fast CPU without this bug. The patches are bodges which make the attacks more complicated and less effective. (At least, no-one knows how to do so yet.)

And of course Javascript is a way of automatically downloading megabytes of stuff from all over the internet and running it on your computer. Urgh. So I run my browser with JS off by default.

There is absolutely no good reason why won't let me even look at the details of some library without downloading a pile of code from their website and running it on my computer. But, I guess, it is probably OK to allow it? So on I go, granting JS permission. Then I can click through to the actual git repository. JS just to click a link! At least we can get back to the main plot now...

The "unpublished local crate" problem

So I git clone rust-GSL and have a look about. It seems to contain the kind of things I expect. The author seems to actually exist. The git history seems OK on cursory examination. I decide I am going to actually to use it. So I write
GSL = "1"
in my own crate's metadata file.

I realise that I am going to have to tell cargo where it is. Experimentally, I run cargo build and indeed it complains that it's never heard of GSL. Fair enough. So I read the cargo docs for the local config file, to see how to tell it to look at ../rust-GSL.

It turns out that there is no sensible way to do this!

There is a paths thing you can put in your config, but it does not work for an unpublished crate. (And of course, from the point of view of my cargo, GSL is an unpublished crate - because only crates that I have installed from .debs are "published".)

Also paths actually uses some of the metadata from the repository, which is of course not what you wanted. In my reading I found someone having trouble because had a corrupted metadata file for some crate, which their cargo rejected. They could just get the source themselves and fix it, but they had serious difficulty hitting cargo on the head hard enough to stop it trying to read the broken online metadata.

The same overall problem would arise if I had simply written the other crate myself and not published it yet. (And indeed I do have a crate split out from my graph layout project, which I have yet to publish.)

You can edit your own crate's Cargo.toml metadata file to say something like this:

GSL = { path = "../rust-GSL", optional = true }
but of course that's completely wrong. That's a fact about the setup on my laptop and I don't want to commit it to my git tree. And this approach quickly gets ridiculous if I have indirect dependencies: I would have to make a little local branch in each one just to edit each one's Cargo.toml to refer to the others'. Awful.

Well, I have filed an issue. But that won't get me unblocked.

So, I also wrote a short but horrific shell script. It's a wrapper for cargo, which edits all your Cargo.toml's to refer to each other. Then, when cargo is done, it puts them all back, leaving your git trees clean.
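
nailing-cargo itself is a shell script; purely to illustrate the idea, here is a hypothetical Python sketch of the same nail/run/restore dance. The crate layout and the dependency line are assumptions carried over from the GSL example above, not part of the real tool:

# Hypothetical sketch of the idea behind nailing-cargo, not the real tool.
import os
import shutil
import subprocess
import sys
from pathlib import Path

# Assumption: the dependent crate and its local dependency live side by side.
CRATE_DIRS = [Path("."), Path("../rust-GSL")]

def nail(toml: Path) -> Path:
    backup = toml.parent / (toml.name + ".unnailed")
    shutil.copy2(toml, backup)                   # keep the clean copy for later
    text = toml.read_text()
    # Point the dependency at the local checkout instead of the registry.
    text = text.replace('GSL = "1"', 'GSL = { path = "../rust-GSL" }')
    toml.write_text(text)
    return backup

tomls = [d / "Cargo.toml" for d in CRATE_DIRS]
backups = [nail(t) for t in tomls]
try:
    subprocess.run(["cargo", *sys.argv[1:]], check=False)   # e.g. `python3 nail.py build`
finally:
    for toml, backup in zip(tomls, backups):
        os.replace(backup, toml)                 # restore, leaving the git trees clean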

Writing this wrapper script would have been a lot easier if Cargo had been less prescriptive about what things are called and where they must live. For example, if I could have specified an alternative name for Cargo.toml, my script wouldn't have had to save the existing file and write a replacement; it could just have idempotently written a new file.

Even so, nailing-cargo works surprisingly well. I run it around make. I sometimes notice some of the "nailed" Cargo.toml files if I update my magit while a build is running, but that always passes. Even in a nightmare horror of a script, it was worth paying attention to the error handling.

I hope the cargo folks fix this before I have to soup up nailing-cargo to use a proper TOML parser, teach it a greater variety of Cargo.toml structures, give it a proper config file reader and a manpage, and generally turn it into a proper, but still hideous, product.

edited 2019-03-09 to fix tag spelling


09 March, 2019 02:21PM

Rust doubly-linked list

I have now released (and published) my doubly-linked list library for Rust.

Of course in Rust you don't usually want a doubly-linked list. The VecDeque array-based double-ended queue is usually much better. I discuss this in detail in my module's documentation.

Why a new library

Weirdly, there is a doubly linked list in the Rust standard library but it is good for literally nothing at all. Its API is so limited that you can always do better with a VecDeque. There's a discussion (sorry, requires JS) about maybe deprecating it.

There's also another doubly-linked list available, but despite being an 'intrusive' list (in C terminology) it only supports one link per node, and insists on owning the items you put into it. I needed several links per node for my planar graph work, and I needed Rc-based ownership.

Indeed given my analysis of when a doubly-linked list is needed, rather than a VecDeque, I think it will nearly always involve something like Rc too.

My module

You can read the documentation online.

It provides the facilities I needed, including lists where each node can be on multiple lists with runtime selection of the list link within each node. It's not threadsafe (so Rust will stop you using it across multiple threads) and would be hard to make threadsafe, I think.

Notable wishlist items: entrypoints for splitting and joining lists, and good examples in the documentation. Both of these would be quite easy to add.

Further annoyance from Cargo

As I wrote earlier, because I am some kind of paranoid from the last century, I have hit cargo on the head so that it doesn't randomly download and run code from the internet.

This is done with stuff in my ~/.cargo/config. Of course this stops me actually accessing the real public repository (cargo automatically looks for .cargo/config in all parent directories, not just in $PWD and $HOME). No problem - I was expecting to have to override it.

However there is no way to sensibly override a config file!

So I have had to override it in a silly way: I made a separate area on my laptop which belongs to me but which is not underneath my home directory. Whenever I want to run cargo publish, I copy the crate to be published to that other area, which is not a direct or indirect subdirectory of anything containing my usual .cargo/config.

Cargo really is quite annoying: it has opinions about how everything is and how everything ought to be done. I wouldn't mind that, but unfortunately when it happens to be wrong it is often lacking a good way to tell it what should be done instead. This kind of thing is a serious problem in a build system tool.


09 March, 2019 02:19PM

hackergotchi for Chris Lamb

Chris Lamb

Book Review: Jeeves and the King of Clubs

Jeeves and the King of Clubs (2018)

Ben Schott

For the P.G. Wodehouse fan, the idea of bringing back such a beloved duo as Jeeves and Wooster will either bring out delight or dread. Indeed, the words you find others using often reveal their framing of such endeavours; is this a tribute, homage, pastiche, an imitation…?

Whilst neither parody nor insult, let us start with the "most disagreeable, sir." Rather jarring were the voluminous and Miscellany-like footnotes that let you know that the many allusions and references are all checked, correct and contemporaneous. All too clever by half, and it would ironically be a negative trait if it were personified by a character within the novel itself. Bertie's uncharacteristic knowledge of literature was also eyebrow-raising: whilst he should always have the mot juste within easy reach — especially for that perfect parliamentary insult — Schott's Wooster was just a bit too learned and bookish, ultimately lacking that blithe An Idiot Abroad element that makes him so affably charming.

Furthermore, Wodehouse's far-right Black Shorts group (who "seek to promote the British way of life, the British sense of fair play and the British love of Britishness") was foregrounded a little too much for my taste. One surely reaches for Wodehouse to escape contemporary political noise and nonsense, to be transported to that almost-timeless antebellum world which, of course, never really existed in the first place?

Saying that, the all-important vernacular is full of "snap and vim", the eponymous valet himself is superbly captured, and the plot has enough derring-do and high jinks to possibly assuage even the most ardent fan. The fantastic set pieces in both a Savile Row tailor and a ladies underwear store might be worth the price of admission alone.

To be sure, this is certainly ersatz Wodehouse, but should one acquire it? «Indeed, sir,» intoned Jeeves.

09 March, 2019 01:39PM

Andrew Cater

Debian BSP Cambridge March 9th 2019 - post 1

At Steve's. Breakfast and coffee have happened. There's a table full of developers, release managers, cables snaking all over the floor and a coffee machine. This is heaven for Debian developers (probably powered by ThinkPads). On IRC we're on #debian-uk (as ever) and #debian-bugs. Catch up with us there.

09 March, 2019 12:32PM by Andrew Cater (

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel



A minor RcppArmadillo bugfix release arrived on CRAN today. This version has two local changes. R 3.6.0 will bring a change in sample() (to correct a subtle bug for large samples) meaning many tests will fail, so in one unit test file we reset the generator to the old behaviour to ensure we match the (old) test expectation. We also backported a prompt upstream fix for an issue with drawing Wishart-distributed random numbers via Armadillo which was uncovered this week. I also just uploaded the Debian version.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 559 other packages on CRAN.

Changes are listed below:

Changes in RcppArmadillo version (2019-03-08)

  • Explicit setting of RNGversion("3.5.0") in one unit test to accommodate the change in sample() in R 3.6.0

  • Back-ported a fix to the Wishart RNG from upstream (Dirk in #248 fixing #247)

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 March, 2019 02:48AM

March 08, 2019

Muammar El Khatib

Spotify And Local Files

My favorite music player is undoubtedly Spotify. It is not a secret that its Linux support might not be the best and that some artists have just decided not to upload the music in the service. One of them is Tool, one of my favorite bands, too. I recently decided to play my Tool mp3 files with Spotify as local files and they were not playing. In order to fix that one has to:

08 March, 2019 11:38PM

hackergotchi for Jo Shields

Jo Shields

Bootstrapping RHEL 8 support on


On, we ship packages for Debian 8, Debian 9, Raspbian 8, Raspbian 9, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, RHEL/CentOS 6, and RHEL/CentOS 7. Because this is Linux packaging we’re talking about, making one or two repositories to serve every need just isn’t feasible – incompatible versions of libgif, libjpeg, libtiff, OpenSSL, GNUTLS, etc, mean we really do need to build once per target distribution.

For the most part, this level of “LTS-only” coverage has served us reasonably well – the Ubuntu 18.04 packages work in 18.10, the RHEL 7 packages work in Fedora 28, and so on.

However, when Fedora 29 shipped, users found themselves running into installation problems.

I was not at all keen on adding non-LTS Fedora 29 to our build matrix, due to the time and effort required to bootstrap a new distribution into our package release system. And, as if in answer to my pain, the beta release of Red Hat Enterprise 8 landed.

Cramming a square RPM into a round Ubuntu

Our packaging infrastructure relies upon a homogeneous pool of Ubuntu 16.04 machines (x64 on Azure, ARM64 and PPC64el on-site at Microsoft), using pbuilder to target Debian-like distributions (building i386 on the x64 VMs, and various ARM flavours on the ARM64 servers); and mock to target RPM-like distributions. So in theory, all I needed to do was drop a new RHEL 8 beta mock config file into place, and get on with building packages.

Just one problem – between RHEL 7 (based on Fedora 19) and RHEL 8 (based on Fedora 28), the Red Hat folks had changed package manager, dropping Yum in favour of DNF. And mock works by using the host distribution’s package manager to perform operations inside the build root – i.e. yum.deb from Ubuntu.

It’s not possible to install RHEL 8 beta with Yum. It just doesn’t work. It’s also not possible to update mock to $latest and use a bootstrap chroot, because reasons. The only options: either set up Fedora VMs to do our RHEL 8 builds (since they have DNF), or package DNF for Ubuntu 16.04.

For my sins, I opted for the latter. It turns out DNF has a lot of dependencies, only some of which are backportable from post-16.04 Ubuntu. The dependency tree looked something like:

  •  Update mock and put it in a PPA
    •  Backport RPM 4.14+ and put it in a PPA
    •  Backport python3-distro and put it in a PPA
    •  Package dnf and put it in a PPA
      •  Package libdnf and put it in a PPA
        •  Backport util-linux 2.29+ and put it in a PPA
        •  Update libsolv and put it in a PPA
        •  Package librepo and put it in a PPA
          •  Backport python3-xattr and put it in a PPA
          •  Backport gpgme1.0 and put it in a PPA
            •  Backport libgpg-error and put it in a PPA
        •  Package modulemd and put it in a PPA
          •  Backport gobject-introspection 1.54+ and put it in a PPA
          •  Backport meson 0.47.0+ and put it in a PPA
            •  Backport googletest and put it in a PPA
        •  Package libcomps and put it in a PPA
    •  Package dnf-plugins-core and put it in a PPA
  •  Hit all the above with sticks until it actually works
  •  Communicate to community stakeholders about all this, in case they want it

This ended up in two PPAs – the end-user usable one here, and the “you need these to build the other PPA, but probably don’t want them overwriting your system packages” one here. Once I convinced everything to build, it didn’t actually work – a problem I eventually tracked down and proposed a fix for here.

All told it took a bit less than two weeks to do all the above. The end result is, on our Ubuntu 16.04 infrastructure, we now install a version of mock capable of bootstrapping DNF-requiring RPM distributions, like RHEL 8.

RHEL isn’t CentOS

We make various assumptions about package availability, which are true for CentOS, but not RHEL (8). The (lack of) availability of the EPEL repository for RHEL 8 was a major hurdle – in the end I just grabbed the relevant packages from EPEL 7, shoved them in a web server, and got away with it. The second is structural – for a bunch of the libraries we build against, the packages are available in the public RHEL 8 repo, but the corresponding -devel packages are in a (paid, subscription required) repository called “CodeReady Linux Builder” – and using this repo isn’t mock-friendly. In the end, I just grabbed the three packages I needed via curl, and transferred them to the same place as the EPEL 7 packages I grabbed.

Finally, I was able to begin the bootstrapping process.

RHEL isn’t Fedora

After re-bootstrapping all the packages from the CentOS 7 repo into our “””CentOS 8″”” repo (we make lots of naming assumptions in our control flow, so the world would break if we didn’t call it CentOS), I tried installing on Fedora 29, and… Nope. Dependency errors. Turns out there are important differences between the two distributions. The main one is that any package with a Python dependency is incompatible, as the two handle Python paths very differently. Thankfully, the diff here was pretty small.

The final, final end result: we now do every RPM build on CentOS 6, CentOS 7, and RHEL 8. And the RHEL 8 repo works on Fedora 29.

MonoDevelop 7.7 on Fedora 29.

The only errata: MonoDevelop’s version control addin is built without support for ssh+git:// repositories, because RHEL 8 does not offer a libssh2-devel. Other than that, hooray!

08 March, 2019 04:15PM by directhex

March 07, 2019

Enrico Zini

Starting tornado on a random free port

One of the pieces of software I maintain for work is a GUI data browser that uses Tornado as a backend and a web browser as a front-end.

It is quite convenient to start the command and have the browser open automatically on the right URL. It's quite annoying to start the command and be told that the default port is already in use.

I've needed this trick quite often, also when writing unit tests, and it's time I note it down somewhere, so it's easier to find than going through Tornado's unittest code where I found it the first time.

This is how to start Tornado on a free random port:

from tornado.options import define, options
import tornado.netutil
import tornado.httpserver

define("web_port", type=int, default=None, help="listening port for web interface")

application = Application(self.db_url)

if options.web_port is None:
    # Port 0 asks the kernel for any free port; remember which one it picked
    sockets = tornado.netutil.bind_sockets(0, '')
    self.web_port = sockets[0].getsockname()[:2][1]
    server = tornado.httpserver.HTTPServer(application)
    server.add_sockets(sockets)
else:
    server = tornado.httpserver.HTTPServer(application)
    server.listen(options.web_port)

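For the unit-test case mentioned at the start, Tornado also ships a small helper that does the same trick in one call; a minimal sketch:

# Minimal sketch: Tornado's test helper returns a socket already bound to a free port.
from tornado.testing import bind_unused_port

sock, port = bind_unused_port()
print("test server can listen on port", port)
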
07 March, 2019 11:00PM

hackergotchi for Chris Lamb

Chris Lamb

Book Review: The Sellout

The Sellout (2016)

Paul Beatty

I couldn't put it down… is the go-to cliché for literature so I found it deeply ironic to catch myself in quite-literally this state at times. Winner of the 2016 Man Booker Prize, the first third of this were perhaps the most engrossing and compulsive reading experience I've had since I started seriously reading.

This book opens in medias res within the Supreme Court of the United States where the narrator lights a spliff under the table. As the book unfolds, it is revealed that this very presence was humbly requested by the Court due to his attempt to reinstate black slavery and segregation in his local Los Angeles neighbourhood. Saying that, outlining the plot would be misleading here as it is far more the ad-hoc references, allusions and social commentary that hang from this that make this such an engrossing work.

The tranchant, deep and unreserved satire might perhaps be merely enough for an interesting book but where it got really fascinating to me (in a rather inside baseball manner) is how the the latter pages of the book somehow don't live up the first 100. That appears like a straight-up criticism but this flaw is actually part of this book's appeal to me — what actually changed in these latter parts? It's not overuse of the idiom or style and neither is it that it strays too far from the original tone or direction, but I cannot put my finger on why which has meant the book sticks to this day in my mind. I can almost, just almost, imagine a devilish author such as Paul deliberately crippling one's output for such an effect…

Now, one cannot unreservedly recommend this book. The subject matter itself, compounded by being dealt with in such an flippant manner will be unpenetrable to many and deeply offensive to others, but if you can see your way past that then you'll be sure to get something—whatever that may be—from this work.

07 March, 2019 06:03PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

How to deal with insanity ?

There is a friend of mine from college days. As it happens, we drifted apart from each other; he chose one vocation and I chose another. At the time we were going to college, mobile phones were expensive. E-mail was also expensive, but I chose to spend my money on e-mail and other methods of communication. Then about 6-8 months back, out of the blue, my friend called me. It took me some time to place him (I just cannot remember people, and features change a lot) but I do remember experiences. We promised to meet, but at the time we were supposed to meet, it rained like it never rained before. I waited for an hour but wasn’t able to see him. I tried SMS and called him, but no answer. I tried a few times but never got him. He used to send me a message once in a while and I used to send a reply back. I was able to talk with his mum some days after that. Yesterday, I was trying to call some people, and his name popped up. On a hunch, I dialled his number and his sister came on the line. She was able to place me (I guess we might have met 6-8 years back or more), and she told me he’s gone insane. While I’m supposed to meet the family on the week-end to learn what happened, I am still not able to process it. I had known he had fallen into some bad company (his mum had shared this titbit) but can’t figure out for the life of me what could have driven him insane. I told them I would be coming on Sunday as I have work, but more importantly I am trying to create some sense of space or control so I can digest what’s happened. While I know it happens to people, it doesn’t happen to people I know, not to people I care about. I also came to know that all the time my phone was not able to get through, it was because he has a shitty jio connection, or the place where they live jio doesn’t have a good presence.

Now one part of me has a sort of morbid curiosity as to what chain of events led to it, while at the same time I don’t know if I would be able to help them, or what I should say or do. Feel totally helpless. If anybody has any ideas, please yell, comment.

07 March, 2019 02:20PM by shirishag75

Daniel Leidert

Exclude files from being exported into the zip/tar source archives on GitHub

GitHub (and probably GitLab too) provides various ways to export the Git branch contents or tags and releases as zip or tar archives. When creating a release, these tarballs/zipballs are automatically created and added to the release. I often find archives which contain a lot of files not useful to the end user, like .github directories, Git files (.gitignore, .gitattributes) or CI-related files (.travis.yml, .appveyor.yml). Sometimes they also contain directories (e.g. for test files) which upstream keeps in Git but does not need in the source distribution. There is an easy way to keep these files out of the automatically created source archives and keep the latter clean: use the export-ignore attribute in the .gitattributes file:

# don't export the github-pages source
/docs export-ignore
# export some other irrelevant directories
/foo export-ignore
# don't export the files necessary for CI
Gemfile export-ignore
.appveyor.yml export-ignore
.travis.yml export-ignore
# ignore Git related files
.gitattributes export-ignore
.gitignore export-ignore
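
As a quick local check, git archive honours the same export-ignore attribute that the hosting platform's generated archives use; a small sketch, assuming it is run from the repository root:

# Build the archive the way the hosting platform would, then list what ended up inside.
import subprocess
import zipfile

subprocess.run(["git", "archive", "--format=zip", "-o", "/tmp/source.zip", "HEAD"], check=True)

with zipfile.ZipFile("/tmp/source.zip") as zf:
    for name in zf.namelist():
        print(name)   # the export-ignored files should no longer appear here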

07 March, 2019 12:52PM by Daniel Leidert (

hackergotchi for Jonathan Dowland

Jonathan Dowland

glitched Amiga video

This is the fifth part in a series of blog posts. The previous post was Amiga/Gotek boot test. The next post is Learning new things about my old Amiga A500.

Glitchy component-video out

As I was planning out my next Gotek-floppy-adaptor experiment, disaster struck: the video out from my Amiga had become terribly distorted, in a delightfully Rob Sheridan fashion, sufficiently so that it was impossible to operate the machine.

Reading around, the most likely explanation seemed to be a blown capacitor. These devices are nearly 30 years old, and blown capacitors are a common problem. If it were in the Amiga, then the advice is to replace all the capacitors on the mainboard. This is something that can be done by an amateur enthusiast with some soldering skills. I'm too much of a beginner with soldering to attempt something like this. I was recommended a company in Aberystwyth called Mutant Caterpillar who do a full recap and repair service for £60 which seems very reasonable.

Philips CRT

Luckily, the blown capacitor (if that's what it was) wasn't in the Amiga, but in the A520 video adaptor. I dug my old Philips CRT monitor out of the loft and connected it directly to the Amiga and the picture was perfect. I had been hoping to avoid fetching it down, as I don't have enough space on my desk to leave it in situ, and instead must lug it over whenever I've found a spare minute to play with the Amiga. But it's probably not worth repairing the A520 (or sourcing a replacement) and the upshot is the picture via the RGB out is much clearer.

As I write this, I'm in a hotel room recovering after my first day at FOSDEM 2019, my first FOSDEM conference. There was a Retrocomputing devroom this year that looked really interesting but I was fully booked into the Java room all day today. (And I don't see mention of Amigas in any of the abstracts)

07 March, 2019 12:01PM

Learning new things about my old Amiga A500

This is the sixth part in a series of blog posts. The previous post was glitched Amiga video.

Sysinfo output for my A500

I saw a tweet from Sophie Haskins who is exploring her own A500 and discovered that it had an upgraded Agnus chip. The original A500 shipped with a set of chips which are referred to as the Original Chip Set (OCS). The second generation of the chips were labelled Enhanced Chip Set (ECS). A500s towards the end of their production lifetime were manufactured with some ECS chips instead. I had no idea which chipset was in my A500, but Sophie's tweet gave me a useful tip, she was using some software called sysinfo to enumerate what was going on. I found an ADF disk image that included Sysinfo ("LSD tools") and gave it a try. To my surprise, my Amiga has an ECS "AGNUS" chip too!

I originally discovered Sophie due to her Pizzabox Computer project: An effort to acquire, renovate and activate a pantheon of vintage "pizzabox" form-factor workstation computers. I once had one of these, the Sun SPARCStation 10, but it's long since gone. I'm mildly fascinated to learn more about some of these other machines. After proofreading Fabien Sanglard's DOOM book, I was interested to know more about NeXTstations, and Sophie is resurrecting a NeXTstation mono, but there are plenty of other interesting esoteric things on that site, such as Apple A/UX UNIX on a Quadra 610 (the first I'd heard of both Apple's non-macOS UNIX, and their pizzabox form-factor machines).

07 March, 2019 11:55AM

hackergotchi for Daniel Silverstone

Daniel Silverstone

Releasing Rustup 1.17.0

Today marks the release of rustup version 1.17.0 which is both the first version of rustup which I have contributed code to, and also the first version which I was responsible for preparing the release of. I thought I ought to detail the experience, but first, a little background…

At the end of last year, leading into this year, I made some plans which included an explicit statement to "give back" to the Rust community as I'd received a lot of help with, and enjoyment in, Rust from the community over the previous couple of years. I looked for ways I could contribute, including making a tiny wording PR against the compiler which I won't even bother linking here, but eventually I decided to try and help with the rust-lang/ repository and tried to tackle some of the issues therein.

Nick Cameron was, at the time, about to step down as a lead of the tools team and he ended up talking to me about maybe joining a working group to look after Rustup. I agreed and a little earlier this year, I became part of the Rustup working group, which is a sub-group of the Cargo team, part of the Rust developer tools teams.

Over the past few weeks we've been preparing a new release of Rustup to include some useful bug fixes and a few little feature tweaks. Rustup is not as glamorous a part of the ecosystem as perhaps Cargo or Rustc itself, but it's just as important I think, since it's the primary gateway through which people acquire Rust, and interact with the Rust toolchain ecosystem.

On Tuesday evening, as part of our weekly meeting, we discussed the 1.17.0 release plans and process, and since I'm very bad at stepping back at the right moment, I ended up volunteering to run the release checklist through and push 1.17.0 out of the door. Thankfully, between Nick and Alex Crichton we had a good set of instructions and so I set about making the release. I prepared a nice series of commits updating the version numbers, ensuring the lock file was up to date, making the shell script installer frontend include the right version numbers, and pushed them off to be built by the CI. Unfortunately a break in a library we depend on, which only showed its face on our mingw builders (not normally part of the CI since there are so few available to the org) meant that I had to reissue the build and go to bed.

Note that I said I had to go to bed - this was nearing midnight and I was due up before 7am the following day. This might give you some impression of the state of mind I was in trying to do this release and thus perhaps a hint of where I'm going to be going with this post…

In the morning, I checked and the CI pipelines had gone green, so I waited until Alex showed up (since he was on UTC-6) and as soon as I spotted him online, around 14:45 UTC, I pinged him and he pushed the button to prep the release after we did a final check that things looked okay together. The release went live at 14:47 UTC.

And by 15:00 UTC we'd found a previously unnoticed bug - in the shell installer frontend - that I had definitely tested the night before. A "that can't possibly have ever worked" kind of bug which was breaking any CI job which downloaded rustup from scratch. Alex deployed a hotfix straight to the dist server at 15:06 UTC to ensure that as few people as possible encountered the issue, though we did get one bug report (filed a smidge later at 15:15 UTC) about it.

By this point I was frantic - I KNEW that I'd tested this code, so how on earth was it broken? I went rummaging back through the shell history on the system where I'd done the testing, reconstructing the previous night's fevered testing process and eventually discovered what had gone wrong. I'd been diffing the 1.16.0 and 1.17.0 releases and had somehow managed to test the OLD shell frontend rather than the new one. So the change to it which broke the world hadn't been noticed by me at that point.

I sorted a fix PR out and we now have some issues open regarding ensuring that this never happens again. But what can we do to ensure that the next release goes more smoothly? For one, we need as a team to work out how to run mingw tests more regularly, and ideally on the PRs. For two, we need to work out how we can better test the shell frontend, which is currently only manually verified, under CI, given that its sole purpose is to download rustup from the Internet, making it a bit of a pain to verify in a CI environment.

But… we will learn, we will grow, and we won't make these mistakes again. I had been as careful as I thought I could be in preparing 1.17.0, and I still had two painful spikes, one from uncommonly run CI, and one from untested code. No matter how careful one is, one can still be bitten by things.

On a lighter note, for those who use rustup and wonder what's in 1.17.0 over the previous (1.16.0) release, here's a simplified view onto a mere subset of the changes...

  • Better formatting of long download times. Manish Goregaokar
  • Various improvements to Lzu Tao
  • A variety of error message improvements. Hirokazu Hata
  • Prevent panic on missing components. Nick Cameron
  • Support non-utf8 arguments in proxies. Andy Russell
  • More support for homebrew. Markus Reiter
  • Support for more documents in rustup doc. Wang Kong
  • Display progress during component unpack. Daniel Silverstone
  • Don't panic on bad default-host. Daniel Silverstone
  • A variety of code cleanups and fixes. So many of them. Dale Wijnand
  • Better error reporting for missing binaries. Alik Aslanyan
  • Documentation of, and testing for, powershell completions. Matt Gucci
  • Various improvements to display of information in things like rustup default or rustup status. Trevor Miranda
  • Ignoring of EPIPE in certain circumstances to improve scripting use of rustup. Niklas Claesson
  • Deprecating cURL in rustup's download internal crate. Trevor Miranda
  • Error message improvements wrt. unavailable components. Daniel Silverstone
  • Improvements in component listing API for better automation. Naftuli Kay

If I missed your commits out, it doesn't mean I thought they weren't important; it merely means I am lazy.

As you can see, we had a nice selection of contributors, from Rustup WG members, to drive-by typo fixes (unlisted for the most part) to some excellent new contributors who are being more and more involved as time passes.

We have plenty of plans for 1.18.0, mostly centered around tidying up the codebase more, getting rid of legacies in the code where we can, and making it easier to see the wood for the trees as we bring rustup up-to-snuff as a modern part of the Rust ecosystem.

If you'd like to participate in Rustup development, why not join us on our discord server? You can visit and once you've jumped through some of the anti-spam hoops (check your DMs on joining) you can come along to #wg-rustup and we'll be pleased to have you help. Failing that, you can always just open issues or PRs on if you have something useful to contribute.

07 March, 2019 09:12AM by Daniel Silverstone

Russ Allbery

Net::Duo 1.02

This is an alternative Perl interface to the Duo Security second-factor authentication service. This release supports the new required pagination for returning lists of users and integrations, which will take effect on March 15, 2019. It does this in the simplest way possible: just making repeated calls until it retrieves the full list.

With this release, I'm also orphaning the package. I wrote this package originally for Stanford, and had thought I'd continue to find a reason to maintain it after I left. But I'm not currently maintaining any Duo integrations that would benefit from it, and my current employer doesn't use Perl. Given that, I want to make it obvious that it's not keeping up with the current Duo API and you would be better off using the Perl code that Duo themselves provide.

That said, I think the object-oriented model this package exposes is nicer and makes for cleaner Perl code when interacting with Duo. If you agree, please feel welcome to pick up maintenance, and let me know if you want the web site redirected to its new home.

This release also updates test code, supporting files, and documentation to my current standards, since I was making a release anyway.

You can get the current release from the Net::Duo distribution page.

07 March, 2019 05:52AM

March 06, 2019

Enrico Zini

Getting rusage of child processes on python asyncio

I am writing a little application server for microservices written as compiled binaries, and I would like to log execution statistics from getrusage(2).

The application server is written using asyncio, and processes are managed using asyncio subprocesses.

Unfortunately, asyncio uses os.waitpid instead of os.wait4 to reap child processes, and to get rusage information one has to delve into the asyncio innards, and provide a custom ChildWatcher implementation. Here's how I did it:

import asyncio
from asyncio.log import logger
from contextlib import contextmanager
import os

class ExtendedResults:
    def __init__(self):
        self.rusage = None
        self.returncode = None

class SafeChildWatcherWithRusage(asyncio.SafeChildWatcher):
    """
    SafeChildWatcher that uses os.wait4 to also get rusage information.
    """
    rusage_results = {}

    @classmethod
    @contextmanager
    def monitor(cls, proc):
        """Return an ExtendedResults that gets filled when the process exits"""
        assert > 0
        pid =
        extended_results = ExtendedResults()
        cls.rusage_results[pid] = extended_results
        try:
            yield extended_results
        finally:
            cls.rusage_results.pop(pid, None)

    def _do_waitpid(self, expected_pid):
        # The original is in asyncio/; on new python versions, it
        # makes sense to check changes to it and port them here
        assert expected_pid > 0

        try:
            pid, status, rusage = os.wait4(expected_pid, os.WNOHANG)
        except ChildProcessError:
            # The child process is already reaped
            # (may happen if waitpid() is called elsewhere).
            pid = expected_pid
            returncode = 255
            rusage = None  # no rusage is available if someone else reaped it
            logger.warning(
                "Unknown child process pid %d, will report returncode 255",
                pid)
        else:
            if pid == 0:
                # The child process is still alive.
                return

            returncode = self._compute_returncode(status)
            if self._loop.get_debug():
                logger.debug('process %s exited with returncode %s',
                             expected_pid, returncode)

        extended_results = self.rusage_results.get(pid)
        if extended_results is not None:
            extended_results.rusage = rusage
            extended_results.returncode = returncode

        try:
            callback, args = self._callbacks.pop(pid)
        except KeyError:  # pragma: no cover
            # May happen if .remove_child_handler() is called
            # after os.waitpid() returns.
            if self._loop.get_debug():
                logger.warning("Child watcher got an unexpected pid: %r",
                               pid, exc_info=True)
        else:
            callback(pid, returncode, *args)

    @classmethod
    def install(cls):
        """Make this the child watcher used by asyncio"""
        loop = asyncio.get_event_loop()
        child_watcher = cls()
        child_watcher.attach_loop(loop)
        asyncio.set_child_watcher(child_watcher)

To use it:

from .hacks import SafeChildWatcherWithRusage


    # Note: SafeChildWatcherWithRusage.install() must have been called once at
    # startup so that this watcher is the one reaping child processes.
    def run(self, *args, **kw):
        kw["stdin"] = asyncio.subprocess.PIPE
        kw["stdout"] = asyncio.subprocess.PIPE
        kw["stderr"] = asyncio.subprocess.PIPE
        self.started = time.time()

        self.proc = yield from asyncio.create_subprocess_exec(*args, **kw)

        from .hacks import SafeChildWatcherWithRusage
        with SafeChildWatcherWithRusage.monitor(self.proc) as results:
            # the coroutines gathered here (feeding stdin, draining
            # stdout/stderr) were elided in the original post; the helper
            # names below are placeholders for illustration only
            yield from asyncio.tasks.gather(
                self._feed_stdin(),
                self._read_stdout(),
                self._read_stderr())
            # proc.wait() has to complete while the monitor is active, so
            # that the watcher can record the rusage for this pid
            self.returncode = yield from self.proc.wait()
        self.rusage = results.rusage
        self.ended = time.time()
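
To close the loop on the original goal of logging execution statistics, here is a minimal sketch (an illustration added here, not part of the original post) of what consuming the collected data could look like; the logger name and the choice of fields are assumptions:

import logging

log = logging.getLogger("appserver")

def log_rusage(cmd, proc):
    # proc is assumed to be an object like the one above, with .rusage
    # (a resource.struct_rusage as returned by os.wait4()), .started and
    # .ended already filled in
    ru = proc.rusage
        "%s: wall %.2fs, user %.2fs, system %.2fs, max RSS %d kB",
        cmd, proc.ended - proc.started,
        ru.ru_utime, ru.ru_stime,
        ru.ru_maxrss)  # ru_maxrss is in kilobytes on Linux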

06 March, 2019 11:00PM

Molly de Blanc


For about a year now I’ve had occasional run-ins with “light” internet abuse and cyberbullying. There are a lot of resources around youth (and sometimes even college students) who are being cyberbullied, but not a lot for adults.

I wanted to write a bit about my experiences. As I write this, I have had eight instances of being the recipient of abuse from threads on popular forum sites, emails, and blog posts. I’ve tried to be blithe by calling it cute things like “people being mean to me on the Internet,” but it’s cyberbullying. I’ve never been threatened, per se, but I do find the experiences traumatic and stressful.

Here’s my advice on how to deal with being the recipient (I hesitate to use the word “victim”) of cyberbullying. I spoke with a few people — people I know who have dealt with internet abuse and some professionals — and this is what I came up with.

Take care of yourself.

First and foremost, take care of yourself. Stop reading the comments or the blog post. Close your email or laptop. Remove yourself from the direct interaction with the bullying. I know this is hard, sometimes it’s really hard to look away, but it’s important to do, at least for a bit.

I like to get myself a mocha (if you like to use food/treats as a source of comfort — this may not be your style). I joke that this is me celebrating being successful enough to make people publicly upset with me.

I joke a lot about it. Some of it I find genuinely funny — someone on Slashdot said of me: Molly has trust issues, which is why she’s single. I think this is -hilarious-. Humor helps me deal with difficult situations, but that’s just me.

I also have a file of nice things people have said about me. I don’t feel the need to reference it, but I have it there just in case.

Reach out to your support network.

Tell your friends, family, or whomever. Even if you’re not interested in talking about your feelings — tell them that — just let other people know what you’re going through. In my experience, I enjoy a little solidarity.

Don’t engage.

Really. This is the hardest part. Part of me wants to talk with people who are obviously hurting and suffering a lot, part of me wants to correct factual errors, or even share with others the things I find funny. Engaging is about the worst thing you can do, according to everything I’ve heard.

Talk to a lawyer or reach out to local law enforcement.

This is for more extreme cases — especially when people are threatening you harm. This particular episode of Reply All, “The Snapchat Thief,” covers a bit about when talking to law enforcement might be the right thing to do.

This part is easier to figure out when you are in the same general area or country, or know the identities of those harassing you. Several people I know (myself included) have dealt with international harassment.

On not being a man.

A number of the men in my life are upset about this recent round of abuse — they’re generally more upset than the women. The men come off as shocked or surprised, angry and upset, and some of them are desperately searching for something to do.

The women and enbies in my life are a lot more blase about the whole thing. They respond with commiseration, but, like me, accept this as a part of life.

Women and enbies I have spoken with about this just assume that people are going to be trashing them on the web. When I decided to become more visible within free software, I understood that I was going to be abused by strangers on the internet (enbies, men, and women — all of which have said harmful things about me).

Abuse is an assumption, rather than a possibility.

I was discussing this with a friend and we considered the problem of trying to not be a target. Bullies will find targets. If you try to hold back and be unobjectionable, other people are being abused in your place. Abusers gonna abuse. If you’re strong (or self-sacrificing) you may decide to make yourself a target, or at least accept the risk of being a target, by being visible in your work.

A few final thoughts

Being bullied, in any form, is terrible. I was badly bullied when I was younger, and facing that again as an adult is equally traumatic.

I’m sorry if you’re going through this experience. Solidarity, empathy, and sympathy.

06 March, 2019 02:19PM by mollydb

Antoine Beaupré

February 2019 report: LTS, HTML mail, new phone and new job

Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

This is my final LTS report. I have found other work and will unfortunately not be able to continue working on the LTS project in the foreseeable future. I will continue my volunteer work on Debian and might even contribute to LTS in my normal job, but not directly part of the LTS team.

It is too bad because that team is doing essential work, and needs more help. Security is, at best, lacking everywhere and I do not believe the current approach of "minimal viable product, move fast, then break things" is sustainable. The people working on Linux distributions and also the LTS people are doing hard, dirty work of maintaining free software in the long term. It's thankless but I believe it's one of the most important jobs out there right now. And I suspect there will be only more of it as time goes by.

Legacy systems are not going anywhere: this is the next generation's "y2k bug": old, forgotten software no one understands or cares to work with that suddenly break or have a critical vulnerability that needs patching. Moving faster will not help us fix this problem: it only piles up more crap to deal with for real systems running in production.

The survival of humans and other species on planet Earth in my view can only be guaranteed via a timely transition towards a stationary state, a world economy without growth.

-- Peter Custers

Website work

I again worked on the website this month, doing one more mass import (MR 53) which was finally merged by Holger Levsen, after I fixed an issue with PGP signatures showing up on the website.

I also polished the misnamed "audit" script that checks for missing announcements on the website and published it as MR 1 on the "cron" project of the webmaster team. It's still a "work in progress" because it is still too noisy: there are a few DLAs missing already and we haven't published the latest DLAs on the website.

The remaining work here is to automate the import of new announcements on the website (bug #859123). I've done what is hopefully the last mass import and updated the workflow in the wiki.

Finally, I have also done a bit of cleanup on the website that was necessary after the mass import which also required rewrite rules at the server level. Hopefully, I will have this fairly well wrapped up for whoever picks this up next.

Python GPG concerns

Following a new vulnerability (CVE-2019-6690) disclosed in the python-gnupg library, I have expressed concerns at the security reliability of the project in future updates, referring to wider issues identified by isis lovecroft in this post.

I suggested we should simply drop security support for the project, citing it didn't have many reverse dependencies. But it seems that wasn't practical: the response was that it was actually possible to keep on maintaining it, and such an update was issued for jessie.

Golang concerns

Similarly, I have expressed more concerns about the maintenance of Golang packages following the disclosure of a vulnerability (CVE-2019-6486) regarding elliptic curve implementations in the core Golang libraries. An update (DLA-1664-1) was issued for the core, but because Golang is statically compiled, I was worried the update wasn't sufficient: we also needed to upload updates for any build dependency using the affected code as well.

Holger asked the golang team for help and I also asked on IRC. Apparently, all the non-dev packages (with some exceptions) were binNMU'd in stretch, but the process needs to be clarified.

I also wondered if this maintenance problem could be resolved in the long term by switching to dynamic linking. Ubuntu tried to switch to dynamic linking but abandoned the effort, so it seems Golang will be quite difficult to maintain for security updates in the foreseeable future.

Libarchive updates

I have reproduced the problem described in CVE-2019-1000020 and CVE-2019-1000019 in jessie. I published a fix as DLA-1668-1. I had to build the update without sbuild's overlay system (in a tar chroot) otherwise the cpio tests fail.

Netmask updates

This one was minimal: a patch was sent by the maintainer so I only wrote and sent DLA 1665-1. Interestingly, I didn't have access to the .changes file which made writing the DLA a little harder, as my workflow normally involves calling gen-DLA --save with the .changes file which autopopulates a template. I learned that .changes files are normally archived on (specifically in /srv/, but not in the case of security uploads.


Libreoffice

I once again tried to tackle an issue (CVE-2018-16858) with Libreoffice. The last time I tried to work on LibreOffice, the test suite was failing and the linker was crashing after hours of compilation and I never got anywhere. But that was wheezy, so I figured jessie might be in better shape.

I quickly got into trouble with sbuild: I ran out of space on both / and /home so I moved all my photos to an external drive (!). The patch ended up being trivial. I could reproduce with a simple proof of concept, but could not quite get code execution going. It might just be that I haven't found the right Python module to load, so I assumed the code was vulnerable and, given the patch was simple, it was worth doing an update.

The build ended up taking close to nine hours and 35GiB of disk space. I published DLA-1669-1 as a result.

I also opened a bug report against dput-ng because it still doesn't warn users about uploads to security-master the same way dput does.


Enigmail

Finally, Enigmail was taken off the official support list in jessie when the debian-security-support proposed update was approved.

Other free software work

Since I was going to start that new job in March, I figured I would try to take some time off before work starts. I therefore mostly tried to wrap things up and didn't do as much volunteer work as I usually do. I'm unsure I'll be able to do as much volunteer work now that I'm starting a full-time job, so this might possibly be my last report for a while.

Debian work before the freeze

I uploaded new versions of bitlbee-mastodon (1.4.1-1), sopel (6.6.3-1 and 6.6.3-2) and dateparser (0.7.1-1). I've also sponsored new uploads of smokeping and tuptime.

I also uploaded convertdate to NEW as it was a (missing but optional) dependency of dateparser. Unfortunately, it didn't make it through NEW in time for the freeze so dateparser won't be totally fixed in buster.

I also made two new releases of feed2exec, my programmable feed reader, to fix date parsing on broken feeds, add a JSON output plugin, and fix an issue with the ikiwiki_recentchanges plugin.

New phone

I got tired and bought a new phone. Even though I have almost a dozen old phones in a plastic box here, most of them are basically unusable:

  • two are just "feature phones" - I need OSMand
  • two are Nokia n900 phones that can't read a SIM card
  • at least two have broken screens
  • one is "declared stolen or lost" (same, right?) which means it can't be used as a phone at all, which is totally stupid if you ask me

I managed to salvage the old htc-one-s I had. It's still a little buggy (it crashes randomly) and a little slow, but generally works and I really like how small it is. It's going to be hard to go back to a bigger format.

I bought a Fairphone 2 (FP2). It was pricey, and it's crazy because they might come up with the FP3 this year, but I was sick of trying to cross-reference specification tables and LineageOS download pages. The FP2 just works with an "open" Android version (and LOS) out of the box. But more importantly, the FP project tries to avoid major human rights issues in the sourcing of components and the production of the device, something that's way too often overlooked. Many minerals involved in the fabrication of modern electronics come from conflict zones or involve horrible (child) labour conditions. Fixing those issues should be our priority, maybe even before hardware or software freedom.

Even without completely addressing those issues, the fact that it scored a perfect 10 on iFixit's repairability scale is amazing. It seems parts are difficult to find, even in Europe. The phone doesn't ship to the Americas from the original website, which makes it difficult to buy, but some shops do ship to Canada, like Ecosto.

So we'll see how that goes. I will, as usual, document my experiences in the wiki, in fairphone2.

Mailing list experiments

As part of my calendar project, I figured I would keep my "readers" informed of my progress this year and send them an update every month or so. I was inspired by this post as I said last week: I can't stop thinking about it.

So I kept working on Mailman 3. Unfortunately, only one of my proposed patches was merged. Many of them are "work in progress" (WIP) of course, but I was hoping to get more feedback on the proposals, especially the no notification workflow. Such a workflow delegates the sending of confirmation mails to the caller, which enables them to send more complex email than the straitjacket the templating system forces you into: you could then control every part of the email, not just the body and subject, but also content type, attachments and so on. That didn't seem to get traction: some informal comments I received said this wasn't the right fix for the invite problem, but then no one is working on fixing the invite problem either, so I wonder where that is going to go.

Unabashed, I tried to provide a French translation, which allowed me to send an actual invite fully translated. This was a lot of work for not much benefit, so that was frustrating as well.

In the end, I ended up just with a Bcc list that I keep as an alias in my ~/.mutt/aliases, which notmuch reads thanks to my notmuch-address hack. In the email, I proposed my readers an "opt-out": if they don't write back, they're on the mailing list. It's spammy, but the readers are not just the general public: they are people I know well, who are close to me, and to whom I have given a friggin' calendar (at least most of them).

If I find the energy, I'll finish setting up Mailman 3 just the way I like and use it to do the next mailing. But I can't help but think the mailing list is overkill for this now: the mailing with a Bcc list worked without a flaw, as far as I could tell, and it means minimal maintenance. So I'm not sure I'll battle Mailman 3 much longer, which is a shame because I happen to believe it's probably our best bet to keep mailing lists (and therefore probably email itself) alive in the future.

Emailing HTML in Notmuch

I actually had to write content for that email too - just messing around with the mailing list server is one thing, but the whole point is to actually say something. Or, in my case, show something, which is difficult using plain text. So I went crazy and tried to send HTML mail with notmuch. The thread is interesting: I encourage you to read it in full, but I'll quote the first post here for posterity:

I know, I know, HTML email is "evil"[1]. I mostly never ever use it, in fact, I don't remember the last time I consciously sent HTML. Maybe I did so back when I was using Netscape Communicator[2][3], but whatever.

The reason I thought about this again is I have been doing more photography these days and, well, being allergic to social media, I have very few ways of sharing those photographs with families and friends. I have tried creating a gallery website with an RSS feed but I'm sure no one here will be surprised that the uptake is minimal, if non-existent. People expect to have stuff pushed to them, like Instagram, Facebook, Twitter or Spam does.

So I thought[4] of Email again: the original social network! I figured I would just make a mailing list, and write to my people once in a while to let them know about my new pictures. And while writing the first email, I realized it was pretty silly to not include images, or at least links to images in the email.

I'm sure you can see where this is going. A link in the email: who's going to click that. Who clicks now anyways, with all the tapping[5] going on. So the answer comes naturally: just write frigging HTML email. Don't be a rms^Wreligious zealot and do the right thing, what works basically everywhere[6] (even notmuch!).

So I started Thunderbird and thought "what the heck am I doing! there must be a better way!" After searching for "message mode emacs html email ktxbye", I found some people already thought about this problem and came up with somewhat elegant solutions[7]. I built on that by trying to come up with a pure elisp solution, which goes a little like this:

(defun anarcat/notmuch-html-convert ()
  "create an HTML part from a Markdown body

This will not work if there are *any* attachments of any form, those should be added after."
  (interactive)
  ;; the navigation calls marked "reconstructed" below were lost when this
  ;; post was quoted; they are educated guesses, not the author's exact code
  (save-excursion
    ;; fetch subject, it will be the HTML version title
    (message "building HTML attachment...")
    (message-goto-subject)              ; reconstructed: start from the Subject header
    (beginning-of-line)
    (search-forward ":")
    (let ((beg (point)))
      (end-of-line)
      (setq subject (buffer-substring beg (point))))
    (message "determined title is %s..." subject)
    ;; wrap signature in a <pre>
    (message-goto-signature)            ; reconstructed: jump to the signature
    (forward-line -1)
    ;; save and delete signature which requires special formatting
    (setq signature (buffer-substring (point) (point-max)))
    (delete-region (point) (point-max))
    ;; set region to top of body then end of buffer
    (message-goto-body)                 ; reconstructed: mark the body boundaries
    (push-mark)
    (goto-char (point-max))
    (narrow-to-region (point) (mark))
    ;; run markdown on region
    (setq output-buffer-name "*notmuch-markdown-output*")
    (message "running markdown...")
    (markdown output-buffer-name)
    (widen)
    (with-current-buffer output-buffer-name
      ;; add signature formatted as <pre>
      (goto-char (point-max))
      (insert "\n<pre>")
      (insert signature)
      (insert "</pre>\n")
      (markdown-add-xhtml-header-and-footer subject))
    (message "done the dirty work, re-inserting everything...")
    ;; restore signature
    (insert signature)
    (insert "<#multipart type=alternative>\n")
    (insert "<#part type=text/html>\n")
    (insert-buffer output-buffer-name)
    (insert "<#/multipart>\n")
    (let ((f (buffer-size (get-buffer output-buffer-name))))
      (message "appended HTML part (%s bytes)" f))))

For those who can't read elisp for breakfast, this does the following:

  1. parse the current email body as markdown, in a separate buffer
  2. make the current email multipart/alternative
  3. add an HTML part
  4. inject the HTML version in the HTML part

There's some nasty business with formatting the signature correctly by wrapping it in a <pre> that's going on there - I took that from Thunderbird as well.

(For those who do read elisp for breakfast, improvements and comments on the coding style are very welcome.)

The idea is that you write your email normally, but in markdown. When you're done writing that email, you launch the above function (carefully bound to "M-x anarcat/notmuch-html-convert" here) which takes that email and adds an equivalent HTML part to it. You can then even tweak that part to screw around with the raw HTML if you feel depressed or nostalgic.

What do people think? Am I insane? Could this work? Does this belong in notmuch? Or maybe in the tips section? Should I seek therapy? Do you hate markdown? Expand on the relationship between your parents and text editors.

Thanks for any feedback,


PS: the above, naturally, could be adapted to parse the body as RST, asciidoc, texinfo, latex or whatever insanity you think would be more appropriate, I don't care. The idea is the same.

PPS: I remember reading about someone wanting to declare a text/markdown mimetype for email, and remembering it was all backwards and weird and I can't find the reference anymore. If some lazyweb magic person could forward the link to me I would be grateful.

 [1]: one of so many:
 [3]: yes my age is showing
 [4]: to be fair, this article encouraged me quite a bit:
 [5]: not the bass guitar one, unfortunately

I edited the original message to include the latest version of the script, which (unfortunately) lives in my private dotfiles git repository.

In the end, all that effort didn't quite do it: the image links would break in webmail when seen from Chromium. This is apparently intended behaviour: the problem was that I am embedding the username/password of the gallery in the HTTP URL, using in-URL credentials which is apparently "deprecated" even though no standard actually says so. So I ended up generating a full HTML version of the frigging email, complete with a link on top of the email saying "if this email doesn't display properly, click the following".

Now I remember why I dislike HTML email. Yet my readers were quite happy to see the images directly and I suspect most of them wouldn't click through on individual images to see each photo, so I think it's worth the trouble.

And now that I think about it, it feels silly not to post those updates on this blog now. But the gallery is private right now, and I think I'd like to keep it that way: it gives me more freedom to share more intimate pictures with people.

Using dtach instead of screen for my IRC bouncer

I have been using irssi in a screen session for a long time now. Recently I started thinking about simplifying that setup by setting up password-less authentication to the session, but also running it as a separate user. This was especially important to keep possible compromises of the IRC client limited to a sandboxed account instead of my more powerful user.

To further limit the impact of a possible compromise, I also started using dtach instead of GNU screen to handle my irssi session: irssi can still run arbitrary code, but at least you can't just open a new window in screen and need to think a little more about how to do it.

Eventually, I could make a profile in systemd to keep it from forking at all, although I'm not sure irssi could still work in such an environment. The change broke the "auto-away script" which relies on screen's peculiar handling of the socket to signify if the session is attached, so I filed that as a feature request.

Other work

06 March, 2019 02:04AM

March 05, 2019

Enrico Zini

Serving debian-distributed javascript libraries in Tornado

Debian conveniently distribute JavaScript libraries, and expects packaged software to use them rather than embedding their own copy.

Here is a convenient custom StaticFileHandler for Tornado that looks for the Debian-distributed versions of JavaScript libraries, and falls back to the vendored versions if they are not found:

from tornado import web
import pathlib

class StaticFileHandler(web.StaticFileHandler):
    """
    StaticFileHandler that allows overriding paths in the static directory with
    system provided versions
    """
    SYSTEM_ASSET_PATH = pathlib.Path("/usr/share/javascript")

    def get_absolute_path(self, root, path):
        path = pathlib.PurePath(path)
        if not path.parts:
            return super().get_absolute_path(root, path)

        system_dir = self.SYSTEM_ASSET_PATH.joinpath(path.parts[0])
        if system_dir.is_dir():
            # If that asset directory exists in the system, look for things in
            # there
            return self.SYSTEM_ASSET_PATH.joinpath(path)
        else:
            # Else go ahead with the default static dir
            return super().get_absolute_path(root, path)

    def validate_absolute_path(self, root, absolute_path):
        """
        Rewrite of tornado's validate_absolute_path not to raise an error for
        paths in /usr/share/javascript/
        """
        root = pathlib.Path(root)
        absolute_path = pathlib.Path(absolute_path)

        is_system_root = absolute_path.parts[:len(self.SYSTEM_ASSET_PATH.parts)] == self.SYSTEM_ASSET_PATH.parts
        is_static_root = absolute_path.parts[:len(root.parts)] == root.parts

        if not is_system_root and not is_static_root:
            raise web.HTTPError(403, "%s is not in root static directory or system assets path",
                                self.path)

        if absolute_path.is_dir() and self.default_filename is not None:
            # need to look at the request.path here for when path is empty
            # but there is some prefix to the path that was already
            # trimmed by the routing
            if not self.request.path.endswith("/"):
                self.redirect(self.request.path + "/", permanent=True)
                return None
            absolute_path = absolute_path.joinpath(self.default_filename)
        if not absolute_path.exists():
            raise web.HTTPError(404)
        if not absolute_path.is_file():
            raise web.HTTPError(403, "%s is not a file", self.path)
        return str(absolute_path)

This is how to use it:

class DebianApplication(tornado.web.Application):
    def __init__(self, *args, **settings):
        from .static import StaticFileHandler
        settings.setdefault("static_handler_class", StaticFileHandler)
        super().__init__(*args, **settings)
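
For illustration, here is a minimal sketch of wiring this up (the handler, port and directory names below are assumptions, not part of the original post): the application only needs the usual static_path setting, and static_url() lookups then transparently prefer the system copies when they exist.

import tornado.ioloop
import tornado.web

class HomeHandler(tornado.web.RequestHandler):
    def get(self):
        self.render("index.html")

def main():
    # DebianApplication is the subclass defined above
    app = DebianApplication(
        [(r"/", HomeHandler)],
        template_path="templates",   # hypothetical template directory
        static_path="static",        # vendored fallback copies live here
    )
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

if __name__ == "__main__":
    main()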

And from HTML it's simply a matter of matching the first path component to what is used by Debian's packages under /usr/share/javascript:

    <link rel="stylesheet" href="{{static_url('bootstrap4/css/bootstrap.min.css')}}">
    <script src="{{static_url('jquery/jquery.min.js')}}"></script>
    <script src="{{static_url('popper.js/umd/popper.min.js')}}"></script>
    <script src="{{static_url('bootstrap4/js/bootstrap.min.js')}}"></script>

I find it quite convenient: this way I can start writing prototype code without worrying about fetching javascript libraries to bundle.

I only need to start worrying about it if I need to deploy outside of Debian, or to old stable versions of Debian that don't contain the required JavaScript dependencies. In that case, I just cp -r from a working /usr/share/javascript into Tornado's static directory, and I'm done.

05 March, 2019 11:00PM

Python hacks: opening a compressed mailbox

Python mailbox.mbox is not good at opening compressed mailboxes: pointed at a .gz or .xz file it does not raise an error, it simply fails to find any messages:

>>> import mailbox
>>> print(len(mailbox.mbox("/tmp/test.mbox")))
>>> print(len(mailbox.mbox("/tmp/test.mbox.gz")))
>>> print(len(mailbox.mbox("/tmp/test1.mbox.xz")))

For a prototype rewrite of the MIA team's Echelon (the engine behind mia-query), I needed to scan compressed mailboxes, and I had to work around this limitation.

Here is the alternative mailbox.mbox implementation:

import lzma
import gzip
import bz2
import mailbox
from pathlib import Path                  # needed by the annotations below
from typing import BinaryIO, Generator    # needed by the annotations below

class StreamMbox(mailbox.mbox):
    """
    mailbox.mbox does not support opening a stream, which is sad.

    This is a subclass that works around it
    """
    def __init__(self, fd: BinaryIO, factory=None, create: bool = True):
        # Do not call parent __init__, just redo everything here to be able to
        # open a stream. This will need to be re-reviewed for every new version
        # of python's stdlib.

        # Mailbox constructor
        self._path = None
        self._factory = factory

        # _singlefileMailbox constructor
        self._file = fd
        self._toc = None
        self._next_key = 0
        self._pending = False       # No changes require rewriting the file.
        self._pending_sync = False  # No need to sync the file
        self._locked = False
        self._file_length = None    # Used to record mailbox size

        # mbox constructor
        self._message_factory = mailbox.mboxMessage

    def flush(self):
        raise NotImplementedError("StreamMbox is a readonly class")

class UsageExample:
    # Suffix -> opener map, reconstructed from the imports and the .get(path.suffix) lookup below
    DECOMPRESS = {
        ".gz":,
        ".bz2":,
        ".xz": lzma.open,
    }

    @classmethod
    def scan(cls, path: Path) -> Generator[ScannedEmail, None, None]:
        decompress = cls.DECOMPRESS.get(path.suffix)
        if decompress is None:
            with open(path.as_posix(), "rb") as fd:
                yield from cls.scan_fd(path, fd)
        else:
            with decompress(path.as_posix(), "rb") as fd:
                yield from cls.scan_fd(path, fd)

    @classmethod
    def scan_fd(cls, path: Path, fd: BinaryIO) -> Generator[ScannedEmail, None, None]:
        mbox = StreamMbox(fd)
        for msg in mbox:
            # build and yield a ScannedEmail (defined elsewhere in the
            # prototype) for each message; the loop body was elided in the original
            ...
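
For completeness, here is a minimal sketch (an illustration added here, not part of the original post) of using StreamMbox directly on the compressed mailbox from the opening example:

import gzip

# Open the compressed file ourselves and hand the binary stream to StreamMbox;
# unlike plain mailbox.mbox, this sees the decompressed contents.
with"/tmp/test.mbox.gz", "rb") as fd:
    mbox = StreamMbox(fd)
    print(len(mbox))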

05 March, 2019 04:57PM

Reproducible builds folks

Reproducible Builds: Weekly report #201

Here’s what happened in the Reproducible Builds effort between Sunday February 24 and Saturday March 2 2019:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week:

  • Chris Lamb:
    • Improved the displayed comment when falling back to a binary diff to include the file type. (#49)
    • Tidied definition of “no file-specific differences were detected” message suffix. []
    • Corrected a “recurse” typo. []
  • Vagrant Cascadian updated diffoscope in GNU Guix. []

Packages reviewed and fixed, and bugs filed

In addition, one of Chris Lamb’s previous patches for the Sphinx documentation system was merged upstream. He also updated his branch against the shadow password utility.

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers This week, Holger Levsen made the following improvements:

  • Improve the output of the Debian reproducible “SHA1” checker [], also including stats for non-reproducible binNMUs, arch:all and arch:amd64 packages [].
  • Deal with zero results in the SHA1 checker. []
  • Move SHA1 checker to osuosl173 node. []
  • Add setup_schroot_buster_diffoscope job on osuosl173 node. []
  • Node maintenance. [][][]

In addition, Mattia Rizzolo performed some armhf node maintenance. []

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

05 March, 2019 01:14PM

hackergotchi for Steve Kemp

Steve Kemp

Raising a bilingual child

The last time I talked about parenting it was in the context of a childcare timetable, where my wife and I divide the day explicitly hour by hour so that one of us is "in charge" at all times.

For example, I might take care of Oiva from 7AM-12pm on Saturdays, then she takes over until 5pm, and I take 5-7PM (bed-time). We alternate who gives him a bath and sits/reads with him until he's asleep.

Even if all three of us are together there is always one person who is in-charge, and will handle nappies, food, and complaints. The system works well, and has done since he was a few weeks old. The big benefit is that both of us can take time off, avoiding burnout and frustration.

Anyway that's all stable, although my wife's overnight shifts sometimes play havoc with the equality, and I think we're all happy with it. The child himself seems to recognize who is in charge, and usually screams for the appropriate parent as required.

Today's post is more interesting, because it covers bilingual children, which our child is:

  • His mother is Finnish.
    • She speaks Finnish to him, exclusively.
  • I'm from the UK.
    • I speak English to him, exclusively.

Between ourselves we speak English 95% of the time and Finnish 2% of the time. The rest of our communication involves grunting, pointing, and eye-contact.

He's of an age now where he's getting really good at learning new words, and you can usually see who he learned them from. For example he's obsessed with (toy) cars. One of his earlier words was "auto", but these days he sometimes says "car" to me. He's been saying "ei" for months now, which is Finnish for "no". But now he's also started to say "no" in English.

We took care of a neighbours dog over the weekend, and when the dog tried to sniff one of his cars he pointed a finger at it, and said "No!". That was adorable.

Anyway his communication is almost exclusively single-words so far. If he's hungry he might say:

  • leipä! leipä! leipä!
    • Bread! Bread! Bread!
  • muesli! muesli! muesli!
    • muesli! muesli! muesli!

He understands complex ideas, commands, instructions, and sentences in both English and Finnish ("We're going to the shop", "Would you like to play in the park?", and many many more). But he's only really starting to understand that he can say the same thing in multiple languages - as per the example above of "ei" vs "no", or "car" vs "auto".

Usually he uses the word in the language he heard it in first. For example he'll say goodbye to people by saying "moi moi", but greet them with "hello". There are fun words though. For example 99% of the time a dog is a "woof woof", but sometimes recently he's been describing them as "hauva". A train is a "choo choo", as is a tram, and a rabbit is a "pupu".

He's started saying "kissa" for cat, but when watching cartoons or reading books he's more likely to identify them as dogs.

No real conclusion here, but it's adorable when he says isä/isi for Daddy, and äiti for Mummy. Or when he's finished at the dining table and sometimes he says "pois" and other times says "away".

Sometimes you can see confusion when we both refer to something with different words, but he seems pretty adept at understanding. I'm looking forward to seeing him flip words between languages more often - using each one within a couple of minutes. He has done that sometimes, but it's a rare thing. He'll sometimes say "daddy car" and "äiti auto", but more often than not the association seems random. He's just as likely to say "more kala" as "more fish".

05 March, 2019 10:01AM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Mob justice and extreme violence in Copilco Universidad — @Alcaldia_Coy @CopilcoUniv @CopilcoVecinos @manuelnegrete22

Some days ago I read a piece of news that shocked me at different levels: Three blocks away from my home, and after being "unclearly" denounced for harassing a woman, a guy was beaten to death. Several sources for this: El Diario MX: Por acosar a mujer lo golpean hasta la muerte; El Siglo de Torreón: Asesinan a hombre por presuntamente acosar a mujer en Coyoacán; Zócalo: Matan a hombre en Coyoacán; Milenio: Por presuntamente acosar a mujer, golpean y matan a hombre en CU.

Of course, when anybody cries for help, it should be our natural response (everybody's!) to rush and try to help. However, stopping an aggression is a far cry from taking justice into our own hands and killing a guy.

Mob justice is usually associated with peri-urban or rural areas, with higher socioeconomic marginalization and less faith in authority. Usually, lynching mobs give a very bad and persistent name to wherever said acts of brutality happened. While I don't want to say we are better than..., it shocks me even more to have found this kind of brutality in the midst of the university neighbourhood, on a very busy pedestrian street that is at all times (this happened somewhat after noon on Thursday) full of teachers and students.

Not only that. The guy who was attacked was allegedly a homeless guy, in his mid 20s. Some reports say that after the beating took place, he was still alive, but when the emergency services arrived (30 minutes later!) he had died. We are literally less than 200m away from Facultad de Medicina, and hundreds of students and teachers walk there. Was nobody able to help? Did nobody feel the urge to help?

If this guy was a homeless person, quite probably he was weak from malnutrition, maybe crossed with some addictions, and that's what precipitated his death. But, again — This raises other suspicions. Maybe he was pointed to by some of the store owners that wanted to drive him away from their premises? (he was attacked inside a commercial passageway, not in the open street)

Also... While there is not much information regarding this attack, I'm quite amazed almost no important local (or even national!) media have picked this up. We are less than 1 km away from the central offices of Grupo Imágen! This is no small issue. Remember the terrible circus raised around the Tláhuac lynchings in ~2005 (and how Tláhuac still carries that memory almost 15 years later)? What is the difference here?

No attack on women should be quietly tolerated. But neither should we turn a blind eye to a lynch mob. This deeply worries and saddens me.

05 March, 2019 06:03AM by gwolf