February 28, 2020

Jonathan Dowland

3D-printed castle, iteration 2

A more Disney-fied castle: conical spires and an archway.

Here's iteration 2 of my 3D-printed castle design. I'm mostly focussed on helping to improve our internal instructions on how to get up and running with 3D printing, submitting jobs to the office printer, etc., rather than refining this particular model, so I limited myself to a train journey's worth of adjustments. My daughter was keen that it looked more like the Disney castle.

The plan for the office is still for folks to use Cura as a slicer, which they can download and run as an AppImage, and for us to provide a ZIP of configuration that automatically sets up the printer specifications and connects it to the Octoprint instance we've got set up to receive jobs. Unfortunately merely zipping ~/.config/cura/4.4 is not sufficient to make the experience seamless, so further experimentation is needed.
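
As a sketch of the packaging step (my own guess at it, not the office's actual recipe, and using tar here simply because it is available everywhere; the real instructions wrap the tree in a ZIP), the per-version config directory could be archived so a colleague can unpack it into their own ~/.config/cura:

```shell
# Sketch: bundle a Cura config tree (e.g. ~/.config/cura/4.4) into an
# archive that can be unpacked on another machine. Paths are assumptions,
# and as noted above a plain copy of this tree is not yet seamless.
pack_cura_config() {
    confdir=$1   # e.g. "$HOME/.config/cura/4.4"
    archive=$2   # e.g. cura-office-config.tar.gz
    [ -d "$confdir" ] || { echo "no config at $confdir" >&2; return 1; }
    tar -C "$confdir" -czf "$archive" .
}

# Usage: pack_cura_config "$HOME/.config/cura/4.4" cura-office-config.tar.gz
```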

28 February, 2020 04:28PM

Martin Pitt

First steps in system-wide Linux tracing

Motivation

Today at Red Hat we have a “Learn something new” day. After so many years of doing software development, I’m quite well versed in tools like strace, gdb, or good old printf() debugging to examine how an individual process (mis)behaves. But I occasionally run into situations where a system-wide scope of examination is necessary. For example, when I was working on optimizing Ubuntu’s power usage ages ago, I wanted to hunt down processes which were opening/reading/writing files and thus waking up the hard disk.
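
As a hedged sketch of that kind of hunt (not from the post itself): tools like fatrace print one line per file access in the form "name(pid): OP /path", and a little awk turns that stream into a per-process tally. The sample input below is fabricated to keep the sketch self-contained; in real use you would pipe the output of `sudo fatrace` in instead.

```shell
# Tally file accesses per process from fatrace-style lines
# ("name(pid): OP /path"). Sample input is made up for illustration.
sample='firefox(1234): W /home/user/.mozilla/places.sqlite
firefox(1234): W /home/user/.mozilla/places.sqlite-wal
syslogd(321): W /var/log/syslog
firefox(1234): R /usr/lib/firefox/omni.ja'

# Split on "(" so $1 is the process name, then count and sort by frequency.
printf '%s\n' "$sample" \
  | awk -F'[(]' '{count[$1]++} END {for (p in count) print count[p], p}' \
  | sort -rn
# prints "3 firefox" then "1 syslogd"
```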

28 February, 2020 12:00AM

February 27, 2020

Mike Gabriel

Lomiri - Operating Environment for Everywhere

It is my pleasure to spread the word about the new name of Unity8 (UI running on the Ubuntu Phone and the Ubuntu Tablet) and its related projects: Lomiri (low-mee-ree).

Lomiri: New Name, Same Great Unity8

Lomiri is the operating environment for everywhere: phone, tablet, laptop, and desktop. It features a slick and easy-to-use interface based on the design of its predecessor, Canonical's Unity desktop environment.

Change is never Easy

I was honoured to witness the process of this long-outstanding name change more or less in real time over the last couple of days and weeks. I was touched by the gentleness of the discussion and the weighing of pros and cons, this name versus that name, and also by the jokes injected into the discussions.

Dalton Durst, release manager on the UBports [2] team, explains in depth [1] the reasoning and necessity behind the name change. Please (especially if you feel sad or irritated by the name change) read the official announcement and its detailed explanation. If you need time to adjust, Dalton's explanations will help.



27 February, 2020 08:02PM by sunweaver

Jonathan McDowell

New laptop: Walmart Motile M142

Over Christmas I found myself playing some Civilization V - I bought it for Linux some time ago on Steam but hadn’t played it a lot. So I fired it up to play in the background (one of the advantages of turn based games). Turns out it’s quite CPU intensive (the last Civilization I played was the original, which ran under DOS so there was no doing anything else while it ran anyway), and my 6 year old Dell E7240 couldn’t cope very well with switching back and forth. It still played well enough to be enjoyable, just not in a lightweight “while I’m doing other things” manner. On top of that the battery life on the E7240 hadn’t been that great; I’d had to replace the battery in November because the original was starting to bulge significantly and while the 3rd party battery I bought had much better life it was nowhere near the capacity of the old one when it was new.

The problem is I like subnotebooks, and my E7240 was mostly maxed out (i5-4300U, 16G RAM, 500G SATA SSD). A new subnotebook would have a much better CPU, and probably an NVMe SSD instead of SATA, but I’d still have the 16G RAM cap and if I’m looking for a machine to last me another 5 years that doesn’t seem enough. Then in January there were announcements about a Dell XPS 13 that would come with 32G. Perfect, thought I. 10th Gen i7, 32G, 1920x1200 screen in 13”. I have an XPS 13 for work and I’m very happy with it.

Then, while at FOSDEM I saw an article in Phoronix about a $200 Ryzen 3 (the M141) from Walmart. It looked like it would end up similar in performance to the E7240, but with a bit better battery life, and for $200 it was well worth a shot (luckily I already had a work trip to the US planned for the middle of February, and the office is near a Walmart). Unfortunately I decided to sleep on it and when I woke up the price had jumped to $279. Not quite as appealing, but looking around their site I saw a Ryzen 5 variant (the M142) going for $400. It featured a Ryzen 5 3500U with 4 cores (8 threads), a much nicer boost over my i5. Plus AMD instead of Intel removes a whole set of the speculative execution issues that are going around. So I ordered one. And then it got cancelled a couple of days later because they claimed they couldn’t take payment. So I tried again, and that seemed to work. Total cost including taxes etc. was about $440 (or £350 in real money).

Base spec as shipped is Ryzen 5 3500U, 8G RAM + 256G SATA m.2 SSD. Provided OS is Windows 10 Home. I managed to get it near the start of my US trip, and I’d brought a USB stick with the Debian installer on it, so I decided to reinstall. Sadly the Buster installer didn’t work: it booted fine, but the hardware discovery part took ages and generally seemed unhappy. I took the easy option and grabbed the Bullseye Alpha 1 netinst image instead (I run testing on my personal laptop, so this is what I was going to end up with). That worked fine (including, impressively, just working on the hotel wifi, though I think that was partly because the T+Cs acceptance done under Windows was remembered, so I didn’t have to do it again to get routed access for the installer). I did need to manually install firmware-amd-graphics to make X happy, but everything else was smooth and I was able to use the laptop in the evenings for the rest of my trip.

The interesting thing to me about this laptop was that the RAM was easily upgradable, and there was some suggestion (though conflicting reports) that it might take 32G. It’s only got a single slot (so a single channel, which cripples things a bit, especially with the built-in graphics), but I found a Timetec 32G DDR4 SODIMM and ordered it to try out. It had arrived by the time I got home from the US and I eagerly installed it, only to find the system was unreliable, so I went back to the 8G. Once I had a little more time I played around again, running memtest86 (the UEFI variant) to test the RAM and hitting no problems. So I tried limiting memory to 16G (mem=16G on the Linux command line), no issues even while compiling kernels. 24G, no issues. Didn’t limit it at all, no issues (so far). So I don’t know if I missed something the first time round such as cooling issues, or if it’s something else entirely that was the issue then. The BIOS detects the 32G just fine though, so there’s obviously support there; it just might be a bit picky about RAM type.
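
For reference, the temporary cap described above is just the mem= kernel parameter; it can be passed once by editing the kernel line from the GRUB menu, or made persistent with something like the following (a sketch; the file location is the usual Debian one, and remember to run update-grub afterwards):

```
# /etc/default/grub -- cap usable RAM at 16G for testing
GRUB_CMDLINE_LINUX_DEFAULT="mem=16G"
```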

Next, I had hoped to transplant the drive from my old laptop across; 500G has been plenty for my laptop so I didn’t feel the need to upgrade. Except the old machine was old enough that it was an mSATA drive, which wouldn’t fit. I’ve a spare Optane drive so I was hoping I’d be able to use that, but it’s a 22110 form factor, and while the M142 has 2 m.2 slots, they’re both only 2280. Also the Optane turned out not to be detected by the BIOS when I put it in. So I had to order a new m.2 drive and ended up with a 1T WD Blue, which has worked just fine so far.

How do I find it? It’s worked out a bit pricier overall than I hoped (about £550 once I bought the RAM and the SSD) but I’ve ended up with twice the RAM, twice the disk space and twice the cores of my old machine (for probably less than half what I paid for it). Civ V is still slower than I’d like, but it’s much happier about multitasking (and the fans don’t spin up so much). The machine is a bit bigger, but not terribly so (about 1cm wider) and for a cheap laptop it is light (I had it and my work laptop on my flight home in my hand baggage and it wasn’t uncomfortably heavy). The USB-C isn’t proper Thunderbolt, so I can’t dock with one cable, but the power/HDMI/USB3 are beside each other and easy enough to connect. Plus it’s HDMI 2.0 which means I can drive my almost-4K monitor at 60Hz without problems (my old laptop was slightly out of spec driving the monitor and would get unhappy now and then). Other than Civ V I’m not really noticing the CPU boost, but day to day I wasn’t expecting to. And while it’s mostly on my desk and mains powered, powertop is estimating nearly 6 hours of battery life. Not the full working day I can get out of the XPS 13, but pretty respectable for the price point. So, yeah, I’m pretty happy with the purchase. My only complaint would be the keyboard isn’t great, but I’m mostly using an external one and the internal one is fine on the move.

The Dell XPS 13 with 32GB? Still not available on Dell’s site at the time of writing. I won’t be holding my breath.

27 February, 2020 08:00PM

Charles Plessy

How to not open a PDF with GIMP

Some tools, such as the command-line email client neomutt, can launch graphical applications. To select which application to use for which file, mutt uses the mailcap system, provided by the mime-support package.

mailcap gets its default information from two sources: files installed by the packages distributing the applications, either in /usr/lib/mime/packages in mailcap format or in /usr/share/applications in FreeDesktop format. The Debian Policy specifies that packages providing information in the FreeDesktop format refrain from repeating it in mailcap format (9.7.2).

The GIMP image editor declares its capacity to open PDF files in the file /usr/share/applications/gimp.desktop. GNOME's default PDF reader, Evince, declares this in /usr/share/applications/org.gnome.Evince.desktop. Desktop environments that follow the FreeDesktop standard have access to extra information that gives priority to Evince. The mailcap system does not access it and instead gives priority to alphabetical order. Therefore, when one opens a PDF with mutt, it opens with GIMP, which is not convenient.

Fortunately, mailcap is easy to configure. In order to change the priority for one's personal account, one just has to copy the evince entry that is found in /etc/mailcap and place it into $HOME/.mailcap. For instance (but beware, it is simplistic):

grep evince /etc/mailcap >> $HOME/.mailcap
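
For reference, the appended entry will look something like the following (copied from memory of a Debian system; the exact test clause may differ between evince versions, so check what your /etc/mailcap actually contains):

```
application/pdf; evince %s; test=test -n "$DISPLAY"
```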

Edited on Feb. 28th to add $HOME to the example.

27 February, 2020 01:33PM

Petter Reinholdtsen

Blockchain and IoT articles accepted into Records Management Journal

On Tuesday, two scientific articles we have been working on for a while were finally accepted for publication in the Records Management Journal. We are still waiting for the assigned DOI URLs to start working, but you can have a look at the LaTeX originals here.

The first article is "A record-keeping approach to managing IoT-data for government agencies" (DOI 10.1108/RMJ-09-2019-0056) by Thomas Sødring, Petter Reinholdtsen and David Massey, and sketches some approaches for storing measurement data (aka Internet of Things sensor data) in an archive, thus providing a well defined mechanism for screening and deletion of the information.

The second article is "Publishing and using record-keeping structural information in a blockchain" (DOI 10.1108/RMJ-09-2019-0050) by Thomas Sødring, Petter Reinholdtsen and Svein Ølnes, where we describe a way for third parties to validate authenticity and thus improve trust in the records kept in an archive.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

27 February, 2020 08:05AM

Mike Gabriel

Debian Edu on TV (NDR broadcast station, Germany)

One of my Debian Edu customers has recently been on German television...


(URL is valid until 24th May 2020).

Have fun watching. (Access might not be possible world-wide).

27 February, 2020 07:47AM by sunweaver

Junichi Uekawa

Book reading club: The Book Of Tea.

Book reading club: The Book Of Tea. Had a session introducing the Book Of Tea. It's a nice introductory book, written in English in 1906 by Okakura Kakuzo while he was working for the Boston Museum. Although it is old and feels too dramatic in how he expresses things, the book covers many aspects of Teaism and is a good introduction to them. The book is available on Project Gutenberg.

27 February, 2020 07:36AM by Junichi Uekawa

Andreas Rönnquist

Freedb is closing its service

Freedb, which is a free version of Cddb and is used by the asunder cd-ripper (of which I am the Debian maintainer), is closing down its service on March 31st.

This means you won’t have the program fill in the artist, album title and song titles automatically when using it to rip a CD to the file format of your choice. I have provided a patch to asunder which removes the cddb code and replaces it with support for the MusicBrainz database, which seems to work fine. The code is available in the asunder bug tracker, and you can find my patched version of asunder here. If you are using asunder, please try it out.

[edit] I have reported a bug on libcddb0 in the Debian BTS: #952689

27 February, 2020 04:31AM by gusnan

February 26, 2020

Norbert Preining

Changing cinnamon’s lockscreen background

Recently I wanted to have a different background image on the desktop and on the lock screen. With my preferred desktop environment, Cinnamon, I found that this is not planned for, which is disappointing. Searching the internet pointed me to a solution using a background task. The script is contained in this repository, which is btw full of nice tricks. The script there does much more than I need (no need for slide shows etc.), and also uses qdbus in a sleep loop, which is somewhat overkill imho.

It turned out that using dbus-monitor alone, in default output mode rather than profile output mode, allows for reading and acting upon the state of the screensaver. That allowed me to greatly simplify the script to the following:

#!/bin/bash
# LOCKSCREEN_BACKGROUND must be set to the image to use on the lock screen
DESKTOP_BACKGROUND=$(gsettings get org.cinnamon.desktop.background picture-uri)
DESKTOP_SLIDESHOW=$(gsettings get org.cinnamon.desktop.background.slideshow slideshow-enabled)
# Check for existing instances and kill them leaving current instance running
for PID in $(pidof -o %PPID -x "${0##*/}"); do
    if [ "$PID" != $$ ]; then
        kill -9 "$PID"
    fi
done
dbus-monitor --session "interface='org.cinnamon.ScreenSaver', member='ActiveChanged'" | while read -r msg; do
  # The profile mode does not contain the actual value of the state, while in
  # normal output mode the state is shown in the second line:
  #signal time=1582609490.461661 sender=:1.17859 -> destination=(null destination) serial=132 path=/org/cinnamon/ScreenSaver; interface=org.cinnamon.ScreenSaver; member=ActiveChanged
  #   boolean true
  #signal time=1582609495.138523 sender=:1.17859 -> destination=(null destination) serial=143 path=/org/cinnamon/ScreenSaver; interface=org.cinnamon.ScreenSaver; member=ActiveChanged
  #   boolean false
  case "$msg" in
    "boolean true")
      # Screen locked: remember the current desktop settings, then switch
      DESKTOP_BACKGROUND=$(gsettings get org.cinnamon.desktop.background picture-uri)
      DESKTOP_SLIDESHOW=$(gsettings get org.cinnamon.desktop.background.slideshow slideshow-enabled)
      gsettings set org.cinnamon.desktop.background.slideshow slideshow-enabled false
      gsettings set org.cinnamon.desktop.background picture-uri "file://$LOCKSCREEN_BACKGROUND"
      ;;
    "boolean false")
      # Screen unlocked: restore the desktop settings
      gsettings set org.cinnamon.desktop.background picture-uri "$DESKTOP_BACKGROUND"
      gsettings set org.cinnamon.desktop.background.slideshow slideshow-enabled "$DESKTOP_SLIDESHOW"
      ;;
    *) ;;
  esac
done

Running that simple script in the background, best via an autostart item in ~/.config/autostart/ like the following (where the above script is located in /usr/local/bin/cinnamon-lockscreen-background.sh),

[Desktop Entry]
Type=Application
Name=Lockscreen background
Comment=Change lockscreen background image
Exec=/usr/local/bin/cinnamon-lockscreen-background.sh

gives me my preferred background image on both the lock screen and the main desktop. Great.

26 February, 2020 12:47AM by Norbert Preining

February 25, 2020

Ulrike Uhlig

Conflict solving has many layers

In January I started a mediation training which is specifically aimed at mediation and conflict in workplaces. I'd like to share some of my insights from this training in a series of posts.

Visibility of conflicts

Conflicts in the workplace are most of the time structural and systemic. However, they generally burst out on the individual, personal level, and ultimately on the relational level, making it look as if they were individual problems. Oftentimes, when we see a conflict, we only see the tip of the iceberg.

I'll try to very briefly outline ways to look at conflicts.

Layers of conflict solving

When working on a conflict, there are generally three layers involved: the past, the present and the future:

  1. The past. What happened? When something hurtful or unlawful happened in the past, this layer calls for compensation and/or conciliation, i.e. giving back something that was taken, paying a fine, apologizing, or simply that both parties acknowledge the facts — or the hurt — that was caused.
  2. The present. Where are we, the conflicting parties, standing now, where is our center? At this layer we can seek compromise through negotiation, and communication.
  3. The future. How do we want to (not) interact in the future? In order to learn from conflict, the conflicting parties need to engage in a process of transformation.

When you think about a conflict that you solved in the past months or years, to which extent were these three layers involved?

You'll probably come to the conclusion that all three of them were involved, but not necessarily to the same extent. If compensation/conciliation does not take place, we cannot operate on the layer involving the present, and even less so on the layer involving future transformation.

Conflict as process

People generally approach conflict like they would approach a problem with their car: they want to solve it via expert advice. Something is wrong, please repair the problem. "Kim is not happy that we put them on another task with less pay. Let's hire an HR person who can "convince" Kim of the advantages of the new job."

Other times people think that the conflict can only be the fault of one of the people involved in it, and ascribe to that person all sorts of bad character traits which could only be softened through relaxation techniques, or fixed through therapy (1). "Toni is so frustrated all the time, they should really learn some positive thinking and do more yoga so they don't bother everyone with their bad mood at work." Or: "Jawad has depression, let's ignore his negativity."

But there is something in between:

PERSON       →        PROBLEM

In between there is a relation, or a process: How do persons relate to problems?

There is no right or wrong in this in-between of person and problem, because this relation is always only subjective: how do I as a person relate to the problem, how do I understand it, where am I standing with regards to this problem? What are my experiences, my triggers, my values, my boundaries?

There are many different ways to deal with this process, only one of them is mediation. In some situations, other possibilities can be more appropriate: for example coaching, supervision, leadership training, legal advice, or even legal measures.


When a conflict in your workplace arises, make sure to ask and research if there could be one or more underlying structural issues that may have led to this conflict.

(1) It is very important to distinguish an actual mental health issue such as schizophrenia, bipolar disorder, as merely two examples, from behaviours that we dislike in other people. Mental health issues might be at the core of a conflict, or a dysfunctional relation at work. However, diagnosing a person with some kind of mental health issue is most often used as a way to dismiss their criticism, their way of voicing an opinion, or as a way to silence them. Rule of thumb: If you're not their doctor, but have to work with them, please seek external medical — or even legal — advice.

25 February, 2020 10:00AM by ulrike

Russ Allbery

Review: Digital Minimalism

Review: Digital Minimalism, by Cal Newport

Publisher: Portfolio/Penguin
Copyright: 2019
ISBN: 0-525-53654-X
Format: Kindle
Pages: 256

Cal Newport is a computer science professor at Georgetown with a long-standing side interest in personal productivity and career development. I first ran across his work with Deep Work, the thesis of which is that the most valuable resource for knowledge workers is concentration and the ability to think deeply about a topic, but our work environments and tools are structured to undermine that concentration. I found, and still find, Deep Work persuasive, even if that hasn't fully translated into following its recommendations.

This book is only glancingly about concentration, however. Newport has gotten interested in what he calls "digital minimalism," joining the chorus of people who say that smart phones and social media are bad for your brain. If you're already starting to roll your eyes, you're not alone. I think Newport has a few interesting things to say and successfully avoids most of the moral panic that infests news media coverage of this topic, but I'd rather read more in the vein of Deep Work.

Newport's basic thesis is sound: Social networks, and to a lesser extent smart phones and mobile apps in general, are designed to make money for their authors by monetizing your attention. The companies behind them aren't opposed to making your life better if that helps hold your attention, but it's not their primary goal, nor is it clear if they know how to improve your life in any meaningful way. They do know, extremely well, how to exploit human psychology to keep you returning to their product.

How they do this is a topic of much speculation and analysis. Newport primarily blames three things: the ubiquitous availability of mobile devices, the addictive power of intermittent positive reinforcement, and exploitation of the human desire for social approval.

The second of those is, I think, the least obvious and the one with the most interesting psychological research. Behavioral experiments in psychology (specifically, Michael Zeiler's 1971 experiment with pigeons) seem to indicate that unpredictable rewards can be more addictive than predictable rewards. Zeiler compared pigeons who were rewarded for every button press with pigeons who were sometimes rewarded and sometimes weren't at random, and found that the second group pressed the button twice as much. Newport argues that social media interactions such as the like button, or even just searching posts for something interesting or unexpected, produce exactly this sort of unpredictable, random positive reinforcement, and are thus more addictive than reliable and predictable rewards would be.

The other points are more obvious, and expand on themes Newport discussed in his previous books. Mobile devices plus social media provide convenient and immediate access to lightweight social interactions. We can stay lightly in touch with far more people than we could interact with in person, and easily access the small mental and social rewards of curiosity, life news, and content-free moments of connection. As you might expect from Newport's focus on concentration and deep thinking, he considers this ubiquitous, shallow distraction to be dangerous. It requires little sustained effort, offers few meaningful rewards, and is developed and marketed by companies with an incentive to make it mildly addictive. Newport believes these sorts of trivial interactions crowd out deep and meaningful ones and can make us feel perpetually distracted and harried.

So far, so good; this is a defensible take on social media. But it's also not very groundbreaking, and is only a small part of this book. Most of Digital Minimalism is about what Newport proposes we do about it: Cut back significantly (at first, completely) on social media and replace it with other activities Newport finds more worthy.

To give him credit, he doesn't fall for the moralizing simplification that either screens or social media are inherently bad. His thesis is that social media is one of a number of tools we can choose to use, and that we should make that choice thoughtfully, base it on the value that tool can bring compared to other ways we could spend the same time, and restrict any tool we do choose to only the purposes for which it has value. He therefore doesn't propose dropping social media entirely; rather, he recommends deciding what purpose it serves for you and then using it only for that purpose, which can often be done in a half-hour on the weekend from a desktop computer rather than in numerous interrupted intervals throughout the day on a phone.

I think this is sensible, but maybe that's because I'm already (mostly) doing what Newport suggests. I never created a Facebook account (thankfully, my family doesn't use it). My one social media time sink is reading Twitter, but knowing my tendency to get into arguments on-line, I have an iron-clad rule to treat Twitter as strictly read-only and never post, thus avoiding at least the social approval aspects. I learned my lesson on Usenet that on-line arguments can expand to fill all available time, and it's worth thinking very hard about what I'm trying to accomplish. There's usually something else I could be doing that would either be more fun or more productive (and often both).

I was therefore less interested in Newport's advice and more interested in how he chose to provide it and in what he recommends people substitute for social media. This is a mixed bag.

Those who follow his blog will know that Newport is relentlessly apolitical in public. That's a more severe problem for Digital Minimalism than it is for Deep Work because a critique of social media begs to be a critique of marketing-driven capitalism and the economic and social systems that support building commercial empires on top of advertising. Newport predictably refuses to follow that thread. He makes specific, limited, and targeted criticisms of social media companies that go only as far as observing that they commodify our attention, but absolutely refuses to look at why attention is a commodity or what that implies about our economic system. I was unsurprised by this, but it's still disappointing.

Newport also freely mixes in his personal biases and is rather too credulous when reading studies or authors who agree with him. Frequent references to Thoreau and Walden as examples of minimalism sound a bit odd once you know that Thoreau's mother occasionally cooked for him and did his laundry. Minimalism based on other people's (gendered) labor is perhaps not the note Newport was trying to strike. In another version of the same problem, he's enamored of the modern minimalist movement and FIRE bloggers (Financial Independence, Retire Early), and while I'm generally sympathetic to people who opt out of the endless advertising-driven quest to acquire more things, presenting these folks as successes of minimalism rather than a choice made available via inherited wealth or access to high-paying contract work is dubious. I suppose I'll take my allies against capitalism where I can find them, but I'd rather they be a bit more politically aware.

Also on the bias front, Newport is oddly obsessed with in-person conversations and physical hobbies. He's dismissive of on-line relationships and friendships, throws out some dubious arguments about the lack of depth and emotional nuance in text-based communications, and claims that nothing done on a computer, even programming, fully counts as craft. This may well be true of him personally, but speaking as an introvert who has had multiple deep, decades-long friendships conducted entirely via letters and on-line chat, he is wrong to universalize his own preferences. Writing is different than in-person interactions with the full range of verbal and physical cues, and I wouldn't recommend eliminating the latter entirely, but there are forms of written interaction that are not shallow social media. And I will vigorously defend the thoughtful maintenance of a free software project as craft and high-quality leisure equal to woodworking.

This is not a bad book, exactly. It has an even narrower target audience than Newport's other books, namely well-off people who use social media, but those people exist and buy books. (Newport probably thinks that the book might be helpful to people who are less well off. I think he's wrong; the book is full of unmarked assumptions about availability of the life choices that come with money.) It says some sensible things about the motives of social media companies, although it doesn't take that analysis nearly far enough. And it contains some reasonable suggestions about how to significantly reduce one's personal use of social media if you need that sort of thing (and if your biases are mostly compatible with the author's).

That said, I thought Newport was saying something interesting and somewhat more novel in Deep Work. Digital Minimalism is in line with numerous other articles about clawing bits of your life back from social media — more moderate than most, more detailed, and a bit more applied, but nothing you can't find elsewhere. Hopefully Newport has gotten it out of his system and will go back to writing about practicing concentration and improving workplace communication methods.

Rating: 6 out of 10

25 February, 2020 05:00AM

Norbert Preining

Gaming: The Turing Test

In a world without Portal and The Talos Principle, The Turing Test would have been a great game. Fortunately there is Portal and The Talos Principle, which leaves The Turing Test as an interesting clone with a lot of (sometimes) challenging levels, but no real innovation.

You are playing as Ava Turing, sent to Jupiter’s moon Europa to investigate strange things that have occurred on the mission while Ava was in cryogenic sleep. She is sent through a long list of levels that, in particular in the later stages, require cooperation between the supervising “Technical Operations Machine” (TOM) and the human Ava.

Every level is accompanied by more or less boring discourses, often superficially touching on the meaning of human intelligence and a (real) Turing test. None of the dialogues contribute considerably, if at all, to the story, which is weak anyway. Most details about what has happened are pieced together from remnants of data recordings and notes to be found, most often in separate extra levels.

The game play is rather repetitive, and most of the puzzles are not too difficult; often the most challenging part is getting an overview of the many rooms and items available, and where the exit is. I liked the part where Ava has to switch to some service robots or one of TOM’s video cameras to solve puzzles; this provided a bit of variety.

All in all a good puzzle game, albeit far from Portal (including Portal II, and of course Portal Stories: Mel) and The Talos Principle. The many, often short, puzzles allowed me to play the game as a sort of break filler. I don’t play often, and when I do, I hardly have time for long sessions, so this kind of level-based game comes in handy.

Total play time: 9 hours

25 February, 2020 02:48AM by Norbert Preining

February 24, 2020

Antoine Beaupré

The CLA Denial-Of-Service attack

I just stumbled upon this weird mind bender this morning. I have found what I believe is a simple typo in the Ganeti documentation which has a trivial fix. But then, before I submitted a PR to fix it, I remembered that I had trouble getting stuff merged in Ganeti before. That's because they require a CLA (which is already annoying enough) that requires a Google account to sign (which is simply unacceptable). So that patch has been sitting there for months, unused, and I haven't provided a patch for the other issue because of this very problem.

But that got me thinking. Suppose I wanted to mess things up real bad in a CLA-using project I don't like. I could:

  1. find a critical bug
  2. figure out a patch for the bug
  3. publish the patch in their issue tracker
  4. forever refuse to sign the CLA

Then my patch, and any derivative, would be unmergeable. If the bug is trivial enough, it might even be impossible to fix it without violating the letter of the law, or at least the process that project has adhered to.

Obviously, there's a flaw in that logic. A CLA is an agreement between a project and a (new) contributor. In theory, a project does not absolutely require the contributor to sign the agreement to accept its contributions. It's the reverse: for the contributor to have their patch accepted, they need to accept the CLA. But the project could accept contributions without a CLA without violating the law.

But it seems that projects sometimes end up doing a DOS on themselves by refusing perfectly fine contributions from drive-by contributors who don't have time to waste filling forms on all projects they stumble upon.

In the case of this typo, I could have submitted a patch, but because I didn't sign a CLA, the project couldn't have merged it without breaking its own rules, even if someone else later submitted the same patch after agreeing to the CLA. So, in effect, I would have DOS'd the project by providing the patch; instead I just opened an issue, which strangely — and hopefully — isn't covered by the CLA.

Feels kind of stupid, really...

Instances of known self-imposed CLA DOS attacks:

24 February, 2020 03:32PM

Russ Allbery

Book haul

I have been reading rather more than my stream of reviews might indicate, although it's been almost all non-fiction. (Since I've just started a job in astronomy, I decided I should learn something about astronomy. Also, there has been some great non-fiction published recently.)

Ilona Andrews — Sweep with Me (sff)
Conor Dougherty — Golden Gates (non-fiction)
Ann K. Finkbeiner — A Grand and Bold Thing (non-fiction)
Susan Fowler — Whistleblower (non-fiction)
Evalyn Gates — Einstein's Telescope (non-fiction)
T. Kingfisher — Paladin's Grace (sff)
A.K. Larkwood — The Unspoken Name (sff)
Murphy Lawless — Raven Heart (sff)
W. Patrick McCray — Giant Telescopes (non-fiction)
Terry Pratchett — Men at Arms (sff)
Terry Pratchett — Soul Music (sff)
Terry Pratchett — Interesting Times (sff)
Terry Pratchett — Maskerade (sff)
Terry Pratchett — Feet of Clay (sff)
Ethan Siegel — Beyond the Galaxy (non-fiction)
Tor.com (ed.) — Some of the Best from Tor.Com 2019 (sff anthology)

I have also done my one-book experiment of reading Terry Pratchett on the Kindle and it was a miserable experience due to the footnotes, so I'm back to buying Pratchett in mass market paperback.

24 February, 2020 05:04AM

Review: Sweep with Me

Review: Sweep with Me, by Ilona Andrews

Series: Innkeeper Chronicles #5
Publisher: NYLA
Copyright: 2020
ISBN: 1-64197-136-3
Format: Kindle
Pages: 146

Sweep with Me is the fifth book in the Innkeeper Chronicles series. It's a novella rather than a full novel, a bit of a Christmas bonus story. Don't read this before One Fell Sweep; it will significantly spoil that book. I don't believe it spoils Sweep of the Blade, but it may in some way that I don't remember.

Dina and Sean are due to appear before the Assembly for evaluation of their actions as Innkeepers, a nerve-wracking event that could have unknown consequences for their inn. The good news is that this appointment is going to be postponed. The bad news is that the postponement is to allow them to handle a special guest. A Drífan is coming to stay in the Gertrude Hunt.

One of the drawbacks of this story is that it's never clear what a Drífan is, only that they are extremely magical, the inns dislike them, and they're incredibly dangerous. Unfortunately for Dina, the Drífan is coming for Treaty Stay, which means she cannot turn them down. Treaty Stay is the anniversary of the Treaty of Earth, which established the inns and declared Earth's neutrality. During Treaty Stay, no guest can be turned away from an inn. And a Drífan was one of the signatories of the treaty.

Given some of the guests and problems that Dina has had, I'm a little dubious of this rule from a world-building perspective. It sounds like the kind of absolute rule that's tempting to invent during the first draft of a world background, but that falls apart when one starts thinking about how it might be abused. There's a reason why very few principles of law are absolute. But perhaps we only got the simplified version of the rules of Treaty Stay, and the actual rules have more nuance. In any event, it serves its role as story setup.

Sweep with Me is a bit of a throwback to the early books of the series. The challenge is to handle guests without endangering the inn or letting other people know what's going on. The primary plot involves the Drífan and an asshole businessman who is quite easy to hate. The secondary plots involve a colloquium of bickering, homicidal chickens, a carnivorous hunter who wants to learn how Dina and Sean resolved a war, and the attempts by Dina's chef to reproduce a fast-food hamburger for the Drífan.

I enjoyed the last subplot the best, even if it was a bit predictable. Orro's obsession with (and mistaken impressions about) an Earth cooking show are the sort of alien cultural conflict that makes this series fun, and Dina's willingness to take time away from various crises to find a way to restore his faith in his cooking is the type of action that gives this series its heart. Caldenia, Dina's resident murderous empress, also gets some enjoyable characterization. I'm not sure what I thought a manipulative alien dictator would amuse herself with on Earth, but I liked this answer.

The main plot was a bit less satisfying. I'm happy to read as many stories about Dina managing alien guests as Andrews wants to write, but I like them best when I learn a lot about a new alien culture. The Drífan feel more like a concept than a culture, and the story turns out to revolve around human rivalries far more than alien cultures. It's the world-building that sucks me into these sorts of series; my preference is to learn something grand about the rest of the universe that builds on the ideas already established in the series and deepens them, but that doesn't happen.

The edges of a decent portal fantasy are hiding underneath this plot, but it all happened in the past and we don't get any of the details. I liked the Drífan liege a great deal, but her background felt disappointingly generic and I don't think I learned anything more about the universe.

If you like the other Innkeeper Chronicles books, you'll probably like this, but it's a minor side story, not a continuation of the series arc. Don't expect too much from it, but it's a pleasant diversion to bide the time until the next full novel.

Rating: 7 out of 10

24 February, 2020 03:21AM


Steve McIntyre

What can you preseed when installing Debian?

Preseeding is a very useful way of installing and pre-configuring a Debian system in one go. You simply supply lots of the settings that your new system will need up front, in a preseed file. The installer will use those settings instead of asking questions, and it will also pass on any extra settings via the debconf database so that any further package setup will use them.

There is documentation about how to do this in the Debian wiki at https://wiki.debian.org/DebianInstaller/Preseed, and an example preseed file for our current stable release (Debian 10, "buster") in the release notes.

One complaint I've heard is that it can be difficult to work out exactly the right data to use in a preseed file, as the format is not the easiest to work with by hand. It's also difficult to find exactly what settings can be changed in a preseed.

So, I've written a script to parse all the debconf templates in each release in the Debian archive and dump all the possible settings in each. I've put the results up online at my debian-preseed site in case it's useful. The data will be updated daily as needed to make sure it's current.
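
To make the format concrete, here are a few lines in the shape the installer expects (owner, question, type, value). These particular entries are typical ones found in the stock Debian example preseed; the values are placeholders you would adapt:

```
d-i debian-installer/locale string en_US
d-i keyboard-configuration/xkb-keymap select us
d-i netcfg/get_hostname string unassigned-hostname
d-i netcfg/get_domain string unassigned-domain
d-i mirror/country string manual
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i passwd/user-fullname string Debian User
d-i passwd/username string debian
```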

24 February, 2020 12:55AM

February 23, 2020

Enrico Zini

Assorted wonders

Daily Science Fiction :: Rules For Living in a Simulation by Aubrey Hirsch
«Listen. We're fairly certain it's true. The laws of the universe just don't make sense the way they should and it's more and more apparent with every atom of gold we run through the Relativistic Heavy Ion Collider and every electron we smash up at the Large Hadron Collider that we are living in a universe especially constructed for us. And, since we all know infinities cannot be constructed, we must conclude that our universe has been simulated.…»
The Missionary Church of Kopimism (in Swedish Missionerande Kopimistsamfundet), is a congregation of file sharers who believe that copying information is a sacred virtue and was founded by Isak Gerson, a 19-year-old philosophy student, and Gustav Nipe in Uppsala, Sweden, in the autumn of 2010. The Church, based in Sweden, was officially recognized by the Legal, Financial and Administrative Services Agency as a religious community in January 2012, after three application attempts.
The Korowai cannibals live on top of trees. But is it true?
“Since @ciocci confessed to me that this was making his head explode, and since I myself had long been looking for proper answers on the subject, I did some research into the thoroughly Icelandic custom of celebrating Christmas by singing Italian pop songs 🎄🇮🇸🇮🇹”
Reported here are the conversions between the old units of measure in use in the district of Bologna and the decimal metric system, as officially established in 1877. Despite the apparent precision of the tables, in many cases one must bear in mind that the reference standards used (including those for the Napoleonic-era tables) were of approximate manufacture or inconsistent with one another.
A list, in alphabetical order, of popular legendary creatures and mythological animals found in the myths, legends, and folklore of the world's various peoples and cultures. Note: this list includes only creat…
Last week I wrote about about Meido, the Japanese Underworld, and how it has roots in Indian Buddhism and Chinese Buddhist-Taoist concepts. Today I'll write a little bit about where some unlucky
The Vegetable Lamb of Tartary (Latin: Agnus scythicus or Planta Tartarica Barometz[1]) is a legendary zoophyte of Central Asia, once believed to grow sheep as its fruit. It was believed the sheep were connected to the plant by an umbilical cord and grazed the land around the plant. When all accessible foliage was gone, both the plant and sheep died.

23 February, 2020 11:00PM

Russ Allbery

Review: Exit Strategy

Review: Exit Strategy, by Martha Wells

Series: Murderbot Diaries #4
Publisher: Tor.com
Copyright: October 2018
ISBN: 1-250-18546-7
Format: Kindle
Pages: 172

Exit Strategy is the fourth of the original four Murderbot novellas. As you might expect, this is not the place to begin. Both All Systems Red (the first of the series) and Rogue Protocol (the previous book) are vital to understanding this story.

Be warned that All Systems Red sets up the plot for the rest of the series, and thus any reviews of subsequent books (this one included) run the risk of spoiling parts of that story. If you haven't read it already, I recommend reading it before this review. It's inexpensive and very good!

When I got back to HaveRotten Station, a bunch of humans tried to kill me. Considering how much I'd been thinking about killing a bunch of humans, it was only fair.

Murderbot is now in possession of damning evidence against GrayCris. GrayCris knows that, and is very interested in catching Murderbot. That problem is relatively easy to handle. The harder problem is that GrayCris has gone on the offensive against Murderbot's former client, accusing her of corporate espionage and maneuvering her into their territory. Dr. Mensah is now effectively a hostage, held deep in enemy territory. If she's killed, the newly-gathered evidence will be cold comfort.

Exit Strategy, as befitting the last chapter of Murderbot's initial story arc, returns to and resolves the plot of the first novella. Murderbot reunites with its initial clients, takes on GrayCris directly (or at least their minions), and has to break out of yet another station. It also has to talk to other people about what relationship it wants to have with them, and with the rest of the world, since it's fast running out of emergencies and special situations where that question is pointless.

Murderbot doesn't want to have those conversations very badly because they result in a lot of emotions.

I was having an emotion, and I hate that. I'd rather have nice safe emotions about shows on the entertainment media; having them about things real-life humans said and did just led to stupid decisions like coming to TransRollinHyfa.

There is, of course, a lot of the normal series action: Murderbot grumbling about other people's clear incompetence, coming up with tactical plans on the fly, getting its clients out of tricky situations, and having some very satisfying fights. But the best part of this story is the reunion with Dr. Mensah. Here, Wells does something subtle and important that I've frequently encountered in life but less commonly in stories. Murderbot has played out various iterations of these conversations in its head, trying to decide what it would say. But those imagined conversations were with its fixed and unchanging memory of Dr. Mensah. Meanwhile, the person underlying those memories has been doing her own thinking and reconsideration, and is far more capable of having an insightful conversation than Murderbot expects. The result is satisfying thoughtfulness and one of the first times in the series where Murderbot doesn't have to handle the entire situation by itself.

This is one of those conclusions that's fully as satisfying as I was hoping it would be without losing any of the complexity. The tactics and fighting are more of the same (meaning that they're entertaining and full of snark), but Dr. Mensah's interactions with Murderbot now that she's had the time span of two intervening books to think about how to treat it are some of the best parts of the series. The conclusion doesn't answer all of the questions raised by the series (which is a good thing, since I want more), but it's a solid end to the plot arc.

The sequel, a full-length Murderbot novel (hopefully the first of many) titled Network Effect, is due out in May of 2020.

Rating: 9 out of 10

23 February, 2020 04:46AM

February 22, 2020


Dirk Eddelbuettel

digest 0.6.25: Spookyhash bugfix

And a new version of digest is getting onto CRAN now, and to Debian shortly.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 889k monthly downloads with 255 direct reverse dependencies and 7340 indirect reverse dependencies) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.
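
The caching use case can be sketched with a rough Python analogue using only the standard library (hashlib and pickle stand in for digest() here; this illustrates the idea, not the R API):

```python
import hashlib
import pickle

def digest(obj):
    # Serialize the object, then hash the bytes: identical inputs
    # always produce the identical hash key.
    return hashlib.md5(pickle.dumps(obj)).hexdigest()

_cache = {}

def cached(func, *args):
    # Memoize an expensive computation, keyed by a hash of its inputs.
    key = digest((func.__name__, args))
    if key not in _cache:
        _cache[key] = func(*args)
    return _cache[key]

print(cached(sum, (1, 2, 3)))  # prints 6; later identical calls hit the cache
```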

This release is a one-issue fix. Aaron Lun noticed some issues when spookyhash is used in streaming mode. Kendon Bell, who also contributed spookyhash, quickly found the cause, a simple oversight. This was worth addressing in a new release, so I pushed 0.6.25.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 February, 2020 11:42PM


Norbert Preining

QOwnNotes for Debian

QOwnNotes is a cross-platform plain-text and markdown note-taking application. By itself, it wouldn’t be something to talk about; we have vim and emacs and everything in between. But QOwnNotes integrates nicely with the Notes application from NextCloud and OwnCloud, as well as providing useful NextCloud integration such as old versions of notes, access to deleted files, watching changes, etc.

The program is written using Qt and contains, besides language files and desktop entries, only one binary. There is a package in a PPA for Ubuntu, so it was a breeze to package, converting the cdbs packaging from the PPA to debhelper on the way.

Source packages and amd64 binaries for sid/testing and buster are available at

deb https://www.preining.info/debian unstable main
deb-src https://www.preining.info/debian unstable main


deb https://www.preining.info/debian buster main
deb-src https://www.preining.info/debian buster main

respectively. The git repository is also available.


22 February, 2020 10:07PM by Norbert Preining


Martin Michlmayr

ledger2beancount 2.0 released

I released version 2.0 of ledger2beancount, a ledger to beancount converter.

Here are the changes in 2.0:

  • Handle comments in account and commodity declarations
  • Handle transactions with a single posting (without bucket)
  • Handle empty metadata values
  • Rewrite Emacs modeline

You can get ledger2beancount from GitHub.

22 February, 2020 01:28PM by Martin Michlmayr


Dirk Eddelbuettel

RcppSimdJson 0.0.2: First Update!

Following up on the initial RcppSimdJson release, a first update arrived on CRAN yesterday.

RcppSimdJson wraps the fantastic simdjson library by Daniel Lemire, which is truly impressive. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. I highly recommend the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk). The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle used per byte parsed.

This release syncs the simdjson headers with upstream, and polishes the build a little by conditioning on actually having a C++17 compiler rather than just suggesting it. The NEWS entry follows.

Changes in version 0.0.2 (2020-02-21)

  • Synchronized with upstream (Dirk in #4 and #5).

  • The R side of validateJSON now globs the file argument, expanding symbols like ~ appropriately.

  • C++ code in validateJSON now conditional on C++17 allowing (incomplete) compilation on lesser systems.

  • New helper function returning value of __cplusplus macro, used in package startup to warn if insufficient compiler used.
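
The contract of validateJSON (parse the document and report success or failure, without returning the parsed structure) can be sketched in plain Python using the standard library; this has none of simdjson's SIMD speed, and the function name here simply mirrors the one above:

```python
import json

def validate_json(text):
    # Attempt a full parse; report success or failure, discard the result.
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(validate_json('{"valid": true}'))  # prints True
print(validate_json('{"broken": '))      # prints False
```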

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.


22 February, 2020 12:55PM

Russ Allbery

Review: All About Emily

Review: All About Emily, by Connie Willis

Publisher: Subterranean
Copyright: 2011
ISBN: 1-59606-488-9
Format: Kindle
Pages: 96

Claire Havilland is a Broadway star, three-time Tony winner, and the first-person narrator of this story. She is also, at least in her opinion, much too old to star in the revival of Chicago, given that the role would require wearing a leotard and fishnet stockings. But that long-standing argument with her manager was just the warm-up request this time. The actual request was to meet with a Nobel-Prize-winning physicist and robotics engineer who will be the Grand Marshal of the Macy's Day Parade. Or, more importantly, to meet with the roboticist's niece, Emily, who has a charmingly encyclopedic knowledge of theater and of Claire Havilland's career in particular.

I'll warn that the upcoming discussion of the background of this story is a spoiler for the introductory twist, but you've probably guessed that spoiler anyway.

I feel bad when someone highly recommends something to me, but it doesn't click with me. That's the case with this novella. My mother loved the character dynamics, which, I'll grant, are charming and tug on the heartstrings, particularly if you enjoy watching two people geek at each other about theater. I got stuck on the world-building and then got frustrated with the near-total lack of engagement with the core problem presented by the story.

The social fear around robotics in All About Emily is the old industrialization fear given new form: new, better robots will be able to do jobs better than humans, and thus threaten human livelihoods. (As is depressingly common in stories like this, the assumptions of capitalism are taken for granted and left entirely unquestioned.) Willis's take on this idea is based on All About Eve, the 1950 film in which an ambitious young fan maneuvers her way into becoming the understudy of an aging Broadway star and then tries to replace her. What if even Broadway actresses could be replaced by robots?

As it turns out, the robot in question has a different Broadway role in mind. To give Willis full credit, it's one that plays adroitly with some stereotypes about robots.

Emily and Claire have good chemistry. Their effusive discussions and Emily's delighted commitment to research are fun to read. But the plot rests on two old SF ideas: the social impact of humans being replaced by machines, and the question of whether simulated emotions in robots should be treated as real (a slightly different question than whether they are real). Willis raises both issues and then does nothing with either of them. The result is an ending that hits the expected emotional notes of an equivalent story that raises no social questions, but which gives the SF reader nothing to work with.

Will robots replace humans? Based on this story, the answer seems to be yes. Should they be allowed to? To avoid spoilers, I'll just say that that decision seems to be made on the basis of factors that won't scale, and on experiences that a cynic like me thinks could be easily manipulated.

Should simulated emotions be treated as real? Willis doesn't seem to realize that's a question. Certainly, Claire never seems to give it a moment's thought.

I think All About Emily could have easily been published in the 1960s. It feels like it belongs to another era in which emotional manipulation by computers is either impossible or, at worst, a happy accident. In today's far more cynical time, when we're increasingly aware that large corporations are deeply invested in manipulating our emotions and quite good at building elaborate computer models for how to do so, it struck me as hollow and tone-deaf. The story is very sweet if you can enjoy it on the same level that the characters engage with it, but is not of much help in grappling with the consequences for abuse.

Rating: 6 out of 10

22 February, 2020 04:38AM

February 21, 2020


Raphaël Hertzog

Freexian’s report about Debian Long Term Support, January 2020

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, 252 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

January started calmly until, at the end of the month, some LTS contributors met, some for the first time ever, at the Mini-DebCamp preceding FOSDEM in Brussels. While there were no formal LTS events at either gathering, such face-to-face meetings have proven to be very useful for future collaborations!
We currently have 59 LTS sponsors sponsoring 219h each month. Still, as always, we are welcoming new LTS sponsors!

The security tracker currently lists 42 packages with a known CVE and the dla-needed.txt file has 33 packages needing an update.

Thanks to our sponsors

New sponsors are in bold (none this month).


21 February, 2020 05:00PM by Raphaël Hertzog

Andrej Shadura

Follow-up on the train journey to FOSDEM

Here’s a recap of my train journey based on the Twitter thread I kept posting as I travelled.


The departure from Bratislava was as planned:

Ready to depart from Bratislava hl. st.

Half an hour in Vienna was just enough for me to grab some coffee and breakfast and board the train to Frankfurt without a hurry:

Boarding a Deutsche Bahn ICE to Frankfurt am Main

Unfortunately, soon after we left Linz and headed to Passau, the train broke down. Apparently, it powered down and the driver was struggling to reboot it. After more than an hour at Haiding, we finally departed with a huge delay:

Trapped in Haiding near Linz

Since the 18:29 train to Brussels I needed to catch in Frankfurt was the last one that day, I was put up in the Leonardo hotel across the street from Frankfurt Hbf, paid for by Deutsche Bahn, of course. By the time of our arrival in Frankfurt, the delay was 88 minutes.

Hotel room in Frankfurt am Main

Luckily, I didn’t have to convince Deutsche Bahn to let me sleep in the morning, they happily booked me (for free) onto a 10:29 ICE to Brussels so I had an opportunity to have a proper breakfast at the hotel and spend some time at Coffee Fellows at the station.

Guten Morgen Frankfurt
About to depart for Brussels

Fun fact: Aachen is called Cáchy in Czech, apparently as a corruption of an older German form ze Aachen.

Stopping at Aachen

Having met some Debian people on the train, I finally arrived in Brussels, albeit with some delay. This, unfortunately, meant that I didn’t go to Vilvoorde to see a friend, so the regional tickets I had bought online were useless.

Finally, Brussels!

… and back!

The trip home was much better in terms of missed trains, if only a tiny bit more tiring, since I did it in one day.

Leaving Brussels on time

Going to Frankfurt, I spent most of the time in the bistro carriage. Unfortunately, the espresso machine was broken and they didn’t have any croissants, but the tea with milk was good enough.

In the bistro carriage

I used the fifty minutes I had in Frankfurt to claim the compensation for the delay (€33), which arrived in my bank account the following week.

The ICE train to Wien Hbf is about to depart
Herzlich willkommen in Österreich!

Arrived at Wien Hbf
The last leg

Finally, exactly twelve hours and one minute after the departure, almost home:

Finally home

21 February, 2020 02:09PM by Andrej Shadura


Norbert Preining

Okular update for Debian

The quest for a good tabbed PDF viewer led me to Okular. While Gnome3 has gone the way of “keep it stupid, keep it simple” to appeal to less versed users, KDE has gone in the opposite direction and provides lots of bells and knobs to configure their applications. Not surprisingly, I am tending more and more toward KDE apps, away from the redux stuff of Gnome apps.

Unfortunately, Okular in Debian is horribly outdated. The version shipped in unstable is 17.12.2, there is a version 18.04 in experimental, and the latest from upstream git is 19.12.2. Fortunately, and thanks to the Debian maintainers, the packaging of the version in experimental can be adjusted without too much pain to build the latest version; see this git repo.

You can find the sources and amd64 packages in my Debian repository:

deb https://www.preining.info/debian unstable main
deb-src https://www.preining.info/debian unstable main


21 February, 2020 12:17AM by Norbert Preining

February 20, 2020


Matthew Garrett

What usage restrictions can we place in a free software license?

Growing awareness of the wider social and political impact of software development has led to efforts to write licenses that prevent software being used to engage in acts that are seen as socially harmful, with the Hippocratic License being perhaps the most discussed example (although the JSON license's requirement that the software be used for good, not evil, is arguably an earlier version of the theme). The problem with these licenses is that they're pretty much universally considered to fall outside the definition of free software or open source licenses due to their restrictions on use, and there's a whole bunch of people who have very strong feelings that this is a very important thing. There's also the more fundamental underlying point that it's hard to write a license like this where everyone agrees on whether a specific thing is bad or not (eg, while many people working on a project may feel that it's reasonable to prohibit the software being used to support drone strikes, others may feel that the project shouldn't have a position on the use of the software to support drone strikes and some may even feel that some people should be the victims of drone strikes). This is, it turns out, all quite complicated.

But there is something that many (but not all) people in the free software community agree on - certain restrictions are legitimate if they ultimately provide more freedom. Traditionally this was limited to restrictions on distribution (eg, the GPL requires that your recipient be able to obtain corresponding source code, and for GPLv3 must also be able to obtain the necessary signing keys to be able to replace it in covered devices), but more recently there's been some restrictions that don't require distribution. The best known is probably the clause in the Affero GPL (or AGPL) that requires that users interacting with covered code over a network be able to download the source code, but the Cryptographic Autonomy License (recently approved as an Open Source license) goes further and requires that users be able to obtain their data in order to self-host an equivalent instance.

We can construct examples of where these prevent certain fields of endeavour, but the tradeoff has been deemed worth it - the benefits to user freedom that these licenses provide are greater than the corresponding cost to what you can do. How far can that tradeoff be pushed? So, here's a thought experiment. What if we write a license that's something like the following:

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. All permissions granted by this license must be passed on to all recipients of modified or unmodified versions of this work
2. This work may not be used in any way that impairs any individual's ability to exercise the permissions granted by this license, whether or not they have received a copy of the covered work

This feels like the logical extreme of the argument. Any way you could use the covered work that would restrict someone else's ability to do the same is prohibited. This means that, for example, you couldn't use the software to implement a DRM mechanism that the user couldn't replace (along the lines of GPLv3's anti-Tivoisation clause), but it would also mean that you couldn't use the software to kill someone with a drone (doing so would impair their ability to make use of the software). The net effect is along the lines of the Hippocratic license, but it's framed in a way that is focused on user freedom.

To be clear, I don't think this is a good license - it has a bunch of unfortunate consequences like it being impossible to use covered code in self-defence if doing so would impair your attacker's ability to use the software. I'm not advocating this as a solution to anything. But I am interested in seeing whether the perception of the argument changes when we refocus it on user freedom as opposed to an independent ethical goal.



Rich Felker on Twitter had an interesting thought - if clause 2 above is replaced with:

2. Your rights under this license terminate if you impair any individual's ability to exercise the permissions granted by this license, even if the covered work is not used to do so

how does that change things? My gut feeling is that covering actions that are unrelated to the use of the software might be a reach too far, but it gets away from the idea that it's your use of the software that triggers the clause.


20 February, 2020 12:45AM

February 19, 2020

hackergotchi for Gunnar Wolf

Gunnar Wolf

Made with Creative Commons at FIL Minería

Book presentation! Again, this message is mostly for people that can be at Mexico City on a relatively short notice. Do you want to get the latest scoop on our translation of _Made with Creative Commons_? Are you interested in being at a most interesting session presented by the two officials of Creative Commons Mexico chapter, [Irene Soria](http://irenesoria.com/) ([@arenita](https://mobile.twitter.com/arenitasoria)) and [Iván Martínez](https://meta.wikimedia.org/wiki/User:ProtoplasmaKid) ([@protoplasmakid](https://mobile.twitter.com/protoplasmakid)) and myself? Then... Come to the always great [41 Feria Internacional del Libro del Palacio de Minería](http://filmineria.unam.mx/feria/41fil/)! We will have the presentation next Monday (2020.02.24), 12:00, in Auditorio Sotero Prieto (Palacio de Minería). How to get there? Come on... Don't you know one of the most iconic and beautiful buildings in our historic center? 😉 [Information on getting to Palacio de Minería](http://filmineria.unam.mx/feria/41fil/como-llegar-a-la-feria.html). See you all there!

19 February, 2020 08:00AM by Gunnar Wolf

hackergotchi for Kees Cook

Kees Cook

security things in Linux v5.4

Previously: v5.3.

Linux kernel v5.4 was released in late November. The holidays got the best of me, but better late than never! ;) Here are some security-related things I found interesting:

waitid() gains P_PIDFD
Christian Brauner has continued his pidfd work by adding a critical mode to waitid(): P_PIDFD. This makes it possible to reap child processes via a pidfd, and completes the interfaces needed for the bulk of programs performing process lifecycle management. (i.e. a pidfd can come from /proc or clone(), and can be waited on with waitid().)

kernel lockdown
After something on the order of 8 years, Linux can now draw a bright line between “ring 0” (kernel memory) and “uid 0” (highest privilege level in userspace). The “kernel lockdown” feature, which has been an out-of-tree patch series in most Linux distros for almost as many years, attempts to enumerate all the intentional ways (i.e. interfaces not flaws) userspace might be able to read or modify kernel memory (or execute in kernel space), and disable them. While Matthew Garrett made the internal details fine-grained controllable, the basic lockdown LSM can be set to either disabled, “integrity” (kernel memory can be read but not written), or “confidentiality” (no kernel memory reads or writes). Beyond closing the many holes between userspace and the kernel, if new interfaces are added to the kernel that might violate kernel integrity or confidentiality, now there is a place to put the access control to make everyone happy and there doesn’t need to be a rehashing of the age old fight between “but root has full kernel access” vs “not in some system configurations”.

tagged memory relaxed syscall ABI
Andrey Konovalov (with Catalin Marinas and others) introduced a way to enable a “relaxed” tagged memory syscall ABI in the kernel. This means programs running on hardware that supports memory tags (or “versioning”, or “coloring”) in the upper (non-VMA) bits of a pointer address can use these addresses with the kernel without things going crazy. This is effectively teaching the kernel to ignore these high bits in places where they make no sense (i.e. mathematical comparisons) and keeping them in place where they have meaning (i.e. pointer dereferences).

As an example, if a userspace memory allocator had returned the address 0x0f00000010000000 (VMA address 0x10000000, with, say, a “high bits” tag of 0x0f), and a program used this range during a syscall that ultimately called copy_from_user() on it, the initial range check would fail if the tag bits were left in place: “that’s not a userspace address; it is greater than TASK_SIZE (0x0000800000000000)!”, so they are stripped for that check. During the actual copy into kernel memory, the tag is left in place so that when the hardware dereferences the pointer, the pointer tag can be checked against the expected tag assigned to referenced memory region. If there is a mismatch, the hardware will trigger the memory tagging protection.

Right now programs running on Sparc M7 CPUs with ADI (Application Data Integrity) can use this for hardware tagged memory, ARMv8 CPUs can use TBI (Top Byte Ignore) for software memory tagging, and eventually there will be ARMv8.5-A CPUs with MTE (Memory Tagging Extension).

boot entropy improvement
Thomas Gleixner got fed up with poor boot-time entropy and trolled Linus into coming up with a reasonable way to add entropy on modern CPUs, taking advantage of timing noise, cycle counter jitter, and perhaps even the variability of speculative execution. This means that there shouldn’t be mysterious multi-second (or multi-minute!) hangs at boot when some systems don’t have enough entropy to service getrandom() syscalls from systemd or the like.

userspace writes to swap files blocked
From the department of “how did this go unnoticed for so long?”, Darrick J. Wong fixed the kernel to not allow writes from userspace to active swap files. Without this, it was possible for a user (usually root) with write access to a swap file to modify its contents, thereby changing memory contents of a process once it got paged back in. While root normally could just use CAP_PTRACE to modify a running process directly, this was a loophole that allowed lesser-privileged users (e.g. anyone in the “disk” group) without the needed capabilities to still bypass ptrace restrictions.

limit strscpy() sizes to INT_MAX
Generally speaking, if a size variable ends up larger than INT_MAX, some calculation somewhere has overflowed. And even if not, it’s probably going to hit code somewhere nearby that won’t deal well with the result. As already done in the VFS core and vsprintf(), I added a check to strscpy() to reject sizes larger than INT_MAX.

ld.gold support removed
Thomas Gleixner removed support for the gold linker. While this isn’t providing a direct security benefit, ld.gold has been a constant source of weird bugs. Specifically where I’ve noticed, it had been a pain while developing KASLR, and has more recently been causing problems while stabilizing building the kernel with Clang. Having this linker support removed makes things much easier going forward. There are enough weird bugs to fix in Clang and ld.lld. ;)

Intel TSX disabled
Given the use of Intel’s Transactional Synchronization Extensions (TSX) CPU feature by attackers to exploit speculation flaws, Pawan Gupta disabled the feature by default on CPUs that support disabling TSX.

That’s all I have for this version. Let me know if I missed anything. :) Next up is Linux v5.5!

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

19 February, 2020 12:37AM by kees

February 18, 2020

hackergotchi for Daniel Silverstone

Daniel Silverstone

Subplot volunteers? (Acceptance testing tool)

Note: This is a repost from Lars' blog made to widen the reach and hopefully find the right interested parties.

Would you be willing to try Subplot for acceptance testing for one of your real projects, and give us feedback? We're looking for two volunteers.

given a project
when it uses Subplot
then it is successful

Subplot is a tool for capturing and automatically verifying the acceptance criteria for a software project or a system, in a way that's understood by all stakeholders.

In a software project there are always more than one stakeholder. Even in a project one writes for oneself, there are two stakeholders: oneself, and that malicious cretin oneself-in-the-future. More importantly, though, there are typically stakeholders such as end users, sysadmins, clients, software architects, developers, and testers. They all need to understand what the software should do, and when it's in an acceptable state to be put into use: in other words, what the acceptance criteria are.

Crucially, all stakeholders should understand the acceptance criteria the same way, and also how to verify they are met. In an ideal situation, all verification is automated, and happens very frequently.

There are various tools for this, from generic documentation tooling (word processors, text editors, markup languages, etc) to test automation (Cucumber, Selenium, etc). On the one hand, documenting acceptance criteria in a way that all stakeholders understand is crucial: otherwise the end users are at risk of getting something that's not useful to help them, and the project is a waste of everyone's time and money. On the other hand, automating the verification of how acceptance criteria is met is also crucial: otherwise it's done manually, which is slow, costly, and error prone, which increases the risk of project failure.

Subplot aims to solve this by an approach that combines documentation tooling with automated verification.

  • The stakeholders in a project jointly produce a document that captures all relevant acceptance criteria and also describes how they can be verified automatically, using scenarios. The document is written using Markdown.

  • The developer stakeholders produce code to implement the steps in the scenarios. The Subplot approach allows the step implementations to be done in a highly cohesive, de-coupled manner, which usually keeps such code quite simple. (Test code should be your best code.)

  • Subplot's "docgen" program produces a typeset version as PDF or HTML. This is meant to be easily comprehensible by all stakeholders.

  • Subplot's "codegen" program produces a test program in the language used by the developer stakeholders. This test program can be run to verify that acceptance criteria are met.

Subplot started in late 2018, and was initially called Fable. It is based on the yarn tool for the same purpose, from 2013. Yarn has been in active use all its life, if not popular outside a small circle. Subplot improves on yarn by improving document generation, markup, and decoupling of concerns. Subplot is not compatible with yarn.

Subplot is developed by Lars Wirzenius and Daniel Silverstone as a hobby project. It is free software, implemented in Rust, developed on Debian, and uses Pandoc and LaTeX for typesetting. The code is hosted on gitlab.com. Subplot verifies its own acceptance criteria. It is alpha level software.

We're looking for one or two volunteers to try Subplot on real projects of their own, and give us feedback. We want to make Subplot good for its purpose, also for people other than us. If you'd be willing to give it a try, start with the Subplot website, then tell us you're using Subplot. We're happy to respond to questions from the first two volunteers, and from others, time permitting. (The reality of life and time constraints is that we can't commit to supporting more people at this time.)

We'd love your feedback, whether you use Subplot or not.

18 February, 2020 08:24PM by Daniel Silverstone

hackergotchi for Mike Gabriel

Mike Gabriel

MATE 1.24 landed in Debian unstable

Last week, Martin Wimpress (from Ubuntu MATE) and I did a 2.5-day packaging sprint and after that I bundle-uploaded all MATE 1.24 related components to Debian unstable. Thus, MATE 1.24 landed in Debian unstable only four days after the upstream release. I think this was the fastest version bump of MATE in Debian ever.

Packages should have been built by now for most of the 22 architectures supported by Debian. The current/latest build status can be viewed on the DDPO page of the Debian+Ubuntu MATE Packaging Team [1].

Please also refer to the MATE 1.24 upstream release notes for details on what's new and what's changed [2].


One big thanks goes to Martin Wimpress. Martin and I worked on all the related packages hand in hand. Only this team work made this very fast upload possible. Martin especially found the fix for a flaw in Python Caja that caused all Python3 based Caja extensions to fail in Caja 1.24 / Python Caja 1.24. Well done!

Another big thanks goes to the MATE upstream team. You again did an awesome job, folks. Much, much appreciated.

Last but not least, a big thanks goes to Svante Signell for providing Debian architecture specific patches for Debian's non-Linux distributions (GNU/Hurd, GNU/kFreeBSD). We will wait until all MATE 1.24 packages have initially migrated to Debian testing and will then follow up by uploading his fixes. As in the past, MATE shall be available on as many Debian architectures as possible (ideally: all of them). Saying this, all Debian porters are invited to send us patches, if they see components of MATE Desktop fail on not-so-common architectures.


Mike Gabriel (aka sunweaver)

18 February, 2020 10:03AM by sunweaver

hackergotchi for Keith Packard

Keith Packard


Slightly Better Iterative Spline Decomposition

My colleague Bart Massey (who is a CS professor at Portland State University) reviewed my iterative spline algorithm article and had an insightful comment — we don't just want any spline decomposition which is flat enough, what we really want is a decomposition for which every line segment is barely within the specified flatness value.

My initial approach was to keep halving the length of the spline segment until it was flat enough. This definitely generates a decomposition which is flat enough everywhere, but some of the segments will be shorter than they need to be, by as much as a factor of two.

As we'll be taking the resulting spline and doing a lot more computation with each segment, it makes sense to spend a bit more time finding a decomposition with fewer segments.

The Initial Search

Here's how the first post searched for a 'flat enough' spline section:

t = 1.0f;

/* Iterate until s1 is flat */
do {
    t = t/2.0f;
    _de_casteljau(s, s1, s2, t);
} while (!_is_flat(s1));

Bisection Method

What we want to do is find an approximate solution for the function:

flatness(t) = tolerance

We'll use the Bisection method to find the value of t for which the flatness is no larger than our target tolerance, but is at least as large as tolerance - ε, for some reasonably small ε.

float       hi = 1.0f;
float       lo = 0.0f;

/* Search for an initial section of the spline which
 * is flat, but not too flat
 */
for (;;) {

    /* Average the lo and hi values for our
     * next estimate
     */
    float t = (hi + lo) / 2.0f;

    /* Split the spline at the target location
     */
    _de_casteljau(s, s1, s2, t);

    /* Compute the flatness and see if s1 is flat
     * enough
     */
    float flat = _flatness(s1);

    if (flat <= SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {

        /* Stop looking when s1 is close
         * enough to the target tolerance
         */
        if (flat >= SCALE_FLAT(SNEK_DRAW_TOLERANCE - SNEK_FLAT_TOLERANCE))
            break;

        /* Flat: t is the new lower interval bound */
        lo = t;
    } else {

        /* Not flat: t is the new upper interval bound */
        hi = t;
    }
}
This searches for a place to split the spline where the initial portion is flat but not too flat. With a drawing tolerance of 0.5 and SNEK_FLAT_TOLERANCE set to 0.01, we'll pick segments which have flatness between 0.49 and 0.50.

The benefit from the search is pretty easy to understand by looking at the number of points generated compared with the number of _de_casteljau and _flatness calls:

Search Calls Points
Simple 150 33
Bisect 229 25

And here's an image comparing the two:

A Closed Form Approach?

Bart also suggests attempting to find an analytical solution to decompose the spline. What we need to do is take the flatness function and find the split which makes it equal to the desired flatness. If the spline control points are a, b, c, and d, then the flatness function is:

ux = (3×b.x - 2×a.x - d.x)²
uy = (3×b.y - 2×a.y - d.y)²
vx = (3×c.x - 2×d.x - a.x)²
vy = (3×c.y - 2×d.y - a.y)²

flat = max(ux, vx) + max(uy, vy)

When the spline is split into two pieces, all of the control points for the new splines are determined by the original control points and the 't' value which sets where the split happens. What we want is to find the 't' value which makes the flat value equal to the desired tolerance. Given that the binary search runs De Casteljau and the flatness function almost 10 times for each generated point, there's a lot of opportunity to go faster with a closed form solution.

Update: Fancier Method Found!

Bart points me at two papers:

  1. Flattening quadratic Béziers by Raph Levien
  2. Precise Flattening of Cubic Bézier Segments by Thomas F. Hain, Athar L. Ahmad, and David D. Langan

Levien's paper offers a great solution for quadratic Béziers by directly computing the minimum set of line segments necessary to approximate within a specified flatness. However, it doesn't generalize to cubic Béziers.

Hain, Ahmad and Langan do provide a directly computed decomposition of a cubic Bézier. This is done by constructing a parabolic approximation to the first portion of the spline and finding a 't' value which produces the desired flatness. There are a pile of special cases to deal with when there isn't a good enough parabolic approximation. But, overall computational cost is lower than a straightforward binary decomposition, plus there's no recursion required.

This second algorithm has the same characteristics as my Bisection method as the last segment may have any flatness from zero through the specified tolerance; Levien's solution is neater in that it generates line segments of similar flatness across the whole spline.

Current Implementation

/*
 * Copyright © 2020 Keith Packard <keithp@keithp.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

typedef float point_t[2];
typedef point_t spline_t[4];

uint64_t num_flats;
uint64_t num_points;


/* Tolerance values: 0.01 is stated in the post; 0.5 is the drawing
 * tolerance implied by the 0.49 to 0.50 flatness range discussed above */
#define SNEK_DRAW_TOLERANCE 0.5f
#define SNEK_FLAT_TOLERANCE 0.01f

/*
 * This actually returns flatness² * 16,
 * so we need to compare against scaled values
 * using the SCALE_FLAT macro
 */
static float
_flatness(spline_t spline)
{
    /*
     * This computes the maximum deviation of the spline from a
     * straight line between the end points.
     *
     * From https://hcklbrrfnn.files.wordpress.com/2012/08/bez.pdf
     */
    float ux = 3.0f * spline[1][0] - 2.0f * spline[0][0] - spline[3][0];
    float uy = 3.0f * spline[1][1] - 2.0f * spline[0][1] - spline[3][1];
    float vx = 3.0f * spline[2][0] - 2.0f * spline[3][0] - spline[0][0];
    float vy = 3.0f * spline[2][1] - 2.0f * spline[3][1] - spline[0][1];

    ux *= ux;
    uy *= uy;
    vx *= vx;
    vy *= vy;
    if (ux < vx)
        ux = vx;
    if (uy < vy)
        uy = vy;

    num_flats++;

    /*
     * If we wanted to return the true flatness, we'd use:
     *
     * return sqrtf((ux + uy)/16.0f)
     */
    return ux + uy;
}

/* Convert constants to values usable with _flatness() */
#define SCALE_FLAT(f)   ((f) * (f) * 16.0f)

/*
 * Linear interpolate from a to b using distance t (0 <= t <= 1)
 */
static void
_lerp (point_t a, point_t b, point_t r, float t)
{
    int i;
    for (i = 0; i < 2; i++)
        r[i] = a[i]*(1.0f - t) + b[i]*t;
}

/*
 * Split 's' into two splines at distance t (0 <= t <= 1)
 */
static void
_de_casteljau(spline_t s, spline_t s1, spline_t s2, float t)
{
    point_t first[3];
    point_t second[2];
    int i;

    for (i = 0; i < 3; i++)
        _lerp(s[i], s[i+1], first[i], t);

    for (i = 0; i < 2; i++)
        _lerp(first[i], first[i+1], second[i], t);

    _lerp(second[0], second[1], s1[3], t);

    for (i = 0; i < 2; i++) {
        s1[0][i] = s[0][i];
        s1[1][i] = first[0][i];
        s1[2][i] = second[0][i];

        s2[0][i] = s1[3][i];
        s2[1][i] = second[1][i];
        s2[2][i] = first[2][i];
        s2[3][i] = s[3][i];
    }
}

/*
 * Decompose 's' into straight lines which are
 * within SNEK_DRAW_TOLERANCE of the spline
 */
static void
_spline_decompose(void (*draw)(float x, float y), spline_t s)
{
    /* Start at the beginning of the spline. */
    (*draw)(s[0][0], s[0][1]);

    /* Split the spline until it is flat enough */
    while (_flatness(s) > SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {
        spline_t    s1, s2;
        float       hi = 1.0f;
        float       lo = 0.0f;

        /* Search for an initial section of the spline which
         * is flat, but not too flat
         */
        for (;;) {

            /* Average the lo and hi values for our
             * next estimate
             */
            float t = (hi + lo) / 2.0f;

            /* Split the spline at the target location
             */
            _de_casteljau(s, s1, s2, t);

            /* Compute the flatness and see if s1 is flat
             * enough
             */
            float flat = _flatness(s1);

            if (flat <= SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {

                /* Stop looking when s1 is close
                 * enough to the target tolerance
                 */
                if (flat >= SCALE_FLAT(SNEK_DRAW_TOLERANCE - SNEK_FLAT_TOLERANCE))
                    break;

                /* Flat: t is the new lower interval bound */
                lo = t;
            } else {

                /* Not flat: t is the new upper interval bound */
                hi = t;
            }
        }

        /* Draw to the end of s1 */
        (*draw)(s1[3][0], s1[3][1]);

        /* Replace s with s2 */
        memcpy(&s[0], &s2[0], sizeof (spline_t));
    }

    /* S is now flat enough, so draw to the end */
    (*draw)(s[3][0], s[3][1]);
}

void draw(float x, float y)
{
    num_points++;
    printf("%8g, %8g\n", x, y);
}

int main(int argc, char **argv)
{
    spline_t spline = {
        { 0.0f, 0.0f },
        { 0.0f, 256.0f },
        { 256.0f, -256.0f },
        { 256.0f, 0.0f }
    };

    _spline_decompose(draw, spline);
    fprintf(stderr, "flats %lu points %lu\n", num_flats, num_points);
    return 0;
}
18 February, 2020 07:41AM

February 17, 2020

hackergotchi for Ulrike Uhlig

Ulrike Uhlig

Reasons for job burnout and what motivates people in their job

Burnout comes in many colours and flavours.

Often, burnout is conceived as a weakness of the person experiencing it: "they can't work under stress", "they lack organizational skills", "they are currently going through grief or a break up, that's why they can't keep up" — you've heard it all before, right?

But what if job burnout were actually an indicator of a toxic work environment? Or of a toxic work setup?

I had read quite a bit of literature trying to explain burnout before stumbling upon the work of Christina Maslach. She has researched burnout for thirty years and is most well known for her research on occupational burnout. While she observed burnout in the 1990s mostly in caregiver professions, we can see an increase of burnout in many other fields in recent years, such as in the tech industry. Maslach outlines in one of her talks what this might be due to.

More interesting to me is the question why job burnout occurs at all? High workload is only one out of six factors that increase the risk for burnout, according to Christina Maslach and her team.

Factors increasing job burnout

  1. Workload. This could be demand overload, lots of different tasks, lots of context switching, unclear expectations, having several part time jobs, lack of resources, lack of work force, etc.
  2. Lack of control. Absence of agency. Absence of the possibility to make decisions. Impossibility to act on one's own account.
  3. Insufficient reward. Here, we are not solely talking about financial reward, but also about gratitude, recognition, visibility, and celebration of accomplishments.
  4. Lack of community. Remote work, asynchronous communication, poor communication skills, isolation in working on tasks, few/no in-person meetings, lack of organizational caring.
  5. Absence of fairness. Invisible hierarchies, lack of (fair) decision making processes, back channel decision making, financial or other rewards unfairly distributed.
  6. Value conflicts. This could be over-emphasizing on return on investment, making unethical requests, not respecting colleagues' boundaries, the lack of organizational vision, poor leadership.

Interestingly, it is possible to improve one area of risk, and see improvements in all the other areas.

What motivates people?

So, what is it that motivates people, what makes them like their work?
Here, Maslach comes up with another interesting list:

  • Autonomy. This could mean for example to trust colleagues to work on tasks autonomously. To let colleagues make their own decisions on how to implement a feature as long as it corresponds to the code writing guidelines. The responsibility for the task should be transferred along with the task. People need to be allowed to make mistakes (and fix them). Autonomy also means to say goodbye to the expectation that colleagues do everything exactly like we would do it. Instead, we can learn to trust in collective intelligence for coming up with different solutions.
  • Feeling of belonging. This one could mean to try to use synchronous communication whenever possible. To privilege in-person meetings. To celebrate achievements. To make collective decisions whenever the outcome affects the collective (or part of it). To have lunch together. To have lunch together and not talk about work.
  • Competence. Having a working feedback process. Valuing each others' competences. Having the possibility to evolve in the workplace. Having the possibility to get training, to try new setups, new methods, or new tools. Having the possibility to increase one's competences, possibly with the financial backing of the workplace.
  • Positive emotions. Encouraging people to take breaks. Make sure work plannings also include downtime. Encouraging people to take at least 5 weeks of vacation per year. Allowing people to have (paid) time off. Practicing gratitude. Acknowledging and celebrating achievements. Giving appreciation.
  • Psychological safety. Learn to communicate with kindness. Practice active listening. Have meetings facilitated. Condemn harassment, personal insults, sexism, racism, fascism. Condemn silencing of people. Have a possibility to report on code of ethics/conduct abuses. Making sure that people who experience problems or need to share something are not isolated.
  • Fairness. How about exploring inclusive leadership models? Making invisible hierarchies visible (See the concept of rank). Being aware of rank. Have clear and transparent decision making processes. Rewarding people equally. Making sure there is no invisible unpaid work done by always the same people.
  • Meaning. Are the issues that we work on meaningful per se? Do they contribute anything to the world, or to the common good? Making sure that tasks or roles of other colleagues are not belittled. Meaning can also be given by putting tasks into perspective, for example by making developers attend conferences where they can meet users and get feedback on their work. Making sure we don't forget why we wanted to do a job in first place. Getting familiar with the concept of bullshit jobs.

In this list, the words written in bold are what we could call "Needs". The descriptions behind them are what we could call "Strategies". There are always many different strategies to fulfill a need, I've only outlined some of them. I'm sure you can come up with others, please don't hesitate to share them with me.

17 February, 2020 11:00PM by ulrike

hackergotchi for Holger Levsen

Holger Levsen


SnowCamp 2020

This is just a late reminder that there are still some seats available for SnowCamp, taking place at the end of this week and during the whole weekend somewhere in the Italian mountains.

I believe it will be a really nice opportunity to hack on Debian things and thus I'd hope that there won't be empty seats, though atm this is the case.

The venue is reachable by train and Debian will be covering the cost of accommodation, so you just have to cover transportation and meals.

The event starts in three days, so hurry up and whatever your plans are, change them!

If you have any further questions, join #suncamp (yes!) on irc.debian.org.

17 February, 2020 07:56PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Amiga floppy recovery project scope

This is the eighth part in a series of blog posts. The previous post was First successful Amiga disk-dumping session. The whole series is available here: Amiga.

The main goal of my Amiga project is to read the data from my old floppy disks. After a bit of hiatus (and after some gentle encouragement from friends at FOSDEM) I'm nearly done, 150/200 disks attempted so far. Ultimately I intend to get rid of the disks to free up space in my house, and probably the Amiga, too. In the meantime, what could I do with it?

Gotek floppy emulator balanced on the Amiga


The most immediately obvious things are to improve the housing of the emulated floppy disk. My Gotek adaptor is unceremoniously balanced on top of the case. Housing it within the A500 would be much neater. I might try to follow this guide which requires no case modifications and no 3D printed brackets, but instead of soldering new push-buttons, add a separate OLED display and rotary encoder (knob) in a separate housing, such as this 3D-printed wedge-shaped mount on Thingiverse. I do wonder if some kind of side-mounted solution might be better, so the top casing could be removed without having to re-route the wires each time.

3D printed OLED mount, from Amibay


Next would be improving the video output. My A520 video modulator developed problems that are most likely caused by leaking or blown capacitors. At the moment, I have a choice of B&W RF out, or using a 30 year old Philips CRT monitor. The latter is too big to comfortably fit on my main desk, and the blue channel has started to fail. Learning the skills to fix the A520 could be useful as the same could happen to the Amiga itself. Alternatively replacements are very cheap on the second hand market. Or I could look at a 3rd-party equivalent like the RGB4ALL. I have tried a direct, passive socket adaptor on the off-chance my LCD TV supported 15kHz, but alas, it appears it doesn't. This list of monitors known to support 15kHz is very short, so sourcing one is not likely to be easy or cheap. It's possible to buy sophisticated "Flicker Fixers/Scan Doublers" that enable the use of any external display, but they're neither cheap nor common.

My original "tank" Amiga mouse (pictured above) is developing problems with the left mouse button. Replacing the switch looks simple (in this Youtube video) but will require me to invest in a soldering iron, multimeter and related equipment (not necessarily a bad thing). It might be easier to buy a different, more comfortable old serial mouse.

Once those are out of the way, it might be interesting to explore aspects of the system that I didn't touch on as a child: how do you program the thing? I don't remember ever writing any Amiga BASIC, although I had several doomed attempts to use "game makers" like AMOS or SEUCK. What programming language were the commercial games written in? Pure assembly? The 68k is supposed to have a pleasant instruction set for this. Was there ever a practically useful C compiler for the Amiga? I never networked my Amiga. I never played around with music sampling or trackers.

There's something oddly satisfying about the idea of taking a 30 year old computer and making it into a useful machine in the modern era. I could consider more involved hardware upgrades. The Amiga enthusiast community is old and the fans are very passionate. I've discovered a lot of incredible enhancements that fans have built to improve their machines, right up to FPGA-powered CPU replacements that can run several times faster than the fastest original m68ks, and also offer digital video out, hundreds of MB of RAM, modern storage options, etc. To give an idea, check out Epsilon's Amiga Blog, which outlines some of the improvements they've made to their fleet of machines.

This is a deep rabbit hole, and I'm not sure I can afford the time (or the money!) to explore it at the moment. It will certainly not rise above my more pressing responsibilities. But we'll see how things go.

17 February, 2020 04:05PM

February 16, 2020

Enrico Zini

hackergotchi for Ben Armstrong

Ben Armstrong

Introducing Dronefly, a Discord bot for naturalists

In the past few years, since first leaving Debian as a free software developer in 2016, I’ve taken up some new hobbies, or more accurately, renewed my interest in some old ones.

Screenshot from Dronefly bot tutorial

During that hiatus, I also quietly un-retired from Debian, anticipating there would be some way to contribute to the project in these new areas of interest. That’s still an idea looking for the right opportunity to present itself, not to mention the available time to get involved again.

With age comes an increasing clamor of complaints from your body when you have a sedentary job in front of a screen, and hobbies that rarely take you away from it. You can’t just plunk down in front of a screen and do computer stuff non-stop & just bounce back again at the start of each new day. So in the past several years, getting outside more started to improve my well-being and address those complaints. That revived an old interest in me: nature photography. That, in turn, landed me at iNaturalist, re-ignited my childhood love of learning about the natural world, & hooked me on a regular habit of making observations & uploading them to iNat ever since.

Second, back in the late nineties, I wrote a little library loans renewal reminder project in Python. Python was a pleasure to work with, but that project never took off and soon was forgotten. Now once again, decades later, Python is a delight to be writing in, with its focus on writing readable code & backed by a strong culture of education.

Where Python came to bear on this new hobby was when the naturalists on the iNaturalist Discord server became a part of my life. Last spring, I stumbled upon this group & started hanging out. On this platform, we share what we are finding, we talk about those findings, and we challenge each other to get better at it. It wasn’t long before the idea to write some code to access the iNaturalist platform directly from our conversations started to take shape.

Now, ideally, what happened next would have been for an open platform, but this is where the community is. In many ways, too, other chat platforms (like irc) are not as capable as Discord at supporting the image-rich chat experience we enjoy. Thus, it seemed that’s where the code had to be. Dronefly, an open source Python bot for naturalists built on the Red DiscordBot bot framework, was born in the summer of 2019.

Dronefly is still alpha stage software, but in the short space of six months, has grown to roughly 3k lines of code and is used by hundreds of users across 9 different Discord servers. It includes some innovative features requested by our users like the related command to discover the nearest common ancestor of one or more named taxa, and the map command to easily access a range map on the platform for all the named taxa. So far as I know, no equivalent features exist yet on the iNat website or apps for mobile. Commands like these put iNat data directly at users’ fingertips in chat, improving understanding of the material with minimal interruption to the flow of conversation.

This tutorial gives an overview of Dronefly’s features. If you’re intrigued, please look me up on the iNaturalist Discord server following the invite from the tutorial. You can try out the bot there, and I’d be happy to talk to you about our work. Even if this is not your thing, do have a look at iNaturalist itself. Perhaps, like me, you’ll find in this platform a fun, rewarding, & socially significant outlet that gets you outside more, with all the benefits that go along with that.

That’s what has been keeping me busy lately. I hope all my Debian friends are well & finding joy in what you’re doing. Keep up the good work!

16 February, 2020 04:51PM by Ben Armstrong

February 15, 2020

Russell Coker

DisplayPort and 4K

The Problem

Video playback looks better with a higher scan rate. A lot of content that was designed for TV (EG almost all historical documentaries) is going to be 25Hz interlaced (UK and Australia) or 30Hz interlaced (US). If you view that on a low refresh rate progressive scan display (EG a modern display at 30Hz) then my observation is that it looks a bit strange. Things that move seem to jump a bit and it’s distracting.

Getting HDMI to work with 4K resolution at a refresh rate higher than 30Hz seems difficult.

What HDMI Can Do

According to the HDMI Wikipedia page [1], HDMI 1.3–1.4b (introduced in June 2006) supports 30Hz refresh at 4K resolution and if you use 4:2:0 Chroma Subsampling (see the Chroma Subsampling Wikipedia page [2]) you can do 60Hz or 75Hz on HDMI 1.3–1.4b. Basically for colour 4:2:0 means half the horizontal and half the vertical resolution while giving the same resolution for monochrome. For video that apparently works well (4:2:0 is standard for Blue Ray) and for games it might be OK, but for text (my primary use of computers) it would suck.

So I need support for HDMI 2.0 (introduced in September 2013) on the video card and monitor to do 4K at 60Hz. Apparently none of the combinations of video card and HDMI cable I use for Linux support that.

HDMI Cables

The Wikipedia page alleges that you need either a “Premium High Speed HDMI Cable” or a “Ultra High Speed HDMI Cable” for 4K resolution at 60Hz refresh rate. My problems probably aren’t related to the cable as my testing has shown that a cheap “High Speed HDMI Cable” can work at 60Hz with 4K resolution with the right combination of video card, monitor, and drivers. A Windows 10 system I maintain has a Samsung 4K monitor and a NVidia GT630 video card running 4K resolution at 60Hz (according to Windows). The NVidia GT630 card is one that I tried on two Linux systems at 4K resolution and causes random system crashes on both, it seems like a nice card for Windows but not for Linux.

Apparently the HDMI devices test the cable quality and use whatever speed seems to work (the cable isn’t identified to the devices). The prices at a local store are $3.98 for “high speed”, $19.88 for “premium high speed”, and $39.78 for “ultra high speed”. It seems that trying a “high speed” cable first before buying an expensive cable would make sense, especially for short cables which are likely to be less susceptible to noise.

What DisplayPort Can Do

According to the DisplayPort Wikipedia page [3] versions 1.2–1.2a (introduced in January 2010) support HBR2 which on a “Standard DisplayPort Cable” (which probably means almost all DisplayPort cables that are in use nowadays) allows 60Hz and 75Hz 4K resolution.

Comparing HDMI and DisplayPort

In summary to get 4K at 60Hz you need 2010 era DisplayPort or 2013 era HDMI. Apparently some video cards that I currently run for 4K (which were all bought new within the last 2 years) are somewhere between a 2010 and 2013 level of technology.

Also my testing (and reading review sites) shows that it’s common for video cards sold in the last 5 years or so to not support HDMI resolutions above FullHD, that means they would be HDMI version 1.1 at the greatest. HDMI 1.2 was introduced in August 2005 and supports 1440p at 30Hz. PCIe was introduced in 2003 so there really shouldn’t be many PCIe video cards that don’t support HDMI 1.2. I have about 8 different PCIe video cards in my spare parts pile that don’t support HDMI resolutions higher than FullHD so it seems that such a limitation is common.

The End Result

For my own workstation I plugged a DisplayPort cable between the monitor and video card and a Linux window appeared (from KDE I think) offering me some choices about what to do; I chose to switch to the “new monitor” on DisplayPort and that defaulted to 60Hz. After that change TV shows on NetFlix and Amazon Prime both look better. So it’s a good result.

As an aside DisplayPort cables are easier to scrounge as the HDMI cables get taken by non-computer people for use with their TV.

15 February, 2020 11:00PM by etbe

hackergotchi for Keith Packard

Keith Packard


Decomposing Splines Without Recursion

To make graphics usable in Snek, I need to avoid using a lot of memory, especially on the stack as there's no stack overflow checking on most embedded systems. Today, I worked on how to draw splines with a reasonable number of line segments without requiring any intermediate storage. Here are the results from this work:

The Usual Method

The usual method I've used to convert a spline into a sequence of line segments is to split the spline in half recursively, using De Casteljau's algorithm, until the spline can be approximated by a straight line within a defined tolerance.

Here's an example from twin:

static void
_twin_spline_decompose (twin_path_t *path,
            twin_spline_t   *spline,
            twin_dfixed_t   tolerance_squared)
{
    if (_twin_spline_error_squared (spline) <= tolerance_squared)
    {
        _twin_path_sdraw (path, spline->a.x, spline->a.y);
    }
    else
    {
        twin_spline_t s1, s2;

        _de_casteljau (spline, &s1, &s2);
        _twin_spline_decompose (path, &s1, tolerance_squared);
        _twin_spline_decompose (path, &s2, tolerance_squared);
    }
}

The _de_casteljau function splits the spline at the midpoint:

static void
_lerp_half (twin_spoint_t *a, twin_spoint_t *b, twin_spoint_t *result)
{
    result->x = a->x + ((b->x - a->x) >> 1);
    result->y = a->y + ((b->y - a->y) >> 1);
}

static void
_de_casteljau (twin_spline_t *spline, twin_spline_t *s1, twin_spline_t *s2)
{
    twin_spoint_t ab, bc, cd;
    twin_spoint_t abbc, bccd;
    twin_spoint_t final;

    _lerp_half (&spline->a, &spline->b, &ab);
    _lerp_half (&spline->b, &spline->c, &bc);
    _lerp_half (&spline->c, &spline->d, &cd);
    _lerp_half (&ab, &bc, &abbc);
    _lerp_half (&bc, &cd, &bccd);
    _lerp_half (&abbc, &bccd, &final);

    s1->a = spline->a;
    s1->b = ab;
    s1->c = abbc;
    s1->d = final;

    s2->a = final;
    s2->b = bccd;
    s2->c = cd;
    s2->d = spline->d;
}

This is certainly straightforward, but suffers from an obvious flaw — there's unbounded recursion. With two splines in each stack frame, each containing eight coordinates, the stack will grow rapidly; 4 levels of recursion will consume more than 64 coordinates' worth of stack space. This can easily overflow the stack of a tiny machine.

De Casteljau Splits At Any Point

De Casteljau's algorithm is not limited to splitting splines at the midpoint. You can supply an arbitrary position t, 0 < t < 1, and you will end up with two splines which, drawn together, exactly match the original spline. I use 1/2 in the above version because it provides a reasonable guess as to how an arbitrary spline might be decomposed efficiently. You can use any value and the decomposition will still work, it will just change the recursion depth along various portions of the spline.

Iterative Left-most Spline Decomposition

What our binary decomposition does is to pick points t₀ … tₙ such that the splines over t₀..t₁ through tₙ₋₁..tₙ are all 'flat'. It does this by recursively bisecting the spline, storing two intermediate splines on the stack at each level. If we look at just how the first, or 'left-most' spline is generated, that can be represented as an iterative process. At each step in the iteration, we split the spline in half:

S' = _de_casteljau(s, 1/2)

We can re-write this using the broader capabilities of the De Casteljau algorithm by splitting the original spline at decreasing points along it:

S[n] = _de_casteljau(s0, (1/2)ⁿ)

Now recall that the De Casteljau algorithm generates two splines, not just one. One describes the spline from 0..(1/2)ⁿ, the second the spline from (1/2)ⁿ..1. This gives us an iterative approach to generating a sequence of 'flat' splines for the whole original spline:

while S is not flat:
    n = 1
    do
        Sleft, Sright = _decasteljau(S, (1/2)ⁿ)
        n = n + 1
    until Sleft is flat
    result ← Sleft
    S = Sright
result ← S

We've added an inner loop that wasn't needed in the original algorithm, and we're introducing some cumulative errors as we step around the spline, but we don't use any additional memory at all.

Final Code

Here's the full implementation:

/*
 * Copyright © 2020 Keith Packard <keithp@keithp.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Flatness tolerance in coordinate units. The value isn't given in
 * the post; 0.5f is a reasonable choice for pixel coordinates. */
#define SNEK_DRAW_TOLERANCE 0.5f

typedef float point_t[2];
typedef point_t spline_t[4];


/* Is this spline flat within the defined tolerance */
static bool
_is_flat(spline_t spline)
{
    /*
     * This computes the maximum deviation of the spline from a
     * straight line between the end points.
     *
     * From https://hcklbrrfnn.files.wordpress.com/2012/08/bez.pdf
     */
    float ux = 3.0f * spline[1][0] - 2.0f * spline[0][0] - spline[3][0];
    float uy = 3.0f * spline[1][1] - 2.0f * spline[0][1] - spline[3][1];
    float vx = 3.0f * spline[2][0] - 2.0f * spline[3][0] - spline[0][0];
    float vy = 3.0f * spline[2][1] - 2.0f * spline[3][1] - spline[0][1];

    ux *= ux;
    uy *= uy;
    vx *= vx;
    vy *= vy;
    if (ux < vx)
        ux = vx;
    if (uy < vy)
        uy = vy;
    return (ux + uy <= 16.0f * SNEK_DRAW_TOLERANCE * SNEK_DRAW_TOLERANCE);
}

static void
_lerp (point_t a, point_t b, point_t r, float t)
{
    int i;

    for (i = 0; i < 2; i++)
        r[i] = a[i]*(1.0f - t) + b[i]*t;
}

static void
_de_casteljau(spline_t s, spline_t s1, spline_t s2, float t)
{
    point_t first[3];
    point_t second[2];
    int i;

    for (i = 0; i < 3; i++)
        _lerp(s[i], s[i+1], first[i], t);

    for (i = 0; i < 2; i++)
        _lerp(first[i], first[i+1], second[i], t);

    _lerp(second[0], second[1], s1[3], t);

    for (i = 0; i < 2; i++) {
        s1[0][i] = s[0][i];
        s1[1][i] = first[0][i];
        s1[2][i] = second[0][i];

        s2[0][i] = s1[3][i];
        s2[1][i] = second[1][i];
        s2[2][i] = first[2][i];
        s2[3][i] = s[3][i];
    }
}

static void
_spline_decompose(void (*draw)(float x, float y), spline_t s)
{
    float       t;
    spline_t    s1, s2;

    (*draw)(s[0][0], s[0][1]);

    /* If s is flat, we're done */
    while (!_is_flat(s)) {
        t = 1.0f;

        /* Iterate until s1 is flat */
        do {
            t = t/2.0f;
            _de_casteljau(s, s1, s2, t);
        } while (!_is_flat(s1));

        /* Draw to the end of s1 */
        (*draw)(s1[3][0], s1[3][1]);

        /* Replace s with s2 */
        memcpy(&s[0], &s2[0], sizeof (spline_t));
    }
    (*draw)(s[3][0], s[3][1]);
}

void draw(float x, float y)
{
    printf("%8g, %8g\n", x, y);
}

int main(int argc, char **argv)
{
    spline_t spline = {
        { 0.0f, 0.0f },
        { 0.0f, 256.0f },
        { 256.0f, -256.0f },
        { 256.0f, 0.0f }
    };

    _spline_decompose(draw, spline);
    return 0;
}

15 February, 2020 05:55AM

Russell Coker

Self Assessment

Background Knowledge

The Dunning Kruger Effect [1] is something everyone should read about. It’s the effect where people who are bad at something rate themselves higher than they deserve because their inability to notice their own mistakes prevents improvement, while people who are good at something rate themselves lower than they deserve because noticing all their mistakes is what allows them to improve.

Noticing all your mistakes all the time isn’t great (see Impostor Syndrome [2] for where this leads).

Erik Dietrich wrote an insightful article “How Developers Stop Learning: Rise of the Expert Beginner” [3] which I recommend that everyone reads. It is about how some people get stuck at a medium level of proficiency and find it impossible to unlearn bad practices which prevent them from achieving higher levels of skill.

What I’m Concerned About

A significant problem in large parts of the computer industry is that it’s not easy to compare various skills. In the sport of bowling (which Erik uses as an example) it’s easy to compare your score against people anywhere in the world, if you score 250 and people in another city score 280 then they are more skilled than you. If I design an IT project that’s 2 months late on delivery and someone else designs a project that’s only 1 month late are they more skilled than me? That isn’t enough information to know. I’m using the number of months late as an arbitrary metric of assessing projects, IT projects tend to run late and while delivery time might not be the best metric it’s something that can be measured (note that I am slightly joking about measuring IT projects by how late they are).

If the last project I personally controlled was 2 months late and I’m about to finish a project 1 month late does that mean I’ve increased my skills? I probably can’t assess this accurately as there are so many variables. The Impostor Syndrome factor might lead me to think that the second project was easier, or I might get egotistical and think I’m really great, or maybe both at the same time.

This is one of many resources recommending timely feedback for education [4], it says “Feedback needs to be timely” and “It needs to be given while there is still time for the learners to act on it and to monitor and adjust their own learning”. For basic programming tasks such as debugging a crashing program the feedback is reasonably quick. For longer term tasks like assessing whether the choice of technologies for a project was good the feedback cycle is almost impossibly long. If I used product A for a year long project does it seem easier than product B because it is easier or because I’ve just got used to its quirks? Did I make a mistake at the start of a year long project and if so do I remember why I made that choice I now regret?

Skills that Should be Easy to Compare

One would imagine that martial arts is a field where people have very realistic understanding of their own skills, a few minutes of contest in a ring, octagon, or dojo should show how your skills compare to others. But a YouTube search for “no touch knockout” or “chi” shows that there are more than a few “martial artists” who think that they can knock someone out without physical contact – with just telepathy or something. George Dillman [5] is one example of someone who had some real fighting skills until he convinced himself that he could use mental powers to knock people out. From watching YouTube videos it appears that such people convince the members of their dojo of their powers, and those people then faint on demand “proving” their mental powers.

The process of converting an entire dojo into believers in chi seems similar to the process of converting a software development team into “expert beginners”, except that martial art skills should be much easier to assess.

Is it ever possible to assess any skills if people trying to compare martial art skills often do it so badly?


It seems that any situation where one person is the undisputed expert has a risk of the “chi” problem if the expert doesn’t regularly meet peers to learn new techniques. If someone like George Dillman or one of the “expert beginners” that Erik Dietrich refers to was to regularly meet other people with similar skills and accept feedback from them they would be much less likely to become a “chi” master or “expert beginner”. For the computer industry meetup.com seems the best solution to this, whatever your IT skills are you can find a meetup where you can meet people with more skills than you in some area.

Here’s one of many guides to overcoming Impostor Syndrome [5]. Actually succeeding in following the advice of such web pages is not going to be easy.

I wonder if getting a realistic appraisal of your own skills is even generally useful. Maybe the best thing is to just recognise enough things that you are doing wrong to be able to improve and to recognise enough things that you do well to have the confidence to do things without hesitation.

15 February, 2020 03:57AM by etbe

February 14, 2020

hackergotchi for Anisa Kuci

Anisa Kuci

Outreachy post 4 - Career opportunities

As mentioned in my last blog posts, Outreachy is very interesting and I got to learn a lot already. Two months have already passed by quickly and there is still one month left for me to continue working and learning.

As I imagine all the other interns are thinking now, I am also thinking about what is going to be the next step for me. After such an interesting experience as this internship, thinking about the next steps is not that simple.

I have been contributing to Free Software projects for quite some years now. I have been part of the only FLOSS community in my country for many years and I grew up together with the community, advocating free software in and around Albania.

I have contributed to many projects, including Mozilla, OpenStreetMap, Debian, GNOME, Wikimedia projects etc. So, I am sure, the FLOSS world is definitely the right place for me to be. I have helped communities grow and I am very enthusiastic about it.

I have grown and evolved as a person through contributing to all the projects I have mentioned above. I have gained knowledge that I would not have had a chance to acquire if it were not for the “sharing knowledge” ideology that is so strong in the FLOSS environment.

Through organizing big and small events from 300 people conferences to 30 people bug squashing parties to 5 people strategy workshops, I have been able to develop skills because the community trusted me with responsibility in event organizing even before I was able to prove myself. I have been supported by great mentors who helped me learn on the job and left me with practical knowledge that I am happy to continue applying in the FLOSS community. I am thinking about formalizing my education in the marketing or communication areas to also gain some academic background and further strengthen the practical skills.

During Outreachy I have learned to use the bash command line much better. I have learned LaTeX as it was one of the tools that I needed to work on the fundraising materials. I have also improved a lot using git commands and feel much more confident now. I have worked a lot on fundraising while also learning Python very intensively, and programming is definitely a skill that I would love to deepen.

I know that foreign languages are something that I enjoy, as I speak English, Italian, Greek and of course my native language Albanian, but lately I learned that programming languages can be as much fun as the natural languages and I am keen on learning more of both.

I love working with people, so I hope in the future I will be able to continue working in environments where you interact with a diverse set of people.

14 February, 2020 12:21PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSimdJson 0.0.1 now on CRAN!

A fun weekend-morning project, namely wrapping the outstanding simdjson library by Daniel Lemire (with contributions by Geoff Langdale, John Keiser and many others) into something callable from R via a new package RcppSimdJson led to a first tweet on January 20, a reference to the brand new github repo, and CRAN upload a few days later—and then two weeks of nothingness.

Well, a little more than nothing as Daniel is an excellent “upstream” to work with who promptly incorporated two changes that arose from preparing the CRAN upload. So we did that. But CRAN being as busy and swamped as they are we needed to wait. The ten days one is warned about. And then some more. So yesterday I did a cheeky bit of “bartering” as Kurt wanted a favour with an updated digest version so I hinted that some reciprocity would be appreciated. And lo and behold he admitted RcppSimdJson to CRAN. So there it is now!

We have some upstream changes already in git, but I will wait a few days to let a week pass before uploading the now synced upstream code. Anybody who wants it sooner knows where to get it on GitHub.

simdjson is a gem. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. I highly recommend the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk).

The NEWS entry (from a since-added NEWS file) for the initial RcppSimdJson upload follows.

Changes in version 0.0.1 (2020-01-24)

  • Initial CRAN upload of first version

  • Comment-out use of stdout (now updated upstream)

  • Deactivate use of computed GOTOs for compiler compliance and CRAN Policy via #define

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 February, 2020 03:00AM

February 13, 2020

hackergotchi for Jonathan Carter

Jonathan Carter

Initial experiments with the Loongson Pi 2K

Recently, Loongson made some Pi 2K boards available to Debian developers and Aron Xu was kind enough to bring me one to FOSDEM earlier this month. It’s a MIPS64 based board with 2GB RAM, 2 gigabit ethernet cards, an m.2 (SATA) disk slot and a whole bunch more i/o. More details about the board itself is available on the Debian wiki, here is a quick board tour from there:

On my previous blog post I still had the protective wrapping on the acrylic case. Here it is all peeled off and polished after Holger pointed that out to me on IRC. I’ll admit I kind of liked the earthy feel that the protective covers had, but this is nice too.

The reason why I wanted this board is that I don’t have access to any MIPS64 hardware whatsoever, and it can be really useful for getting Calamares to run properly on MIPS64 on Debian. Calamares itself builds fine on this platform, but calamares-settings-debian will only work on amd64 and i386 right now (where it will either install grub-efi or grub-pc depending on which mode you booted, otherwise it will crash during installation). I already have lots of plans for the Bullseye release cycle (and even for Calamares specifically), so I’m not sure if I’ll get there but I’d like to get support for mips64 and arm64 into calamares-settings-debian for the bullseye release. I think it’s mostly just a case of detecting the platforms properly and installing/configuring the right bootloaders. Hopefully it’s that simple.

In the meantime, I decided to get to know this machine a bit better. I’m curious how it could be useful to me otherwise. All its expansion ports definitely seems interesting. First I plugged it into my power meter to check what power consumption looks like. According to this, it typically uses between 7.5W and 9W and about 8.5W on average.

I initially tried it out on an old Sun monitor that I salvaged from a recycling heap. It wasn’t working anymore but my anonymous friend replaced its power supply and its CFL backlight with an LED backlight, now it’s a really nice 4:3 monitor for my vintage computers. On a side-note, if you’re into electronics, follow his YouTube channel where you can see him repair things. Unfortunately the board doesn’t like this screen by default (just black screen when xorg started), I didn’t check if it was just a xorg configuration issue or a hardware limitation, but I just moved it to an old 720P TV that I usually use for my mini collection and it displayed fine there. I thought I’d just mention it in case someone tries this board and wonders why they just see a black screen after it boots.

I was curious whether these Ethernet ports could realistically do anything more than 100mbps (sometimes they go on a bus that maxes out way before gigabit does), so I installed iperf3 and gave it a shot. This went through 2 switches that have some existing traffic on them, but the ~85MB/s I got on my first test completely satisfied me that these ports are plenty fast enough.

Since I first saw the board, I was curious about the PCIe slot. I attached an older NVidia card (one that still runs fine with the free Nouveau driver), attached some external power to the card and booted it all up…

The card powers on and the fan enthusiastically spins up, but sadly the card is not detected on the Loongson board. I think you need some PC BIOS equivalent stuff to poke the card at the right places so that it boots up properly.

Disk performance is great, as can be expected with the SSD it has on board. It’s significantly better than the extremely slow flash you typically get on development boards.

I was starting to get curious about whether Calamares would run on this. So I went ahead and installed it along with calamares-settings-debian. I wasn’t even sure it would start up, but lo and behold, it did. This is quite possibly the first time Calamares has ever started up on a MIPS64 machine. It started up in Chinese since I haven’t changed the language settings yet in Xfce.

I was curious whether Calamares would start up on the framebuffer. Linux framebuffer support can be really flaky on platforms with weird/incomplete Linux drivers. I ran ‘calamares -platform linuxfb’ from a virtual terminal and it just worked.

This is all very promising and makes me a lot more eager to get it all working properly and get a nice image generated so that you can use Calamares to install Debian on a MIPS64 board. Unfortunately, at least for now, this board still needs its own kernel so it would need its own unique installation image. Hopefully all the special bits will make it into the mainline Linux kernel before too long. Graphics performance wasn’t good, but I noticed that they do have some drivers on GitHub that I haven’t tried yet, but that’s an experiment for another evening.


  • Price: A few people asked about the price, so I asked Aron if he could share some pricing information. I got this one for free; it’s an unreleased demo model. At least two models might be released based on this: a smaller board with fewer pinouts for about €100, and the current demo version at about $200 (CNY 1399), so the final version might cost somewhere in that ballpark too. These aren’t any kind of final prices, and I don’t represent Loongson in any capacity, but at least this should give you some idea of what it would cost.
  • More boards: Not all Debian Developers who requested a board have received theirs; Aron said that more boards should become available by March/April.

13 February, 2020 08:29PM by jonathan


Romain Perier

Meetup Debian Toulouse

Hi there !

My company Viveris is opening its office for hosting a Debian Meetup in Toulouse this summer (June 5th or June 12th).

Everyone is welcome to this event. We're currently looking for volunteers to present demos, lightning talks or full talks (following the talks, any kind of hacking session is possible, like bug triaging, coding sprints, etc).

Any kind of topic is welcome.

See the announcement (in French) for more details.

13 February, 2020 06:50PM by Romain Perier (noreply@blogger.com)

February 12, 2020


Dirk Eddelbuettel

digest 0.6.24: Some more refinements

Another new version of digest arrived on CRAN (and also on Debian) earlier today.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 889k monthly downloads with 255 direct reverse dependencies and 7340 indirect reverse dependencies) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

This release comes a few months after the previous one. It contains a few contributed fixes, some of which prepare for R 4.0.0 in its current development. This includes a testing change to the matrix/array class, and corrects the registration for the PMurHash routine as pointed out by Tomas Kalibera and Kurt Hornik (who also kindly reminded me to finally upload this as I had made the fix already in December). Moreover, Will Landau sped up one operation affecting his popular drake pipeline toolkit. Lastly, Thierry Onkelinx corrected one more aspect related to sha1.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 February, 2020 11:17PM


Paulo Henrique de Lima Santana

Bits from MiniDebCamp Brussels and FOSDEM 2020


I traveled to Brussels from January 28th to February 6th to join MiniDebCamp and FOSDEM 2020. It was my second trip to Brussels: I was there in 2019 to join the Video Team Sprint and FOSDEM.

MiniDebCamp took place at Hackerspace Brussels (HSBXL) for 3 days (January 29-31). My initial idea was to travel on the 27th and arrive in Brussels on the 28th, to rest and go to MiniDebCamp on the first day, but I ended up buying a ticket leaving Brazil on the 28th because it was cheaper.

Trip from Curitiba to Brussels

I left Curitiba on the 28th at 13:20 and arrived in São Paulo at 14:30. The flight from São Paulo to Munich departed at 18h and, after 12 hours, I arrived there at 10h (local time). The flight was 30 minutes late because we had to wait for airport staff to remove ice from the ground. I was worried because my flight to Brussels would depart at 10:25 and I still had to get through immigration.


After walking a lot I arrived at the immigration desk (there was no line) and got my passport stamped; then I walked a lot again, took a train, and arrived at my gate, where that flight was late too. So everything was going well. I departed Munich at 10:40 and arrived in Brussels on the 29th at 12h.

I went from the airport to the Hostel Galia by bus, train and another bus, to check in and leave my luggage. On the way I had lunch at “Station Brussel Noord” because I was really hungry, and I arrived at the hostel at 15h.

My reservation was in a shared dorm, and when I arrived there I met Marcos, a Brazilian guy from Brasília who was there for an international Magic card competition. He was in Brussels for the first time and was a little lost about what to do in the city. I invited him to go downtown to look for a cellphone store because we needed to buy SIM cards. I wanted to buy from Base, and the hostel front desk people told us to go to the store at Rue Neuve. I showed the Grand-Place to Marcos and, after we bought SIM cards, we went to Primark because he needed to buy a towel. By then it was night, so we decided to buy food and have dinner at the hostel. I gave up on going to HSBXL because I was tired and thought it was not a good idea to go there for the first time at night.

MiniDebCamp day 1

On Thursday (30th) morning I went to HSBXL. I walked from the hostel to “Gare du Midi” and, after walking from one side to the other, I finally found the bus stop. I got off the bus at the fourth stop, in front of the hackerspace building. It was a little hard to find the right entrance, but I managed. I arrived at the HSBXL room, talked to the other DDs there, and found an empty table for my laptop. More DDs kept arriving throughout the day.


I read and answered e-mails and went out for a walk in Anderlecht, to get to know the city and to look for a place to have lunch, since I didn’t want to eat a sandwich at the restaurant in the building. I stopped at the Lidl and Aldi stores to buy some food for later, and at a Turkish restaurant for lunch; the food was very good. After that, I decided to walk a little more and visit the Jean-Claude Van Damme statue to take some photos :-)





Back at HSBXL, my main interest at MiniDebCamp was to join the DebConf Video Team sprint to learn how to set up a voctomix and gateway machine to be used at MiniDebConf Maceió 2020. I asked Nicolas some questions about that and he suggested I do a fresh installation on the Video Team machine using Buster.


I installed Buster and, using the USB installer and ansible playbooks, it was possible to set up the machine as Voctotest. I had already done this setup at home using a simple machine without a Blackmagic card or a camera. From that point on, I didn’t know what to do. So Nicolas came and started to set up the machine, first as Voctomix and then as Gateway, while I watched and learned. After a while, everything worked perfectly with a camera.

It was night and the group ordered some pizzas to eat with beers sold by HSBXL. I was celebrating too, because during the day I had received messages and a call from Rentcars: I was hired by them! Before the trip I had gone to an interview at Rentcars in the morning, and I got the positive answer while I was in Brussels.


Before I left the hackerspace, I received the door codes to open HSBXL early the next day. Some days before MiniDebCamp, Holger had asked if someone could open the room on Friday morning and I had answered that I could. I left at 22h and went back to the hostel to sleep.

MiniDebCamp day 2

On Friday I arrived at HSBXL at 9h, opened the room, and took some photos of the empty space. It is amazing how we can use spaces like that in Europe. Last year I was at MiniDebConf Hamburg at Dock Europe. I miss this kind of building and hackerspace in Curitiba.



I installed and set up the Video Team machine again, but this time alone, following what Nicolas had done before. And everything worked perfectly again. Nicolas asked me to create a new ansible playbook joining voctomix and gateway to make the installation easier, send it as a MR, and test it.

I went out to have lunch at the same restaurant as the day before and discovered there was a Leonidas factory outlet in front of HSBXL, meaning I could buy Belgian chocolates cheaper. I went there and bought a box with 1.5kg of chocolates.


When I came back to HSBXL, I started to test the new ansible playbook. The test took longer than I expected and, at the end of the day, Nicolas needed to take the equipment away. It was really great to do this hands-on work with the real equipment used by the Video Team. I learned a lot!

To celebrate the end of MiniDebCamp, we had free sponsored beer! I have to say I drank too much and it was complicated getting back to the hostel that night :-)



A complete report from DebConf Video Team can be read here.

Many thanks to Martin Michlmayr for helping with flight tickets, to Nicolas Dandrimont for teaching me Video Team stuff, to Kyle Robbertze for setting up the Video Sprint, to Holger Levsen for organizing MiniDebCamp, and to HSBXL people for receiving us there.

FOSDEM day 1

FOSDEM 2020 took place at the ULB on February 1st and 2nd. On the first day I took a train and listened to a group of Brazilians talking in Portuguese; they were going to FOSDEM too. I arrived there around 9:30 and went to the Debian booth because I had volunteered to help and was bringing t-shirts from Brazil to sell. It was madness, with people buying Debian stuff.




After a while I had to leave the booth because I had volunteered to film the talks in the Janson auditorium from 11h to 13h. I had done this job last year and decided to do it again because it is a way to help the event, and they gave me a t-shirt and a free meal ticket that I exchanged for two sandwiches :-)




After lunch I walked around the booths, got some stickers, talked with people and drank some beers at the OpenSUSE booth until the end of the day. I left FOSDEM, went to the hostel to leave my bag, and then went to the Debian dinner organized by Marco d’Itri at Chezleon.


The dinner was great, with 25 very nice Debian people. Afterwards we ate waffles, and some of us went to Delirium, but I decided to go back to the hostel to sleep.

FOSDEM day 2

On the second and last day I arrived around 9h, spent some time at the Debian booth, and went to the Janson auditorium to help again from 10h to 13h.


I got the free meal ticket and, after lunch, I walked around, visited booths, and went to the Community devroom to watch talks. The first was “Recognising Burnout” by Andrew Hutchings; listening to him, I believe I had burnout symptoms while organizing DebConf19. The second was “How Does Innersource Impact on the Future of Upstream Contributions?” by Bradley Kuhn. Both talks were great.

After the end of FOSDEM, a group of us went to have dinner at a restaurant near the ULB. We spent a great time together. After the dinner we took the same train and did a group photo.


Two days to explore Brussels

With the end of MiniDebCamp and FOSDEM, I had Monday and Tuesday free before returning to Brazil on Wednesday. I wanted to join Config Management Camp in Ghent, but I decided to stay in Brussels and visit some places. I visited:



  • Carrefour - to buy beers to bring to Brazil :-)






Last day and returning to Brazil

On Wednesday (5th) I woke up early to finish packing and check out. I left the hostel and took a bus, a train and another bus to Brussels Airport. My flight departed at 15:05 to Frankfurt, arriving there at 15:55. I had thought about visiting the city, because I had to wait for 6 hours and had read it was possible to look around in that time, but I was very tired and decided to stay at the airport.

I walked to my gate, went through immigration to get my passport stamped, and waited until 22:05, when my flight departed to São Paulo. After 12 hours flying, I arrived in São Paulo at 6h (local time). In São Paulo, when we arrive on an international flight, we must collect all our luggage and go through customs. After I dropped my luggage off with the domestic airline, I went to the gate to wait for my flight to Curitiba.

The flight was supposed to depart at 8:30 but was 20 minutes late. I arrived in Curitiba at 10h, took an Uber, and finally I was home.

Last words

I wrote a diary (in Portuguese) telling about each of my days in Brussels. It can be read starting here.

All my photos are here

Many thanks to Debian for sponsoring my trip to Brussels, and to DPL Sam Hartman for approving it. It’s a unique opportunity to go to Europe to meet and work with a lot of DDs, and to participate in a very important worldwide free software event.

12 February, 2020 10:00AM


Louis-Philippe Véronneau

Announcing miniDebConf Montreal 2020 -- August 6th to August 9th 2020

This is a guest post by the miniDebConf Montreal 2020 orga team on pollo's blog.

Dear Debianites,

We are happy to announce miniDebConf Montreal 2020! The event will take place in Montreal, at Concordia University's John Molson School of Business from August 6th to August 9th 2020. Anybody interested in Debian development is welcome.

Following the announcement of the DebConf20 location, our desire to participate became incompatible with our commitment to the Boycott, Divestment and Sanctions (BDS) campaign launched by Palestinian civil society in 2005. Hence, many active Montreal-based Debian developers, along with a number of other Debian developers, have decided not to travel to Israel in August 2020 for DebConf20.

Nevertheless, recognizing the importance of DebConf for the health of both the developer community and the project as a whole, we decided to organize a miniDebConf just prior to DebConf20, in the hope that fellow developers who may have otherwise skipped DebConf entirely this year might join us instead. Fellow developers who decide to travel to both events are of course most welcome.

Registration is open

Registration is open now, and free, so go add your name and details on the Debian wiki.

We'll accept registrations until July 25th, but don't wait too long before making your travel plans! Finding reasonable accommodation in Montreal during the summer can be hard if you don't plan in advance.

We have you covered with lots of attendee information already.

Sponsors wanted

We're looking for sponsors willing to help making this event possible. Information on sponsorship tiers can be found here.

Get in touch

We gather in #debian-quebec on irc.debian.org and on the debian-dug-quebec@lists.debian.org list.

12 February, 2020 05:00AM by The miniDebConf Montreal 2020 orga team


Norbert Preining

MuPDF, QPDFView and other Debian updates

Update 2020-02-24: The default Debian packages for mupdf and pymupdf have been updated to the current version (or newer), so I have removed the packages from my repo. Thanks to the maintainers for updating! Qpdfview is still outdated, though.

For those interested, I have updated mupdf (1.16.1), pymupdf (1.16.10), and qpdfview (current bzr sources) to the latest versions and added them to my local Debian apt repository:

deb https://www.preining.info/debian unstable main
deb-src https://www.preining.info/debian unstable main

QPDFView now has the Fitz (MuPDF) backend available.

At the same time I have updated Elixir to 1.10.1. All packages are in source and amd64 binary format. Information on the other apt repositories available here can be found in this post.


12 February, 2020 03:03AM by Norbert Preining

February 11, 2020


Sean Whitton

Traditional Perl 5 classes and objects

Last summer I read chromatic’s Modern Perl, and was recommended to default to using Moo or Moose to define classes, rather than writing code to bless things into objecthood myself. At the time the project I was working on needed to avoid any dependencies outside of the Perl core, so I made a mental note of the advice, but didn’t learn how to use Moo or Moose. I do remember feeling like I was typing out a lot of boilerplate, and wishing I could use Moo or Moose to reduce that.

In recent weeks I’ve been working on a Perl distribution which can freely use non-core dependencies from CPAN, and so right from the start I used Moo to define my classes. It seemed like a no-brainer because it’s more declarative; it didn’t seem like there could be any disadvantages.

At one point, when writing a new class, I got stuck. I needed to call one of the object’s methods immediately after instantiation of the object. BUILDARGS is, roughly, the constructor for Moo/Moose classes, so I started there, but you don’t have access to the new object during BUILDARGS, so you can’t simply call its methods on it. So what I needed to do was change my design around so as to be more conformant to the Moo/Moose view of the world, such that the work of the method call could get done at the right time. I mustn’t have been in a frame of mind for that sort of thinking at the time because what I ended up doing was dropping Moo from the package and writing a constructor which called the method on the new object, after blessing the hash, but before returning a hashref to the caller.

This was my first experience of having the call to bless() not be the last line of my constructor, and I believe that this simple dislocation helped significantly improve my grip on core Perl 5 classes and objects: the point is that they’re not declarative—they’re collections of functionality to operate on encapsulated data, where the instantiation of that data, too, is a piece of functionality. I had been thinking about classes too declaratively, and this is why writing out constructors and accessors felt like boilerplate. Now writing those out feels like carefully setting down precisely what functionality for operating on the encapsulated data I want to expose. I also find core Perl 5 OO quite elegant (in fact I find pretty much everything about Perl 5 highly elegant, except of course for its dereferencing syntax; not sure why this opinion is so unpopular).
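To make the pattern concrete, here is a minimal sketch in core Perl 5. The class name and attribute are invented for illustration (this is not code from the distribution discussed above): the constructor blesses the hash first, then calls a method on the new object before returning it.

```perl
package Counter;
use strict;
use warnings;

sub new {
    my ($class, %args) = @_;
    # bless() is not the last thing the constructor does: once the hash
    # is blessed, we can call methods on the new object before handing
    # it back to the caller.
    my $self = bless { count => $args{start} // 0 }, $class;
    $self->increment;    # instantiation work done via a method call
    return $self;
}

sub increment {
    my $self = shift;
    return ++$self->{count};
}

sub count {
    my $self = shift;
    return $self->{count};
}

package main;

my $counter = Counter->new(start => 5);
print $counter->count, "\n";    # 6: increment already ran during construction
```

Nothing here is declarative: the constructor, the accessor, and the post-bless method call are all just subroutines operating on the encapsulated hash.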

I then came across the Cor proposal and followed a link to this recent talk criticising Moo/Moose. The speaker, Tadeusz Sośnierz, argues that Moo/Moose implicitly encourages you to have an accessor for each and every piece of the encapsulated data in your class, which is bad OO. Sośnierz pointed out that if you take care to avoid generating all these accessors, while still having Moo/Moose store the arguments to the constructor provided by the user in the right places, you end up back with a new kind of boilerplate, which is Moo/Moose-specific, and arguably worse than what’s involved in defining core Perl 5 classes. So, he asks, if we are going to take care to avoid generating too many accessors, and thereby end up with boilerplate, what are we getting out of using Moo/Moose over just core Perl 5 OO? There is some functionality for typechecking and method signatures, and we have the ability to use roles instead of multiple-inheritance.

After watching Sośnierz talk, I have been rethinking about whether I should follow Modern Perl’s advice to default to using Moo/Moose to define new classes, because I want to avoid the problem of too many accessors. Considering the advantages of Moo/Moose Sośnierz ends up with at the end of his talk: I find the way that Perl provides parameters to subroutines and methods intuitive and flexible, and don’t see the need to build typechecking into that process—just throw some exceptions with croak() if the types aren’t right, before getting on with the business logic of the subroutine or method. Roles are a different matter. These are certainly an improvement on multiple inheritance. But there is Role::Tiny that you can use instead of Moo/Moose.
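The croak()-based approach mentioned above can be sketched as follows; the Circle class and its radius check are invented for illustration, using only the core Carp and Scalar::Util modules.

```perl
package Circle;
use strict;
use warnings;
use Carp qw(croak);
use Scalar::Util qw(looks_like_number);

sub new {
    my ($class, %args) = @_;
    # Throw exceptions if the arguments aren't right, then get on with
    # the business logic -- no Moo/Moose type-constraint machinery.
    croak "radius is required" unless defined $args{radius};
    croak "radius must be a positive number"
        unless looks_like_number($args{radius}) && $args{radius} > 0;
    return bless { radius => $args{radius} }, $class;
}

sub area {
    my $self = shift;
    return 4 * atan2(1, 1) * $self->{radius} ** 2;    # pi * r^2
}

package main;

my $circle = Circle->new(radius => 2);
printf "%.2f\n", $circle->area;    # 12.57

eval { Circle->new(radius => -1) };
print "constructor croaked as expected\n" if $@;
```

croak() reports the error from the caller's perspective, which is usually what you want for argument validation in a constructor.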

So for the time being it seems I should go back to blessing hashes, and that I should also get to grips with Role::Tiny. I don’t have a lot of experience with OO design, so can certainly imagine changing my mind about things like Perlish typechecking and subroutine signatures (I also don’t understand, yet, why some people find the convention of prefixing private methods and attributes with an underscore not to be sufficient—Cor wants to add attribute and method privacy to Perl). However, it seems sensible to avoid using things like Moo/Moose until I can be very clear in my own mind about what advantages using them is getting me. Bad OO with Moo/Moose seems worse than occasionally simplistic, occasionally tedious, but correct OO with the Perl 5 core.

11 February, 2020 04:23PM


Paulo Henrique de Lima Santana

My free software activities in January 2020


Hello, this is my first monthly report about activities in Debian and Free Software in general.

Since the end of DebConf19 in July 2019 I had been avoiding working on Debian stuff because the event was too stressful for me. For months I felt discouraged from contributing to the project, until December.


In December I watched two new video tutorial series from João Eriberto about:

  • Debian Packaging - using git and gbp, parts 1, 2, 3, 4, 5 and 6
  • Debian Packaging with docker, parts 1 and 2

Since then, I decided to update my packages using gbp and docker, and it has been great. In December and January I worked on the following packages.

I did QA Uploads of:

I adopted and packaged new release of:

  • ddir 2019.0505 closing bugs #903093 and #920066.

I packaged new releases of:

I packaged new upstream versions of:

I backported to buster-backports:

I packaged:

MiniDebConf Maceió 2020

I helped to edit the MiniDebConf Maceió 2020 website.

I wrote the sponsorship brochure and sent it to some Brazilian companies.

I sent a message with a call for activities to national and international mailing lists.

I sent a post to Debian Micronews.


I sent a message to the UFPR Education Director asking if we could use the Campus Rebouças auditorium to organize FLISOL there in April, but he said no. We are still looking for a place for FLISOL.


I started to study DevOps culture and, for that, I watched a lot of videos from LINUXtips.

And I read the book “Docker para desenvolvedores” written by Rafael Gomes.

MiniDebCamp in Brussels

I traveled to Brussels to join MiniDebCamp on January 29-31 and FOSDEM on February 1-2.

At MiniDebCamp my main interest was to join the DebConf Video Team sprint to learn how to set up a voctomix and gateway machine to be used at MiniDebConf Maceió 2020. I was able to set up the Video Team machine, installing Buster and using ansible playbooks. It was a very nice opportunity to learn how to do that.

A complete report from DebConf Video Team can be read here.

I wrote a diary (in Portuguese) telling about each of my days in Brussels. It can be read starting here. I intend to write more in English about MiniDebCamp and FOSDEM in a specific post.

Many thanks to Debian for sponsoring my trip to Brussels. It’s a unique opportunity to go to Europe to meet and work with a lot of DDs.


I did a MR to the DebConf20 website fixing some texts.

I joined the WordPress Meetup

I joined a live stream from Comunidade Debian Brasil to talk about MiniDebConf Maceió 2020.

I watched an interesting video, “Who's afraid of Debian Sid”, from the debxp channel.

I deleted the Agenda de eventos de Software Livre e Código Aberto because I wasn’t receiving events to add there, and I didn't have free time to publicize it.

I started to write the list of FLOSS events for 2020 that I have kept on my website for many years.

Finally, I have been watching videos from DebConf19. So far, I have seen these great talks:

  • Bastidores Debian - entenda como a distribuição funciona
  • Benefícios de uma comunidade local de contribuidores FLOSS
  • Caninos Loucos: a plataforma nacional de Single Board Computers para IoT
  • Como obter ajuda de forma eficiente sobre Debian
  • Comunidades: o bom o ruim e o maravilhoso
  • O Projeto Debian quer você!
  • A newbie’s perspective towards Debian
  • Bits from the DPL
  • I’m (a bit) sick of maintaining piuparts.debian.org (mostly) alone, please help

That’s all folks!

11 February, 2020 10:00AM

February 10, 2020


Markus Koschany

My Free Software Activities in January 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in February) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Again Reiner Herrmann did a very good job with updating some of the most famous FOSS games in Debian. I reviewed and sponsored supertux, supertuxkart 1.1 and love 11.3, plus several updates to fix build failures with the latest version of scons in Debian. Reiner Herrmann, Moritz Mühlenhoff and Phil Wyett contributed patches to fix release critical bugs in netpanzer, boswars, btanks, and xboxdrv.
  • I packaged new upstream versions of minetest 5.1.1, empire 1.15 and bullet 2.89.
  • I backported freeciv 2.6.1 to buster-backports and
  • applied a patch by Asher Gordon to fix a teleporter bug in berusky2. He also submitted another patch to address even more bugs and I hope to review and upload a new revision soon.

Debian Java


  • As the maintainer I requested the removal of pyblosxom, a web blog engine written in Python 2. Unfortunately pyblosxom is no longer actively maintained and the port to Python 3 has never been finished. I thought it would be better to remove the package now since we have a couple of good alternatives like Hugo or Jekyll.
  • I packaged new upstream versions of wabt and privacybadger.

Debian LTS

This was my 47th month as a paid contributor and I have been paid to work 15 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2065-1. Issued a security update for apache-log4j1.2 fixing 1 CVE.
  • DLA-2077-1. Issued a security update for tomcat7 fixing 2 CVE.
  • DLA-2078-1. Issued a security update for libxmlrpc3-java fixing 1 CVE.
  • DLA-2097-1. Issued a security update for ppp fixing 1 CVE.
  • DLA-2098-1. Issued a security update for ipmitool fixing 1 CVE.
  • DLA-2099-1. Issued a security update for checkstyle fixing 1 CVE.


Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my twentieth month and I have been paid to work 10 hours on ELTS.

  • ELA-208-1. Issued a security update for tomcat7 fixing 2 CVE.
  • ELA-209-1. Issued a security update for linux fixing 41 CVE.
  • Investigated CVE-2019-17023 in nss which is needed to build and run OpenJDK 7. I found that the vulnerability did not affect this version of nss because of the incomplete and experimental support for TLS 1.3.

Thanks for reading and see you next time.

10 February, 2020 10:57PM by apo

Ruby Team

Ruby Team Sprint 2020 in Paris - Day Five - We’ve brok^done it

On our last day we met like every day before, working on our packages, fixing and uploading them. The transitions went on. Antonio, Utkarsh, Lucas, Deivid, and Cédric took some time to examine the gem2deb bug reports. We uploaded the last missing Kali Ruby package. And we had our last discussion, covering the future of the team and an evaluation of the sprint:

Last discussion round of the Ruby Team Sprint 2020 in Paris

As a result:

  • We will examine ways to find leaf packages.
  • We plan to organize another sprint next year right before the release freeze, probably again about FOSDEM time. We tend to have it in Berlin but will explore the locations available and the costs.
  • We will have monthly IRC meetings.

We think the sprint was a success. Some stuff got (intentionally and less intentionally) broken along the way, and a lot of stuff got fixed, too. In the end we took a big step towards a successful Ruby 2.7 transition.

So we want to thank

  • the Debian project and our DPL Sam for sponsoring the event,
  • Offensive Security for sponsoring the event too,
  • Sorbonne Université and LPSM for hosting us,
  • Cédric Boutillier for organizing the sprint and kindly hosting us,
  • and really everyone who attended, making this a success: Antonio, Abhijith, Georg, Utkarsh, Balu, Praveen, Sruthi, Marc, Lucas, Cédric, Sebastien, Deivid, Daniel.
Group photo; from the left in the back: Antonio, Abhijith, Georg, Utkarsh, Balu, Praveen, Sruthi, Josy. And in the front: Marc, Lucas, Cédric, Sebastien, Deivid, Daniel.

In the evening we finally closed the venue which hosted us for 5 days, cleaned up, and went for a last beer together (at least for now). Some of us will stay in Paris a few days longer and finally get to see the city.

Eiffel Tower, Paris (February 2020)

Goodbye Paris and safe travels to everyone. It was a pleasure.

10 February, 2020 10:45PM by Daniel Leidert (dleidert@debian.org)

Utkarsh Gupta

Debian Activities for January 2020

Here’s my (fourth) monthly update about the activities I’ve done in Debian this January.

Debian LTS

This was my fourth month as a Debian LTS paid contributor.
I was assigned 23.75 hours and worked on the following things:

CVE Fixes and Announcements:

  • Issued DLA 2060-1, fixing CVE-2020-5504, for phpmyadmin.
    Details here:

    In phpMyAdmin 4 before 4.9.4 and 5 before 5.0.1, SQL injection exists in the user accounts page. A malicious user could inject custom SQL in place of their own username when creating queries to this page. An attacker must have a valid MySQL account to access the server.

    For Debian 8 “Jessie”, this problem has been fixed in version 4:4.2.12-2+deb8u8.
    Furthermore, worked on preparing the security update for Stretch and Buster with the original maintainer.

  • Issued DLA 2063-1, fixing CVE-2019-3467 for debian-lan-config.
    Details here:

    In debian-lan-config < 0.26, configured too permissive ACLs for the Kerberos admin server allowed password changes for other Kerberos user principals.

    For Debian 8 “Jessie”, this problem has been fixed in version 0.19+deb8u2.

  • Issued DLA 2070-1, fixing CVE-2019-16779, for ruby-excon.
    Details here:

    In RubyGem excon before 0.71.0, there was a race condition around persistent connections, where a connection which is interrupted (such as by a timeout) would leave data on the socket. Subsequent requests would then read this data, returning content from the previous response.

    For Debian 8 “Jessie”, this problem has been fixed in version 0.33.0-2+deb8u1.
    Furthermore, sent a patch to the Security team for Stretch and Buster.

    P.S. this backporting took the most time and effort this month.

  • Issued DLA 2090-1, fixing CVE-2020-7039, for qemu.
    Details here:

    tcp_emu in tcp_subr.c in libslirp 4.1.0, as used in QEMU 4.2.0, mismanages memory, as demonstrated by IRC DCC commands in EMU_IRC. This can cause a heap-based buffer overflow or other out-of-bounds access which can lead to a DoS or potential execute arbitrary code.

    For Debian 8 “Jessie”, this problem has been fixed in version 1:2.1+dfsg-12+deb8u13.


  • Triaged samba, cacti, storebackup, and qemu.

  • Checked with upstream of ruby-rack for their CVE fix which induces regression.

  • Worked a bit on ruby-rack-cors but couldn’t complete it because of the Amsterdam -> Brussels travel. Thanks to Brian for completing it \o/

Debian Uploads

This was a great month! MiniDebCamp -> FOSDEM -> Ruby Sprints. Blog post soon :D
In any case, in the month of January, I did the following work:

New Version:

  • ruby-haml-rails ~ 2.0.1-1 (to unstable).

Source-Only and Other Uploads:

  • golang-github-zyedidia-pty ~ 1.1.1+git20180126.3036466-3 (to unstable).
  • ruby-benchmark-suite ~ 1.0.0+git.20130122.5bded6-3 (to unstable).
  • golang-github-robertkrimen-otto ~ 0.0+git20180617.15f95af-2~bpo10+1 (to buster-backports).
  • golang-github-zyedidia-pty ~ 1.1.1+git20180126.3036466-3~bpo10+1 (to buster-backports).
  • golang-github-mitchellh-go-homedir ~ 1.1.0-1~bpo10+1 (to buster-backports).
  • golang-golang-x-sys ~ 0.0+git20190726.fc99dfb-1~bpo10+1 (to buster-backports).
  • golang-github-mattn-go-isatty ~ 0.0.8-2~bpo10+1 (to buster-backports).
  • golang-github-mattn-go-runewidth ~ 0.0.7-1~bpo10+1 (to buster-backports).
  • golang-github-dustin-go-humanize ~ 1.0.0-1~bpo10+1 (to buster-backports).
  • golang-github-blang-semver ~ 3.6.1-1~bpo10+1 (buster-backports).
  • golang-github-flynn-json5 ~ 0.0+git20160717.7620272-2~bpo10+1 (to buster-backports).
  • golang-github-zyedidia-terminal ~ 0.0+git20180726.533c623-2~bpo10+1 (to buster-backports).
  • golang-github-go-errors-errors ~ 1.0.1-3~bpo10+1 (to buster-backports).
  • python-debianbts ~ 3.0.2~bpo10+1 (to buster-backports).

Bug Fixes:

  • #945232 for ruby-benchmark-suite.
  • #946904 for ruby-excon (CVE-2019-19779).

Reviews and Sponsored Uploads:

  • phpmyadmin for William Desportes.
  • Outreachy mentoring for GitLab project for Sakshi Sangwan.
  • Raised various MRs upstream to sync Debian’s package version with GitLab’s upstream.

One exciting blog post coming very soon.

Until next time.
:wq for today.

10 February, 2020 09:50PM

Reproducible Builds

Reproducible Builds in January 2020

Welcome to the January 2020 report from the Reproducible Builds project. In our reports we outline the most important things that we have been up to. In this month’s report, we cover:

  • Upstream news & event coverage: Reproducing the Telegram messenger, etc.
  • Software development: Updates and improvements to our tooling
  • Distribution work: More work in Debian, openSUSE & friends
  • Misc news: From our mailing list & how to get in touch etc.

What are reproducible builds?

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
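The "consensus" step can be as simple as independent rebuilders comparing cryptographic digests of the artefacts they each produced; a minimal sketch:

```python
import hashlib

def artefact_digest(data: bytes) -> str:
    # Each independent rebuilder publishes the digest of the binary
    # they produced; matching digests mean bit-for-bit identical builds.
    return hashlib.sha256(data).hexdigest()

# Two independent rebuilds of the same source should agree exactly.
build_a = b"\x7fELF...same-bytes"
build_b = b"\x7fELF...same-bytes"
print(artefact_digest(build_a) == artefact_digest(build_b))
```

Any single differing bit changes the digest, so third parties need only exchange short hashes, not whole binaries, to detect a compromised build.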

If you are interested in contributing, please visit the Contribute page on our website.

Upstream news & event coverage

The Telegram messaging application has documented full instructions for verifying that its original source code is exactly the same code that is used to build the versions available on the Apple App Store and Google Play.

Reproducible builds were mentioned in a panel on Software Distribution with Sam Hartman, Richard Fontana, & Eben Moglen at the Software Freedom Law Center’s 15th Anniversary Fall Conference (at ~35m21s).

Vagrant Cascadian will present a talk at SCALE 18x in Pasadena, California on March 8th titled There and Back Again, Reproducibly.

Matt Graeber (@mattifestation) posted on Twitter that:

If you weren’t aware of the reason Portable Executable timestamps in Win 10 binaries were nonsensical, Raymond’s post explains the reason: to support reproducible builds.

… referencing an article by Raymond Chen from January 2018 which, amongst other things, mentions:

One of the changes to the Windows engineering system begun in Windows 10 is the move toward reproducible builds.

Jan Nieuwenhuizen announced the release of GNU Mes 0.22. Vagrant Cascadian subsequently uploaded this version to Debian which produced a bit-for-bit identical mescc-mes-static binary with the mes-rb5 package in GNU Guix.

Software development


diffoscope is our in-depth and content-aware diff-like utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of nondeterministic behaviour.

This month, diffoscope versions 135 and 136 were uploaded to Debian unstable by Chris Lamb. He also made the following changes to diffoscope itself, including:

  • New features:

    • Support external difference tools such as Meld, etc. similar to git-difftool(1). (#87)
    • Extract resources.arsc files as well as classes.dex from Android .apk files to ensure that we show the differences there. (#27)
    • Fallback to the regular .zip container format for .apk files if apktool is not available. [][][][]
    • Drop --max-report-size-child and --max-diff-block-lines-parent; both had been scheduled for removal since January 2018. []
    • Append a comment to a difference if we fallback to a less-informative container format but we are missing a tool. [][]
  • Bug fixes:

    • No longer raise a KeyError exception if we request an invalid member from a directory container. []
  • Documentation/workflow improvements:

    • Clarify that “install X” in various outputs actually refers to system-level packages. []
    • Add a note to the Contributing documentation to suggest enabling concurrency when running the tests locally. []
    • Include the CONTRIBUTING.md file in the PyPI.org release. [][]
  • Logging improvements:

    • Log a debug-level message if we cannot open a file as container due to a missing tool to assist in diagnosing issues. []
    • Correct a debug message related to compare_meta calls to quote the arguments correctly. []
    • Add the current PATH environment variable to the Normalising locale... debug-level message. []
    • Print the Starting diffoscope $VERSION line as the first line of the log as we are, well, starting diffoscope. []
    • If we don’t know the HTML output name, don’t emit an enigmatically truncated HTML output for debug message. []
  • Tests:

    • Don’t exhaustively output the entire HTML report when testing the regression for #875281; parsing the JSON and pruning the tree should be enough. (#84)
    • Refresh and update the fixtures for the .ico tests to match the latest version of Imagemagick in Debian unstable. []
  • Code improvements:

    • Add a .git-blame-ignore-revs file to improve the output of git-blame(1) by ignoring large changes when introducing the Black source code reformatter and update the CONTRIBUTING.md guide on how to optionally use it locally. []
    • Add a noqa line to avoid a false-positive Flake8 “unused import” warning. []
    • Move logo.svg to under the doc/ directory [] and make setup.py executable [].
    • Tidy diffoscope.main’s configure method. [][][][]
    • Drop an assertion that is guaranteed by parallel if conditional [] and an unused “Difference” import from the APK comparator. []
    • Turn down the “volume” for a recommendation in a comment. []
    • Rename the diffoscope.locale module to diffoscope.environ as we are modifying things beyond just the locale (eg. calling tzset, etc.) []
    • Factor-out the generation of foo not available in path comment messages into the exception that raises them [] and factor out running all of our many zipinfo calls into a new method [].
  • trydiffoscope is the web-based version of diffoscope. This month, Chris Lamb fixed the PyPI.org release by adding the trydiffoscope script itself to the MANIFEST file and performing another release cycle. []

In addition, Marc Herbert adjusted the cbfstool tests to search for expected keywords in the output, rather than specific output [], fixed a misplaced debugging line [] and added a “Testing” section to the CONTRIBUTING.rst [] file. Vagrant Cascadian updated to diffoscope 135 in GNU Guix.


reprotest is our end-user tool that builds the same source code twice in widely differing environments and then checks the binaries produced by each build for any differences. This month, versions 0.7.11 and 0.7.12 were uploaded to Debian unstable by Holger Levsen. In addition, Iñaki Malerba improved the version test to split on the + character [] and Ross Vandegrift updated the code to allow the user to override timeouts from the surrounding environment [].

Holger Levsen also made the following additional changes:

  • Drop the short timeout and use the install timeout instead. (#897442)
  • Use “real” reStructuredText comments instead of using the raw directive. []
  • Update the PyPI classifier to express we are using Python 3.7 now. []
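The approach reprotest automates, rebuilding under a varied environment and then diffing the results, can be sketched in a few lines (an illustration only, using the timezone as the varied dimension):

```python
import os
import time

def build(tz):
    # A toy "build" whose artefact leaks the build machine's timezone,
    # a classic source of unreproducibility that reprotest exercises.
    os.environ["TZ"] = tz
    time.tzset()  # Unix-only
    return time.strftime("%Z %Y", time.localtime(0))

a = build("UTC")
b = build("America/New_York")
print("reproducible" if a == b else "varies with environment")
```

The real tool varies many more dimensions at once: locale, user, hostname, filesystem ordering, build path, and so on.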

Other tools

  • disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues. This month, Chris Lamb fixed an issue by ignoring the return values of fsyncdir to ensure (for example) dpkg(1) can “flush” /var/lib/dpkg correctly [] and merged a change from Helmut Grohne to use the build architecture’s version of pkg-config to permit cross-architecture builds [].

  • strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, version 1.6.3-2 was uploaded to Debian unstable by Holger Levsen to bump the Standards-Version. []
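The readdir-shuffling idea behind disorderfs can be mimicked in plain Python (a sketch only; the real tool works at the FUSE layer so every process sees the shuffled order):

```python
import os
import random

def disordered_listdir(path="."):
    # Like disorderfs: deliberately randomise directory entry order to
    # flush out build tools that assume a stable readdir() ordering.
    entries = os.listdir(path)
    random.shuffle(entries)
    return entries

# Same entries, just in an unpredictable order.
print(sorted(disordered_listdir(".")) == sorted(os.listdir(".")))
```

A build that produces different output under such a filesystem is silently depending on directory ordering and is therefore unreproducible.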

Upstream development

The Reproducible Builds project detects, dissects and attempts to fix as many unreproducible packages as possible. Naturally, we endeavour to send all of our patches upstream. This month, we wrote another large number of such patches, including:

Distribution work


In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update and submitted the following bugs and patches:

Many Python packages were updated to avoid writing .pyc files with an embedded random path, including jupyter-jupyter-wysiwyg, jupyter-jupyterlab-latex, python-PsyLab, python-hupper, python-ipyevents (don’t rewrite .zip file), python-ipyleaflet, python-jupyter-require, python-jupyter_kernel_test, python-nbdime (do not rewrite .zip, avoid time-based .pyc), python-nbinteract, python-plaster, python-pythreejs, python-sidecar & tensorflow (use pip install --no-compile).
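Since Python 3.7 (PEP 552), hash-based .pyc invalidation avoids embedding a timestamp in the bytecode header, so compiling the same source twice yields bit-identical files; a quick self-contained check:

```python
import pathlib
import py_compile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "hello.py"
    src.write_text("print('hello')\n")
    out1 = str(pathlib.Path(tmp) / "a.pyc")
    out2 = str(pathlib.Path(tmp) / "b.pyc")
    for out in (out1, out2):
        # CHECKED_HASH embeds a hash of the source instead of an mtime,
        # making the compiled output deterministic for identical input.
        py_compile.compile(str(src), cfile=out,
                           invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH)
    identical = pathlib.Path(out1).read_bytes() == pathlib.Path(out2).read_bytes()

print(identical)
```

Note that the bytecode still records the source path (co_filename), which is why packages also had to stop compiling from randomised temporary paths.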


There was yet more progress towards making the Debian Installer images reproducible. Following on from last month’s efforts, Chris Lamb requested a status update on the Debian bug in question.

Daniel Schepler posted to the debian-devel mailing list to ask whether “running dpkg-buildpackage manually from the command line” is supported, particularly in cases where extra packages installed whilst the package was being built resulted in either a failed build or even broken packages (eg. #948522, #887902, etc.). Our .buildinfo files could be one solution to this as they record the environment at the time of the package build.
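.buildinfo files use the same RFC 822-style stanza format as other Debian control files, so they are straightforward to inspect programmatically. A minimal reader over hypothetical content (real files also record checksums and the full list of installed build dependencies):

```python
import email

# Hypothetical .buildinfo excerpt for illustration only.
sample = """\
Source: hello
Version: 1.0-1
Build-Architecture: amd64
Environment:
 DEB_BUILD_OPTIONS="parallel=4"
"""

# The stdlib email parser handles the field: value + continuation-line
# syntax shared by Debian control files.
fields = dict(email.message_from_string(sample).items())
print(fields["Source"], fields["Build-Architecture"])
```

Comparing the recorded environment against the one present during a failed rebuild is exactly the kind of diagnosis these files enable.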

Holger disabled scheduling of packages from the “oldstable” stretch release on tests.reproducible-builds.org. This is the first time since stretch’s existence that we are no longer testing this release.

OpenJDK, a free and open-source implementation of the Java Platform was updated in Debian to incorporate a number of patches from Emmanuel Bourg, including:

  • Make the generated character data source files reproducible. (#933339)
  • Make the generated module-info.java files reproducible. (#933342)
  • Make the generated copyright headers reproducible. (#933349)
  • Make the build user reproducible. (#933373)

83 reviews of Debian packages were added, 32 were updated and 96 were removed this month adding to our knowledge about identified issues. Many issue types were updated by Chris Lamb, including timestamp_in_casacore_tables, random_identifiers_in_epub_files_generated_by_asciidoc, nondeterministic_ordering_in_casacore_tables, captures_build_path_in_golang_compiler, captures_build_path_via_haskell_adddependentfile & png_generated_by_plantuml_captures_kernel_version_and_builddate.

Lastly, Mattia Rizzolo altered the permissions and shared the notes.git repository which underpins the aforementioned package classifications with the entire “Debian” group on Salsa, therefore giving all DDs write access to it. This is an attempt to invite more direct contributions instead of merge requests.

Other distributions

The FreeBSD Project Tweeted that:

Reproducible builds are turned on by default for -RELEASE []

… which targets the next released version of this distribution (view revision). Daniel Ebdrup followed-up to note that this option:

Used to be turned on in -CURRENT when it was being tested, but it has been turned off now that there’s another branch where it’s used, whereas -CURRENT has more need to have the revision printed in uname (which is one of the things that make a build unreproducible). []

For Alpine Linux, Holger Levsen disabled the builders run by the Reproducible Builds project as our patch to the abuild utility (see December’s report) doesn’t apply anymore and thus all builds have become unreproducible again. Subsequent to this, a patch was merged upstream. []

In GNU Guix, on January 14th, Konrad Hinsen posted a blog post entitled Reproducible computations with Guix which, amongst other things remarks that:

The [guix time-machine] command actually downloads the specified version of Guix and passes it the rest of the command line. You are running the same code again. Even bugs in Guix will be reproduced faithfully!

The Yocto Project reported that they have reproducible cross-built binaries that are independent of both the underlying host distribution the build is run on and independent of the path used for the build. This is now being continually tested on the Yocto Project’s automated infrastructure to ensure this state is maintained in the future.

Project website & documentation

There was more work performed on our website this month, including:

In addition, Arnout Engelen added a Scala programming language example for the SOURCE_DATE_EPOCH environment variable [], David del Amo updated the link to the Software Freedom Conversancy to remove some double parentheses [] and Peter Wu added a Debian example for the -ffile-prefix-map argument to support Clang version 10 [].
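The SOURCE_DATE_EPOCH convention is straightforward to honour from any language; in Python it amounts to preferring the variable over the wall clock:

```python
import os
import time

def build_timestamp() -> int:
    # Per the spec: if SOURCE_DATE_EPOCH is set, use it instead of "now"
    # so that timestamps embedded in build output do not vary per build.
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    return int(epoch) if epoch is not None else int(time.time())

os.environ["SOURCE_DATE_EPOCH"] = "1577836800"  # 2020-01-01T00:00:00Z
print(build_timestamp())
```

Build systems typically set the variable to the date of the latest changelog entry or commit, so rebuilds of the same source always embed the same time.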

Testing framework

We operate a fully-featured and comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, the following changes were made:

  • Adrian Bunk:
    • Use the et_EE locale/language instead of fr_CH. In Estonian, the z character is sorted between s and t, which is contrary to common incorrect assumptions about the sorting order of ASCII characters. []
    • Add ffile_prefix_map_passed_to_clang to the list of issues filtered as these build failures should be ignored. []
    • Remove the ftbfs_build_depends_not_available_on_amd64 from the list of filtered issues as this specific problem no longer exists. []
  • Holger Levsen:

    • Debian:
      • Always configure apt to ignore expired release files on hosts running in the future. []
      • Create an “oldsuites” page, showing suites we used to test in the past. [][][][][]
      • Schedule more old packages from the buster distribution. []
      • Deal with shell escaping and other options. [][][]
      • Reverse the suite ordering on the packages page. [][]
      • Show bullseye statistics on dashboard page, moving away from buster [] and additionally omit stretch [].
    • F-Droid:
      • Document the increased diskspace requirements; we require over 700 GiB now. []
    • Misc:
      • Gracefully deal with umount problems. [][]
      • Run code to show “todo” entries locally. []
      • Use mmdebstrap instead of debootstrap. [][][]
  • Jelle van der Waa (Arch Linux):

    • Set the PACKAGER variable to a valid string to avoid noise in the logging. []
    • Add a link to the Arch Linux-specific package page in the overview table. []
  • Mattia Rizzolo:
    • Fix a hard-coded reference to the current year. []
    • Ignore No server certificate defined warning messages when automatically parsing logfiles. []
  • Vagrant Cascadian special-cased u-boot on the armhf architecture: First, do not build the all architecture as the dependencies are not available on this architecture [] and also pass the --binary-arch argument to pbuilder too [].

The usual node maintenance was performed by Mattia Rizzolo [][], Vagrant Cascadian [][][][] and Holger Levsen.
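The et_EE change in the list above exploits a real collation difference, which you can observe from Python where the locale is installed (it may well be absent on your system, hence the fallback):

```python
import locale

words = ["sool", "zoo", "tool"]

# Bytewise / C-locale order: z sorts after t.
print(sorted(words))

try:
    # Estonian sorts z between s and t, breaking naive ASCII assumptions;
    # this branch only runs if the locale is installed on the system.
    locale.setlocale(locale.LC_COLLATE, "et_EE.UTF-8")
    print(sorted(words, key=locale.strxfrm))
except locale.Error:
    print("et_EE.UTF-8 locale not installed on this system")
```

Builds that sort file lists without pinning the locale will therefore produce different output on an Estonian machine, which is exactly what the test framework wants to catch.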

Misc news

On our mailing list this month:

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can also get in touch with us via:

This month’s report was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, heinrich5991, Holger Levsen, Jelle van der Waa, Mattia Rizzolo and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

10 February, 2020 05:33PM

hackergotchi for Anisa Kuci

Anisa Kuci


Like many other people, this year I attended FOSDEM.

For the ones that might not be familiar with the name, FOSDEM is the biggest free software developers gathering in Europe, happening every year in Brussels, Belgium.

This year I decided to attend again as it is an event I have really enjoyed the last two times I have attended during the past years. As I am currently doing my Outreachy internship I found FOSDEM a very good opportunity to receive some more inspiration. My goal was to come back from this event with some ideas or motivation that would help during the last phases of my internship, as I need to work on documentation and best practices on fundraising. I also wanted to meet in person the people that I have worked with so far regarding Outreachy and discuss with them in person about organizational topics and even ask for advice.

As FOSDEM is quite big, I saw again and met many Debian community members and I received very nice feedback on my work on Outreachy. During the weekend I spent a little bit of time at the Debian booth, where I tried to help as all the people there were already busy and the Debian booth during FOSDEM is really crowded. I understand why; I couldn’t resist buying some Debian merchandise myself. I felt proud to also see the results of my work on the fundraising materials for DebConf20, as the fundraising brochure that I worked on and the “freshly baked” stickers were available at the booth, to promote the next DebConf.

FOSDEM 2020 - Debian booth merchandise

During the weekend I volunteered to help at the GNOME booth, which was quite crowded as well. This is not the first time I have contributed to GNOME. I adapted very quickly to the GNOME community as everyone is very friendly and positive, so for me it was very enjoyable to spend some time there as well. I was also introduced to the GNOME Asia organizing team and had a great exchange on our mutual interest of organizing conferences. Thank you for the GNOME Asia keychain!

FOSDEM 2020 - Meeting GNOME Asia

I attended a few talks, and unfortunately I missed some other ones that I was interested in. Luckily the FOSDEM team works hard and they have recorded the talks, so they are available online for people who could not make it to the conference or to the talk rooms because they are often full.

FOSDEM 2020 - Attending talks

As I am working on fundraising I was requested by the team to be part of a meeting with one of the potential sponsors for DebConf20. We have been discussing the sponsor levels available and perks that this specific company would be interested in receiving. This was a good experience for me as this kind of in-person communication is very important for establishing a good connection with potential work partners.

My Outreachy internship finishes soon and this is also one of the reasons why my mentor supported attending FOSDEM using the Outreachy stipend. FOSDEM is huge, and you meet hundreds of people within two days, so it is a good opportunity to look for a future job. There is also a job fair booth where companies post job offers. I surely passed by and got myself some offers that I thought would be suitable for me.

And the cherry on top of the cake during FOSDEM, are all the booths distributed in different buildings. I did not only meet friends from different communities, but also got to know so many new projects that I had not heard of before. And of course, got some very nice swag. Stickers and other goodies are never too much!

Thank you Debian, Outreachy and SFC for enabling me to attend FOSDEM 2020.

FOSDEM 2020 - Debian booth

10 February, 2020 12:30PM

François Marier

Fedora 31 LXC setup on Ubuntu Bionic 18.04

Similarly to what I wrote for Fedora 29, here is how I was able to create a Fedora 31 LXC container on an Ubuntu 18.04 (bionic) laptop.

Setting up LXC on Ubuntu

First of all, install lxc:

apt install lxc
echo "veth" >> /etc/modules
modprobe veth

turn on bridged networking by putting the following in /etc/sysctl.d/local.conf:

net.ipv4.ip_forward=1
and applying it using:

sysctl -p /etc/sysctl.d/local.conf

Then allow the right traffic in your firewall (/etc/network/iptables.up.rules in my case):

# LXC containers
-A FORWARD -d -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -d -s -j ACCEPT
-A INPUT -d -s -j ACCEPT
-A INPUT -d -s -j ACCEPT
-A INPUT -d -s -j ACCEPT

and apply these changes:


before restarting the lxc networking:

systemctl restart lxc-net.service

Create the container

Once that's in place, you can finally create the Fedora 31 container:

lxc-create -n fedora31 -t download -- -d fedora -r 31 -a amd64

To see a list of all distros available with the download template:

lxc-create -n foo --template=download -- --list

Once the container has been created, disable AppArmor for it by adding the following to its LXC configuration (typically /var/lib/lxc/fedora31/config):

lxc.apparmor.profile = unconfined

since the AppArmor profile isn't working at the moment.

Logging in as root

Starting the container in one window:

lxc-start -n fedora31 -F

and attaching to a console:

lxc-attach -n fedora31

to set a root password:

passwd
Logging in as an unprivileged user via ssh

While logged into the console, I tried to install ssh:

$ dnf install openssh-server
Cannot create temporary file - mkstemp: No such file or directory

but it failed because TMPDIR is set to a non-existent directory:

$ echo $TMPDIR

I found a fix and ran the following:

TMPDIR=/tmp dnf install openssh-server

then started the ssh service:

systemctl start sshd.service

Then I installed a few other packages as root:

dnf install vim sudo man

and created an unprivileged user with sudo access:

adduser francois -G wheel
passwd francois

I set this in /etc/ssh/sshd_config:

GSSAPIAuthentication no

to prevent slow ssh logins.

Now login as that user from the console and add an ssh public key:

mkdir .ssh
chmod 700 .ssh
echo "<your public key>" > .ssh/authorized_keys
chmod 644 .ssh/authorized_keys

You can now login via ssh. The IP address to use can be seen in the output of:

lxc-ls --fancy

10 February, 2020 05:30AM

February 09, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.9.850.1.0

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 685 other packages on CRAN.

A new upstream release 9.850.1 of Armadillo was just released. And as some will undoubtedly notice, Conrad opted for an increment of 50 rather than 100. We wrapped this up as version 0.9.850.1.0, having prepared a full (github-only) tarball and the release candidate 9.850.rc1 a few days ago. Both the release candidate and the release got the full reverse depends treatment, and no issues were found.

Changes in the new release below.

Changes in RcppArmadillo version 0.9.850.1.0 (2020-02-09)

  • Upgraded to Armadillo release 9.850.1 (Pyrocumulus Wrath)

    • faster handling of compound expressions by diagmat()

    • expanded .save() and .load() to handle CSV files with headers via csv_name(filename,header) specification

    • added log_normpdf()

    • added .is_zero()

    • added quantile()

  • The sparse matrix test using scipy, if available, is now simplified thanks to recently added reticulate conversions.

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 February, 2020 10:29PM