March 21, 2018

Vincent Fourmond

Release 2.2 of QSoas

The new release of QSoas is finally ready! It brings in a lot of new features and improvements, notably greatly improved memory use for massive multifits, a fit for linear (in)activation processes (the one we used in Fourmond et al, Nature Chemistry 2014), a new way to transform "numbers" like peak position or stats into new datasets, and even SVG output! Following popular demand, it also finally brings back the peak area output in the find-peaks command (and the other, related commands)! You can browse the full list of changes there.

The new release can be downloaded from the downloads page.

Freely available binary images for QSoas 1.0

In addition to the new release, we are now releasing the binary images for MacOS and Windows for the release 1.0. They are also freely available for download from the downloads page.

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.2. You can download its source code or buy precompiled versions for MacOS and Windows there.

21 March, 2018 08:46PM by Vincent Fourmond

Julien Danjou

On blog migration

I started my first Web page in 1998 and one could say that it has evolved quite a bit since then. From a FrontPage-designed Web site with frames, it evolved to plain HTML files. I started blogging in 2003, though the archives of this blog only go back to 2007. Truth is, many things I wrote in the first years were short (there was no Twitter) and are not that relevant nowadays. Therefore, I never migrated them along the road of the many migrations this site had.

The last time I switched this site's engine was in 2011, when I switched from Emacs Muse (and my custom muse-blog.el extension) to Hyde, a static Web site generator written in Python.

That taught me a few things.

First, you can't really know for sure which project will be a ghost in 5 years. I had no clue back then that Hyde's author would lose interest and struggle to pass the maintainership to someone else. The community was not big but it existed. Betting on a horse is part skill and part chance. My skills were probably lower seven years ago and I may also have had bad luck.

Secondly, maintaining a Web site is painful. I used to blog more regularly a few years ago, when the friction of using a dynamic blog engine was lower than that of spawning my deprecated static engine. Knowing that it takes 2 minutes to generate a static Web site really makes it difficult to compose and see the result at the same time without losing patience. It took me a few years to decide it was time to invest in the migration. I just jumped from Hyde to Ghost, hosted on their Pro engine as I don't want to do any maintenance. Let's be honest, I've no will to inflict on myself the maintenance of a JavaScript blogging engine.

The positive side is that this is still Markdown-based, so the migration job was not so painful. Ghost offers a REST API which allows manipulating most of the content. It works fine, and I was able to leverage the Python ghost-client to write a tiny migration script to migrate every post.

I am looking forward to sharing most of the things that I work on during the next months. I have really enjoyed reading the content of great hackers these last years, and I learnt a ton of things by reading the adventures of smarter engineers.

It might be my time to share.

21 March, 2018 06:46PM by Julien Danjou

Jeremy Bicha

gksu is dead. Long live PolicyKit

Today, gksu was removed from Debian unstable. It was already removed 2 months ago from Debian Testing (which will eventually be released as Debian 10 “Buster”).

It’s not been decided yet if gksu will be removed from Ubuntu 18.04 LTS. There is one blocker bug there.


21 March, 2018 05:29PM by Jeremy Bicha

Petter Reinholdtsen

Facebook's ability to sell your personal information is the real Cambridge Analytica scandal

So, Cambridge Analytica is getting some well deserved criticism for (mis)using information it got from Facebook about 50 million people, mostly in the USA. What I find a bit surprising is how little criticism Facebook is getting for handing the information over to Cambridge Analytica and others in the first place. And what about the people handing their private and personal information to Facebook? And last, but not least, what about the government offices who are handing information about the visitors of their web pages to Facebook? No-one who looked at the terms of use of Facebook should be surprised that information about people's interests, political views, personal lives and whereabouts would be sold by Facebook.

What I find to be the real scandal is the fact that Facebook is selling your personal information, not that one of the buyers used it in a way Facebook did not approve of when exposed. It is well known that Facebook is selling out their users' privacy, but it is a scandal nevertheless. Of course the information provided to them by Facebook would be misused by one of the parties given access to personal information about the millions of Facebook users. Collected information will be misused sooner or later. The only way to avoid such misuse is to not collect the information in the first place. If you do not want Facebook to hand out information about yourself for the use and misuse of its customers, do not give Facebook the information.

Personally, I would recommend completely removing your Facebook account, and taking back some control of your personal information. According to The Guardian, it is a bit hard to find out how to request account removal (and not just 'disabling'). You need to visit a specific Facebook page and click on 'let us know' on that page to get to the real account deletion screen. Perhaps something to consider? I would not trust the information to really be deleted (who knows, perhaps NSA, GCHQ and FRA already got a copy), but it might reduce the exposure a bit.

If you want to learn more about the capabilities of Cambridge Analytica, I recommend watching the video recording of the one hour talk Paul-Olivier Dehaye gave to NUUG last April about Data collection, psychometric profiling and their impact on politics.

And if you want to communicate with your friends and loved ones, use some end-to-end encrypted method like Signal or Ring, and stop sharing your private messages with strangers like Facebook and Google.

21 March, 2018 03:30PM

Iustin Pop

Hakyll basics

As part of my migration to Hakyll, I had to spend quite a bit of time understanding how it works before I became somewhat “at-home” with it. There are many posts that show “how to do x”, but not so many that explain its inner workings. Let me try to fix that: at its core, Hakyll is nothing else than a combination of make and m4, all in one. Simple, right? Let’s see :)

Note: in the following, basic proficiency with Haskell is assumed.

Monads and data types


The first area (the make equivalent), more precisely the Rules monad, concerns itself with the rules for mapping source files into output files, or creating output files from scratch.

Key to this mapping is the concept of an Identifier, which is a name in an abstract namespace. Most of the time—e.g. for all the examples in the upstream Hakyll tutorial—this identifier actually maps to a real source file, but this is not required; you can create an identifier from any string value.

The similarity, or relation, to file paths manifests in two ways:

  • the Identifier data type, although opaque, is internally implemented as a simple data type consisting of a file path and a “version”; the file path here points to the source file (if any), while the version is rather a variant of the item (not a numeric version!).
  • if the identifier has been included in a rule, it will have an output file (in the Compiler monad, via getRoute).

In effect, the Rules monad is all about taking source files (as identifiers) or creating them from scratch, and mapping them to output locations, while also declaring how to transform—or create—the contents of the source into the output (more on this later). Anyone can create an identifier value via fromFilePath, but “registering” them into the rules monad is done by one of the match or create calls:

Note: I’m probably misusing the term “registered” here. It’s not the specific value that is registered, but the identifier’s file path. Once this string value has been registered, one can use a different identifier value with a similar string (value) in various function calls.

Note: whether we use match or create doesn’t matter; only the actual values matter. So a match "" is equivalent to create [""]; match here takes the list of identifiers from the file-system, but does not associate them to the files themselves—it’s just a way to get the list of strings.

The second argument to the match/create calls is another rules monad, in which we’re processing the identifiers and tell how to transform them.

This transformation has, as described, two aspects: how to map the file path to an output path, via the Routes data type, and how to compile the body, in the Compiler monad.
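
To make this concrete, here is a minimal sketch of such a rules monad—the file names and the use of pandocCompiler are my own illustrative assumptions, nothing mandated by the above:

{-# LANGUAGE OverloadedStrings #-}
import Hakyll

main :: IO ()
main = hakyll $ do
    -- identifiers taken from the file system: every markdown post
    match "posts/*.md" $ do
        route   $ setExtension "html"   -- the name mapping aspect
        compile pandocCompiler          -- the body compilation aspect

    -- an identifier created from scratch, with no backing source file
    create ["archive.html"] $ do
        route   idRoute
        compile $ makeItem ("archive goes here" :: String)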

Name mapping

The name mapping starts with the route call, which lifts the routes into the rules monad.

The routing has the usual expected functionality:

  • idRoute :: Routes, which maps 1:1 the input file name to the output one.
  • setExtension :: String -> Routes, which changes the extension of the filename, or sets it (if there wasn’t any).
  • constRoute :: FilePath -> Routes, which is special in that it will always result in the same output filename, whatever the input; this is obviously useful only for rules matching a single identifier.
  • and a few more options, like building the route based on the identifier (customRoute), building it based on metadata associated to the identifier (metadataRoute), composing routes, match-and-replace, etc.

All in all, routes offer all the needed functionality for mapping.

Note that how we declare the input identifier and how we compute the output route is irrelevant; what matters are the actual values. So for an identifier with name (file path), route idRoute is equivalent to constRoute "".
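
A few sketches of route declarations, to make the above concrete (the file names are again just examples; the customRoute one assumes System.FilePath’s dropExtension and </> are in scope):

-- change the extension: posts/hello.md becomes posts/hello.html
route $ setExtension "html"

-- constant output name; only sensible when the rule matches a
-- single identifier
route $ constRoute "archive/index.html"

-- compute the route from the identifier itself
route $ customRoute ((</> "index.html") . dropExtension . toFilePath)

-- routes can also be composed, and matched-and-replaced
route $ setExtension "html" `composeRoutes` gsubRoute "posts/" (const "")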


Slightly into more interesting territory here, as we’re moving beyond just file paths :) Lifting a compiler into the rules monad is done via the compile function:
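
-- (the signature, as of Hakyll 4)
compile :: (Binary a, Typeable a, Writable a)
        => Compiler (Item a) -> Rules ()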

The Compiler monad result is an Item a, which is just an identifier with a body (of type a). This type variable a means we can return any Writable item. Many of the compiler functions work with/return String, but the flexibility to use other types is there.

The functionality in this module revolves around four topics:

The current identifier

First the very straightforward functions for the identifier itself:

  • getUnderlying :: Compiler Identifier, just returns the identifier
  • getUnderlyingExtension :: Compiler String, returns the extension

And for the body (data) of the identifier (mostly copied from the haddock of the module):

  • getResourceBody :: Compiler (Item String): returns the full contents of the matched source file as a string, but without metadata preamble, if there was one.
  • getResourceString :: Compiler (Item String), returns the full contents of the matched source file as a string.
  • getResourceLBS :: Compiler (Item ByteString), equivalent to the above but as lazy bytestring.
  • getResourceFilePath :: Compiler FilePath, returns the file path of the resource we are compiling.

More or less, these return the data to enable doing arbitrary things to it, and are the cornerstone of a static site compiler. One could implement a simple “copy” compiler by doing just:
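
A minimal sketch—note that Hakyll in fact ships a ready-made copyFileCompiler for this exact purpose:

match "images/*" $ do
    route   idRoute
    compile getResourceLBS  -- an Item ByteString is Writable, so the
                            -- bytes are written out unchanged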

All the other functions in the module work on arbitrary identifiers.


I’m used to Yesod and its safe routes functionality. Hakyll has something slightly weaker, but which, with programmer discipline, can allow similar levels of “I know this will point to the right thing” (and maybe correct escaping as well). Enter the:
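
-- (from Hakyll's documentation)
getRoute :: Identifier -> Compiler (Maybe FilePath)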

function which I alluded to earlier, and which—either for the current identifier or another identifier—returns the destination file path, which is useful for composing links (as in HTML links) to it.

For example, instead of hard-coding the path to the archive page, as /archive.html, one can instead do the following:
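
-- a sketch: it assumes the archive page is declared elsewhere in the
-- rules monad under this same identifier
archiveId :: Identifier
archiveId = fromFilePath "archive.html"

archiveUrl :: Compiler String
archiveUrl = do
    mroute <- getRoute archiveId
    return $ maybe "/" toUrl mroute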

The reuse of archiveId above ensures that if the actual path to the archive page changes (renames, site reorganisation, etc.), then all the links to it (assuming, again, discipline of not hard-coding them) are automatically pointing to the right place.

Working with other identifiers

Getting to the interesting aspect now. In the compiler monad, one can ask for any other identifier, whether it was already loaded/compiled or not—the monad takes care of tracking dependencies/compiling automatically/etc.

There are two main functions:

  • load :: (Binary a, Typeable a) => Identifier -> Compiler (Item a), which returns a single item, and
  • loadAll :: (Binary a, Typeable a) => Pattern -> Compiler [Item a], which returns a list of items, based on the same patterns used in the rules monad.

If the identifier/pattern requested does not match actual identifiers declared in the “parent” rules monad, then these calls will fail (as in monadic fail).

The use of other identifiers in a compiler step is what allows moving beyond “input file to output file”; aggregating a list of pages (e.g. blog posts) into a single archive page is the most obvious example.

But sometimes getting just the final result of the compilation step (of other identifiers) is not flexible enough—in case of HTML output, this includes the entire page, including the <html><head></head> part, not only the body we might be interested in. So, to ease any aggregation, one uses snapshots.


Snapshots

Snapshots allow, well, snapshotting the intermediate result under a specific name, to allow later retrieval:

  • saveSnapshot :: (Binary a, Typeable a) => Snapshot -> Item a -> Compiler (Item a), to save a snapshot
  • loadSnapshot :: (Binary a, Typeable a) => Identifier -> Snapshot -> Compiler (Item a), to load a snapshot, similar to load
  • loadAllSnapshots :: (Binary a, Typeable a) => Pattern -> Snapshot -> Compiler [Item a], similar to loadAll

One can save an arbitrary number of snapshots at various steps of the compilation, and then re-use them.
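
For example—with the caveat that the file names and the "content" snapshot name are conventions picked for this sketch, not requirements—a post/archive pair could look like:

match "posts/*" $ do
    route $ setExtension "html"
    compile $ pandocCompiler
        -- save the bare body before wrapping it in the full page
        >>= saveSnapshot "content"
        >>= loadAndApplyTemplate "templates/post.html" defaultContext

create ["archive.html"] $ do
    route idRoute
    compile $ do
        -- load the bodies, not the whole <html>…</html> pages
        posts <- loadAllSnapshots "posts/*" "content"
                     :: Compiler [Item String]
        makeItem (concatMap itemBody posts)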

Note: load and loadAll are actually just the snapshot variants, with a hard-coded value for the snapshot. As I write this, the value is "_final", so it’s probably best not to use the underscore prefix for one’s own snapshots. A bit of a shame that this is not done better, type-wise.

What next?

We have rules to transform things, including smart name transforming, we have compiler functionality to transform the data. But everything mentioned until now is very generic, fundamental functionality, bare-bones to the bone (ha!).

With just this functionality, you have everything needed to build an actual site. But starting at this level would be too tedious even for hard-core fans of DIY, so Hakyll comes with some built-in extra functionality.

And that will be the next post in the series. This one is too long already :)

21 March, 2018 02:00AM

March 20, 2018

Steinar H. Gunderson

Debian CEF packages

I've created some Debian CEF packages—CEF isn't the easiest thing to package (and it takes an hour to build even on my 20-core server, since it needs to build basically all of Chromium), but it's fairly rewarding to see everything fall into place. It should benefit not only Nageru, but also OBS and potentially CasparCG if anyone wants to package that.

It's not in the NEW queue because it depends on a patch to chromium that I hope the Chromium maintainers are brave enough to include. :-)

20 March, 2018 11:38PM

Reproducible builds folks

Reproducible Builds: Weekly report #151

Here's what happened in the Reproducible Builds effort between Sunday March 11 and Saturday March 17 2018:

Upcoming events

Patches submitted

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (168)
  • Emmanuel Bourg (2)
  • Pirate Praveen (1)
  • Tiago Stürmer Daitx (1)


This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

20 March, 2018 07:59PM

Neil McGovern

ED Update – week 11

It’s time (well, long overdue) for a quick update on stuff I’ve been doing recently, and some things that are coming up. I’ve worked out a new way of doing these, so they should be more regular now, about every couple of weeks or so.

  • The annual report is moving ahead. I’ve moved up the timelines a bit here from previous years, so hopefully, the people who very kindly help author this can remember what we did in the 2016/17 financial year!
  • GUADEC/GNOME.Asia/LAS sponsorship – elements are coming together for the sponsorship brochure
    • Some sponsors are lined up, and these will be announced by the usual channels – thanks to everyone who supports the project and our conferences!
  • Shell Extensions – It’s been noticed that reviews of extensions have been taking quite some time recently, so I’ve stepped in to help. I still think that part of the process could be automated, but at the moment it’s quite manual. Help is very much appreciated!
  • The Code of Conduct consultation has been useful, and there’s been a couple of points raised where clarity could be added. I’m getting those drafted at the moment, and hope to get the board to approve this soon.
  • A couple of administrative bits:
    • We now have a filing system for paperwork in NextCloud
    • Reviewing accounts for the end of year accounts – it’s the end of the tax year, so our finances need to go to the IRS
    • Tracking of accounts receivable hasn’t been great in the past, probably not helped by GnuCash. I’m looking at alternatives at the moment.
  • Helping out with a couple of trademark issues that have come up
  • Regular working sessions for Flathub legal bits with our lawyers
  • I’ll be at LibrePlanet 2018 this weekend, and I’m giving a talk on Sunday. With the FSF, we’re hosting a SpinachCon on Friday. This aims to do some usability testing and find those small things which annoy people.

20 March, 2018 03:45PM by Neil McGovern

Holger Levsen


Some problems with Codes of Conduct

shiromarieke took her time and wrote an IMHO very good text about problems with Codes of Conduct, which I wholeheartedly recommend reading.

I'll just quote two sentences which I think are essential:

Quote 1: "This is not a rant - it is a call for action: Let's gather, let's build the structures we need to make all people feel safe and respected in our communities." - in that sense, if you have feedback, please share it with shiromarieke as suggested by her. I'm very thankful she is taking the time to discuss her criticism and work on possible improvements! (I'll likely not discuss this online, though I'll be happy to discuss it offline.) I just wanted to share this link with the Debian communities, as I agree with many of shiromarieke's points and because I want to support efforts to improve this, as I believe those efforts will benefit everyone (as diversity and a welcoming atmosphere benefit everyone).

Quote 2: "Although I don't believe CoC are a good solution to help fix problems, I have and will always do my best to respect existing CoC of workplaces, events or other groups I am involved with, and I am thankful for your attempt to make our places and communities safer." - me too.

20 March, 2018 12:26PM

Daniel Pocock

Can a GSoC project beat Cambridge Analytica at their own game?

A few weeks ago, I proposed a GSoC project on the topic of Firefox and Thunderbird plugins for Free Software Habits.

At first glance, this topic may seem innocent and mundane. After all, we all know what habits are, don't we? There are already plugins that help people avoid visiting Facebook too many times in one day; what difference will another one make?

Yet the success of companies like Facebook and those that prey on their users, like Cambridge Analytica (who are facing the prospect of a search warrant today), is down to habits: in other words, the things that users do over and over again without consciously thinking about it. That is exactly why this plugin is relevant.

Many students have expressed interest and I'm keen to find out if any other people may want to act as co-mentors (more information or email me).

One Facebook whistleblower recently spoke about his abhorrence of the dopamine-driven feedback loops that keep users under a spell.

The game changer

Can we use the transparency of free software to help users re-wire those feedback loops for the benefit of themselves and society at large? In other words, instead of letting their minds be hacked by Facebook and Cambridge Analytica, can we give users the power to hack themselves?

In his book The Power of Habit, Charles Duhigg lays bare the psychology and neuroscience behind habits. While reading the book, I frequently came across concepts that appeared immediately relevant to the habits of software engineers and also the field of computer security, even though neither of these topics is discussed in the book.

Most significantly, Duhigg finishes with an appendix on how to identify and re-wire your habits and he has made it available online. In other words, a quickstart guide to hack yourself: could Duhigg's formula help the proposed plugin succeed where others have failed?

If you could change one habit, you could change your life

The book starts with examples of people who changed a single habit and completely reinvented themselves. For example, an overweight alcoholic and smoker who became a super-fit marathon runner. In each case, they show how the person changed a single keystone habit and everything else fell into place. Wouldn't you like to have that power in your own life?

Wouldn't it be even better to share that opportunity with your friends and family?

One of the challenges we face in developing and promoting free software is that every day, with every new cloud service, the average person in the street, including our friends, families and co-workers, is ingesting habits carefully engineered for the benefit of somebody else. Do you feel that asking your friends and co-workers not to engage you in these services has become a game of whack-a-mole?

Providing a simple and concise solution, such as a plugin, can help people to find their keystone habits and then help them change them without stress or criticism. Many people want to do the right thing: if it can be made easier for them, with the right messages, at the right time, delivered in a positive manner, people feel good about taking back control. For example, if somebody has spent 15 minutes creating a Doodle poll and sending the link to 50 people, is there any easy way to communicate your concerns about Doodle? If a plugin could highlight an alternative before they invest their time in Doodle, won't they feel better?

If you would like to provide feedback or even help this project go ahead, you can subscribe here and post feedback to the thread or just email me.

20 March, 2018 12:15PM by Daniel.Pocock

March 19, 2018

Jonathan McDowell

First impressions of the Gemini PDA

Last March I discovered the IndieGoGo campaign for the Gemini PDA, a plan to produce a modern PDA with a decent keyboard inspired by the Psion 5. At that point in time the estimated delivery date was November 2017, and it wasn’t clear they were going to meet their goals. As someone who has owned a variety of phones with keyboards, from a Nokia 9000i to a T-Mobile G1, I’ve been disappointed about the lack of mobile devices with keyboards. The Gemini seemed like a potential option, so I backed it, paying a total of $369 including delivery. And then I waited. And waited. And waited.

Finally, one year and a day after I backed the project, I received my Gemini PDA. Now, I don’t get as much use out of such a device as I would have in the past. The Gemini is definitely not a primary phone replacement. It’s not much bigger than my aging Honor 7 but there’s no external display to indicate who’s calling and it’s a bit clunky to have to open it to dial (I don’t trust Google Assistant to cope with my accent enough to have it ring random people). The 9000i did this well with an external keypad and LCD screen, but then it was a brick so it had the real estate to do such things. Anyway. I have a laptop at home, a laptop at work and I cycle between the 2. So I’m mostly either in close proximity to something portable enough to move around the building, or travelling in a way that doesn’t mean I could use one.

My first opportunity to actually use the Gemini in anger therefore came last Friday, when I attended BelFOSS. I’d normally bring a laptop to a conference, but instead I decided to just bring the Gemini (in addition to my normal phone). I have the LTE version, so I put my FreedomPop SIM into it - this did limit the amount I could do with it due to the low data cap, but for a single day was plenty for SSH, email + web use. I already have the Pro version of the excellent JuiceSSH, am a happy user of K-9 Mail and tend to use Chrome these days as well. All 3 were obviously perfectly happy on the Android 7.1.1 install.

Aside: Why am I not running Debian on the device? Planet do have an image available from their Linux Support page, but it’s running on top of the crufty 3.18 Android kernel and isn’t yet a first class citizen - it’s not clear the LTE will work outside Android easily and I’ve no hope of ARM opening up the Mali-T880 drivers. I’ve got plans to play around with improving the support, but for the moment I want to actually use the device a bit until I find sufficient time to be able to make progress.

So how did the day go? On the whole, a success. Battery life was great - I’d brought a USB battery pack expecting to need to boost the charge at some point, but I last charged it on Thursday night and at the time of writing it’s still claiming 25% battery left. LTE worked just fine; I had a 4G signal for most of the day with occasional drops down to 3G but no noticeable issues. The keyboard worked just fine; much better than my usual combo of a Nexus 7 + foldable Bluetooth keyboard. Some of the symbols aren’t where you’d expect, but that’s understandable on a scaled down keyboard. Screen resolution is great. I haven’t used the USB-C ports other than to charge and backup so far, but I like the fact there are 2 provided (even if you need a custom cable to get HDMI rather than it following the proper standard). The device feels nice and solid in your hand - the case is mostly metal plates that remove to give access to the SIM slot and (non-removable but user replaceable) battery. The hinge mechanism seems robust; I haven’t been worried about breaking it at any point since I got the device.

What about problems? I can’t deny there are a few. I ended up with a Mediatek X25 instead of an X27 - that matches what was initially promised, but there had been claims of an upgrade. Unfortunately issues at the factory meant that the initial production run got the older CPU. Later backers are supposed to get the upgrade. As someone who took the early risk this does leave a slightly bitter taste, but I doubt I’ll actually notice any significant performance difference. The keys on the keyboard are a little lopsided in places. This seems to be just a cosmetic thing and I haven’t noticed any issues in typing. The lack of first class Debian support is disappointing, but I believe will be resolved in time (by the community if not Planet). The camera isn’t as good as my phone’s, but then it’s a front facing webcam style thing and it’s at least as good as my laptop at that.

Bottom line: Would I buy it again? At $369, absolutely. At the current $599? Probably not - I’m simply not on the move enough to need this on a regular basis, so I’d find it hard to justify. Maybe the 2nd gen, assuming it gets a bit more polish on the execution and proper mainline Linux support. Don’t get me wrong, I think the 1st gen is lovely and I’ve had lots of envious people admiring it, I just think it’s ended up priced a bit high for what it is. For the same money I’d be tempted by the GPD Pocket instead.

19 March, 2018 08:41PM

Vincent Bernat

Integration of a Go service with systemd: socket activation

In a previous post, I highlighted some useful features of systemd when writing a service in Go, notably to signal readiness and prove liveness. Another interesting bit is socket activation: systemd listens on behalf of the application and, on incoming traffic, starts the service with a copy of the listening socket. Lennart Poettering details in a blog post:

If a service dies, its listening socket stays around, not losing a single message. After a restart of the crashed service it can continue right where it left off. If a service is upgraded we can restart the service while keeping around its sockets, thus ensuring the service is continuously responsive. Not a single connection is lost during the upgrade.

This is one solution to get zero-downtime deployment for your application. Another upside is that you can run your daemon with fewer privileges—losing rights is a difficult task in Go.1

The basics🔗

Let’s take back our nifty 404-only web server:

package main

import (
    "log"
    "net"
    "net/http"
)

func main() {
    listener, err := net.Listen("tcp", ":8081")
    if err != nil {
        log.Panicf("cannot listen: %s", err)
    }
    http.Serve(listener, nil)
}

Here is the socket-activated version, using go-systemd:

package main

import (
    "log"
    "net/http"

    "github.com/coreos/go-systemd/activation"
)

func main() {
    listeners, err := activation.Listeners(true) // ❶
    if err != nil {
        log.Panicf("cannot retrieve listeners: %s", err)
    }
    if len(listeners) != 1 {
        log.Panicf("unexpected number of socket activation (%d != 1)",
            len(listeners))
    }
    http.Serve(listeners[0], nil) // ❷
}

In ❶, we retrieve the listening sockets provided by systemd. In ❷, we use the first one to serve HTTP requests. Let’s test the result with systemd-socket-activate:

$ go build 404.go
$ systemd-socket-activate -l 8000 ./404
Listening on [::]:8000 as 3.

In another terminal, we can make some requests to the service:

$ curl '[::1]':8000
404 page not found
$ curl '[::1]':8000
404 page not found

For a proper integration with systemd, you need two files:

  • a socket unit for the listening socket, and
  • a service unit for the associated service.

We can use the following socket unit, 404.socket:

[Socket]
ListenStream = 8000
BindIPv6Only = both

[Install]
WantedBy = sockets.target

The systemd.socket(5) manual page describes the available options. BindIPv6Only = both is explicitly specified because the default value is distribution-dependent. As for the service unit, we can use the following one, 404.service:

[Unit]
Description = 404 micro-service

[Service]
ExecStart = /usr/bin/404

systemd knows the two files work together because they share the same prefix. Once the files are in /etc/systemd/system, execute systemctl daemon-reload and systemctl start 404.socket. Your service is ready to accept connections!

Handling of existing connections🔗

Our 404 service has a major shortcoming: existing connections are abruptly killed when the daemon is stopped or restarted. Let’s fix that!

Waiting a few seconds for existing connections🔗

We can include a short grace period for connections to terminate, then kill remaining ones:

// On signal, gracefully shut down the server and wait 5
// seconds for current connections to stop.
done := make(chan struct{})
quit := make(chan os.Signal, 1)
server := &http.Server{}
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)

go func() {
    <-quit
    log.Println("server is shutting down")
    ctx, cancel := context.WithTimeout(context.Background(),
        5*time.Second)
    defer cancel()
    if err := server.Shutdown(ctx); err != nil {
        log.Panicf("cannot gracefully shut down the server: %s", err)
    }
    close(done)
}()

// Start accepting connections (listeners[0] comes from
// activation.Listeners(), as before).
server.Serve(listeners[0])

// Wait for existing connections before exiting.
<-done

Upon reception of a termination signal, the goroutine would resume and schedule a shutdown of the service:

Shutdown() gracefully shuts down the server without interrupting any active connections. Shutdown() works by first closing all open listeners, then closing all idle connections, and then waiting indefinitely for connections to return to idle and then shut down.

While restarting, new connections are not accepted: they sit in the listen queue associated to the socket. This queue is bounded and its size can be configured with the Backlog directive in the socket unit. Its default value is 128. You may keep this value, even when your service is expecting to receive many connections per second. When this value is exceeded, incoming connections are silently dropped. The client should automatically retry to connect. On Linux, by default, it will retry 5 times (tcp_syn_retries) in about 3 minutes. This is a nice way to avoid the herd effect you would experience on restart if you increased the listen queue to some high value.
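
Should you want to raise it anyway, the directive goes in the [Socket] section of the socket unit—a sketch, with 1024 as an arbitrary example value:

[Socket]
ListenStream = 8000
Backlog      = 1024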

Waiting longer for existing connections🔗

If you want to wait for a very long time for existing connections to stop, you do not want to ignore new connections for several minutes. There is a very simple trick: ask systemd to not kill any process on stop. With KillMode = none, only the stop command is executed and all existing processes are left undisturbed:

[Unit]
Description = slow 404 micro-service

[Service]
ExecStart = /usr/bin/404
ExecStop  = /bin/kill $MAINPID
KillMode  = none

If you restart the service, the current process gracefully shuts down for as long as needed and systemd immediately spawns a new instance ready to serve incoming requests with its own copy of the listening socket. On the other hand, we lose the ability to wait for the service to come to a full stop—either by itself or forcefully after a timeout with SIGKILL.

Waiting longer for existing connections (alternative)🔗

An alternative to the previous solution is to make systemd believe your service died during reload.

done := make(chan struct{})
quit := make(chan os.Signal, 1)
server := &http.Server{}
signal.Notify(quit,
    // for reload:
    syscall.SIGHUP,
    // for stop or full restart:
    syscall.SIGINT, syscall.SIGTERM)
go func() {
    sig := <-quit
    switch sig {
    case syscall.SIGINT, syscall.SIGTERM:
        // Shutdown with a time limit.
        log.Println("server is shutting down")
        ctx, cancel := context.WithTimeout(context.Background(),
            5*time.Second)
        defer cancel()
        if err := server.Shutdown(ctx); err != nil {
            log.Panicf("cannot gracefully shut down the server: %s", err)
        }
    case syscall.SIGHUP: // ❶
        // Execute a short-lived process and ask systemd to
        // track it instead of us.
        log.Println("server is reloading")
        pid := detachedSleep()
        daemon.SdNotify(false, fmt.Sprintf("MAINPID=%d", pid))
        time.Sleep(time.Second) // Wait a bit for systemd to check the PID

        // Wait without a limit for current connections to stop.
        if err := server.Shutdown(context.Background()); err != nil {
            log.Panicf("cannot gracefully shut down the server: %s", err)
        }
    }
    close(done)
}()

// Serve requests with a slow handler.
server.Handler = http.HandlerFunc(
    func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(10 * time.Second)
        http.Error(w, "404 not found", http.StatusNotFound)
    })
server.Serve(listeners[0])

// Wait for all connections to terminate.
<-done
log.Println("server terminated")

The main difference is the handling of the SIGHUP signal in ❶: a short-lived decoy process is spawned and systemd is told to track it. When it dies, systemd will start a new instance. This method is a bit hacky: systemd needs the decoy process to be a child of PID 1 but Go cannot easily detach on its own. Therefore, we leverage a short Python helper, wrapped in a detachedSleep() function:2

// detachedSleep spawns a detached process sleeping
// one second and returns its PID.
func detachedSleep() uint64 {
    py := `
import os
import time

pid = os.fork()
if pid == 0:
    for fd in {0, 1, 2}:
        os.close(fd)
    time.sleep(1)
else:
    print(pid)
`
    cmd := exec.Command("/usr/bin/python3", "-c", py)
    out, err := cmd.Output()
    if err != nil {
        log.Panicf("cannot execute sleep command: %s", err)
    }
    pid, err := strconv.ParseUint(strings.TrimSpace(string(out)), 10, 64)
    if err != nil {
        log.Panicf("cannot parse PID of sleep command: %s", err)
    }
    return pid
}

During reload, there may be a small period during which both the new and the old processes accept incoming requests. If you don’t want that, you can move the creation of the short-lived process outside the goroutine, after server.Serve(), or implement some synchronization mechanism. There is also a possible race-condition when we tell systemd to track another PID—see PR #7816.

The 404.service unit needs an update:

[Unit]
Description = slow 404 micro-service

[Service]
ExecStart    = /usr/bin/404
ExecReload   = /bin/kill -HUP $MAINPID
Restart      = always
NotifyAccess = main
KillMode     = process

Each additional directive is significant:

  • ExecReload tells how to reload the process—by sending SIGHUP.
  • Restart tells to restart the process if it stops “unexpectedly”, notably on reload.3
  • NotifyAccess specifies which process can send notifications, like a PID change.
  • KillMode tells to only kill the main identified process—others are left untouched.

Zero-downtime deployment?🔗

Zero-downtime deployment is a difficult endeavor on Linux. For example, HAProxy had a long list of hacks until a proper—and complex—solution was implemented in HAProxy 1.8. How do we fare with our simple implementation?

From the kernel point of view, there is only one socket with a unique listen queue. This socket is associated to several file descriptors: one in systemd and one in the current process. The socket stays alive as long as there is at least one file descriptor. An incoming connection is put by the kernel in the listen queue and can be dequeued from any file descriptor with the accept() syscall. Therefore, this approach actually achieves zero-downtime deployment: no incoming connection is rejected.

By contrast, HAProxy was using several different sockets listening to the same addresses, thanks to the SO_REUSEPORT option.4 Each socket gets its own listening queue and the kernel balances incoming connections between each queue. When a socket gets closed, the content of its queue is lost. If an incoming connection was sitting there, it would receive a reset. An elegant patch for Linux to signal that a socket should not receive new connections was rejected. HAProxy 1.8 is now recycling existing sockets to the new processes through a Unix socket.

I hope this post and the previous one show how systemd is a good sidekick for a Go service: readiness, liveness and socket activation are some of the useful features you can get to build a more reliable application.

Addendum: decoy process using Go🔗

UPDATED (2018.03): On /r/golang, it was pointed out to me that, in the version where systemd is tracking a decoy, the helper can be replaced by invoking the main executable. By relying on a change of environment, it assumes the role of the decoy. Here is such an implementation replacing the detachedSleep() function:

func init() {
    // As early as possible, check if we should be the decoy.
    state := os.Getenv("__SLEEPY")
    switch state {
    case "1":
        // First step, fork again.
        execPath := self()
        child, err := os.StartProcess(
            execPath,
            []string{execPath},
            &os.ProcAttr{
                Env: append(os.Environ(), "__SLEEPY=2"),
            })
        if err != nil {
            log.Panicf("cannot execute sleep command: %s", err)
        }

        // Advertise child's PID and exit. Child will be
        // orphaned and adopted by PID 1.
        fmt.Printf("%d", child.Pid)
        os.Exit(0)
    case "2":
        // Sleep and exit.
        time.Sleep(time.Second)
        os.Exit(0)
    }
    // Not the sleepy helper. Business as usual.
}

// self returns the absolute path to ourselves. This relies on
// /proc/self/exe which may be a symlink to a deleted path (for
// example, during an upgrade).
func self() string {
    execPath, err := os.Readlink("/proc/self/exe")
    if err != nil {
        log.Panicf("cannot get self path: %s", err)
    }
    execPath = strings.TrimSuffix(execPath, " (deleted)")
    return execPath
}

// detachedSleep spawns a detached process sleeping one second and
// returns its PID. A full daemonization is not needed as the process
// is short-lived.
func detachedSleep() uint64 {
    cmd := exec.Command(self())
    cmd.Env = append(os.Environ(), "__SLEEPY=1")
    out, err := cmd.Output()
    if err != nil {
        log.Panicf("cannot execute sleep command: %s", err)
    }
    pid, err := strconv.ParseUint(strings.TrimSpace(string(out)), 10, 64)
    if err != nil {
        log.Panicf("cannot parse PID of sleep command: %s", err)
    }
    return pid
}

Addendum: identifying sockets by name🔗

For a given service, systemd can provide several sockets. To identify them, it is possible to name them. Let’s suppose we also want to return 403 error codes from the same service but on a different port. We add an additional socket unit definition, 403.socket, linked to the same 404.service job:

[Socket]
ListenStream = 8001
BindIPv6Only = both
Service      = 404.service


Unless overridden with FileDescriptorName, the name of the socket is the name of the unit: 403.socket. go-systemd provides the ListenersWithNames() function to fetch a map from names to listening sockets:

package main

import (
    "log"
    "net/http"
    "sync"

    "github.com/coreos/go-systemd/activation"
)

func main() {
    var wg sync.WaitGroup

    // Map socket names to handlers.
    handlers := map[string]http.HandlerFunc{
        "404.socket": http.NotFound,
        "403.socket": func(w http.ResponseWriter, r *http.Request) {
            http.Error(w, "403 forbidden",
                http.StatusForbidden)
        },
    }

    // Get listening sockets.
    listeners, err := activation.ListenersWithNames(true)
    if err != nil {
        log.Panicf("cannot retrieve listeners: %s", err)
    }

    // For each listening socket, spawn a goroutine
    // with the appropriate handler.
    for name := range listeners {
        for idx := range listeners[name] {
            wg.Add(1)
            go func(name string, idx int) {
                defer wg.Done()
                http.Serve(listeners[name][idx], handlers[name])
            }(name, idx)
        }
    }

    // Wait for all goroutines to terminate.
    wg.Wait()
}

Let’s build the service and run it with systemd-socket-activate:

$ go build 404.go
$ systemd-socket-activate -l 8000 -l 8001 \
>                         --fdname=404.socket:403.socket \
>                         ./404
Listening on [::]:8000 as 3.
Listening on [::]:8001 as 4.

In another console, we can make a request for each endpoint:

$ curl '[::1]':8000
404 page not found
$ curl '[::1]':8001
403 forbidden

  1. Many process characteristics in Linux are attached to threads. Go runtime transparently manages them without much user control. Until recently, this made some features, like setuid() or setns(), unusable. ↩︎

  2. Python is a good candidate: it’s likely to be available on the system, it is low-level enough to easily implement the functionality and, as an interpreted language, it doesn’t require a specific build step.

    UPDATED (2018.03): There is no need to fork twice as we only need to detach the decoy from the current process. This simplifies the Python code a bit. ↩︎

  3. This is not an essential directive as the process is also restarted through socket-activation. ↩︎

  4. This approach is more convenient when reloading since you don’t have to figure out which sockets to reuse and which ones to create from scratch. Moreover, when several processes need to accept connections, using multiple sockets is more scalable as the different processes won’t fight over a shared lock to accept connections. ↩︎

19 March, 2018 08:28AM by Vincent Bernat

Daniel Pocock

GSoC and Outreachy: Mentors don't need to be Debian Developers

A frequent response I receive when talking to prospective mentors: "I'm not a Debian Developer yet".

As student applications have started coming in, now is the time for any prospective mentors to introduce themselves on the debian-outreach list if they would like to help with any of the listed projects or any topics that have been proposed spontaneously by students without any mentor.

It doesn't matter if you are a Debian Developer or not. Furthermore, mentoring in a program like GSoC or Outreachy is a form of volunteering that is recognized just as highly as packaging or any other development activity.

When an existing developer writes an email advocating your application to become a developer yourself, they can refer to your contribution as a mentor. Many other processes, such as requests for DebConf bursaries, also ask for a list of your contributions and you can mention your mentoring experience there.

With the student deadline on 27 March, it is really important to understand the capacity of the mentoring team over the next 10 days so we can decide how many projects can realistically be supported. Please ask on the debian-outreach list if you have any questions about getting involved.

19 March, 2018 08:10AM by Daniel.Pocock

Steve Kemp

Serverless deployment via docker

I've been thinking about serverless stuff recently, because I've been re-deploying a bunch of services and some of them are almost microservices. One thing that a lot of my things have in common is that they're all simple HTTP servers, presenting an API or end-point over HTTP. There is no state, no database, and no complex dependencies.

These should be prime candidates for serverless deployment, but at the same time I don't want to have to recode them for AWS Lambda, or any similar locked-down service. So docker is the obvious answer.

Let us pretend I have ten HTTP-based services, each of which binds to port 8000. To make these available I could just setup a simple HTTP front-end:


We'd need to route the request to the appropriate back-end, so we'd start to present URLs like:


Here any request which had the prefix steve/foo would be routed to a running instance of the docker container steve/foo. In short, the name of the (first) path component performs the mapping to the back-end.

I wrote a quick hack, in golang, which would bind to port 80 and dynamically launch the appropriate containers, then proxy back and forth. I soon realized that this is a terrible idea though! The problem is a malicious client could start making requests for things like:


That would trigger my API-proxy to download the containers and spin them up, allowing arbitrary (albeit "sandboxed") code to run. So taking a step back: we want to use the path-component of a URL to decide where to route the traffic, and each container will bind to :8000 on its private (docker) IP? There's an obvious solution here: HAProxy.

So I started again: I wrote a trivial golang daemon which reacts to docker events - containers starting and stopping - and generates a suitable haproxy configuration file, which can then be used to reload haproxy.
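
As an illustration, the generated haproxy configuration for a single running container could look roughly like this - a sketch only, with an invented backend IP:

frontend http-in
    bind *:80
    acl is_foo path_beg /foo
    use_backend be_foo if is_foo

backend be_foo
    server foo 172.17.0.2:8000 check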

The end result is that if I launch a container named "foo" then requests whose first path-component is foo will reach it. Success! The only downside to this approach is that you must manually launch your back-end docker containers - but if you do so they'll become immediately available.

I guess there is another advantage. Since you're launching the containers (manually) you can setup links, volumes, and what-not. Much more so than if your API layer spun them up with zero per-container knowledge.

19 March, 2018 07:01AM

Michael Stapelberg


I have heard a number of times that sbuild is too hard to get started with, and hence people don’t use it.

To reduce hurdles from using/contributing to Debian, I wanted to make sbuild easier to set up.

sbuild ≥ 0.74.0 provides a Debian package called sbuild-debian-developer-setup. Once installed, run the sbuild-debian-developer-setup(1) command to create a chroot suitable for building packages for Debian unstable.

On a system without any sbuild/schroot bits installed, a transcript of the full setup looks like this:

% sudo apt install -t unstable sbuild-debian-developer-setup
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libsbuild-perl sbuild schroot
Suggested packages:
  deborphan btrfs-tools aufs-tools | unionfs-fuse qemu-user-static
Recommended packages:
  exim4 | mail-transport-agent autopkgtest
The following NEW packages will be installed:
  libsbuild-perl sbuild sbuild-debian-developer-setup schroot
0 upgraded, 4 newly installed, 0 to remove and 1454 not upgraded.
Need to get 1.106 kB of archives.
After this operation, 3.556 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://localhost:3142/ unstable/main amd64 libsbuild-perl all 0.74.0-1 [129 kB]
Get:2 http://localhost:3142/ unstable/main amd64 sbuild all 0.74.0-1 [142 kB]
Get:3 http://localhost:3142/ testing/main amd64 schroot amd64 1.6.10-4 [772 kB]
Get:4 http://localhost:3142/ unstable/main amd64 sbuild-debian-developer-setup all 0.74.0-1 [62,6 kB]
Fetched 1.106 kB in 0s (5.036 kB/s)
Selecting previously unselected package libsbuild-perl.
(Reading database ... 276684 files and directories currently installed.)
Preparing to unpack .../libsbuild-perl_0.74.0-1_all.deb ...
Unpacking libsbuild-perl (0.74.0-1) ...
Selecting previously unselected package sbuild.
Preparing to unpack .../sbuild_0.74.0-1_all.deb ...
Unpacking sbuild (0.74.0-1) ...
Selecting previously unselected package schroot.
Preparing to unpack .../schroot_1.6.10-4_amd64.deb ...
Unpacking schroot (1.6.10-4) ...
Selecting previously unselected package sbuild-debian-developer-setup.
Preparing to unpack .../sbuild-debian-developer-setup_0.74.0-1_all.deb ...
Unpacking sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...
Setting up schroot (1.6.10-4) ...
Created symlink /etc/systemd/system/ → /lib/systemd/system/schroot.service.
Setting up libsbuild-perl (0.74.0-1) ...
Processing triggers for man-db ( ...
Setting up sbuild (0.74.0-1) ...
Setting up sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...

% sudo sbuild-debian-developer-setup
The user `michael' is already a member of `sbuild'.
I: SUITE: unstable
I: TARGET: /srv/chroot/unstable-amd64-sbuild
I: MIRROR: http://localhost:3142/
I: Running debootstrap --arch=amd64 --variant=buildd --verbose --include=fakeroot,build-essential,eatmydata --components=main --resolve-deps unstable /srv/chroot/unstable-amd64-sbuild http://localhost:3142/
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages 
I: Validating Packages 
I: Found packages in base already in required: apt 
I: Resolving dependencies of required packages...
I: Successfully set up unstable chroot.
I: Run "sbuild-adduser" to add new sbuild users.
ln -s /usr/share/doc/sbuild/examples/sbuild-update-all /etc/cron.daily/sbuild-debian-developer-setup-update-all
Now run `newgrp sbuild', or log out and log in again.

% newgrp sbuild

% sbuild -d unstable hello
sbuild (Debian sbuild) 0.74.0 (14 Mar 2018) on x1

| hello (amd64)                                Mon, 19 Mar 2018 07:46:14 +0000 |

Package: hello
Distribution: unstable
Machine Architecture: amd64
Host Architecture: amd64
Build Architecture: amd64
Build Type: binary

I hope you’ll find this useful.

19 March, 2018 07:00AM

March 18, 2018

Dirk Eddelbuettel

RcppSMC 0.2.1: A few new tricks

A new release, now at 0.2.1, of the RcppSMC package arrived on CRAN earlier this afternoon (and once again as a very quick pretest-publish within minutes of submission).

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts.

This release contains a few bug fixes and one minor rearrangement allowing header-only use of the package from other packages, or via an Rcpp plugin. Many of these changes were driven by new contributors, which is a wonderful thing to see for any open source project! So thanks to everybody who helped. Full details below.

Changes in RcppSMC version 0.2.1 (2018-03-18)

  • The sampler now has a copy constructor and assignment overload (Brian Ni in #28).

  • The SMC library component can now be used in header-only mode (Martin Lysy in #29).

  • Plugin support was added for use via cppFunction() and other Rcpp Attributes (or inline) functions (Dirk in #30).

  • The sampler copy ctor/assignment operator is now copy-constructor safe (Martin Lysy in #32).

  • A bug in state variance calculation was corrected (Adam in #36 addressing #34).

  • History getter methods are now more user-friendly (Tiberiu Lepadatu in #37).

  • Use of pow with atomic types was disambiguated to std::pow to help the Solaris compiler (Dirk in #42).

Courtesy of CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 March, 2018 09:16PM

Russ Allbery

control-archive 1.8.0

This is the software that maintains the archive of control messages and the newsgroups and active files on I update things in place, but it's been a while since I made a formal release, and one seemed overdue (particularly since it needed some compatibility tweaks for GnuPG v1).

In code changes, signing IDs with whitespace are now supported; summaries no longer produce an error when there is no log file for the summary period; and gpg1 is now used explicitly, with flags to allow weak digest algorithms, since the state of crypto for Usenet control messages is rather dire.

On the documentation side, there are multiple fixes to the README.html file that's also shipped with pgpcontrol, updating email addresses, URLs, package versions, and various other details.

For hierarchy changes, the grisbi.* key has been cleaned up a bit for hopefully more reliable verification, and everything related to gov.* has been dropped.

You can get the latest release from the control-archive distribution page.

18 March, 2018 09:14PM

François Marier

Dynamic DNS on your own domain

I recently moved my dynamic DNS hostnames from (now owned by Oracle) to No-IP. In the process, I moved all of my hostnames under a sub-domain that I control in case I ever want to self-host the authoritative DNS server for it.

Creating an account

In order to use my own existing domain, I registered for the Plus Managed DNS service and provided my top-level domain (

Then I created a support ticket to ask for the sub-domain feature. Without that, No-IP expects you to delegate your entire domain to them, whereas I only wanted to delegate *

Once that got enabled, I was able to create hostnames like machine.dyn in the No-IP control panel. Without the sub-domain feature, you can't have dots in hostnames.

I used a bogus IP address (e.g. for all of the hostnames I created in order to easily confirm that the client software is working.

DNS setup

On my registrar's side, here are the DNS records I had to add to delegate anything under to No-IP:

dyn NS
dyn NS
dyn NS
dyn NS
dyn NS

Client setup

In order to update its IP address whenever it changes, I installed ddclient on each of my machines:

apt install ddclient

While the ddclient package won't help you configure your No-IP service during installation or enable the web IP lookup method, this can all be done by editing the configuration after the fact.

I put the following in /etc/ddclient.conf:

use=web,, web-skip='IP Address'

and the following in /etc/default/ddclient:


Then restart the service:

systemctl restart ddclient.service

Note that you do need to change the default update interval or the server will ban your IP address.


Testing

To test that the client software is working, wait 6 minutes (there is an internal check which cancels any client invocations within 5 minutes of another), then run it manually:

ddclient --verbose --debug

The IP for that machine should now be visible on the No-IP control panel and in DNS lookups:

dig +short

18 March, 2018 08:45PM

Iustin Pop

New site layout

With the move to Hakyll, I wondered whether to also get rid of the old /~iustin part of my homepage address. I don’t remember exactly why I chose that layout - maybe I thought I’d use my domain for other purposes? But there are other ways to do that.

So, from today, my new homepage address is simply my domain itself.

18 March, 2018 04:50PM

Goodbye Ikiwiki, hello Hakyll!

For a while now, I have been somewhat unhappy with Ikiwiki, for very “important” reasons.

First, it’s written in Perl, and I haven’t written serious Perl for around 20 years (and even that was not serious code). So me extending it, if needed, is unlikely, and in all my years of using it I haven’t touched anything except the config file.

Second, and these are the real reasons, Ikiwiki is too complex. The templating system is oh-so-verbose. I tried to move to Bootstrap for the styling of my blog (I don’t have the time myself to learn enough CSS for responsive, nice and clean sites), but editing the default page template was giving me headaches. Its wiki origins mean tight integration with the source repository, with the software automatically committing stuff to git as it needed (e.g. new tag pages, calendar updates, etc.), which is overhead for what should basically be a static web site.

So, in the interest of throwing the baby out with the bathwater, I said let’s give Hakyll a go. It’s written in Haskell, so everything will be good, right?

And it is indeed. It’s so bare-bones that doing anything non-trivial (as in not just a plain page) requires writing code. The exercise of having a home-page/blog was, during the past week as I worked on converting to it, a programming exercise. Which is quite strange in itself, but for me it works - another excuse for Haskell.

Now, Ikiwiki is a real wiki engine, so didn’t I lose too much by moving to Hakyll, which is just a static site generator? No; I actually stopped using the “live” functionality of Ikiwiki (its cgi-bin script) a long while ago, when I disabled the commenting functionality; there was just too much spam. And live editing of pages was never needed for my use case.

This migration resulted in some downsides, though:

  • I haven’t yet imported the ~100 or so comments that I had on the old pages, and probably never will; as said, I stopped accepting comments a long while ago (around 2013), so…
  • I reorganised the URLs, and a lot of my old posts were not conforming to any scheme, so posts up until mid-2016 do not have the right redirects; I’ll possibly fix this sometime soon. Posts newer than that date already had the date in the URL, and these have a generic redirection in place.
  • Hakyll doesn’t include the tags in the atom/rss feeds’ “categories”, so this is a downgrade from before.
  • Because Hakyll is more extensible, I can use canned stuff (e.g. Bootstrap, Font Awesome) more easily, which means a bigger site; the previous one was really trivial in terms of size.

Internally (for myself), there are a few more issues: Ikiwiki came with a lot of real functionality that is now missing; a trivial example is shortcuts like [[!wiki Foobar]], which I now have to replicate.

But with all the above said, there are good parts as well. The entire site is responsive design now, and both old and future posts that include images will behave much more nicely in the non-large-desktop viewing case. Instead of ~60 lines of (non-commented-out) configuration, I now have 200 lines of Haskell code (ignoring comments, etc.); this is a net win, right?

On top of that, because of the lack of built-in things, I had to learn how to use Hakyll, so now I can do (and already did) much more customisation of the HTML output; a random example: for linking to internal pictures, I have a simple macro:

$pic("xxx.jpg", "alt-text")$

which knows to look for xxx.jpg in the right place, relative to the current page URL, and results in custom HTML code rather than what the markdown engine would emit by default; on the rendered page, the macro is dynamically expanded (and escaped) rather than hard-coded in the source.
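
A macro like this can be implemented on top of Hakyll’s functionField; the following is a minimal sketch where the path logic and the emitted HTML are assumptions, not the actual code behind this site:

import           Hakyll
import           System.FilePath (takeDirectory)

-- Sketch of a $pic(...)$ macro: resolve the image relative to the
-- current page and emit a custom <img> tag.
picField :: Context String
picField = functionField "pic" $ \args item -> case args of
    [img, alt] ->
        let dir = takeDirectory (toFilePath (itemIdentifier item))
        in  return $ "<img src=\"/" ++ dir ++ "/" ++ img ++
                     "\" alt=\"" ++ alt ++ "\" class=\"img-fluid\">"
    _          -> fail "pic: expected (image, alt-text)"

The field then just needs to be part of the Context passed when applying templates.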

Another good part is that, given how bare-bones Hakyll is, if you look for “How to do foo in Hakyll”, you don’t get “Use this plugin” as a result, but rather: here’s the code for how to do it.

I’ll have some more work to do before I’m happy with the end state, but - as change for the sake of change goes - this was fun stuff.

18 March, 2018 03:05PM

hackergotchi for Vincent Bernat

Vincent Bernat

Route-based VPN on Linux with WireGuard

In a previous article, I described an implementation of redundant site-to-site VPNs using IPsec (with strongSwan as an IKE daemon) and BGP (with BIRD) to achieve this:

Redundant VPNs between 3 sites

The two strengths of such a setup are:

  1. Routing daemons distribute routes to be protected by the VPNs. They provide high availability and decrease the administrative burden when many subnets are present on each side.
  2. Encapsulation and decapsulation are executed in a different network namespace. This enables a clean separation between a private routing instance (where VPN users are) and a public routing instance (where VPN endpoints are).

As an alternative to IPsec, WireGuard is an extremely simple (less than 5,000 lines of code) yet fast and modern VPN that utilizes state-of-the-art and opinionated cryptography (Curve25519, ChaCha20, Poly1305) and whose protocol, based on Noise, has been formally verified. It is currently available as an out-of-tree module for Linux but is likely to be merged when the protocol is not subject to change anymore. Compared to IPsec, its major weakness is its lack of interoperability.

It can easily replace strongSwan in our site-to-site setup. On Linux, it already acts as a route-based VPN. As a first step, for each VPN, we create a private key and extract the associated public key:

$ wg genkey
$ echo oM3PZ1Htc7FnACoIZGhCyrfeR+Y8Yh34WzDaulNEjGs= | wg pubkey

Then, for each remote VPN, we create a short configuration file:1

[Interface]
PrivateKey = oM3PZ1Htc7FnACoIZGhCyrfeR+Y8Yh34WzDaulNEjGs=
ListenPort = 5803

[Peer]
PublicKey  = Jixsag44W8CFkKCIvlLSZF86/Q/4BovkpqdB9Vps5Sk=
Endpoint   = [2001:db8:2::1]:5801
AllowedIPs =, ::/0

A new ListenPort value should be used for each remote VPN. WireGuard can multiplex several peers over the same UDP port but this is not applicable here, as the routing is dynamic. The AllowedIPs =, ::/0 directive tells the interface to accept and send any traffic.

The next step is to create and configure the tunnel interface for each remote VPN:

$ ip link add dev wg3 type wireguard
$ wg setconf wg3 wg3.conf

WireGuard initiates a handshake to establish symmetric keys:

$ wg show wg3
interface: wg3
  public key: hV1StKWfcC6Yx21xhFvoiXnWONjGHN1dFeibN737Wnc=
  private key: (hidden)
  listening port: 5803

peer: Jixsag44W8CFkKCIvlLSZF86/Q/4BovkpqdB9Vps5Sk=
  endpoint: [2001:db8:2::1]:5801
  allowed ips:, ::/0
  latest handshake: 55 seconds ago
  transfer: 49.84 KiB received, 49.89 KiB sent

Like VTI interfaces, WireGuard tunnel interfaces are namespace-aware: once created, they can be moved into another network namespace where clear traffic is encapsulated and decapsulated. Encrypted traffic is routed in its original namespace. Let’s move each interface into the private namespace and assign it a point-to-point IP address:

$ ip link set netns private dev wg3
$ ip -n private addr add 2001:db8:ff::/127 dev wg3
$ ip -n private link set wg3 up

The remote end uses 2001:db8:ff::1/127. Once everything is set up, from one VPN, we should be able to ping each remote host:

$ ip netns exec private fping 2001:db8:ff::{1,3,5,7}
2001:db8:ff::1 is alive
2001:db8:ff::3 is alive
2001:db8:ff::5 is alive
2001:db8:ff::7 is alive

BIRD configuration is unmodified compared to our previous setup and the BGP sessions should establish quickly:

$ birdc6 -s /run/bird6.private.ctl show proto | grep IBGP_
IBGP_V2_1 BGP      master   up     20:16:31    Established
IBGP_V2_2 BGP      master   up     20:16:31    Established
IBGP_V3_1 BGP      master   up     20:16:31    Established
IBGP_V3_2 BGP      master   up     20:16:29    Established

Remote routes are learnt over the different tunnel interfaces:

$ ip -6 -n private route show proto bird
2001:db8:a1::/64 via fe80::5254:33ff:fe00:13 dev eth2 metric 1024 pref medium
2001:db8:a2::/64 metric 1024
        nexthop via 2001:db8:ff::1 dev wg3 weight 1
        nexthop via 2001:db8:ff::3 dev wg4 weight 1
2001:db8:a3::/64 metric 1024
        nexthop via 2001:db8:ff::5 dev wg5 weight 1
        nexthop via 2001:db8:ff::7 dev wg6 weight 1

From one site, you can ping a host on the other site through the VPNs:

$ ping -c 2 2001:db8:a3::1
PING 2001:db8:a3::1(2001:db8:a3::1) 56 data bytes
64 bytes from 2001:db8:a3::1: icmp_seq=1 ttl=62 time=1.54 ms
64 bytes from 2001:db8:a3::1: icmp_seq=2 ttl=62 time=1.67 ms

--- 2001:db8:a3::1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.542/1.607/1.672/0.065 ms

As with the strongSwan setup, you can easily snoop unencrypted traffic with tcpdump:

$ ip netns exec private tcpdump -c3 -pni wg5 icmp6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on wg5, link-type RAW (Raw IP), capture size 262144 bytes
08:34:34 IP6 2001:db8:a3::1 > 2001:db8:a1::1: ICMP6, echo reply, seq 40
08:34:35 IP6 2001:db8:a3::1 > 2001:db8:a1::1: ICMP6, echo reply, seq 41
08:34:36 IP6 2001:db8:a3::1 > 2001:db8:a1::1: ICMP6, echo reply, seq 42
3 packets captured
3 packets received by filter
0 packets dropped by kernel

You can find all the configuration files for this example on GitHub.

  1. Compared to IPsec, the cryptography is not configurable and you have to use the strong provided defaults. ↩︎

18 March, 2018 01:29AM by Vincent Bernat

Russ Allbery

DocKnot 1.03

This is the software that I use to generate documentation for my software. Currently, it just handles README,, and the top-level web page for the package.

This release adds a new metadata file, support/extra, which includes information that should be added to the middle of the normal SUPPORT section of README and files. It also adds an explanatory paragraph about SPDX to the default templates, and adds SPDX license identifiers to the package itself.

I've spent quite some time looking at good ways of maintaining accurate license metadata for my packages (and for Debian packages I maintain), including writing a truly ugly Perl script that generates a Debian copyright-format 1.0 file from a source tree. (There are multiple versions of this; mine is pickier than any other that I'm aware of.) Rather than trying to solve the free-form comment parsing problem, some form of structured metadata that's broadly adopted feels like the correct engineering solution (putting aside the fact that it will be hard to get everyone to adopt it). The SPDX project is trying to solve this, and although it seems very bureaucratic and the spec is almost unreadable, it does seem to be catching on to a degree.

I'm therefore adopting it in my packages at least to the extent of adding SPDX-License-Identifier headers to my source files and using the SPDX-standard identifiers (which annoyingly differ from the Debian copyright-format identifiers). I added a test to check that all the files have these headers and will start adding that to all my packages as I release them.
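
Such a header is a single comment line near the top of each source file, for example (the identifier here is only an illustration; each file carries its actual license):

# SPDX-License-Identifier: MIT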

I'm still generating the LICENSE file with my messed-up Perl script. I want to switch from that to a better script that supports SPDX and I don't have to maintain, and will take a look at both the SPDX tooling and cme when I have a chance.

You can get the latest release from the DocKnot distribution page.

18 March, 2018 12:42AM

March 17, 2018

hackergotchi for Martin Zobel-Helas

Martin Zobel-Helas

Unboxing and commissioning of my new reMarkable Paper tablet

A few days back my reMarkable paper tablet arrived. A couple of friends asked me to do a review of this device, so here we go.

The Device

It is an E-Ink tablet for reading, writing and sketching using a small stylus. It is very thin (approx. 7mm) and, at 360 grams, very lightweight; it fits into every laptop bag. Like any common mobile device today it uses micro USB for charging its 3000mAh battery. Its built-in wireless can be used to sync with the vendor's cloud, and with its 8GB of internal storage it provides space for several thousand pages of documents.

The device itself is run by an ARM A9 CPU running the vendor's own Linux derivative called ‘Codex’. The vendor publishes the source code for its U-Boot and Linux kernel on their GitHub account. With Linux kernel 4.1.28, they do not run a very recent kernel for a device that has been shipping since September 2017.

First steps with Linux

Officially, the device can only be synced with Windows or macOS, or via the vendor’s closed cloud with the Android or iOS app.

But this is only partly true. The device, when connected with micro USB to a Linux machine, announces itself as a network device:

zobel@gjallar ~ % sudo tail -f /var/log/kern.log
Mar 16 20:15:07 gjallar kernel: [52605.362166] usb 1-1: new high-speed USB device number 43 using xhci_hcd
Mar 16 20:15:08 gjallar kernel: [52605.512014] usb 1-1: New USB device found, idVendor=04b3, idProduct=4010
Mar 16 20:15:08 gjallar kernel: [52605.512020] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Mar 16 20:15:08 gjallar kernel: [52605.512024] usb 1-1: Product: RNDIS/Ethernet Gadget
Mar 16 20:15:08 gjallar kernel: [52605.512028] usb 1-1: Manufacturer: Linux 4.1.28-fslc+g7f82abb with 2184000.usb
Mar 16 20:15:08 gjallar kernel: [52606.078698] cdc_ether 1-1:1.0 usb0: register 'cdc_ether' at usb-0000:00:14.0-1, CDC Ethernet Device, c2:1f:85:68:47:d8
Mar 16 20:15:08 gjallar kernel: [52606.078746] usbcore: registered new interface driver cdc_ether
Mar 16 20:15:08 gjallar kernel: [52606.091233] cdc_ether 1-1:1.0 enp0s20f0u1: renamed from usb0

So if you do DHCP on that device, your interface will be assigned an IP:

zobel@gjallar ~ % ip addr sh dev enp0s20f0u1
12: enp0s20f0u1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether c2:1f:85:68:47:d8 brd ff:ff:ff:ff:ff:ff
    inet brd scope global dynamic noprefixroute enp0s20f0u1
       valid_lft 43sec preferred_lft 43sec
    inet6 fe80::f358:7473:1050:ed9b/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
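
Requesting a lease manually is enough for a quick test (using the interface name from above):

zobel@gjallar ~ % sudo dhclient enp0s20f0u1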

You can even log into the device. When you click on the “rM” sign in the top left corner and then click “About”, you get the information on how to log into the device using SSH. The device runs busybox and dropbear.

About screen on reMarkable

So, here we go, let's log into the device!

zobel@gjallar ~ % ssh -l root
root@ password:
╺━┓┏━╸┏━┓┏━┓   ┏━╸┏━┓┏━┓╻ ╻╻╺┳╸┏━┓┏━┓
┏━┛┣╸ ┣┳┛┃ ┃   ┃╺┓┣┳┛┣━┫┃┏┛┃ ┃ ┣━┫┗━┓
┗━╸┗━╸╹┗╸┗━┛   ┗━┛╹┗╸╹ ╹┗┛ ╹ ╹ ╹ ╹┗━┛
remarkable: ~/ 

Syncing data with Linux

After a bit of searching around on the reMarkable wiki pages, I found out that the documents are saved in ~/.local/share/remarkable/xochitl/, but you cannot simply copy files there; the tablet wants some metadata next to the PDF or ePub files.

The best and probably easiest way to copy PDF or ePub files to your device is curl (thanks Ganneff for that hint!). This needs the USB web interface enabled, which can be found in the ‘Storage Settings’: tap the switch next to the IP address of your device to enable it.

With the USB web interface enabled, you can now send data to the device using curl:

So first, let's get a free ePub document to read on the reMarkable device:

zobel@gjallar ~/Documents % wget

Once you have downloaded that, here is how you upload it:

zobel@gjallar ~/Documents % curl '' -H 'Origin:' -H 'Accept: */*' -H 'Referer:' -H 'Connection: keep-alive' -F "file=@policy.epub;filename=policy.epub;type=application/epub"
Upload successfull

Et voilà, here we go with the Debian Policy Manual on the reMarkable.

Debian Policy Manual

Now we get to the really interesting parts. The reMarkable tablet with its stylus offers the possibility to make remarks on pages or highlight paragraphs. Once you have found the UUID of your document, there is a $uuid.cache subdirectory which contains PNG files of your document, including the overlay of your remarks and highlights.

Annotated Debian Policy

Next steps and conclusion

The reMarkable wiki describes a couple of other syncing methods, using ssh or curl. One other method I will try to set up in the next few days is rclone. Using this, I hope to find a way to sync data using my own cloud or other public cloud storage (e.g. Microsoft Azure Blob Storage).

I will keep you posted on my progress with other syncing methods.

Even though reMarkable does not officially offer syncing with Linux, the device can be used with Linux. The documentation on their wiki pages is good enough to write your own syncing client for Linux.

17 March, 2018 01:48PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Minimal SQL privileges

Lately, I have been working pretty hard on a paper I have to hand in at the end of my university semester for the machine learning class I'm taking. I will probably do a long blog post about this paper in May if it turns out to be good, but for the time being I have some time to kill while my latest boosting model runs.

So let's talk about something I've started doing lately: creating issues on FOSS webapp project trackers when their documentation tells people to grant all privileges to the database user.

You know, something like:

GRANT ALL PRIVILEGES ON database.* TO 'username'@'localhost' IDENTIFIED BY 'password';

I'd like to say I've never done this and always took time to specify a restricted subset of privileges on my servers, but I'd be lying. To be honest, I woke up last Christmas when someone told me it was an insecure practice.

When you take a few seconds to think about it, there are quite a few database level SQL privileges and I don't see why I should grant them all to a webapp if it only needs a few of them.
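
For comparison, a more restricted grant for a typical webapp looks something like this; the exact list depends on the application, so this one is only an illustration:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER,
      CREATE TEMPORARY TABLES, LOCK TABLES
ON database.* TO 'username'@'localhost';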

So I started asking projects to do something about this and update their documentation with a minimal set of SQL privileges needed to run correctly. The Drupal project does this quite well and tells you exactly which privileges to grant.


When I first reached out to the upstream devs of these projects, I was sure I'd be seen as some zealous nuisance. To my surprise, everyone thought it was a good idea and fixed it.

Shout out to Nextcloud, Mattermost and KanBoard for taking this seriously!

If you are using a webapp and the documentation states you should grant all privileges to the database user, here is a template you can use to create an issue and ask them to change it:


The installation documentation says that you should grant all SQL privileges to
the database user:

    GRANT ALL PRIVILEGES ON database.* TO 'username'@'localhost' IDENTIFIED BY 'password';

I was wondering what the true minimal SQL privileges are that WEBAPP needs to run.

I don't normally like to grant all privileges for security reasons and would
really appreciate it if you could publish a minimal list of SQL database privileges.

I guess I'm expecting something like [Drupal][drupal] does.


At the database level, [MySQL/MariaDB][mariadb] supports:

* `DROP`

Does WEBAPP really need database level privileges like EVENT or CREATE ROUTINE?
If not, why should I grant them?

Thanks for your work on WEBAPP!


17 March, 2018 12:45AM by Louis-Philippe Véronneau

March 16, 2018

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppClassicExamples 0.1.2

Per a CRAN email sent to 300+ maintainers, this package (just like many others) was asked to please register its S3 method. So we did, and also overhauled a few other packaging standards which changed since the previous uploads in December of 2012 (!!).

No new code or features. Full details below. And as a reminder, don't use the old RcppClassic -- use Rcpp instead.

Changes in version 0.1.2 (2018-03-15)

  • Registered S3 print method [per CRAN request]

  • Added src/init.c with registration and updated all .Call usages taking advantage of it

  • Updated http references to https

  • Updated DESCRIPTION conventions

Thanks to CRANberries, you can also look at a diff to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 March, 2018 09:56PM

RDieHarder 0.1.4

Per a CRAN email sent to 300+ maintainers, this package (just like many others) was asked to please register its S3 method. So we did, and also overhauled a few other packaging standards which changed since the last upload in 2014.

No NEWS.Rd file to take a summary from, but the top of the ChangeLog has details.

Thanks to CRANberries, you can also look at a diff to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 March, 2018 09:52PM

Russell Coker

Racism in the Office

Today I was at an office party and the conversation turned to race, specifically the incidence of unarmed Afro-American men and boys who are shot by police. Apparently the idea that white people (even in other countries) might treat non-white people badly offends some people, so we had a man try to explain that Afro-Americans commit more crime and therefore are more likely to get shot. This part of the discussion isn’t even noteworthy, it’s the sort of thing that happens all the time.

I and another man pointed out that crime is correlated with poverty and racism causes non-white people to be disproportionately poor. We also pointed out that US police seem capable of arresting proven violent white criminals without shooting them (he cited arrests of Mafia members; I cited mass murderers like the one who shot up the cinema). This part of the discussion isn’t particularly noteworthy either. Usually when someone tries explaining some racist ideas and gets firm disagreement they back down. But not this time.

The next step was the issue of whether black people are inherently violent. He cited all of Africa as evidence. There’s a meme that you shouldn’t accuse someone of being racist, it’s apparently very offensive. I find racism very offensive and speak the truth about it. So all the following discussion was peppered with him complaining about how offended he was and me not caring (stop saying racist things if you don’t want me to call you racist).

Next was an appeal to “statistics” and “facts”. He said that he was only citing statistics and facts, clearly not understanding that saying “Africans are violent” is not a statistic. I told him to get his phone and Google for some statistics as he hadn’t cited any. I thought that might make him just go away, it was clear that we were long past the possibility of agreeing on these issues. I don’t go to parties seeking out such arguments, in fact I’d rather avoid such people altogether if possible.

So he found an article about recent immigrants from Somalia in Melbourne (not about the US or Africa, the previous topics of discussion). We are having ongoing discussions in Australia about violent crime, mainly due to conservatives who want to break international agreements regarding the treatment of refugees. For the record I support stronger jail sentences for violent crime, but this is an idea that is not well accepted by conservatives presumably because the vast majority of violent criminals are white (due to the vast majority of the Australian population being white).

His next claim was that Africans are genetically violent due to DNA changes from violence in the past. He specifically said that if someone was a witness to violence it would change their DNA to make them and their children more violent. He also specifically said that this was due to thousands of years of violence in Africa (he mentioned two thousand and three thousand years on different occasions). I pointed out that European history has plenty of violence that is well documented and also that DNA just doesn’t work the way he thinks it does.

Of course he tried to shout me down about the issue of DNA, telling me that he studied Psychology at a university in London and knows how DNA works, demanding to know my qualifications, and asserting that any scientist would support him. I don’t have a medical degree, but I have spent quite a lot of time attending lectures on medical research including from researchers who deliberately change DNA to study how this changes the biological processes of the organism in question.

I offered him the opportunity to star in a Youtube video about this, I’d record everything he wants to say about DNA. But he regarded that offer as an attempt to “shame” him because of his “controversial” views. It was a strange and sudden change from “any scientist will support me” to “it’s controversial”. Unfortunately he didn’t give up on his attempts to convince me that he wasn’t racist and that black people are lesser.

The next odd thing was when he asked me “what do you call them” (black people), “do you call them Afro-Americans when they are here”. I explained that if an American of African ancestry visits Australia then you would call them Afro-American, otherwise not. It’s strange that someone goes from being so certain of so many things to not knowing the basics. In retrospect I should have asked whether he was aware that there are black people who aren’t African.

Then I sought opinions from other people at the party regarding DNA modifications. While I didn’t expect to immediately convince him of the error of his ways it should at least demonstrate that I’m not the one who’s in a minority regarding this issue. As expected there was no support for the ideas of DNA modifying. During that discussion I mentioned radiation as a cause of DNA changes. He then came up with the idea that radiation from someone’s mouth when they shout at you could change your DNA. This was the subject of some jokes, one man said something like “my parents shouted at me a lot but didn’t make me a mutant”.

The other people had some sensible things to say, pointing out that psychological trauma changes the way people raise children and can have multi-generational effects. But the idea of events 3000 years ago having such effects was ridiculed.

By this time people were starting to leave. A heated discussion of racism tends to kill the party atmosphere. There might be some people who think I should have just avoided the discussion to keep the party going (really I didn’t want it and tried to end it). But I’m not going to allow a racist to think that I agree with them, and if having a party requires any form of agreement to racism then it’s not a party I care about.

As I was getting ready to leave the man said that he thought he didn’t explain things well because he was tipsy. I disagree, I think he explained some things very well. When someone goes to such extraordinary lengths to criticise all black people after a discussion of white cops killing unarmed black people I think it shows their character. But I did offer some friendly advice, “don’t drink with people you work with or for or any other people you want to impress”, I suggested that maybe quitting alcohol altogether is the right thing to do if this is what it causes. But he still thought it was wrong of me to call him racist, and I still don’t care. Alcohol doesn’t make anyone suddenly think that black people are inherently dangerous (even when unarmed) and therefore deserving of being shot by police (disregarding the fact that police can take members of the Mafia alive). But it does make people less inhibited about sharing such views even when it’s clear that they don’t have an accepting audience.

Some Final Notes

I was not looking for an argument or trying to entrap him in any way. I refrained from asking him about other races who have experienced violence in the past, maybe he would have made similar claims about other non-white races and maybe he wouldn’t, I didn’t try to broaden the scope of the dispute.

I am not going to do anything that might be taken as agreement or support of racism unless faced with the threat of violence. He did not threaten me so I wasn’t going to back down from the debate.

I gave him multiple opportunities to leave the debate. When I insisted that he find statistics to support his cause I hoped and expected that he would depart. Instead he came back with a page about the latest racist dog-whistle in Australian politics which had no correlation with anything we had previously discussed.

I think the fact that this debate happened says something about Australian and British culture. This man apparently hadn’t had people push back on such ideas before.

16 March, 2018 12:21PM by etbe

hackergotchi for Daniel Pocock

Daniel Pocock

OSCAL'18, call for speakers, radio hams, hackers & sponsors reminder

The OSCAL organizers have given a reminder about their call for papers, booths and sponsors (ask questions here). The deadline is imminent but you may not be too late.

OSCAL is the Open Source Conference of Albania. OSCAL attracts visitors from far beyond Albania (OpenStreetmap): as the biggest Free Software conference in the Balkans, it draws people from many neighboring countries including Kosovo, Montenegro, Macedonia, Greece and Italy. OSCAL has a unique character unlike any other event I've visited in Europe, and many international guests keep returning every year.

A bigger ham radio presence in 2018?

My ham radio / SDR demo worked there in 2017 and was very popular. This year I submitted a fresh proposal for a ham radio / SDR booth and sought out local radio hams in the region with an aim of producing an even more elaborate demo for OSCAL'18.

If you are a ham and would like to participate please get in touch using this forum topic or email me personally.

Why go?

There are many reasons to go to OSCAL:

  • We can all learn from their success with diversity. One of the finalists for Red Hat's Women in Open Source Award, Jona Azizaj, is a key part of their team: if she is announced the winner at Red Hat Summit the week before OSCAL, wouldn't you want to be in Tirana when she arrives back home for the party?
  • Warm weather to help people from northern Europe to thaw out.
  • For many young people in the region, their only opportunity to learn from people in the free software community is when we visit them. Many people from the region can't travel to major events like FOSDEM due to the ongoing outbreak of immigration bureaucracy and the travel costs. Many Balkan countries are not EU members and incomes are comparatively low.
  • Due to the low living costs in the region and the proximity to larger European countries, many companies are finding compelling opportunities to work with local developers there and OSCAL is a great place to make contacts informally.

Sponsors sought

Like many free software communities, Open Labs is a registered non-profit organization.

Anybody interested in helping can contact the team and ask them for whatever details you need. The Open Labs Manifesto expresses a strong commitment to transparency which hopefully makes it easy for other organizations to contribute and understand their impact.

Due to the low costs in Albania, even a small sponsorship or donation makes a big impact there.

If you can't make a direct payment to Open Labs, you could also potentially help them with benefits in kind or by contributing money to one of the larger organizations supporting OSCAL.

Getting there without direct service from Ryanair or Easyjet

These notes about budget airline routes might help you plan your journey. It is particularly easy to get there from major airports in Italy. If you will also have a vacation at another location in the region it may be easier and cheaper to fly to that location and then use a bus to Tirana.

Making it a vacation

For people who like to combine conferences with their vacations, the Balkans (WikiTravel) offer many opportunities, including beaches, mountains, cities and even a pyramid (in Tirana itself).

It is very easy to reach neighboring countries like Montenegro and Kosovo by coach in just 3-4 hours. For example, there is the historic city of Prizren in Kosovo and many beach resorts in Montenegro.

If you go to Kosovo, don't miss the Prishtina hackerspace.

Tirana Pyramid: a future hackerspace?

16 March, 2018 08:46AM by Daniel.Pocock

hackergotchi for Rapha&#235;l Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, February 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, about 196 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change but a new platinum sponsor is about to join our project.

The security tracker currently lists 60 packages with a known CVE and the dla-needed.txt file 33. The number of open issues increased significantly and we seem to be behind in terms of CVE triaging.

Thanks to our sponsors

New sponsors are in bold.


16 March, 2018 08:08AM by Raphaël Hertzog

hackergotchi for Norbert Preining

Norbert Preining

TeX Live 2018 (pretest) hits Debian/experimental

TeX Live 2017 has been frozen and we have entered the preparation phase for the release of TeX Live 2018. Time to also update the Debian packages to the current status.

The other day I have uploaded the following set of packages to Debian/experimental:

  • texlive-bin 2018.20180313.46939-1
  • texlive-base, texlive-lang, texlive-extra 2018.20180313-1
  • biber 2.11-1

This brings Debian/experimental on par with the current status of TeX Live’s tlpretest. After a bit of testing, and once the sources have stabilized a bit more, I will upload all the stuff to unstable for broader testing.

This year hasn’t seen any big changes; see the above-linked post for details. Testing and feedback would be greatly appreciated.


16 March, 2018 04:27AM by Norbert Preining

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Roundcube fr_FEM locale 1.3.5

Roundcube 1.3.5 was released today and with it, I've released version 1.3.5 of my fr_FEM (French gender-neutral) locale.

This latest version is actually the first one that can be used with a production version of Roundcube: the first versions I released were based on the latest commit in the master branch at the time instead of an actual release. Not sure why I did that.

I've also changed the versioning scheme to follow Roundcube's. Version 1.3.5 of my localisation is thus compatible with Roundcube 1.3.5. Again, I should have done that from the start.

The fine folks at Riseup actually started using fr_FEM as the default French locale on their instance and I'm happy to say the UI integration seems to be working pretty well.

Sandro Knauß (hefee), who is working on the Debian Roundcube package, also told me he'd like to replace the default Roundcube French locale by fr_FEM in Debian. Nice to see people think a gender-neutral locale is a good idea!

Finally, since this was the first time I had to compare two different releases of Roundcube to see if the 20 files I care about had changed, I decided to write a simple script that leverages git to do this automatically. Running ./ -p git_repo -i 1.3.4 -f 1.3.5 -l fr_FR -o roundcube_diff.txt outputs a nice file that tells you if new localisation files have been added and displays what changed in the old ones.
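
At its core it boils down to a git diff between the two release tags, restricted to the localisation directory in question; something along these lines, assuming Roundcube's source layout:

git -C git_repo diff 1.3.4 1.3.5 -- program/localization/fr_FR/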

You can find the locale here.

16 March, 2018 04:00AM by Louis-Philippe Véronneau

March 15, 2018

hackergotchi for Clint Adams

Clint Adams

Don't feed them after midnight

“Hello,” said Adrian, but Adrian was lying.

“My name is Adrian,” said Adrian, but Adrian was lying.

“Hold on while I fellate this 魔鬼,” announced Adrian.

Spaniard doing his thing

Posted on 2018-03-15
Tags: bgs

15 March, 2018 12:51PM

Daniel Powell

Mentorship within software development teams

In response to this email, I wrote a short blog post with some insight about the subject of mentorship.


In my journey to find an internship opportunity through Google Summer of Code, I wanted to give input about the relationship between a mentor and an intern/apprentice. My time as a service manager in the automotive repair industry gave me insight into the design of these relationships.

My recommendation for mentoring programs within a software development team is to have a dual group and private messaging environment for teams of 3 mentors guiding 2 or 3 interns, based on their comfort and experience in a group setting. My rationale for this is as follows:

Every personality does not necessarily engage well with every other. While it's important to learn to work with people you disagree with, I have found that when given the opportunity to float between mentors for different issues, apprentices will learn the most from those they get along with best. If the end goal is for the pupil to learn the most during this experience, and hence also to increase their productivity on a project, then having the dual ability to use a group setting or to PM a specific mentor is ideal. This also gives a mentor the opportunity to recommend asking a question of another mentor whose specialty in the topic area is better, which in turn can help assuage a personality conflict simply through the shared introduction. (Just think about when someone you like or respect recommends you work with someone who you thought you didn't get along with - it's a more comfortable situation when you are introduced in this circumstance, when it's done in a transparent and positive light.)

Our most successful ratio of mentors to apprentices was 3:2 for technicians who were short on shop experience, but in the scope of this project a 3:3 ratio could be appropriate. I would, however, avoid assigning a mentor as a lead for a student in this format. It makes the barrier for reaching out to the other two mentors too high (especially for those who are relatively new to a team dynamic). You may also change the ratio based on the experience of the students that you accept and their team experience. For example, if you have two students who have never worked in a team environment, it may be prudent to move to a 3:2 ratio so as not to overwhelm the mentors. It's nice to have that flexibility, so it may be good to avoid too rigid a structuring of teams.

15 March, 2018 10:45AM

March 14, 2018

Sven Hoexter

aput - simple upload script for a flat artifactory Debian repository

At work we're using Jfrog Artifactory to provide a Debian repository (among other kinds of repository). Using the WebUI sucks, uploading by cut&pasting a curl command is annoying too, so I just wrote down a few lines of shell to upload a single Debian binary package.

The expectation is a flat repository, and that you edit the variables at the top to provide the repository URL, name and your API key. So, no magic involved.
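
The result is along these lines; a sketch with placeholder values, not the actual script:

#!/bin/sh
# Upload a single .deb to a flat Artifactory Debian repository.
# ARTIFACTORY_URL, REPO and APIKEY are placeholders to edit.
set -e
ARTIFACTORY_URL=""
REPO="debian-local"
APIKEY="changeme"

deb="$1"
test -n "$deb" || { echo "usage: aput package.deb" >&2; exit 1; }

curl -fsS -H "X-JFrog-Art-Api: ${APIKEY}" \
     -T "${deb}" \
     "${ARTIFACTORY_URL}/${REPO}/$(basename "${deb}")"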

14 March, 2018 06:26PM

Abhijith PA

Going to FOSSASIA 2018

I will be attending FOSSASIA Summit 2018, happening in Singapore. Thanks to Daniel Pocock, we have a Debian booth there. If you are attending, please add your name to this wiki page or contact me personally. We can hang out at the booth.

14 March, 2018 12:43PM

hackergotchi for Laura Arjona Reina

Laura Arjona Reina

WordPress for Android and short blog posts

I use a microblogging network for my social interactions, and from time to time I post short thoughts there.

I usually reserve my blog for longer posts including links etc.

That means that it’s harder for me to publish in my blog.

OTOH my daily commute time may be enough to craft short posts. I bring my laptop with me, but it’s common that I open Kate, begin to write, and arrive at my destination with my post almost finished but unpublished. Or, second variant, I cannot sit, so I cannot type in the metro, and I pass the time reading or thinking.

I’ve just installed WordPress for Android and hopefully that helps me to write short posts in my commute time and publish quicker. Let’s try and see what happens 🙂


Comment about this post in this thread.

14 March, 2018 06:34AM by larjona

hackergotchi for Norbert Preining

Norbert Preining

Replacing a lost Yubikey

Some weeks ago I lost my purse with everything in it, from residency card, driving license, credit cards, cash cards, all kinds of ID cards, and last but not least my Yubikey NEO. This being Japan, I expected the purse to show up in a few days, most probably with the money gone but all the cards intact. Unfortunately not this time. So after having finally reissued most of the cards, I also went through the necessary procedures concerning the Yubikey, which contained my GnuPG subkeys and was used as a second factor for several services (see here and here).

Although the GnuPG keys on the Yubikey are considered safe from extraction, I still decided to revoke them and create new subkeys – one of the big advantages of subkeys: one does not start at zero, but just creates new subkeys instead of running around trying to get signatures again.

Another thing that has to be done is removing the old Yubikey from all the services where it has been used as a second factor. In my case that was quite a lot (Google, GitHub, Dropbox, NextCloud, WordPress, …). BTW, you have a set of backup keys saved somewhere for all the services you are using, right? It helps a lot in getting back into the system.

GnuPG keys renewal

To remind myself of what is necessary, here are the steps:

  • Get your master key from the backup USB stick
  • revoke the three subkeys that are on the Yubikey
  • create new subkeys
  • install the new subkeys onto a new Yubikey, update keyservers

All of that is quite straightforward: use gpg --expert --edit-key YOUR_KEY_ID; after this you select a subkey with key N, followed by revkey. You can select all three subkeys and revoke them at the same time: just type key N for each of the subkeys (where N is the index, starting from 0, of the key).
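
An illustrative session (the indices depend on how gpg lists your subkeys):

$ gpg --expert --edit-key YOUR_KEY_ID
gpg> key 1
gpg> key 2
gpg> key 3
gpg> revkey
gpg> save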

Next create new subkeys, here you can follow the steps laid out in the original blog. In the same way you can move them to a new Yubikey Neo (good that I bought three of them back then!).

Last but not least you have to update the key-servers with your new public key, which is normally done with gpg --send-keys (again see the original blog).

The most tricky part was setting up and distributing the keys on my various computers: The master key remains as usual on offline media only. On my main desktop at home I have the subkeys available, while on my laptop I only have stubs pointing at the Yubikey. This needs a bit of shuffling around, but should be obvious somehow when looking at the previous blogs.

Full disk encryption

I had my Yubikey also registered as unlock device for the LUKS based full disk encryption. The status before the update was as follows:

$ cryptsetup luksDump /dev/sdaN

Version:       	1
Cipher name:   	aes

Key Slot 0: ENABLED
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: ENABLED

I thought the slot for the old Yubikey was slot 7, but I wasn't completely sure. So I first registered the new Yubikey in slot 6 with

yubikey-luks-enroll -s 6 -d /dev/sdaN

and checked that I can unlock during boot using the new Yubikey. Then I cleared the slot information in slot 7 with

cryptsetup luksKillSlot /dev/sdaN 7

and again made sure that I can boot using my passphrase (in slot 0) and the new Yubikey (in slot 6).

TOTP/U2F second factor authentication

The last step was re-registering the new Yubikey with all the favorite services as second factor, removing the old key on the way. In my case the list comprises several WordPress sites, GitHub, Google, NextCloud, Dropbox and what else I have forgotten.

Although this is nearly the worst-case scenario (ok, the main key was not compromised!), everything went very smoothly and easily, to my big surprise. Even my Debian upload ability was not interrupted considerably. All in all it shows that having subkeys on a Yubikey is a very useful and effective solution.

14 March, 2018 06:05AM by Norbert Preining

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Playing with water

H2o Flow gradient boosting job

I'm currently taking a machine learning class and although it is an insane amount of work, I like it a lot. I initially had planned to use R to play around with the database I have, but the teacher recommended I use H2o, a FOSS machine learning framework.

I was a bit sceptical at first since I'm already pretty good with R, but then I found out you could simply import H2o as an R library. H2o replaces most R functions by its own parallelized ones to cut down on processing time (no more doParallel calls) and uses an "external" server you have to run on the side instead of running R calls directly.
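
Getting started that way really is only a few lines; a minimal sketch with placeholder file and column names, not my actual analysis:

# Start a local H2O cluster from R and fit a gradient boosting model
library(h2o)
h2o.init(nthreads = -1)            # use all available cores
df <- h2o.importFile("data.csv")   # placeholder path
model <- h2o.gbm(y = "target", training_frame = df)
h2o.varimp(model)                  # variable importance table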

H2o Flow gradient boosting model

I was pretty happy with this situation, that is until I actually started using H2o in R. With the huge database I'm playing with, the library felt clunky and I had a hard time doing anything useful. Most of the time, I just ended up with long Java traceback calls. Much love.

I'm sure in the right hands using H2o as a library could have been incredibly powerful, but sadly it seems I haven't earned my black belt in R-fu yet.

H2o Flow variable importance weights

I was pissed for at least a whole day - not being able to achieve what I wanted to do - until I realised H2o comes with a WebUI called Flow. I'm normally not very fond of using web thingies to do important work like writing code, but Flow is simply incredible.

Automated graphing functions, integrated ETA when running resource-intensive models, descriptions for each and every model parameter (the parameters are even divided in sections based on your familiarity with the statistical models in question), Flow seemingly has it all. In no time I was able to run 3 basic machine learning models and get actual interpretable results.

So yeah, if you've been itching to analyse very large databases using state of the art machine learning models, I would recommend using H2o. Try Flow at first instead of the Python or R hooks to see what it's capable of doing.

The only downside to all of this is that H2o is written in Java and depends on Java 1.7 to run... That, and be warned: it requires a metric fuckton of processing power and RAM. My poor server struggled quite a bit even with 10 available cores and 10Gb of RAM...

14 March, 2018 04:00AM by Louis-Philippe Véronneau

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 0.12.16: A small update

The sixteenth update in the 0.12.* series of Rcpp landed on CRAN earlier this evening after a few days of gestation in incoming/ at CRAN.

Once again, this release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, the 0.12.14 release in November 2017, and the 0.12.15 release in January 2018, making it the seventeenth release in the series at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1316 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

Compared to other releases, this release contains a relatively small change set, but between Kirill, Kevin and myself a few things got cleaned up and solidified. Full details are below.

Changes in Rcpp version 0.12.16 (2018-03-08)

  • Changes in Rcpp API:

    • Rcpp now sets and puts the RNG state upon each entry to an Rcpp function, ensuring that nested invocations of Rcpp functions manage the RNG state as expected (Kevin in #825 addressing #823).

    • The R::pythag wrapper has been commented out; the underlying function has been gone from R since 2.14.0, and ::hypot() (part of C99) is now used unconditionally for complex numbers (Dirk in #826).

    • The long long type can now be used on 64-bit Windows (Kevin in #811 and again in #829 addressing #804).

  • Changes in Rcpp Attributes:

    • Code generated with cppFunction() now uses .Call() directly (Kirill Mueller in #813 addressing #795).
  • Changes in Rcpp Documentation:

    • The Rcpp FAQ vignette is now indexed as 'Rcpp-FAQ'; a stale Gmane reference was removed and an entry on getting compilers under Conda was added.

    • The top-level now has a Support section.

    • The Rcpp.bib reference file was refreshed to current versions.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 March, 2018 12:49AM

March 13, 2018

Reproducible builds folks

Reproducible Builds: Weekly report #150

Here's what happened in the Reproducible Builds effort between Sunday March 4 and Saturday March 10 2018:

diffoscope development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

Mattia Rizzolo backported version 91 to the Debian backports repository.

In addition, Juliana — our Outreachy intern — continued her work on parallel processing.

Bugs filed

In addition, a number of package reviews have been added, 44 have been updated and 26 have been removed this week, adding to our knowledge about identified issues.

Lastly, two issue classification types have been added: development

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (49)
  • Antonio Terceiro (1)
  • James Cowgill (1)
  • Ole Streicher (1)


This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

13 March, 2018 08:25PM

hackergotchi for Thomas Lange

Thomas Lange build service now supports creation of VM disk images

A few days ago I added a new feature to the build service.

In addition to creating an installation image, can now build bootable disk images. These disk images can be booted in a VM like KVM, VirtualBox or VMware, or in OpenStack.

You can define a disk image size, select a language, set a user and root password, select a Debian distribution and enable backports just by one click. It's possible to add your public key for access to the root account without a password; this can also be done by just specifying your GitHub account. Several disk formats are supported, like raw (compressed with xz or zstd), qcow2, vdi, vhdx and vmdk. And you can add your own list of packages you want to have inside this OS. After a few minutes the disk image is created and you will get a download link, including a log of the creation process and a link to the FAI configuration that was used to create your customized image.

The new service is available at

If you have any comments, feature requests or feedback, do not hesitate to contact me.

13 March, 2018 04:27PM

Petter Reinholdtsen

First rough draft Norwegian and Spanish edition of the book Made with Creative Commons

I am working on publishing yet another book related to Creative Commons. This time it is a book filled with interviews and histories from those around the globe making a living using Creative Commons.

Yesterday, after many months of hard work by several volunteer translators, the first draft of a Norwegian Bokmål edition of the book Made with Creative Commons from 2017 was complete. The Spanish translation is also complete, while the Dutch, Polish, German and Ukrainian editions need a lot of work. Get in touch if you want to help make those happen, or would like to translate into your mother tongue.

The whole book project started when Gunnar Wolf announced that he was going to make a Spanish edition of the book. I noticed, and offered some input on how to make a book, based on my experience with translating the Free Culture and The Debian Administrator's Handbook books to Norwegian Bokmål. To make a long story short, we ended up working on a Bokmål edition, and now the first rough translation is complete, thanks to the hard work of Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first proof reading is almost done, and only the second and third proof reading remains. We will also need to translate the 14 figures and create a book cover. Once it is done we will publish the book on paper, as well as in PDF, ePub and possibly Mobi formats.

The book itself originates as a manuscript on Google Docs, is downloaded as ODT from there and converted to Markdown using pandoc. The Markdown is modified by a script before it is converted to DocBook using pandoc. The DocBook is modified again using a script before it is used to create a Gettext POT file for translators. The translated PO file is then combined with the earlier mentioned DocBook file to create a translated DocBook file, which finally is given to dblatex to create the final PDF. The end result is a set of editions of the manuscript, one English and one for each of the translations.
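
Roughly, the pipeline looks like the following sketch; the script names and the po4a steps are stand-ins, as the real build system differs in its details:

pandoc -f odt -t markdown book.odt -o
./ >
pandoc -f markdown -t docbook -s -o book.xml
./ book.xml > book-fixed.xml
po4a-gettextize -f docbook -m book-fixed.xml -p book.pot
po4a-translate -f docbook -m book-fixed.xml -p nb.po -l book-nb.xml
dblatex book-nb.xml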

The translation is conducted using the Weblate web based translation system. Please have a look there and get in touch if you would like to help out with proof reading. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

13 March, 2018 12:00PM

March 12, 2018

hackergotchi for Junichi Uekawa

Junichi Uekawa

I've been writing js more for chrome extensions.

I've been writing js more for chrome extensions. I write Python using pandas for plotting graphs now. I wonder if there's a good graphing solution for js. I don't remember how I crafted R graphs anymore.

12 March, 2018 07:54AM by Junichi Uekawa

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, February 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and worked 13 hours. I will carry over 2 hours to March.

I made another release on the Linux 3.2 longterm stable branch (3.2.99) and started the review cycle for the next update (3.2.100). I rebased the Debian package onto 3.2.99 but didn't upload an update to Debian this month.

I also discussed the possibilities for cooperation between Debian LTS and CIP, briefly reviewed leptonlib for additional security issues, and updated the wiki page about the status of Spectre and Meltdown in Debian.

12 March, 2018 12:51AM

March 11, 2018

Elena Gjevukaj

CoderGals Hackathon

The CoderGals Hackathon was organized for the first time in my country. This event took place in the beautiful city of Prizren. The hackathon, which ran for 24 to 48 hours, was an idea that started with two girls majoring in Computer Science, Qendresa and Albiona Hoti.

Thanks to them, we had the chance to work on exciting projects as well as be mentored by key tech people including: Mergim Cahani, Daniel Pocock, Taulant Mehmeti, Mergim Krasniqi, Kolos Pukaj, Bujar Dervishaj, Arta Shehu Zaimi and Edon Bajrami.

We brainstormed for about 3-4 hours to decide on a project. We discussed many ideas that ranged from the Doppler effect to GUI interfaces for phone calls. Finally we ended up making a project that links the PC with your phone, so you don't need to use both when you want to add a contact, make a call or even send text messages. We called it the Phone Client project.

You can check our work online:

Phone Client

It was a challenge for us because it was our first time working on Debian.

Projects that other girls worked on:

11 March, 2018 09:04AM by Elena Gjevukaj (

hackergotchi for Vasudev Kamath

Vasudev Kamath

Biboumi - An XMPP to IRC Gateway

IRC is a communication mode (technically, a communication protocol) used by many Free Software projects for communication and collaboration. It has been serving these projects well even 30 years after its inception. Though I'm pretty much okay with IRC, I had the problem of not being able to use it from my mobile phone: the main issue is the inconsistent network connection, whereas IRC needs to be always connected. This is where I came across Biboumi.

Biboumi by itself does not have anything to do with mobile phones; it's just a gateway which allows you to connect to an IRC channel as if it were an XMPP MUC room, from any XMPP client. The benefit of this is that it allows you to enjoy some XMPP features in your IRC channel (not all, but those which can be mapped).

I run Biboumi with my ejabberd instance, and thereby I can now connect to some of the Debian IRC channels directly from my phone, using the Conversations XMPP client for Android.

Biboumi is packaged for Debian. Though I'm a co-maintainer of the package, most of the hard work is done by Jonas Smedegaard in keeping the package in shape. It is also available in stretch-backports (though slightly outdated, as it's not packaged by us for backports). Once you install the package, copy the example configuration file from /usr/share/doc/biboumi/examples/example.conf to /etc/biboumi/biboumi.cfg and modify the values as needed. Below is my sample file, with the password redacted.
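A minimal sketch along those lines, with placeholder values reconstructed from the description in this post (the port matches the ejabberd service definition below, and the hostname is a local name, as discussed further down):

hostname=biboumi.localhost
password=xxx
xmpp_server_ip=127.0.0.1
port=8888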


An explanation of all the keys and values in the configuration file is available in the man page (man biboumi).

Biboumi is configured as an external component of the XMPP server. In my case I'm using ejabberd to host my XMPP service. Below is the configuration needed to allow biboumi to connect to ejabberd.

listen:
  -
    port: 8888
    ip: ""
    module: ejabberd_service
    access: all
    password: xxx

The password field in the biboumi configuration should match the password value in your XMPP server configuration.

After doing the above configuration, reload ejabberd (or your XMPP server) and start biboumi. The biboumi package provides a systemd service file, so you might need to enable it first. That's it: now you have an XMPP-to-IRC gateway ready.
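For example, something like the following should do, assuming the shipped unit is simply named biboumi:

sudo ejabberdctl reload_config       # pick up the new service definition
sudo systemctl enable --now biboumi  # enable and start the gateway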

You might notice that I'm using a local host name for the hostname key, as well as for the ip field in the ejabberd configuration. This is because TLS support was added to the biboumi Debian package only after the 7.2 release, as botan 2.x was not available in Debian until that point. Hence, using a proper domain name and making biboumi listen publicly is not safe, at least prior to Debian package version 7.2-2. Also, making the biboumi service public means you will need to handle spam bots trying to connect through your service to IRC, which might get your VPS banned from IRC.

Connection Semantics

Once biboumi is configured and running, you can use the XMPP client of your choice (Gajim, Conversations etc.) to connect to IRC. To connect to OFTC from your XMPP client, you need to use an address of the form shown in the sketch below in the Group Chat section.

Replace the part after the @ with what you have configured in the hostname field of the biboumi configuration. To join a specific channel on an IRC server, you join the group conversation using the channel form of the address.

If your nickname is registered and you want to identify yourself to the IRC server, you can do that by joining a group conversation with NickServ, using the NickServ form of the address.

Once connected, you can send NickServ commands directly in this virtual channel, like identify password nick. It is also possible to configure XMPP clients like Gajim to send ad-hoc commands on connection to a particular IRC server to identify yourself automatically, but this part I did not get working in Gajim.
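As a sketch of biboumi's addressing scheme, with biboumi.localhost standing in for whatever you set as hostname (the NickServ form in particular is my reading of the documentation, so double-check it against man biboumi):

irc.oftc.net@biboumi.localhost             # the IRC server itself
#debian%irc.oftc.net@biboumi.localhost     # the #debian channel on OFTC
nickserv%irc.oftc.net@biboumi.localhost    # a private channel with NickServ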

If you are running your own XMPP server, then biboumi gives you the best way to connect to IRC from your mobile phone. And with applications like Conversations, running an XMPP client won't be hard on your phone's battery.


11 March, 2018 05:19AM by copyninja

March 10, 2018

Jeremy Bicha

webkitgtk in Debian Stretch: Report Card

webkitgtk is the GTK+ port of WebKit. webkitgtk provides web functionality for many things including GNOME Online Accounts’ login panels; Evolution’s HTML email editor and viewer; and the engine for the Epiphany web browser (also known as GNOME Web).

Last year, I announced here that Debian 9 “Stretch” included the latest version of webkitgtk (Debian’s package is named webkit2gtk). At the time, I hoped that Debian 9 would get periodic security and bugfix updates. Nine months later, let’s see how we’ve been doing.

Release History

Debian 9.0, released June 17, 2017, included webkit2gtk 2.16.3 (up to date).

Debian 9.1 was released July 22, 2017 with no webkit2gtk update (2.16.5 was the current release at the time).

Debian 9.2, released October 8, 2017, included 2.16.6 (There was a 2.18.0 release available then but for the first stable update, we kept it simple by not taking the brand new series.)

Debian 9.3 was released December 9, 2017 with no webkit2gtk update (2.18.3 was the current release at the time).

Debian 9.4, released March 10, 2018 (today!), includes 2.18.6 (up to date).

Release Schedule

webkitgtk development follows the GNOME release schedule and produces new major updates every March and September. Only the current stable series is supported (although sometimes there can be a short overlap; 2.14.6 was released at the same time as 2.16.1). Distros need to adopt the new series every six months.

Like GNOME, webkitgtk uses even numbers for stable releases (2.16 is a stable series, 2.16.3 is a point release in that series, but 2.17.3 is a development release leading up to 2.18, the next stable series).

There are webkitgtk bugfix releases, approximately monthly. Debian stable point releases happen approximately every two or three months (the first point release was quicker).

In a few days, webkitgtk 2.20 will be released. Debian 9.5 will need to include 2.20.1 (or 2.20.2) to keep users on a supported release.

Report Card

Of the five Debian 9 releases, we have been up to date in two or three of them (depending on how you count the 9.2 release).

Using a letter grade scale, I think I'd give Debian a B or B- so far. But this is significantly better than Debian 8, which offered no webkitgtk updates at all except through backports. In my grading, Debian could get an A- if we consistently updated webkitgtk in these point releases.

To get a full A, I think Debian would need to push the new webkitgtk updates (after a brief delay for regression testing) directly as security updates without waiting for point releases. Although that proposal has been rejected for Debian 9, I think it is reasonable for Debian 10 to use this model.

If you are a Debian Developer or Maintainer and would like to help with webkitgtk updates, please get in touch with Berto or me. I, um, actually don’t even run Debian (except briefly in virtual machines for testing), so I’d really like to turn over this responsibility to someone else in Debian.


I find the Repology webkitgtk tracker to be fascinating. For one thing, I find it humorous how the same package can have so many different names in different distros.

10 March, 2018 05:25PM by Jeremy Bicha

Andrew Shadura

Say no to Slack, say yes to Matrix

Of all proprietary chat systems, Slack has always seemed one of the worst to me. Not only is it a closed proprietary system with no sane clients, open source or not, but it is not just one walled garden, as Facebook or WhatsApp are, but a constellation of walled gardens, isolated from each other. To be able to participate in multiple Slack communities, the user has to create multiple accounts and keep multiple chat windows open all the time. Federation? Self-hosting? Owning your data? None of those are a thing in Slack. Until recently, it was at least possible to keep the logs of all conversations locally by connecting to the chat using IRC or XMPP, if the gateway was enabled.

Now, with Slack shutting down the gateways, not only can you not keep the logs on your computer, you also cannot use a client of your choice to connect to Slack. They have also begun changing the bots API, which was likely the reason the Matrix-to-Slack gateway didn't work properly at times. That issue has since resolved itself, but Slack doesn't give any guarantees the gateway will continue working, and obviously they aren't really interested in keeping it working.

So, following Gunnar Wolf’s advice (consider also reading this article by Megan Squire), I recommend you stop using Slack. If you prefer an isolated chat system with features Slack provides, and you can self-host, consider MatterMost or Rocket.Chat. Both seem to provide more or less the same features as Slack, but don’t lock you in, and you can choose to either use their paid cloud offering, or run it on your own server. We’ve been using MatterMost at Collabora since July last year, and while it’s not perfect, it’s not a bad piece of software.

If you would prefer a system you can federate, you may be interested in having a look at Matrix. Matrix is an open decentralised protocol and ecosystem, which architecturally looks similar to XMPP, but uses different technologies and offers a richer and more modern baseline, including VoIP, end-to-end encryption, decentralised history and content storage, easy bot integration and more. The web client for Matrix, Riot, is comparable to Slack, but unlike with Slack, there are more clients you can use, including Weechat, libpurple, a bunch of Qt-based clients and, importantly, Riot for Android and iOS.

You don’t have to self-host a Matrix homeserver, since runs one you can use, but it’s quite easy to run one if you decide to, and you don’t even have to migrate your existing chats — you just join them from accounts on your own homeserver, and that’s it!

To help you with the decision to move from Slack to Matrix, you should know that since Matrix has a Slack gateway, you can gradually migrate your colleagues to the new infrastructure, by joining the Slack and Matrix chats together, and dropping the gateway only when everyone moves from Slack.

Repeating Gunnar, say no to predatory tactics. Say no to Embrace, Extend and Extinguish. Say no to Slack.

10 March, 2018 01:50PM

Michael Stapelberg

dput usability changes

dput-ng ≥ 1.16 contains two usability changes which make uploading easier:

  1. When no arguments are specified, dput-ng auto-selects the most recent .changes file (with confirmation).
  2. Instead of erroring out when detecting an unsigned .changes file, debsign(1) is invoked to sign the .changes file before proceeding.

With these changes, after building a package, you just need to type dput (in the correct directory of course) to sign and upload it.

10 March, 2018 09:00AM

hackergotchi for Gunnar Wolf

Gunnar Wolf

On the demise of Slack's IRC / XMPP gateways

I have grudgingly joined three Slack workspaces, due to being part of projects that use it as a communications center for their participants. Why grudgingly? Because there is very little that it adds to the well-established communications standards that we have had for long years, decades even.

On this topic, I must refer you to the talk and article presented by Megan Squire, one of the clear highlights of my participation last year at the 13th International Conference on Open Source Systems (OSS2017): «Considering the Use of Walled Gardens for FLOSS Project Communication». Please do have a good read of this article.

Thing is, after several years of playing open with probably the best integration gateway I have seen, Slack is joining the Embrace, Extend and Extinguish-minded companies. Of course, I strongly doubt they will manage to extinguish XMPP or IRC, but they want to strengthen the walls around their walled garden...

So, once they have established their presence among companies and developer groups alike, Slack is shutting down their gateways to XMPP and IRC, arguing it's impossible to achieve feature-parity via the gateway.

Of course, I guess all of us recognize and understand there has long not been feature parity. But that's a feature, not a bug! I expressly dislike the abuse of emojis and images inside what's supposed to be a work-enabling medium. Of course, connecting to Slack via IRC, I just don't see the content not meant for me.

The real motivation is they want to control the full user experience.

Well, they have lost me as a user. The day my IRC client fails to connect to Slack, I will delete my user account. They already have a record of all of my interactions using their system. Maybe I won't be able to move any of the groups I am part of away from Slack – but many of us can help create a flood.

Say no to predatory tactics. Say no to Embrace, Extend and Extinguish. Say no to Slack.

10 March, 2018 01:23AM by gwolf

March 09, 2018

hackergotchi for Adnan Hodzic

Adnan Hodzic

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

09 March, 2018 10:08PM by ahodzic

Sven Hoexter

half-assed Oracle JRE/JDK 10 support for java-package

I spent an hour adding very basic support for the upcoming Java 10 to my fork of java-package. It still has some rough edges, and the list of binary executables managed via the alternatives system requires some major cleanup. I think once Java 8 is EOL in September, that's a good point to consolidate and strip everything except Java 11 support. If someone requires an older release, they can still go back to an earlier version, but by then we won't see any new releases of Java 8, 9 or 10, not to speak of even older stuff.

[sven@digital lib (master)]$ java -version
java version "10" 2018-03-20
Java(TM) SE Runtime Environment 18.3 (build 10+46)
Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10+46, mixed mode)
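For reference, the usual java-package workflow is to feed the upstream tarball to make-jpkg and install the resulting Debian package; a hypothetical Java 10 run might look like this (file and package names are illustrative):

make-jpkg jdk-10_linux-x64_bin.tar.gz
sudo dpkg -i oracle-java10-jdk_10_amd64.deb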

09 March, 2018 06:22PM

hackergotchi for Olivier Berger

Olivier Berger

Adding a reminder notification in XFCE systray that I should launch a backup script

I’ve started using borg and borgmatic for backups of my machines. I won’t be using fully automated backups via a crontab for a start. Instead, I’ve added a recurrent reminder system that appears on my XFCE desktop to tell me it may be time to do backups.

I’m using yad (a zenity on steroids) to add notifications to the desktop, triggered via anacron.

The notification icon, when clicked, will start a shell script that performs the backups, starting borgmatic.

Here are some bits of my setup :

crontab -l excerpt:

@hourly /usr/sbin/anacron -s -t $HOME/.anacron/etc/anacrontab -S $HOME/.anacron/spool

~/.anacron/etc/anacrontab excerpt:

7 15      borgmatic-home  /home/olivier/bin/

The idea of this anacrontab is to remind me weekly that I should do a backup, 15 minutes after I’ve booted the machine. Another reminding mechanism may be more handy… time will tell.

Then, the script :

#!/bin/bash

notify-send 'Borg backups at home!' "It's time to do a backup." --icon=document-save

# borrowed from

# create a FIFO file, used to manage the I/O redirection from shell
PIPE=$(mktemp -u --tmpdir ${0##*/}.XXXXXXXX)
mkfifo $PIPE

# attach a file descriptor to the file
exec 3<> $PIPE

# add handler to manage process shutdown
function on_exit() {
 echo "quit" >&3
 rm -f $PIPE
}
trap on_exit EXIT

# add handler for tray icon left click
function on_click() {
 # echo "pid: $YAD_PID"
 echo "icon:document-save" >/proc/$YAD_PID/fd/3
 echo "visible:blink" >/proc/$YAD_PID/fd/3
 xterm -e bash -c "/home/olivier/bin/ --verbosity 1 -c /home/olivier/borgmatic/home-config.yaml; read -p 'Press any key ...'"
 echo "quit" >/proc/$YAD_PID/fd/3
 # kill -INT $YAD_PID
}
export -f on_click

# create the notification icon
yad --notification \
 --listen \
 --image="appointment-soon" \
 --text="Click icon to start borgmatic backup at home" \
 --command="bash -c on_click $YAD_PID" <&3

The script will start yad so that it displays an icon in the systray. When the icon is clicked, it will start borgmatic, after having changed the icon. Borgmatic will be started inside an xterm so as to get passphrase input, and display messages. Once borgmatic is done backing up, yad will be terminated.

There may be a more elegant way to pass commands to yad listening on file descriptor 3/the pipe, but I couldn’t figure one out, hence the /proc hack. This works on Linux, but I’m not sure about other Unices.

Hope this helps.

09 March, 2018 02:10PM by Olivier Berger

hackergotchi for Steve Kemp

Steve Kemp

A change of direction ..

In my previous post I talked about how our child-care works here in wintery Finland, and suggested there might be a change in the near future.

So here is the predictable update: I've resigned from my job and I'm going to be taking over childcare/daycare. Ideally this will last indefinitely, but it is definitely going to continue until November. (Which is the earliest any child could be moved into public day-care if there are problems.)

I've loved my job, twice, but even though it makes me happy (in a way that several other positions didn't) there is no comparison. Child-care makes me happier-still. Sure there are days when your child just wants to scream, refuse to eat, and nothing works. But on average everything is awesome.

It's a hard decision, a "brave" decision too apparently (which I read negatively!), but also an easy one to make.

It'll be hard. I'll have no free time from 7AM-5PM, except during nap-time (11AM-1PM, give or take). But it will be worth it.

And who knows, maybe I'll even get to rant at people who ask "Where's his mother?" I live for those moments. Truly.

09 March, 2018 11:00AM

hackergotchi for Christoph Berg

Christoph Berg

Cool Unix Features: paste

paste is one of those tools nobody uses [1]. It puts two files side by side, line by line.
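A minimal illustration (paste joins the lines with tabs by default):

$ printf 'a\nb\n' > left; printf '1\n2\n' > right
$ paste left right
a	1
b	2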

One application for this came up today, where some tool was called for several files at once and would spit out one line per file, but unfortunately not including the filename.

$ paste <(ls *.rpm) <(ls *.rpm | xargs -r rpm -q --queryformat '%{name} \n' -p)

[1] See "J" in The ABCs of Unix

[PS: I meant to blog this in 2011, but apparently never committed the file...]

09 March, 2018 09:06AM

Stepping down as DAM

After quite some time (years actually) of inactivity as Debian Account Manager, I finally decided to give back that Debian hat. I'm stepping down as DAM. I will still be around for the occasional comment from the peanut gallery, or to provide input if anyone actually cares to ask me about the old times.

Thanks for the fish!

09 March, 2018 08:58AM

hackergotchi for Joey Hess

Joey Hess

prove you are not an Evil corporate person

In which Google be Google and I drop a hot AGPL tip.


Google Is Quietly Providing AI Technology for Drone Strike Targeting Project
Google Is Helping the Pentagon Build AI for Drones

to automate the identification and classification of images taken by drones — cars, buildings, people — providing analysts with increased ability to make informed decisions on the battlefield

These news reports don't mention reCaptcha explicitly, but it's been asking about a lot of cars lately. Whatever the source of the data that Google is using for this, it's disgusting that they're mining it from us without our knowledge or consent.

Google claims that "The technology flags images for human review, and is for non-offensive uses only". So, if a drone operator has a neural network that we all were tricked & coerced into training to identify cars and people helping to highlight them on their screen and center the crosshairs just right, and the neural network is not pressing the kill switch, is it being used for "non-offensive purposes only"?

Google is known to be deathly allergic to the AGPL license. Not only on servers; they don't even allow employees to use AGPL software on workstations. If you write free software, and you'd prefer that Google not use it, a good way to ensure that is to license it under the AGPL.

I normally try to respect the privacy of users of my software, and of personal conversations. But at this point, I feel that Google's behavior has mostly obviated those moral obligations. So...

Now seems like a good time to mention that I have been contacted by multiple people at Google about several of my AGPL licensed projects (git-annex and either keysafe or debug-me I can't remember which) trying to get me to switch them to the GPL, and had long conversations with them about it.

Google has some legal advice that the AGPL source provision triggers much more often than it's commonly understood to. I encouraged them to make that legal reasoning public, so the community could address/debunk it, but I don't think they have. I won't go into details about it here, other than it seemed pretty bonkers.

Mixing in some AGPL code with an otherwise GPL codebase also seems sufficient to trigger Google's allergy. In the case of git-annex, it's possible to build all releases (until next month's) with a flag that prevents linking with any AGPL code, which should mean the resulting binary is GPL licensed, but Google still didn't feel able to use it, since the git-annex source tree includes AGPL files.

I don't know if Google's allergy to the AGPL extends to software used for drone murder applications, but in any case I look forward to preventing Google from using more of my software in the future.

(Illustration by scatter//gather)

09 March, 2018 05:24AM

Russ Allbery

My friend Stirge

Eric Sturgeon, one of my oldest and dearest friends, died this week of complications from what I'm fairly certain was non-alcoholic fatty liver disease.

It was not entirely unexpected. He'd been getting progressively worse over the past six months. But at the same time there's no way to expect this sort of hole in my life.

I've known Stirge for twenty-five years, more than half of my life. We were both in college when we first met on Usenet in 1993 in the rec.arts.comics.* hierarchy, where Stirge was the one with the insane pull list and the canonical knowledge of the Marvel Universe. We have been friends ever since: part of on-line fiction groups, IRC channels, and free-form role-playing groups. He's been my friend through school and graduation, through every step of my career, through four generations of console systems, through two moves for me and maybe a dozen for him, through a difficult job change... through my entire adult life.

For more than fifteen years, he's been spending a day or a week or two, several times a year, sitting on my couch and playing video games. Usually he played and I navigated, researching FAQs and walkthroughs. Twitch was immediately obvious to me the moment I discovered it existed; it's the experience I'd had with Stirge for years before that. I don't know what video games are without his thoughts on them.

Stirge rarely was able to put his ideas into stories he could share with other people. He admired other people's art deeply, but wasn't an artist himself. But he loved fictional worlds, loved their depth and complexity and lore, and was deeply and passionately creative. He knew the stories he played and read and watched, and he knew the characters he played, particularly in World of Warcraft and Star Wars: The Old Republic. His characters had depth and emotions, histories, independent viewpoints, and stories that I got to hear. Stirge wrote stories the way that I do: in our heads, shared with a small number of people if anyone, not crafted for external consumption, not polished, not always coherent, but deeply important to our thoughts and our emotions and our lives. He's one of the very few people in the world I was able to share that with, who understood what that was like.

He was the friend who I could not see for six months, a year, and then pick up a conversation with as if we'd seen each other yesterday.

After my dad had a heart attack and emergency surgery to embed a pacemaker while we were on vacation in Oregon, I was worrying about how we would manage to get him back home. Stirge immediately volunteered to drive down from Seattle to drive us. He had a crappy job with no vacation, and if he'd done that he almost certainly would have gotten fired, and I knew with absolute certainty that he would have done it anyway.

I didn't take him up on the offer (probably to his vast relief). When I told him years later how much it meant to me, he didn't feel like it should have counted, since he didn't do anything. But he did. In one of the worst moments of my life, he said exactly the right thing to make me feel like I wasn't alone, that I wasn't bearing the burden of figuring everything out by myself, that I could call on help if I needed it. To this day I start crying every time I think about it. It's one of the best things that anyone has ever done for me.

Stirge confided in me, the last time he visited me, that he didn't think he was the sort of person anyone thought about when he wasn't around. That people might enjoy him well enough when he was there, but that he'd quickly fade from memory, with perhaps a vague wonder about what happened to that guy. But it wasn't true, not for me, not ever. I tried to convince him of that while he was alive, and I'm so very glad that I did.

The last time I talked to him, he explained the Marvel Cinematic Universe to me in detail, and gave me a rundown of the relative strength of every movie, the ones to watch and the ones that weren't as good, and then did the same thing for the DC movies. He got to see Star Wars before he died. He would have loved Black Panther.

There were so many games we never finished, and so many games we never started.

I will miss you, my friend. More than I think you would ever have believed.

09 March, 2018 02:52AM

hackergotchi for Daniel Pocock

Daniel Pocock

Bug Squashing and Diversity

Over the weekend, I was fortunate enough to visit Tirana again for their first Debian Bug Squashing Party.

Every time I go there, female developers (this is a hotspot of diversity) ask me if they can host the next Mini DebConf for Women. There have already been two of these very successful events, in Barcelona and Bucharest. It is not my decision to make though: anybody can host a MiniDebConf of any kind, anywhere, at any time. I've encouraged the women in Tirana to reach out to some of the previous speakers personally to scope potential dates and contact the DPL directly about funding for necessary expenses like travel.

The confession

If you have read Elena's blog post today, you might have seen my name and picture and assumed that I did a lot of the work. As it is International Women's Day, it seems like an opportune time to admit that isn't true and that as in many of the events in the Balkans, the bulk of the work was done by women. In fact, I only bought my ticket to go there at the last minute.

When I arrived, Izabela Bakollari and Anisa Kuci were already at the venue getting everything ready. They looked busy, so I asked them if they would like a bonus responsibility: presenting some slides about bug squashing that they had never seen before, while translating them into Albanian in real time. They delivered the presentation superbly; it was more entertaining than any TED talk I've ever seen.

The bugs that won't let you sleep

The event was boosted by a large contingent of Kosovans, including 15 more women. They had all pried themselves out of bed at 03:00 am to take the first bus to Tirana. It's rare to see such enthusiasm for bugs amongst developers anywhere but it was no surprise to me: most of them had been at the hackathon for girls in Prizren last year, where many of them encountered free software development processes for the first time, working long hours throughout the weekend in the summer heat.

and a celebrity guest

A major highlight of the event was the presence of Jona Azizaj, a Fedora contributor who is very proactive in supporting all the communities who engage with people in the Balkans, including all the recent Debian events there. Jona is one of the finalists for Red Hat's Women in Open Source Award. Jona was a virtual speaker at DebConf17 last year, helping me demonstrate a call from the Fedora community WebRTC service to the Debian equivalent. At Mini DebConf Prishtina, where fifty percent of talks were delivered by women, I invited Jona on stage and challenged her to contemplate being a speaker at Red Hat Summit. Giving a talk there seemed like little more than a pipe dream just a few months ago in Prishtina; as a finalist for this prestigious award, her odds have shortened dramatically. It is so inspiring that a collaboration between free software communities helps build such fantastic leaders.

With results like this in the Balkans, you may think the diversity problem has been solved there. In reality, while the ratio of female participants may be more natural, they still face problems that are familiar to women anywhere.

One of the greatest highlights of my own visits to the region has been listening to some of the challenges these women have faced, things that I never encountered or even imagined as the stereotypical privileged white male. Yet despite enormous social, cultural and economic differences, while I was sitting in the heat of the summer in Prizren last year, it was not unlike my own time as a student in Australia and the enthusiasm and motivation of these young women discovering new technologies was just as familiar to me as the climate.

Hopefully more people will be able to listen to what they have to say if Jona wins the Red Hat award or if a Mini DebConf for Women goes ahead in the Balkans (subscribe before posting).

09 March, 2018 12:39AM by Daniel.Pocock

March 08, 2018

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in February 2018

Here is my monthly report covering what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Last month I wrote about „The state of Debian Games“ and I was pleasantly surprised that someone apparently read my post and offered some help with saving endangered games. Well, I don’t know how it will turn out but at least it is encouraging to see that there are people who still care about some old fashioned games. As a matter of fact the GNOME maintainers would like to remove some obsolete GNOME 2 libraries which makes a few of our games RC-buggy. Ideally they should be ported to GNOME 3 but if they could be replaced with a similar game written in a different and awesome programming language (such as Java or Clojure?), for a different desktop environment, that would do as well. 😉 If you’re bored to death or just want a challenge contact us at
  • I packaged a new release of mupen64plus-qt to fix a FTBFS bug (#887576)
  • I uploaded a new version of freeciv to stretch-backports.
  • Pygame-sdl2 and renpy got some love too. (new upstream releases)
  • I sponsored a new revision of redeclipse for Martin-Erik Werner to fix #887744.
  • Yangfl introduced ddnet to Debian which is a popular modification/standalone game similar to teeworlds. I reviewed and eventually sponsored a new upstream release for him. If you are into multiplayer games then ddnet is certainly something you should look forward to.
  • I gladly applied another patch by Peter Green to fix #889059 in warzone2100 and Aurelien Jarno’s fix for btanks (#890632).

Debian Java

  • The Eclipse problem: The Eclipse IDE is seriously threatened with removal from Debian. Once upon a time we even had a dedicated team that cared about the package, but nowadays there is nobody. We regularly get requests to update the IDE to the latest version, but there is no one who wants to do the necessary work. The situation is best described in #681726. This alone is worrying enough, but due to an interesting dependency chain (batik -> maven -> guice -> libspring-java -> aspectj -> eclipse-platform) Eclipse cannot be removed without breaking dozens of other Java packages. So, long story short, I started to work on it and packaged a standalone libequinox-osgi-java package, so that we can at least save all reverse-dependencies of this package. Next was tycho, which is required to build newer Eclipse versions. Annoyingly, it requires said newer version of Eclipse to build…which means we must bootstrap it. I’m still in the process of upgrading tycho to version 1.0 and hope to make some progress in March.
  • I prepared security updates for jackson-databind, lucene-solr and tomcat-native.
  • New upstream releases: jboss-xnio, commons-parent, jboss-logging, jboss-module, mongo-java-driver and libspring-java (#890001).
  • Bug fixes and triaging: wagon2 (#881815, #889427), byte-buddy (#884207), commons-io, maven-archiver (#886875), jdeb (#889642), commons-math, jflex (#890345), commons-httpclient (#871142)
  • I introduced jboss-bridger which is a new build-dependency of jboss-modules.
  • I sponsored a freeplane update for Felix Natter.

Debian LTS

This was my twenty-fourth month as a paid contributor and I have been paid to work 23,75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 05.02.2018 until 11.02.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in binutils, graphicsmagick, wayland, unzip, kde-runtime, libjboss-remoting-java, libvirt, exim4, libspring-java, puppet, audacity, leptonlib, librsvg, suricata, exiv2, polarssl and imagemagick.
  • I tested a security update for exim4 and uploaded a package for Abhijith.
  • DLA-1275-1. Issued a security update for uwsgi fixing 1 CVE.
  • DLA-1276-1. Issued a security update for tomcat-native fixing 1 CVE.
  • DLA-1280-1. Issued a security update for pound fixing 1 CVE.
  • DLA-1281-1. Issued a security update for advancecomp fixing 1 CVE.
  • DLA-1295-1. Issued a security update for drupal7 fixing 4 CVE.
  • DLA-1296-1. Issued a security update for xmltooling fixing 1 CVE.
  • DLA-1301-1. Issued a security update for tomcat7 fixing 2 CVE.


  • I NMUed vdk2 (#885760) to prevent the removal of langdrill.

Thanks for reading and see you next time.

08 March, 2018 07:56PM by Apo