October 07, 2015


Norbert Preining

Kobo DRM removal on Linux

Finally it has been done – to remove the DRM from a Kobo eBook, one no longer needs to switch to Windows or Mac, deal with the brain-dead Kobo Desktop app, and so on. The only thing you need is a Kobo device! DRM is evil, we know that; please read up on DRM protection here.


I wrote about the Pain of DRM some time ago, and after two years of fighting it is finally here – the latest release of Apprentice Alf's DeDRM tools, starting with version 6.3.4a, makes the Obok plugin usable on Linux, too.

Just get the latest tools, unzip them, install obok_plugin.zip from the Obok_calibre_plugin directory, and restart Calibre. After that, connect your device and press the Obok button, and there we go: you can back up your purchased eBooks on your computer. Of course the same is true on Windows and Mac, so if you have a device, there is no need for Kobo Desktop anymore.
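If you prefer the command line, the plugin can also be added with Calibre's calibre-customize tool. The archive and directory names below follow the 6.3.4a release layout described above, so adjust them to the zip you actually downloaded:

unzip DeDRM_tools_6.3.4a.zip
calibre-customize --add-plugin Obok_calibre_plugin/obok_plugin.zip

Then restart Calibre as usual.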

Just one warning: don't use it for pirating ebooks. The sole purpose of these tools is to back up properly purchased ebooks and make them available on different devices (such as a Linux computer).

07 October, 2015 04:20AM by Norbert Preining


Dirk Eddelbuettel

RcppRedis 0.1.6

An incremental RcppRedis release arrived on CRAN yesterday. Russell Pierce contributed four new functions for pushing and popping to/from lists, and added authentication support. I added a simple ping command as well.

Changes in version 0.1.6 (2015-10-05)

  • Added support (including new unit tests) for lpop, rpop, lpush, rpush as well as auth via augmented constructor (all thanks to PRs #11 and #13 by Russell Pierce)

  • Added ping command and unit test

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppRedis page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

07 October, 2015 03:21AM

October 06, 2015

John Goerzen

Objects On Earth Are Closer Than They Appear

“We all live beneath the great Big Dipper.”

So goes a line in a song I once heard the great Tony Brown sing. As I near the completion of my private pilot’s training, I’ve had more and more opportunities to literally see the wisdom in those words. Here’s a story of one of them.


“A shining beacon in space — all alone in the night.”

– Babylon 5

A night cross-country flight, my first, taking off from a country airport. The plane lifts into the dark sky. The bright white lights of the runway get smaller, and disappear as I pass the edge of the airport. Directly below me, it looks like a dark sky; pitch black except for little pinpoints of light at farmhouses and the occasional car. But seconds later, an expanse of light unfolds, from a city it takes nearly an hour to reach by car. Already it is in sight, and as I look off to other directions, other cities even farther away are visible, too. The ground shows a square grid, the streets of the city visible for miles.

There are no highway signs in the sky. There are no wheels to keep my plane pointed straight. Even if I point the plane due south, if there is an east wind, I will actually be flying southwest. I use my eyes, enhanced by technology like a compass, GPS, and VHF radio beacons, to find my way. Before ever getting into the airplane, I have carefully planned my route, selecting both visual and technological waypoints along the way to provide many ways to ensure I am on course and make sure I don’t get lost.

Soon I see a flash repeating every few seconds in the distance — an airport beacon. Then another, and another. Little pinpoints of light nestled in the square orange grid. Wichita has many airports, each with its beacon, and one of them will be my first visual checkpoint of the night. I make a few clicks in the cockpit, and soon the radio-controlled lights at one of the airports spring to life, illuminating my first checkpoint. More than a mile of white lights there to welcome any plane that lands, and to show a point on the path of any plane that passes.

I continue my flight, sometimes turning on lights at airports, other times pointing my plane at lights from antenna towers (that are thousands of feet below me), sometimes keeping a tiny needle on my panel centered on a radio beacon. I land at a tiny, deserted airport, and then a few minutes later at a large commercial airport.

On my way back home, I fly solely by reference to the ground — directly over a freeway. I have other tools at my disposal, but don’t need them; the steady stream of red and white lights beneath me are all I need.

From my plane, there is just red and white. One after another, passing beneath me as I fly over them at 115 MPH. There is no citizen or undocumented immigrant, no rich or poor, no atheist or Christian or Muslim, no Democrat or Republican, no American or Mexican, no adult or child, no Porsche or Kia. Just red and white points of light, each one the same as the one before and the one after, stretching as far as I can see into the distance. All alike in the night.

You only need to get a hundred feet off the ground before you realize how little state lines, national borders, and the machinery of politics and exclusion really mean. From the sky, the difference between a field of corn and a field of wheat is far more significant than the difference between Kansas and Missouri.

This should be a comforting reminder to us. We are all unique, and beautiful in our uniqueness, but we are all human, each as valuable as the next.

Up in the sky, even though my instructor was with me, during quiet times it is easy to feel all alone in the night. But I know it is not the case. Only a few thousand feet separate my plane from those cars. My plane, too, has red and white lights.

How often at night, when the heavens were bright,
With the light of the twinkling stars
Have I stood here amazed, and asked as I gazed,
If their glory exceed that of ours.

– John A. Lomax

06 October, 2015 11:39PM by John Goerzen


Ben Armstrong

Cranberry Lake and nearby bog – Fall, 2015

On one of my regular walks with a friend, we decided today to walk part of the BLT Trail to Cranberry Lake and the bog just past it, an easy 5 km round trip.

On the trail to the lake, golds dominate
A calm day, the lake like glass
In the bog, copper and golden hues
On the margins of the bog, brilliant orange and red
The reds, dark greens and dead trees in counterpoint
At our turning point, my cranberry patch provided a puckery snack

06 October, 2015 10:23PM by Ben Armstrong

Thorsten Alteholz

My Debian Activities in September 2015

FTP assistant

Another month has passed and another statistic arrives: this month I marked 341 packages for accept and rejected only 48 of them. Much like last month, I had to send 14 emails to maintainers.

Squeeze LTS

This was my fifteenth month of doing some work for the Squeeze LTS initiative, started by Raphael Hertzog at Freexian.

This month I only got a workload of 14.5h. I finally uploaded a new version of php5. Unfortunately, in one library a parameter of a function call gained new possible values. As a result, all running processes that still used the old version of that library produced an error message until they were restarted. As complaints showed up on all channels, I rechecked my patches again and again but could not find an error. I wonder whether this has happened before. At least the php package does not have a mechanism to restart anything…
Altogether I uploaded those DLAs:

  • [DLA 307-1] php5 security update
  • [DLA 309-1] openldap security update
  • [DLA 311-1] rpcbind security update
  • [DLA 312-1] libtorrent-rasterbar security update

I also started to work on an upload of freeimage and the next upload of php5.

This month I also had another term of doing frontdesk work. So I answered questions on the IRC channel and looked for CVEs that are important for Squeeze LTS or could be ignored.

Other stuff

Some time ago someone mentioned pump.io and that it would be nice to have it in Debian. I found a Wiki page listing dependencies, with lots of stuff already done and just a few holes. It didn't look like much work to do until I realized that this page showed only the surface and that the shoals are hidden below. Anyway, I started to work on it, and up to now

  • node-boolbase
  • node-domelementtype
  • node-eventsource
  • node-querystringify
  • node-rai
  • node-requires-port
  • node-url-parse
  • node-wrappy
  • node-xoauth2

are uploaded and

  • node-schlock
  • node-array-parallel
  • node-css-what
  • node-bufferjs
  • node-exit

are still in NEW. Luckily most of them could be handled by npm2deb, so it was mainly a routine piece of work. So, expect more to come…

I also polished some smaller packages and could even close some bugs:

  • dict-elements
  • rplay -> #741567 #597152
  • setserial -> #786976 #761951 #761951
  • siggen -> #772364
  • texify

06 October, 2015 08:45PM by alteholz


Matthew Garrett

Going my own way

Reaction to Sarah's post about leaving the kernel community was a mixture of terrible and touching, but it's still one of those things that almost certainly won't end up making any kind of significant difference. Linus has made it pretty clear that he's fine with the way he behaves, and nobody's going to depose him. That's unfortunate, because earlier today I was sitting in a presentation at Linuxcon and remembering how much I love the technical side of kernel development. "Remembering" is a deliberate choice of word - it's been increasingly difficult to remember that, because instead I remember having to deal with interminable arguments over the naming of an interface because Linus has an undying hatred of BSD securelevel, or having my name forever associated with the deepthroating of Microsoft because Linus couldn't be bothered asking questions about the reasoning behind a design before trashing it.

In the end it's a mixture of just being tired of dealing with the crap associated with Linux development and realising that by continuing to put up with it I'm tacitly encouraging its continuation, but I can't be bothered any more. And, thanks to the magic of free software, it turns out that I can avoid putting up with the bullshit in the kernel community and get to work on the things I'm interested in doing. So here's a kernel tree with patches that implement a BSD-style securelevel interface. Over time it'll pick up some of the power management code I'm still working on, and we'll see where it goes from there. But, until there's a significant shift in community norms on LKML, I'll only be there when I'm being paid to be there. And that's improved my mood immeasurably.


06 October, 2015 01:18PM


Gergely Nagy

What dh-exec is, and what it isn't for

Strange as it may be, it turns out I have never written about dh-exec, even though it is close to four years old. Gosh, time flies so fast when you're having fun! Since its first introduction, there has been reasonable uptake of dh-exec: as of this writing, 129 packages build-depend on it. One might think this would be a cause for celebration, that the package is put to great use. But it's not.

You see, a significant number of those 129 packages are doing it wrong, and need not build-depend on dh-exec at all.

The purpose of dh-exec is to allow one to do things stock debhelper can't do, such as renaming files during the dh_install phase, or applying architecture or build profile based filtering, or doing environment variable substitution. There are many legit uses for all of these features, but there are some which can be easily solved without using dh-exec. So first, I'll talk a bit about the don'ts.

What not to use dh-exec for

One of the most abused parts of dh-exec is its variable substitution feature: it is often used, without need, to install multiarch-related files. While that is one intended use case, there are few situations currently in the archive that stock debhelper can't handle. Let me explain the situation!

Let's assume we have an upstream package whose build system is something as common as autotools or CMake, for which debhelper can automatically set the appropriate flags and paths. Furthermore, let's assume that the upstream build system installs the following files:
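For the sake of the example, assume the build system installs something like the following (the library name and the multiarch triplet are placeholders):

/usr/lib/x86_64-linux-gnu/libalala.so.1.0.0
/usr/lib/x86_64-linux-gnu/libalala.so.1
/usr/lib/x86_64-linux-gnu/libalala.so
/usr/lib/x86_64-linux-gnu/pkgconfig/alala.pc
/usr/include/alala.h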


We want to include the first two in the libalala1 package, the rest in libalala-dev, so what do we do? We use stock debhelper, of course!

  • libalala1.install:

  • libalala-dev.install:
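With plain wildcards, those two files can be as simple as the following sketch (adjust the patterns to the actual file names):

libalala1.install:

usr/lib/*/libalala.so.*

libalala-dev.install:

usr/include
usr/lib/*/libalala.so
usr/lib/*/pkgconfig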


That is all you need. In this case, there is absolutely no need for dh-exec. While using dh-exec without need is not much of an issue, because it only increases the space required for the build and build times by a tiny bit, I would still strongly recommend not introducing dh-exec needlessly. Why? Because of simplicity and aesthetics.

So, if you find yourself doing any of these, or similar, that's a sign you are doing things wrong:

usr/lib/${DEB_HOST_MULTIARCH}/*.so.* /usr/lib/${DEB_HOST_MULTIARCH}/
usr/lib/${DEB_HOST_MULTIARCH}/pkgconfig/*.pc /usr/lib/${DEB_HOST_MULTIARCH}/pkgconfig
usr/lib/${DEB_HOST_MULTIARCH}/package/*.so /usr/lib/${DEB_HOST_MULTIARCH}/package

Unless there are other directories under usr/lib that are not the multiarch triplet, using stock debhelper and wildcards is not only more succinct, simpler and more elegant, but also lighter on resources required.

When dh-exec becomes useful

Changing installation paths

Once you want to change where things get installed, then dh-exec becomes useful:

usr/lib/*.so.* /usr/lib/${DEB_HOST_MULTIARCH}
usr/lib/${DEB_HOST_MULTIARCH}/package/* /usr/lib/${DEB_HOST_MULTIARCH}/package-plugins/
some/dir/*.so.* /usr/lib/${DEB_HOST_MULTIARCH}

This usually happens when upstream's build system can't easily be taught about multiarch paths. For most autotools and CMake-based packages, this is not the case.
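Note that for dh-exec to kick in at all, the install file itself must start with a dh-exec shebang and carry the executable bit. A minimal sketch, reusing the libalala1 package from the earlier example:

#!/usr/bin/dh-exec
usr/lib/*.so.* /usr/lib/${DEB_HOST_MULTIARCH}

and then chmod +x debian/libalala1.install so that debhelper executes it.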

Variable substitution

Consider this case:

/usr/share/octave/packages/mpi-${DEB_VERSION_UPSTREAM}/hello2dimmat.m /usr/share/doc/octave-mpi/examples/hello2dimmat.m
/usr/share/octave/packages/mpi-${DEB_VERSION_UPSTREAM}/hellocell.m /usr/share/doc/octave-mpi/examples/hellocell.m

Here, the build supplies ${DEB_VERSION_UPSTREAM}, and using dh-exec allows one to have a generic debian/links file that does not need updating whenever the upstream version changes. We can't use wildcards here, because dh_link does not expand them.

Renaming files

In case one needs to rename files during the dh_install phase, dh-exec can be put to use for great results:

ssh-agent-filter.bash-completion => usr/share/bash-completion/completions/ssh-agent-filter


Architecture and build profile filtering

Sometimes one would wish to conditionally install something based on the architecture or the build profile. In this case, dh-exec is the tool to turn to:

<!stage1 !stage2> ../../libdde-linux26/Makeconf* usr/share/libdde_linux26

usr/lib/gvfs/gvfsd-afc                                          [!hurd-any]
usr/lib/gvfs/gvfsd-gphoto2                                      [linux-any]

06 October, 2015 12:54PM by Gergely Nagy


Norbert Preining

Craft Beer Kanazawa 2015 地ビール祭り・金沢

Last weekend the yearly Craft Beer Kanazawa Festival took place in central Kanazawa. This year 14 different producers brought about 80 different kinds of beer for us to taste. Compared with 6 years ago, when I came to Japan and Japan was still more or less Kirin-Asahi-Sapporo country without any distinguishable taste, the situation has improved vastly, and we can now enjoy lots of excellent local beers!

Returning from a trip to Swansea and a conference in Fukuoka, I arrived at Kanazawa train station and went directly to the beer festival. A great welcome back in Kanazawa, but due to excessive sleep deprivation and the feeling of “finally I want to come home”, I only enjoyed 6 beers from 6 different producers.

In the gardens behind the Shinoki Cultural Center lots of small tents with beer and food were set up. Lots of tables and chairs were also available, but most people enjoyed flocking around in the grass around the tents. What a difference to last year’s rainy and cold beer festival!

This year’s producers were (in order from left to right, page links according to language):


With only 6 beers at my disposal (due to the ticket system), I chose the ones I can't get nearby. Mind that the following comments are purely personal and do not define a quality standard 😉 – I just say what I liked, from worst to best:

  • Ohya Brasserie, Kitokito Hop きときとホップ: A disaster; I was close to throwing this beer away, but then thought – もったいない (what a waste!). Strange and disturbing taste.
  • Ushitora Brewery, Pure Street Session IPA: OK, but nothing special. Too light, and the taste was a bit unclear to me.
  • Minoh Brewery, Momo Weizen: Typical Weizen Beer, light and refreshing taste. Good.
  • Swanlake Brewery, some seasonal IPA: good, not so extremely bitter, nice taste.
  • Ise Kadoya Brewery, Pale Ale: very good, full taste
  • Yoho Brewery, Red Ale: my absolute favorite – I don’t know why, but this brewery simply produces absolutely stunning ales. Their Aooni 青鬼 IPA is my day-in-day-out beer, world class. Their Yona Yona Ale, less bitter than the Aooni, is already famous, and this Red Ale was a perfect addition.


A great beer festival, and I am looking forward to next year's festival to try a few more. In the meantime I will stock up on beers at home, so that I always have a good Yoho Brewery beer at hand!


06 October, 2015 12:36AM by Norbert Preining

October 05, 2015


Sune Vuorela

KDE at Qt World Summit

So. KDE has landed at Qt World Summit.


You can come and visit our booth and …

  • hear about our amazing Free Qt Addons (KDE Frameworks)
  • hear stories about our development tools
  • meet some of our developers
  • talk about KDE in general
  • or just say hi!

KDE – 19 years of Qt Experience.

05 October, 2015 02:06PM by Sune Vuorela

Bálint Réczey

Debian success stories: Automated signature verification

Debian was not generally seen as a bleeding-edge distribution, but it offered a perfect combination of stability and up-to-date software in our field when we chose the platform for our signature verification project. Having an active Debian Developer on the team also helped ensure that the packages we use were in good shape when the freeze and then the release came, and we can still rely on Jessie images with only a few extra packages to run our software stack.

Not having to worry about the platform, we could concentrate on the core project, and I'm proud to announce that our start-up's algorithm won this year's Signature Verification Competition for Online Skilled Forgeries (SigWIComp2015). A more detailed story can already be read in the English business news and on index.hu, a leading Hungarian news site. We are also working on a solution for categorizing users based on cursor/finger movements, to better target content, offers and ads. This is also covered in the articles.

László – a signature comparable in quality to the reference signatures

The verification task was not easy. The reference signatures were recorded at very low resolution and frequency, and the forgers did a very good job forging them, creating a true challenge for everyone competing. At first glance it is hard to imagine that there is usable information in such a small amount of recorded data, but our software is already better than me at telling the difference between genuine and forged signatures. It feels like when the chess program beats the programmer again and again. :-)

I would like to thank all of you who helped make Debian an awesome universal operating system, and I hope we can keep making every release better and better!

05 October, 2015 11:41AM by Réczey Bálint


Michal Čihař

python-suseapi 0.22

python-suseapi 0.22 was released last week. The version number shows nothing special, but one important change has happened – the development repository has been moved.

It's now under the openSUSE project on GitHub, which makes it easier for potential users to find and also makes team maintenance a bit easier than under my personal account.

If you're curious what the module does – it's mostly usable only inside SUSE, providing access to some internal services. One major thing usable outside is the Bugzilla interface, which should one day be replaced by python-bugzilla, but for now it provides some features not available there (using web scraping).

Anyway, the code has documentation on readthedocs.org, so you can figure out for yourself what it includes.


05 October, 2015 10:00AM by Michal Čihař (michal@cihar.com)


Julien Danjou

Gnocchi talk at OpenStack Paris Meetup #16

Last week, I was invited to the OpenStack Paris meetup #16, whose subject was metrics in OpenStack. The last time I spoke at this meetup was back in 2012, during the OpenStack Paris meetup #2. A very long time ago!

I talked for half an hour about Gnocchi, the OpenStack project I've been running for 18 months now. I started by explaining the story behind the project and why we needed to build it. Ceilometer has an interesting history and had a curious roadmap these last years, and I summarized that briefly. Then I talked about how Gnocchi works and what it offers to users and operators. The slides were full of JSON, but I imagine they offered an interesting view of what the API looks like and how easy it is to operate. This also allowed me to emphasize how many use cases are actually covered and solved, in contrast to what Ceilometer has done so far. The talk was well received and I got a few interesting questions at the end.

The video of the talk (in French) and my slides are available on my talk page and below. I hope you'll enjoy it.

05 October, 2015 08:45AM by Julien Danjou

October 04, 2015


Philipp Kern

Root on LVM on Debian s390x, new Hercules

Two s390x changes landed in Debian unstable today:
With these it should be possible to install Debian on s390x with root on LVM. I'd be happy to hear feedback about installations with any configuration, be it root on a single DASD or root on LVM. Unless you set both mirror/udeb/suite and mirror/suite to unstable, you'll need to wait until the changes are in testing, though. (The debian-installer build does not matter, as zipl-installer is not part of the initrd and sysconfig-hardware is part of the installation.)

Furthermore I uploaded a new version of Hercules - a z/Architecture emulator - to get a few more years of maintenance into Debian. See its upstream changelog for details on the changes (old 3.07 → new 3.11).

At this point qemu at master is also usable for s390x emulation. It is much faster than Hercules, but it uses newfangled I/O subsystems like virtio. Hence we will need to do some more patching to make debian-installer just work. One patch for netcfg is in to support virtio networking correctly, but then it forces the user to configure a DASD. (Which would be just as wrong if Fibre Channel were to be used.) In the end qemu and KVM on s390x look so much like a normal x86 VM that we could drop most of the special-casing of s390x (netcfg-static instead of netcfg; network-console instead of using the VM console; DASD configuration instead of simply using virtio-blk devices; I guess we get to keep zIPL for booting).

04 October, 2015 09:08PM by Philipp Kern (noreply@blogger.com)

Lunar


Reproducible builds: week 23 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

Andreas Metzler uploaded autogen/1:5.18.6-1 in experimental with several patches for reproducibility issues written by Valentin Lorentz.

Groovy upstream has merged a change proposed by Emmanuel Bourg to remove timestamps generated by groovydoc.

Ben Hutchings submitted a patch to add support for SOURCE_DATE_EPOCH in linux-kbuild as an alternate way to specify the build timestamp.

Reiner Herrman has sent a patch adding support for SOURCE_DATE_EPOCH in docbook-utils.
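For those unfamiliar with it, SOURCE_DATE_EPOCH is simply an environment variable holding a timestamp (seconds since the epoch) that tools should embed instead of the current build time. A build could export it from the latest debian/changelog entry, for example – a generic sketch, not the exact mechanism of either patch above:

export SOURCE_DATE_EPOCH="$(date -u -d "$(dpkg-parsechangelog --show-field Date)" +%s)"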

Packages fixed

The following packages became reproducible due to changes in their build dependencies: commons-csv, fest-reflect, sunxi-tools, xfce4-terminal.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

Tomasz Rybak uploaded pycuda/2015.1.3-1 which should fix reproducibility issues. The package has not been tested as it is in contrib.

akira found an embedded code copy of texi2html in fftw.


Email notifications are now only sent once a day per package, instead of on each status change. (h01ger)

disorderfs has been temporarily disabled to see if it had any impact on the disk space issues. (h01ger)

When running out of disk space, build nodes will now automatically detect the problem. This means test results will not be recorded as “FTBFS” and the problem will be reported to Jenkins maintainers. (h01ger)

The navigation menu of package pages has been improved. (h01ger)

The two amd64 builders now use two different kernel versions: one runs 3.16 from stable, the other 4.1 from backports. (h01ger)

We now graph the number of packages which need to be fixed. (h01ger)

Munin now creates graphs on how many builds were performed by build nodes (example). (h01ger)

A migration plan has been agreed with DSA on how to turn Jenkins into an official Debian service. A backport of jenkins-job-builder for Jessie is currently missing. (h01ger)

Package reviews

119 reviews have been removed, 103 added and 45 updated this week.

16 “fail to build from source” issues were reported by Chris Lamb and Mattia Rizzolo.

New issue this week: timestamps_in_manpages_generated_by_docbook_utils.


Allan McRae has submitted a patch to make ArchLinux pacman record a .BUILDINFO file.

04 October, 2015 09:07PM


Dirk Eddelbuettel



The somewhat regular monthly upstream Armadillo update brings us the first release of the 6.* series. This follows an earlier test release announced on the list and released to the Rcpp drat. As version 6.100.0 was released on Friday by Conrad, we rolled it into an RcppArmadillo release yesterday. Following yet another full test against all reverse dependencies, it got uploaded to CRAN, which has now accepted it. A matching upload to Debian will follow shortly.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab.

This release brings a few changes:

Changes in RcppArmadillo version (2015-10-03)

  • Upgraded to Armadillo 6.100.0 ("Midnight Blue")

    • faster norm() and normalise() when using ATLAS or OpenBLAS

    • added Schur decomposition: schur()

    • stricter handling of matrix objects by hist() and histc()

    • advanced constructors for using auxiliary memory by Mat, Col, Row and Cube now have the default of strict = false

    • Cube class now delays allocation of .slice() related structures until needed

    • expanded join_slices() to handle joining cubes with matrices

Courtesy of CRANberries, there is also a diffstat report for the most recent CRAN release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 October, 2015 08:06PM


Jonathan Carter

Long Overdue Debconf 15 Post

Debconf 15

In August (that was 2 months ago, really!?) I attended DebCamp and DebConf in Heidelberg, Germany. This blog post is somewhat belated due to the debbug (flu obtained during DebConf) and all the catching up I had to do since then.


DebCamp was great: I got to hack on some of my Python-related packages that had been in need of love for a long time, and also got to spend a lot of time tinkering with VLC for the Video Team. Even better than that, I caught up with a lot of great people I hadn't seen in ages (and met new ones) and stayed up waaaaay too late drinking beer, playing Mao and watching meteor showers.


At DebConf, I gave a short talk about AIMS Desktop (slides) and also expanded on the licensing problems we've had with Ubuntu on that project. Not all was bleak on the Ubuntu front though: some Ubuntu/Canonical folks were present at DebConf and said that they'd gladly get involved with porting Ubiquity (the Ubuntu installer, a front-end to d-i) to Debian. That would certainly be useful to many derivatives, including potentially AIMS Desktop if it were to move over to Debian.

AIMS Desktop talk slides:

We’re hosting DebConf in Cape Town next year and did an introduction during a plenary (slides). It was interesting spending some time with the DC15 team and learning how they work, it’s amazing all the detail they have to care about and how easy they made it look from the outside, I hope the DC16 team will pull that off as well.

Debconf 16 Slides:

DC16 at DC15 talk

DebConf 16 team members present at DebConf16 during DC16 presentation:

I uploaded my photos to DebConf Gallery, Facebook and Google – take your pick ;-). Many sessions were recorded; catch them on video.debian.net. If I had to summarize everything that I found interesting I'd have to delay posting this entry even further, but topics that were particularly interesting were:

  • Reproducible Builds (project page on wiki)
  • Trademark Issues (general logo use discussion, and what can call itself “Debian”)
  • Many Derivative discussions
  • PPAs for Debian
  • Many packaging and workflow related talks and discussions where I was only qualified to listen and tried to take in as much as possible

Pollito’s First Trip to Africa

In my flu-induced state, with a complete lack of concentration for anything work-related, I went ahead and made a little short story documenting Pollito's (the DebConf mascot chicken's) first trip to Africa. It's silly, but it was fun to make and some people enjoyed it ^_^

Well, what else can I say? DebConf 15 was a blast! Hope to see you at Debconf 16!

04 October, 2015 04:58PM by jonathan


Johannes Schauer

new sbuild release 0.66.0

I just released sbuild 0.66.0-1 into unstable. It fixes a whopping 30 bugs! Thus, I'd like to use this platform to:

  • kindly ask all sbuild users to report any new bugs introduced with this release
  • give a big thank you to everybody who supplied the patches that made fixing this many bugs possible (in alphabetical order): Aurelien Jarno, Christian Kastner, Christoph Egger, Colin Watson, Dima Kogan, Guillem Jover, Luca Falavigna, Maria Valentina Marin Rordrigues, Miguel A. Colón Vélez, Paul Tagliamonte

And a super big thank you to Roger Leigh who, despite having resigned from Debian, was always available to give extremely helpful hints, tips, opinion and guidance with respect to sbuild development. Thank you!

Here is a list of the major changes since the last release:

  • add option --arch-all-only to build arch:all packages
  • the environment variable SBUILD_CONFIG allows specifying a custom configuration file
  • add option --build-path to set a deterministic build path
  • fix crossbuild dependency resolution
  • add option --extra-repository-key for extra apt keys
  • add option --build-dep-resolver=aspcud for aspcud based resolver
  • allow complex commands as sbuild hooks
  • the new external command %SBUILD_SHELL produces an interactive shell
  • add options --build-deps-failed-commands, --build-failed-commands and --anything-failed-commands for more hooks
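To illustrate, a possible invocation combining a few of the new options might look like this (the package name, build path, key file and configuration file are placeholders):

SBUILD_CONFIG=~/.sbuildrc-experiments \
  sbuild --arch-all-only \
         --build-path=/build/foo-1.0 \
         --extra-repository-key=./extra-archive-key.asc \
         foo_1.0-1.dsc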

04 October, 2015 09:31AM

October 03, 2015

Stig Sandbeck Mathisen

Free software activities in September 2015


I am working on making the munin master fit inside Mojolicious. The existing code is not written to make this trivial, but all the pieces are there. Most of them need breaking up into smaller pieces to fit.



New version of puppet-module-puppetlabs-apache (Closes: #788124 #788125 #788127). I like it when a new upstream version closes all the bugs left in the BTS for a package.

A new package, the TLS proxy hitch, is currently waiting in the queue.


Lots of work on a new ceph puppet module.

03 October, 2015 08:56PM

October 02, 2015


Daniel Pocock

Want to be selected for Google Summer of Code 2016?

I've mentored a number of students in 2013, 2014 and 2015 for Debian and Ganglia, and most of the companies I've worked with have run internships and graduate programs from time to time. GSoC 2015 has just finished and, with all the excitement, many students are already asking what they can do to prepare for and be selected for Outreachy or GSoC in 2016.

My own observation is that the more time the organization has to get to know the student, the more confident they can be in selecting that student. Furthermore, the more time that the student has spent getting to know the free software community, the more easily they can complete GSoC.

Here I present a list of things that students can do to maximize their chance of selection and career opportunities at the same time. These tips are useful for people applying for GSoC itself and related programs such as GNOME's Outreachy or graduate placements in companies.


There is no guarantee that Google will run the program again in 2016 or any future year until the Google announcement.

There is no guarantee that any organization or mentor (including myself) will be involved until the official list of organizations is published by Google.

Do not follow the advice of web sites that invite you to send pizza or anything else of value to prospective mentors.

Following the steps in this page doesn't guarantee selection. That said, people who do follow these steps are much more likely to be considered and interviewed than somebody who hasn't done any of the things in this list.

Understand what free software really is

You may hear terms like free software and open source software used interchangeably.

They don't mean exactly the same thing and many people use the term free software for the wrong things. Not all projects declaring themselves to be "free" or "open source" meet the definition of free software. Those that don't, usually as a result of deficiencies in their licenses, are fundamentally incompatible with the majority of software that does use genuinely free licenses.

Google Summer of Code is about both writing and publishing your code and it is also about community. It is fundamental that you know the basics of licensing and how to choose a free license that empowers the community to collaborate on your code well after GSoC has finished.

Please review the definition of free software early on and come back and review it from time to time. The GNU Project / Free Software Foundation have excellent resources to help you understand what a free software license is and how it works to maximize community collaboration.

Don't look for shortcuts

There is no shortcut to GSoC selection and there is no shortcut to GSoC completion.

The student stipend (USD $5,500 in 2014) is not paid to students unless they complete a minimum amount of valid code. This means that even if a student did find some shortcut to selection, it is unlikely they would be paid without completing meaningful work.

If you are the right candidate for GSoC, you will not need a shortcut anyway. Are you the sort of person who can't leave a coding problem until you really feel it is fixed, even if you keep going all night? Have you ever woken up in the night with a dream about writing code still in your head? Do you become irritated by tedious or repetitive tasks and often think of ways to write code to eliminate such tasks? Does your family get cross with you because you take your laptop to Christmas dinner or some other significant occasion and start coding? If some of these statements summarize the way you think or feel you are probably a natural fit for GSoC.

An opportunity money can't buy

The GSoC stipend will not make you rich. It is intended to make sure you have enough money to survive through the summer and focus on your project. Professional developers make this much money in a week in leading business centers like New York, London and Singapore. When you get to that stage in 3-5 years, you will not even be thinking about exactly how much you made during internships.

GSoC gives you an edge over other internships because it involves publicly promoting your work. Many companies still try to hide the potential of their best recruits for fear they will be poached or that they will be able to demand higher salaries. Everything you complete in GSoC is intended to be published and you get full credit for it. Imagine a young musician getting the opportunity to perform on the main stage at a rock festival. This is how the free software community works. It is a meritocracy and there is nobody to hold you back.

Having a portfolio of free software that you have created or collaborated on and a wide network of professional contacts that you develop before, during and after GSoC will continue to pay you back for years to come. While other graduates are being screened through group interviews and testing days run by employers, people with a track record in a free software project often find they go straight to the final interview round.

Register your domain name and make a permanent email address

Free software is all about community and collaboration. Register your own domain name as this will become a focal point for your work and for people to get to know you as you become part of the community.

This is sound advice for anybody working in IT, not just programmers. It gives the impression that you are confident and have a long term interest in a technology career.

Choosing the provider: as a minimum, you want a provider that offers DNS management, static web site hosting, email forwarding and XMPP services all linked to your domain. You do not need to choose the provider that is linked to your internet connection at home and that is often not the best choice anyway. The XMPP foundation maintains a list of providers known to support XMPP.

Create an email address within your domain name. The most basic domain hosting providers will let you forward the email address to a webmail or university email account of your choice. Configure your webmail to send replies using your personalized email address in the From header.

Update your ~/.gitconfig file to use your personalized email address in your Git commits.
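For example, from the command line (the name and address are placeholders):

git config --global user.name "Jane Doe"
git config --global user.email "jane@example.org"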

Create a web site and blog

Start writing a blog. Host it using your domain name.

Some people blog every day, other people just blog once every two or three months.

Create links from your web site to your other profiles, such as a Github profile page. This helps reinforce the pages/profiles that are genuinely related to you and avoid confusion with the pages of other developers.

Many mentors are keen to see their students writing a weekly report on a blog during GSoC so starting a blog now gives you a head start. Mentors look at blogs during the selection process to try and gain insight into which topics a student is most suitable for.

Create a profile on Github

Github is one of the most widely used software development web sites. Github makes it quick and easy for you to publish your work and collaborate on the work of other people. Create an account today and get into the habit of forking other projects, improving them, committing your changes and pushing the work back into your Github account.

Github will quickly build a profile of your commits and this allows mentors to see and understand your interests and your strengths.

In your Github profile, add a link to your web site/blog and make sure the email address you are using for Git commits (in the ~/.gitconfig file) is based on your personal domain.

Start using PGP

Pretty Good Privacy (PGP) is the industry standard in protecting your identity online. All serious free software projects use PGP to sign tags in Git, to sign official emails and to sign official release files.

The most common way to start using PGP is with the GnuPG (GNU Privacy Guard) utility. It is installed by the package manager on most Linux systems.

When you create your own PGP key, use the email address involving your domain name. This is the most permanent and stable solution.

Print your key fingerprint using the gpg-key2ps command, it is in the signing-party package on most Linux systems. Keep copies of the fingerprint slips with you.
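A rough sketch of both steps (the key ID is a placeholder; gpg prints yours once the key has been generated):

gpg --gen-key                             # use the address at your own domain when prompted
gpg-key2ps -p a4 0x12345678 > slips.ps    # gpg-key2ps comes from the signing-party package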

This is what my own PGP fingerprint slip looks like. You can also print the key fingerprint on a business card for a more professional look.

Using PGP, it is recommended that you sign any important messages you send, but you do not have to encrypt the messages you send, especially if some of the people you send messages to (like family and friends) do not yet have the PGP software to decrypt them.

If using the Thunderbird (Icedove) email client from Mozilla, you can easily send signed messages and validate the messages you receive using the Enigmail plugin.

Get your PGP key signed

Once you have a PGP key, you will need to find other developers to sign it. For people I mentor personally in GSoC, I'm keen to see that you try and find another Debian Developer in your area to sign your key as early as possible.

Free software events

Try and find all the free software events in your area in the months between now and the end of the next Google Summer of Code season. Aim to attend at least two of them before GSoC.

Look closely at the schedules and find out about the individual speakers, the companies and the free software projects that are participating. For events that span more than one day, find out about the dinners, pub nights and other social parts of the event.

Try and identify people who will attend the event who have been GSoC mentors or who intend to be. Contact them before the event, if you are keen to work on something in their domain they may be able to make time to discuss it with you in person.

Take your PGP fingerprint slips. Even if you don't participate in a formal key-signing party at the event, you will still find some developers to sign your PGP key individually. You must take a photo ID document (such as your passport) for the other developer to check the name on your fingerprint slip, but you do not give them a copy of the ID document.

Events come in all shapes and sizes. FOSDEM is an example of one of the bigger events in Europe, linux.conf.au is a similarly large event in Australia. There are many, many more local events such as the Debian UK mini-DebConf in Cambridge, November 2015. Many events are either free or free for students but please check carefully if there is a requirement to register before attending.

On your blog, discuss which events you are attending and which sessions interest you. Write a blog during or after the event too, including photos.

Quantcast generously hosted the Ganglia community meeting in San Francisco, October 2013. We had a wild time in their offices with mini-scooters, burgers, beers and the Ganglia book. That's me on the pink mini-scooter and Bernard Li, one of the other Ganglia GSoC 2014 admins is on the right.

Install Linux

GSoC is fundamentally about free software. Linux is to free software what a tree is to the forest. Using Linux every day on your personal computer dramatically increases your ability to interact with the free software community and increases the number of potential GSoC projects that you can participate in.

This is not to say that people using Mac OS or Windows are unwelcome. I have worked with some great developers who were not Linux users. Linux gives you an edge though and the best time to gain that edge is now, while you are a student and well before you apply for GSoC.

If you must run Windows for some applications used in your course, it will run just fine in a virtual machine using Virtual Box, a free software solution for desktop virtualization. Use Linux as the primary operating system.

Here are links to download ISO DVD (and CD) images for some of the main Linux distributions:

If you are nervous about getting started with Linux, install it on a spare PC or in a virtual machine before you install it on your main PC or laptop. Linux is much less demanding on the hardware than Windows so you can easily run it on a machine that is 5-10 years old. Having just 4GB of RAM and 20GB of hard disk is usually more than enough for a basic graphical desktop environment although having better hardware makes it faster.

Your experiences installing and running Linux, especially if it requires some special effort to make it work with some of your hardware, make interesting topics for your blog.

Decide which technologies you know best

Personally, I have mentored students working with C, C++, Java, Python and JavaScript/HTML5.

In a GSoC program, you will typically do most of your work in just one of these languages.

From the outset, decide which language you will focus on and do everything you can to improve your competence with that language. For example, if you have already used Java in most of your course, plan on using Java in GSoC and make sure you read Effective Java (2nd Edition) by Joshua Bloch.

Decide which themes appeal to you

Find a topic that has long-term appeal for you. Maybe the topic relates to your course or maybe you already know what type of company you would like to work in.

Here is a list of some topics and some of the relevant software projects:

  • System administration, servers and networking: consider projects involving monitoring, automation, packaging. Ganglia is a great community to get involved with and you will encounter the Ganglia software in many large companies and academic/research networks. Contributing packaging to a Linux distribution like Debian or Fedora is another great way to get into system administration.
  • Desktop and user interface: consider projects involving window managers and desktop tools or adding to the user interface of just about any other software.
  • Big data and data science: this can apply to just about any other theme. For example, data science techniques are frequently used now to improve system administration.
  • Business and accounting: consider accounting, CRM and ERP software.
  • Finance and trading: consider projects like R, market data software like OpenMAMA and connectivity software (Apache Camel)
  • Real-time communication (RTC), VoIP, webcam and chat: look at the JSCommunicator or the Jitsi project
  • Web (JavaScript, HTML5): look at the JSCommunicator

Before the GSoC application process begins, you should aim to learn as much as possible about the theme you prefer and also gain practical experience using the software relating to that theme. For example, if you are attracted to the business and accounting theme, install the PostBooks suite and get to know it. Maybe you know somebody who runs a small business: help them to upgrade to PostBooks and use it to prepare some reports.

Make something

Make some small project, less than two weeks' work, to demonstrate your skills. It is important to make something that somebody will use for a practical purpose; this will help you gain experience communicating with other users through Github.

For an example, see the servlet Juliana Louback created for fixing phone numbers in December 2013. It has since been used as part of the Lumicall web site and Juliana was selected for a GSoC 2014 project with Debian.

There is no better way to demonstrate to a prospective mentor that you are ready for GSoC than by completing and publishing some small project like this yourself. If you don't have any immediate project ideas, many developers will also be able to give you tips on small projects like this that you can attempt, just come and ask us on one of the mailing lists.

Ideally, the project will be something that you would use anyway even if you do not end up participating in GSoC. Such projects are the most motivating and rewarding and usually end up becoming an example of your best work. To continue the example of somebody with a preference for business and accounting software, a small project you might create is a plugin or extension for PostBooks.

Getting to know prospective mentors

Many web sites provide useful information about the developers who contribute to free software projects. Some of these developers may be willing to be a GSoC mentor.

For example, look through some of the following:

Getting on the mentor's shortlist

Once you have identified projects that are interesting to you and developers who work on those projects, it is important to get yourself on the developer's shortlist.

Basically, the shortlist is a list of all students who the developer believes can complete the project. If I feel that a student is unlikely to complete a project or if I don't have enough information to judge a student's probability of success, that student will not be on my shortlist.

If I don't have any student on my shortlist, then a project will not go ahead at all. If there are multiple students on the shortlist, then I will be looking more closely at each of them to try and work out who is the best match.

One way to get a developer's attention is to look at bug reports they have created. Github makes it easy to see complaints or bug reports they have made about their own projects or other projects they depend on. Another way to do this is to search through their code for strings like FIXME and TODO. Projects with standalone bug trackers like the Debian bug tracker also provide an easy way to search for bug reports that a specific person has created or commented on.
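For example, in a checked-out repository:

git grep -n -E 'FIXME|TODO'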

Once you find some relevant bug reports, email the developer. Ask if anybody else is working on those issues. Try and start with an issue that is particularly easy and where the solution is interesting for you. This will help you learn to compile and test the program before you try to fix any more complicated bugs. It may even be something you can work on as part of your academic program.

Find successful projects from the previous year

Contact organizations and ask them which GSoC projects were most successful. In many organizations, you can find the past students' project plans and their final reports published on the web. Read through the plans submitted by the students who were chosen. Then read through the final reports by the same students and see how they compare to the original plans.

Start building your project proposal now

Don't wait for the application period to begin. Start writing a project proposal now.

When writing a proposal, it is important to include several things:

  • Think big: what is the goal at the end of the project? Does your work help the greater good in some way, such as increasing the market share of Linux on the desktop?
  • Details: what are specific challenges? What tools will you use?
  • Time management: what will you do each week? Are there weeks where you will not work on GSoC due to vacation or other events? These things are permitted but they must be in your plan if you know them in advance. If an accident or death in the family cut a week out of your GSoC project, which work would you skip and would your project still be useful without that? Having two weeks of flexible time in your plan makes it more resilient against interruptions.
  • Communication: are you on mailing lists, IRC and XMPP chat? Will you make a weekly report on your blog?
  • Users: who will benefit from your work?
  • Testing: who will test and validate your work throughout the project? Ideally, this should involve more than just the mentor.

If your project plan is good enough, could you put it on Kickstarter or another crowdfunding site? This is a good test of whether or not a project is going to be supported by a GSoC mentor.

Learn about packaging and distributing software

Packaging is a vital part of the free software lifecycle. It is very easy to upload a project to Github but it takes more effort to have it become an official package in systems like Debian, Fedora and Ubuntu.

Packaging and the communities around Linux distributions help you reach out to users of your software and get valuable feedback and new contributors. This boosts the impact of your work.

To start with, you may want to help the maintainer of an existing package. Debian packaging teams are existing communities that work in a team and welcome new contributors. The Debian Mentors initiative is another great starting place. In the Fedora world, the place to start may be in one of the Special Interest Groups (SIGs).

Think from the mentor's perspective

After the application deadline, mentors have just 2 or 3 weeks to choose the students. This is actually not a lot of time to be certain if a particular student is capable of completing a project. If the student has a published history of free software activity, the mentor feels a lot more confident about choosing the student.

Some mentors have more than one good student while other mentors receive no applications from capable students. In this situation, it is very common for mentors to send each other details of students who may be suitable. Once again, if a student has a good Github profile and a blog, it is much easier for mentors to try and match that student with another project.



Getting into the world of software engineering is much like joining any other profession or even joining a new hobby or sporting activity. If you run, you probably have various types of shoe and a running watch and you may even spend a couple of nights at the track each week. If you enjoy playing a musical instrument, you probably have a collection of sheet music, accessories for your instrument and you may even aspire to build a recording studio in your garage (or you probably know somebody else who already did that).

The things listed on this page will not just help you walk the walk and talk the talk of a software developer, they will put you on a track to being one of the leaders. If you look over the profiles of other software developers on the Internet, you will find they are doing most of the things on this page already. Even if you are not selected for GSoC at all or decide not to apply, working through the steps on this page will help you clarify your own ideas about your career and help you make new friends in the software engineering community.

02 October, 2015 04:41PM by Daniel.Pocock

Sylvain Beucler

Android Free developer tools rebuilds

I published some Free rebuilds of the Android SDK, NDK and ADT at:


As described in my previous post, Google is click-wrapping all developer binaries (including preview versions for which source code isn't published yet) with a non-free EULA, notably an anti-fork clause.

There's been some discussion on where to host this project at the android@lists.fsfe.org campaign list.

Build instructions are provided, so feel free to check if the builds are reproducible, and contribute instructions for more tools!

02 October, 2015 11:18AM


Norbert Preining

Updates for OSX 10.11 El Capitan: cjk-gs-integrate and jfontmaps 20151002.0

Now that OSX 10.11 El Capitan is released and everyone is eagerly updating, in cooperation with the colleagues from the Japanese TeX world we have released new versions of the jfontmaps and cjk-gs-integrate packages. With these two packages in TeX Live, El Capitan users can take advantage of the newly available fonts in the Japanese TeX engines ((u)ptex et al), and directly in Ghostscript.


For jfontmaps the changes were minimal: Yusuke Terada fixed a mismatch in ttc index numbers for some fonts. Without this fix, Hiragino Interface is used instead of HiraginoSans-W3 and -W6.

On the other hand, cjk-gs-integrate has seen a lot more changes:

  • add support for OSX 10.11 El Capitan provided fonts (by Yusuke Terada)
  • added 2004-{H,V} encodings for Japanese fonts (by Munehiro Yamamoto)
  • fix incorrect link name – this prevented kanji-config-updmap from the jfontmaps package from finding and using the linked fonts
  • rename --link-texmflocal to --link-texmf [DIR] with an optional argument
  • add a --remove option to revert the operation – this does clean up completely only if the same set of fonts is found (see the example below)
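
As a quick illustration of the two options above, a typical invocation could look like this (just a sketch; on some installations the script is named cjk-gs-integrate.pl, and the authoritative instructions are on the page linked below):

# integrate the fonts found on the system into Ghostscript and link them
# into the texmf tree so that kanji-config-updmap can find them
cjk-gs-integrate --link-texmf

# later, revert the operation (cleans up completely only if the same
# set of fonts is found)
cjk-gs-integrate --remove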

For more explanations concerning how to run cjk-gs-integrate, please see the dedicated page: CJK fonts and Ghostscript integration.

For feedback and bug reports, please use the github project pages: jfontmaps, cjk-gs-support.

Both packages should arrive in your local TeX Live CTAN repository within a day or two.

We hope that with this, users of El Capitan can use their fonts to the full extent.


02 October, 2015 07:52AM by Norbert Preining

October 01, 2015

hackergotchi for Junichi Uekawa

Junichi Uekawa

Playing with FUSE and git.

Playing with FUSE and git. I've been playing with FUSE and git to make a file system, for fun. There are already many filesystems implemented with FUSE, and quite a few that implement a filesystem for git, but I don't use any of them. I wondered why that is the case, but tried to build one anyway. It's in the github repository gitlstreefs. I have created several toy file systems in C++. ninjafs is one that shows ninja targets as files and builds the target when the file is actually needed. They aren't all that useful yet, but it was an interesting exercise; FUSE turned out to be reasonably straightforward to implement simple filesystems with.

01 October, 2015 09:58PM by Junichi Uekawa

Petter Reinholdtsen

French Docbook/PDF/EPUB/MOBI edition of the Free Culture book

As I wrap up the Norwegian version of the Free Culture book by Lawrence Lessig (still waiting for my final proof reading copy to arrive in the mail), my great dblatex helper and developer of the dblatex docbook processor, Benoît Guillon, decided to try to create a French version of the book. He started with the French translation available from the Wikilivres wiki pages, and wrote a program to convert it into a PO file, allowing the translation to be integrated into the po4a-based framework I use to create the Norwegian translation from the English edition. We meet on the #dblatex IRC channel to discuss the work. If you want to help create a French edition, check out his git repository and join us on IRC. If the French edition looks good, we might publish it as a paper book on lulu.com. A French version of the drawings and the cover needs to be provided for this to happen.

01 October, 2015 11:20AM

Mike Gabriel

My FLOSS activities in August/September 2015

Here comes my "monthly" FLOSS report for August and September 2015. As 50% of August 2015 had been dedicated to taking some time off (spending time in Sweden with the family), it happened that even more workload had to be processed in September 2015.

  • Completion of MATE 1.10 in Debian testing/unstable and Ubuntu 15.10
  • Contribution to Debian LTS, Debian packaging
  • Development of GOsa² Plugin SchoolManager
  • Automatic builds for Arctica Project
  • Forking Unity Greeter as Arctica Greeter (with focus on the remote logon part inside Unity Greeter)

Received Sponsorship

I had to shift my monthly 8h portion of work for the Debian LTS project from August into September. Thus, I received 16h of paid work for working on Debian LTS in September 2015. For details, see below. Thanks to Raphael Hertzog for having me on the team [1]. Thanks to all the people and companies sponsoring the Debian LTS Team's work.

The development of GOsa² Plugin SchoolManager (for details, see below) was done on contract for a school in Northern Germany. The code will be released under the same license as the GOsa² software itself.

Completion of MATE 1.10 in Debian testing/unstable and Ubuntu 15.10

In the first half of September all MATE 1.10 packages finally landed in Debian testing (aka stretch). Martin Wimpress handled most of the packaging changes, whereas my main job was being reviewer and uploader of his efforts. Thanks to John Paul Adrian Glaubitz for jumping in as reviewer and uploader during my vacation time.

read more

01 October, 2015 11:18AM by sunweaver

Nightly builds for Arctica Project (Debian / Ubuntu)

I am happy to announce that The Arctica Project can now provide automatic nightly builds of its developers' work.

Packages are built automatically via Jenkins, see [1] for an overview of the current build queues. The Jenkins system builds code as found on our CGit mirror site [2].

NOTE: The Arctica Project's nightly builds may be especially interesting to people who want to try out the latest development steps on nx-libs (3.6.x branch), as we provide nx-libs 3.6.x binary preview builds.

Currently, we only build our code against Debian and Ubuntu (amd64, i386), more distros and platforms are likely to be added. If people can provide machine power (esp. non-Intel based architectures), please get in touch with us on Freenode IRC (channel: #arctica).

This is how you can add our package repositories to your APT system.

Debian APT (here: stretch)

Please note that we only support recent Debian versions (currently version 7.x and above).

$ echo 'deb http://packages.arctica-project.org/debian-nightly stretch main' | sudo tee /etc/apt/sources.list.d/arctica.list
$ sudo apt-key adv --recv-keys --keyserver pgp.mit.edu 0x98DE3101
$ sudo apt-get update

Ubuntu APT (here: trusty)

Please note that we support recent Ubuntu LTS versions only (Ubuntu 14.04 only at the moment).

$ echo 'deb http://packages.arctica-project.org/ubuntu-nightly trusty main' | sudo tee /etc/apt/sources.list.d/arctica.list
$ sudo apt-key adv --recv-keys --keyserver pgp.mit.edu 0x98DE3101
$ sudo apt-get update

read more

01 October, 2015 10:42AM by sunweaver

hackergotchi for Michal Čihař

Michal Čihař

IMAP utils 0.5

I've just released a new version of imap-utils. The main reason for the new release was a change on PyPI, which now requires files to be hosted there.

However the new release also comes with other changes:

  • Changed license to GPL3+.
  • Various coding style fixes.

This is also the first release done from the Git repository hosted on GitHub.

Filed under: Coding English IMAP

01 October, 2015 10:00AM by Michal Čihař (michal@cihar.com)

hackergotchi for Clint Adams

Clint Adams

September 30, 2015

hackergotchi for Ben Armstrong

Ben Armstrong

Halifax Mainland Common: Early Fall, 2015

A friend and I regularly meet to chat over coffee and then usually finish up by walking the maintained trail in the Halifax Mainland Common Park, but today we decided to take a brief excursion onto the unmaintained trails criss-crossing the park. The last gasp of a faint summer and early signs of fall are evident everywhere.

Some mushrooms are dried and cracked in a mosaic pattern:



Ferns and other brush are browning amongst the various greens of late summer:


A few late blueberries still cling to isolated bushes here and there:


The riot of fall colours in this small clearing, dotted with cotton-grass, bursts into view as we round a corner, set against a backdrop of nearby buildings:



The ferns here are vivid, like a slow burning fire that will take the rest of fall to burn out:


We appreciate one last splash of colour before we head back under the cover of woods to rejoin the maintained trail:


So many times we’ve travelled our usual route “on automatic”. I’m happy today we left the more travelled trail to share in these glimpses of the changing of seasons in a wilderness preserved for our enjoyment immediately at hand to a densely populated part of the city.


30 September, 2015 10:33PM by Ben Armstrong

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in September 2015

Inspired by Raphaël Hertzog, here is a monthly update covering a large part of what I have been doing in the free software world:


The Reproducible Builds project was also covered in depth on LWN as well as in Lunar's weekly reports (#18, #19, #20, #21, #22).


  • redis — A new upstream release, as well as overhauling the systemd configuration, maintaining feature parity with sysvinit and adding various security hardening features.
  • python-redis — Attempting to get its Debian Continuous Integration tests to pass successfully.
  • libfiu — Ensuring we do not FTBFS under exotic locales.
  • gunicorn — Dropping a dependency on python-tox now that tests are disabled.

RC bugs

I also filed FTBFS bugs against actdiag, actdiag, bangarang, bmon, bppphyview, cervisia, choqok, cinnamon-control-center, clasp, composer, cpl-plugin-naco, dirspec, django-countries, dmapi, dolphin-plugins, dulwich, elki, eqonomize, eztrace, fontmatrix, freedink, galera-3, golang-git2go, golang-github-golang-leveldb, gopher, gst-plugins-bad0.10, jbofihe, k3b, kalgebra, kbibtex, kde-baseapps, kde-dev-utils, kdesdk-kioslaves, kdesvn, kdevelop-php-docs, kdewebdev, kftpgrabber, kile, kmess, kmix, kmldonkey, knights, konsole4, kpartsplugin, kplayer, kraft, krecipes, krusader, ktp-auth-handler, ktp-common-internals, ktp-text-ui, libdevice-cdio-perl, libdr-tarantool-perl, libevent-rpc-perl, libmime-util-java, libmoosex-app-cmd-perl, libmoosex-app-cmd-perl, librdkafka, libxml-easyobj-perl, maven-dependency-plugin, mmtk, murano-dashboard, node-expat, node-iconv, node-raw-body, node-srs, node-websocket, ocaml-estring, ocaml-estring, oce, odb, oslo-config, oslo.messaging, ovirt-guest-agent, packagesearch, php-svn, php5-midgard2, phpunit-story, pike8.0, plasma-widget-adjustableclock, plowshare4, procps, pygpgme, pylibmc, pyroma, python-admesh, python-bleach, python-dmidecode, python-libdiscid, python-mne, python-mne, python-nmap, python-nmap, python-oslo.middleware, python-riemann-client, python-traceback2, qdjango, qsapecng, ruby-em-synchrony, ruby-ffi-rzmq, ruby-nokogiri, ruby-opengraph-parser, ruby-thread-safe, shortuuid, skrooge, smb4k, snp-sites, soprano, stopmotion, subtitlecomposer, svgpart, thin-provisioning-tools, umbrello, validator.js, vdr-plugin-prefermenu, vdr-plugin-vnsiserver, vdr-plugin-weather, webkitkde, xbmc-pvr-addons, xfsdump & zanshin.

30 September, 2015 10:23PM

hackergotchi for Norbert Preining

Norbert Preining

6 years in Japan

Exactly 6 years ago, on October 1, 2009, I started my work at the Japan Advanced Institute of Science and Technology (JAIST), arriving the previous day in a place not completely unknown, but with a completely different outlook: I had a position as Associate Professor, and somehow was looking forward to an interesting and challenging time.


6 years later I am still here at the JAIST, but things have changed considerably, and my future is even less clear than 6 years ago. So it is time to reflect a bit about the last years.

The biggest achievement

My biggest achievement in these 6 years is probably that I managed to learn Japanese to a degree that I can teach in Japanese (math, logic, etc), can read Japanese books to a certain degree, and have generally no problem communicating in daily life. That said, there is still a long way to go. Reading, and much more writing, still requires concentration and effort, far from the natural flow in my other languages. While talking feels rather natural, the complexity of the written language is a huge hurdle. But this is probably the good, the high point of the 6 years, a great challenge that keeps my mind busy, working and challenged over a long time, with still more to do.

The happiest thing

Many events here in Japan were of great fun and enjoyment for me. The rich culture, paired with a spectacular love for traditional handicraft I haven’t seen anywhere else, is a guarantee for enjoyable and intellectually stimulating activities. But the biggest joy of my time here of course was that I found a lovely, beautiful, and caring wife. Not knowing the challenges of an international marriage, I was caught without preparation, and so we had (and still have) rough times due to the cultural differences, and different expectations. But this is what makes life interesting, and so I am always grateful for this chance. Whatever happens in the future, she will be part of my decisions and the center of my life.

The biggest disappointment

Of course, when you live in a country for some time, you get to know the highs and lows. As someone interested in politics and social systems, Japan is a pain in the butt in many respects. But the biggest disappointment was in a different area: the working environment. While I love my work and had great surroundings, there is something that is always present in the background: Foreigners here are not considered assets, but embellishment. Meaning that they are the first ones to lose their jobs when times are difficult, meaning that they are not considered full members. After many years at a university here, and with no outlook on a job after March, I can only say, Japan is a country of “Japanese first”, especially when it comes to jobs. Of course, other countries are not that different, but looking at the average mixture of nationalities at universities in Europe or the US, and comparing them to Japanese universities, a bleak image arises. I enjoyed my time here, I worked hard and did a lot for my university, but the economically hard times make it necessary to change things, and that means getting rid of foreigners.

That is the reason why the work environment is the biggest disappointment in these years.


The future is unclear, as it always was. The dire fate of many researchers. Being in my 40s without a permanent position and with a family, I am forced to think hard about what my next options are. The hide-and-seek games of Japanese (and other) universities seem to me less and less of an option. Sad as it is, after having worked 20+ years in academics, having done some interesting (for me) research and having managed to secure a name in our community, I am not sure where my future is. Continuing on fixed-term contracts does not sound like a great option for me. Several things for the future come to my mind: starting my own business, working as a programmer (maybe Google still wants me after I rejected them 2 years ago), working as a mountain guide (I did that for some years before going to Japan). All of that is possible, but losing the time for research will always be a pain, since I enjoy cracking my brain on some complicated and deep logical problems.

Whatever comes, I will take it as a chance to learn new things. And in one way or another it will work out, I hope.

30 September, 2015 09:03PM by Norbert Preining

hackergotchi for Yves-Alexis Perez

Yves-Alexis Perez

Kernel recipes 2015: Hardened kernels for everyone

As part of my ongoing effort to provide grsecurity patched kernels for Debian, I gave a talk this morning at Kernel Recipes 2015. Slides and video should be available at some point, but you can find the former here in the meantime. I'm making some progress on #605090 which I should be able to push soon.

30 September, 2015 04:00PM by Yves-Alexis (corsac@debian.org)

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

My Free Software Activities in September 2015

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 8 hours on Debian LTS. In that time, I mostly did CVE triaging (in the last 3 days, since I'm on LTS frontdesk duty this week). I pushed 14 commits to the security tracker. There were multiple CVEs without any initial investigation, so I checked their status not only in squeeze but also in wheezy/jessie.

On unpaid time, I wrote and sent the summary of the work session held during DebConf. And I tried to initiate a discussion about offering mysql-5.5 in squeeze-lts. We have also set up lts-security@debian.org so that we can better handle embargoed security updates.

The Debian Administrator’s Handbook

I spent a lot of time on my book: the content update has been done, and now we're reviewing it before preparing the paperback. I also started updating its French translation. You can help review it too.

While working on the book I noticed that snort got removed from jessie and the SE linux reference policy as well. I mailed their maintainers to recommend that they provide them in jessie-backports at least… those packages are relatively important/popular and it’s a pity that they are missing in jessie.

I hope to finish the book update in the next two weeks!

Distro Tracker

I spent a lot of time revamping the mail part of Distro Tracker, but as it's not finished, I don't have anything to show yet. That said, I pushed an important fix concerning the mail subscriptions (see #798555): basically, all subscriptions of packages containing a dash were broken. It just shows that the new tracker is not yet widely used for mail subscriptions…

I also merged a patch from Andrew Starr-Bochicchio (#797633) to improve the description of the WNPP action items. And I reviewed another patch submitted by Orestis Ioannou to allow browsing of old news (see #756766).

And I filed #798011 against bugs.debian.org to request that a new X-Debian-PR-Severity header field be added to outgoing BTS mail so that Distro Tracker can filter mails by severity and offer people to subscribe to RC bugs only.

Misc Debian work

I filed many bugs this month and almost all of them are related to my Kali work:

  • 3 on debootstrap: #798560 (request for --suite-config option), #798562 (allow sharing bootstrap scripts), #7985604 (request to add kali related bootstrap scripts).
  • 3 requests of new upstream versions: for gpsd (#797899), for valgrind (#800013) and for puppet (#798636).
  • #797783: sbuild fails without any error message when /var/lib/sbuild is not writable in the chroot
  • #798181: gnuradio: Some files take way too long to compile (I had to request a give-back on another build daemon to ensure gnuradio migrated back to testing, and Julien Cristau suggested that it would be better to fix the package so that a single file doesn’t take more than 5 hours to build…)
  • #799550: libuhd003v5 lost its v5 suffix…


See you next month for a new summary of my activities.


30 September, 2015 03:12PM by Raphaël Hertzog

Dominique Dumont

Using custom cache object with AngularJS $http


At work, I’ve been bitten by the way AngularJS handles cache by default when using $https service. This post will show a simple way to improve cache handling with $http service.

The service I’m working on must perform the followings tasks:

  • retrieve data from a remote server.
  • save data to the same server.
  • retrieve the saved data and some extra information generated by the server to update a UI

At first, I’ve naively used $http.get cache parameter to enable or disable caching using a sequence like:

  1. $http.get(url, {cache: true} )
  2. $http.post(url)
  3. $http.get(url, {cache: false})
  4. $http.get(url, {cache: true})

Let’s say the calls above use the following data:

  1. $http.get(url, {cache: true}) returns “foo”
  2. $http.post(url) stores “bar”
  3. $http.get(url, {cache: false}) returns “bar”

I expected the next call, $http.get(url, {cache: true}), to return “bar”. But no, I got “foo”, i.e. the obsolete cached data.

It turns out that the cache object is completely left alone when {cache: false} is passed to $http.get.

OK, fair enough. But this means that the value of the cache parameter should not change for a given URL. The default cache provided by $http cannot be cleared. (Well, actually, you can clear the cache under AngularJS's hood, but that will probably not improve the readability of your code.)

The naive approach does not work. Let's try another solution by using a custom cache object, as suggested by the AngularJS documentation. This cache object should be created by the $cacheFactory service.

This cache object can then be passed to $http.get to be used as cache. When needed, the cache can be cleared. In the example above, the cache must be cleared after saving some data to the remote service.

There’s 2 possibilities to clear a cache:

  • Completely flush the cache using removeAll() function.
  • Clear the cache for the specific URL using remove(key) function. The only hitch is that the “key” used by $http is not documented.

So, we have to use the first solution and create a cache object for each API entry point:

angular.module('app').factory('myService', function ($http, $cacheFactory) {
  var myFooUrl = '/foo-rest-service';
  // create cache object. The cache id must be unique
  var fooCache = $cacheFactory('myService.foo');
  function getFooData () {
    // GET requests are answered from fooCache whenever possible
    return $http.get( myFooUrl, { cache: fooCache });
  }
  function saveFooData(data) {
    return $http.post( myFooUrl, data).then(function (response) {
      // saved data invalidates everything cached for this service
      fooCache.removeAll();
      return response;
    });
  }
  return { getFooData: getFooData, saveFooData: saveFooData };
});
The code above ensures that:

  • cached data for foo service is always consistent
  • http get requests are not sent more than necessary

This simple approach has the following limitations:

  • cache is not refreshed if the data on the server are updated by another client
  • cache is flushed whenever the browser page is reloaded

If you need a more advanced cache mechanism, you may want to check jmdobry's angular-cache project.

All the best

30 September, 2015 09:02AM by dod

September 29, 2015

Dariusz Dwornikowski

Delete until signature in vim

It has been bugging me for a while. When responding to an email, you often want to delete all the content (or part of the previous content) until the end of the email's body. However it would be nice to leave your signature in place. For that I came up with this nifty little vim trick:

nnoremap <silent> <leader>gr <Esc>d/--\_.*Dariusz<CR>:nohl<CR>O

Assuming that your signature starts with -- and the following line starts with your name (in my case it is Dariusz), this will delete all the content from the current line up to the signature. Then it clears the search highlighting and finally opens a new line above the signature so you can start typing your reply.

29 September, 2015 02:13PM by Dariusz Dwornikowski

hackergotchi for Norbert Preining

Norbert Preining

Multi-boot stick update: TAILS 1.6, SysresCD 4.6.0, GParted 0.23, Debian 8.2

Updates for my multi-boot/multi-purpose USB stick: All components have been updated to the latest versions and I have confirmed that all of them still boot properly – although changes in the grub.cfg file are necessary. So going through these explanations one will end up with a usable USB stick that can boot you into TAILS, System Rescue CD, GNU Parted Live CD, GRML, and can also start an installation of Debian 8.2 Jessie. All this while still being able to use the USB stick as normal media.


Since there have been a lot of updates, and also changes in the setup and grub config file, I include the full procedure here, that is, merging and updating these previous posts: USB stick with Tails and SystemRescueCD, Tails 1.2.1, Debian jessie installer, System Rescue CD on USB, USB stick update: TAILS 1.4, GParted 0.22, SysResCD 4.5.2, Debian Jessie, and USB stick update: Debian is back, plus GRML.

Let us repeat some things from the original post concerning the wishlist and the main players:

I have a long wishlist of items a boot stick should fulfill

  • boots into Tails, SystemRescueCD, GParted, and GRML
  • boots on both EFI and legacy systems
  • uses the full size of the USB stick (user data!)
  • allows installation of Debian
  • if possible, preserve already present user data on the stick


A USB stick, the iso images of TAILS 1.6, SystemRescueCD 4.6.0, GParted Live CD 0.23.0, GRML 2014.11, and some tool to access iso images, for example ISOmaster (often available from your friendly Linux distribution).

I assume that you already have a USB stick prepared as described previously. If this is not the case, please go there and follow the section on preparing your USB stick.

Three types of boot options

We will employ three different approaches to boot these systems: the first boots directly from an iso image (easiest, simple to update), the second extracts the necessary kernels and images (a bit painful, needs some handwork), and the last is a mixture of both, necessary to get Debian booting (most painful, needs additional downloads and handwork).

At the moment we have the following status with respect to boot methods:

  • Booting directly from ISO image: System Rescue CD, GNOME Parted Live CD, GRML
  • Extraction of kernels/images: TAILS
  • Mixture: Debian Jessie install

Booting from ISO image

Quite some time ago, Grub gained the ability to boot directly from an ISO image. In this case the iso image is mounted via loopback, and the kernel and initrd are specified relative to the iso image root. This makes it extremely easy to update the respective boot option: just drop the new iso image onto the USB stick, and update the isofile setting. One could even use some generic -latest naming scheme, but I prefer to keep the exact name.

For SystemRescueCD, the GParted Live CD, and GRML, just drop the iso files into /boot/iso/, in my case /boot/iso/systemrescuecd-x86-4.6.0.iso and /boot/iso/gparted-live-0.23.0-1-i586.iso.

After that, entries like the following have to be added to grub.cfg. For the full list see grub.cfg:

submenu "System Rescue CD 4.6.0 (via ISO) ---> " {
  set isofile="/boot/iso/systemrescuecd-x86-4.6.0.iso"
  menuentry "SystemRescueCd (64bit, default boot options)" {
        set gfxpayload=keep
        loopback loop (hd0,1)$isofile
        linux   (loop)/isolinux/rescue64 isoloop=$isofile
        initrd  (loop)/isolinux/initram.igz
submenu "GNU/Gnome Parted Live CD 0.23.0 (via ISO) ---> " {
  set isofile="/boot/iso/gparted-live-0.23.0-1-i586.iso"
  menuentry "GParted Live (Default settings)"{
    loopback loop (hd0,1)$isofile
    linux (loop)/live/vmlinuz boot=live union=overlay username=user config components quiet noswap noeject  ip= net.ifnames=0 nosplash findiso=$isofile
    initrd (loop)/live/initrd.img
submenu "GRML 2014.11 ---> " {
  menuentry "Grml Rescue System 64bit" {
        export iso_path
        loopback loop (hd0,1)$iso_path
        set root=(loop)
        kernelopts=" ssh=foobarbaz toram  "
        export kernelopts
        configfile /boot/grub/loopback.cfg

Note the added isoloop=$isofile and findiso=$isofile options, which help the booted systems find their iso image.

Booting via extraction of kernels and images

This is a bit more tedious, but still not too bad.

Installation of TAILS files

Assuming you have access to the files on the TAILS CD via the directory ~/tails, execute the following commands:

mkdir -p /usbstick/boot/tails
cp -a ~/tails/live/* /usbstick/boot/tails/

The grub.cfg entries now look similar to the following:

submenu "TAILS Environment 1.6 ---> " {
  menuentry "Tails64 Live System" {
        linux   /boot/tails/vmlinuz2 boot=live live-media-path=/boot/tails config live-media=removable nopersistent noprompt timezone=Etc/UTC block.events_dfl_poll_msecs=1000 splash noautologin module=Tails
        initrd  /boot/tails/initrd2.img

The important part here is the live-media-path=/boot/tails, otherwise TAILS will not find the correct files for booting. The rest of the information was extracted from the boot setup of TAILS itself.

Mixture of iso image and extraction – Debian jessie

As mentioned in the previous post, booting Debian/Jessie installation images via any of the methods laid out above didn't work, since the iso image is never found. It turned out that the current installer iso images do not contain the iso-scan package, which is responsible for searching for and loading iso images.

But with a small trick one can overcome this: one needs to replace the initrd that is on the ISO image with one that contains the iso-scan package. And we do not need to create these initrds ourselves, but can simply use the ones from the hd-media type installer. I downloaded the following four gzipped initrds from one of the Debian mirrors: i386/initrd text mode, i386/initrd gui mode, amd64/initrd text mode, amd64/initrd gui mode, and put them into the USB stick's boot/debian/install.386, boot/debian/install.386/gtk, boot/debian/install.amd, boot/debian/install.amd/gtk, respectively (a sketch of the download commands follows after the grub entry below). Finally, I added entries similar to this one (for the rest, see the grub.cfg file):

submenu "Debian 8.2 Jessie NetInstall ---> " {
    set isofile="/boot/iso/firmware-8.2.0-amd64-i386-netinst.iso"
    menuentry '64 bit Install' {
        set background_color=black
        loopback loop (hd0,1)$isofile
        linux    (loop)/install.amd/vmlinuz iso-scan/ask_second_pass=true iso-scan/filename=$isofile vga=788 -- quiet 
        initrd   /boot/debian/install.amd/initrd.gz

Again an important point, don’t forget the two kernel command line options: iso-scan/ask_second_pass=true iso-scan/filename=$isofile, otherwise you probably will have to make the installer scan all disks and drives completely, which might take ages.
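
For completeness, fetching and placing the four hd-media initrds can be done roughly like this (a sketch only; adjust the mirror URL and double-check the paths against your mirror):

MIRROR=http://ftp.debian.org/debian/dists/jessie/main
cd /usbstick/boot/debian
mkdir -p install.386/gtk install.amd/gtk
wget -O install.386/initrd.gz     $MIRROR/installer-i386/current/images/hd-media/initrd.gz
wget -O install.386/gtk/initrd.gz $MIRROR/installer-i386/current/images/hd-media/gtk/initrd.gz
wget -O install.amd/initrd.gz     $MIRROR/installer-amd64/current/images/hd-media/initrd.gz
wget -O install.amd/gtk/initrd.gz $MIRROR/installer-amd64/current/images/hd-media/gtk/initrd.gz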

Current status of USB stick

Just to make sure, the USB stick should at this stage contain the following files:

boot/
    iso/
        systemrescuecd-x86-4.6.0.iso
        gparted-live-0.23.0-1-i586.iso
        ... (the other iso images: GRML, Debian netinstall)
    tails/
        vmlinuz Tails.module initrd.img ....
        lots of files
    debian/
        install.386/  install.386/gtk/  install.amd/  install.amd/gtk/
            (each with the hd-media initrd.gz described above)
    grub/
        grub.cfg            *this file we create in the next step!!*

The Grub config file grub.cfg

The final step is to provide a grub config file in /usbstick/boot/grub/grub.cfg. I created one by looking at the isoboot.cfg files in the SystemRescueCD, TAILS, GParted, and Debian/Jessie images, and converting them to grub syntax. Excerpts have been shown above in the various sections.

I spare you all the details, grab a copy here: grub.cfg


That’s it. Now you can anonymously provide data about your evil government, rescue your friends computer, fix a forgotten Windows password, and above all, install a proper free operating system.

If you have any comments, improvements or suggestions, please drop me a comment. I hope this helps a few people getting a decent USB boot stick running.


29 September, 2015 08:57AM by Norbert Preining

hackergotchi for Erich Schubert

Erich Schubert

Ubuntu broke Java because of Unity

Unity, that is, the Ubuntu user interface that nobody else uses.

Since it is a Ubuntu-only thing, few applications have native support for its OSX-style hipster "global" menus.

For Java, someone once wrote a hack called java-swing-ayatana, or "jayatana", that is preloaded into the JVM via the environment variable JAVA_TOOL_OPTIONS. The hack seems to be unmaintained now.
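
For the curious, the injection can be observed and switched off per program roughly like this (a sketch; the agent path mentioned in the comment is what Ubuntu typically sets, and myapp.jar stands for whatever Swing application you run):

# show what gets injected into every JVM on an affected Ubuntu system
# (typically a -javaagent option pointing at the jayatana agent jar)
echo "$JAVA_TOOL_OPTIONS"

# start a single application with the injection disabled
JAVA_TOOL_OPTIONS= java -jar myapp.jar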

Unfortunately, this hack seems to be broken now (Google has thousands of problem reports), and causes a NullPointerException or similar crashes in many applications; likely due to a change in OpenJDK 8.

Now all Java Swing applications appear to be broken for Ubuntu users, if they have the jayatana package installed. Congratulations!

And of course, you see bug reports everywhere. Matlab seems to no longer work for some, NetBeans appears to have issues, and I got a number of bug reports on ELKI because of Ubuntu. Thank you, not.

29 September, 2015 08:45AM

September 28, 2015

Sven Hoexter

HP tooling switches from hpacucli to hpssacli

I guess I'm a bit late in the game but I just noticed that HP no longer provides the venerable hpacucli tool for Debian/jessie and Ubuntu 14.04. While you could still install it (as I did from an internal repository) it won't work anymore on Gen9 blades. The replacement seems to be hpssacli, and it's available as usual from the HP repository.

I should've read the manual.

28 September, 2015 09:41AM

September 27, 2015

hackergotchi for Clint Adams

Clint Adams

He then went on to sing the praises of Donald Trump

“I like Italian food and Mexican food,” he said.

“Where are you from?” she asked.

“Yemen, but I like Italian food and Mexican food,” he answered.

“You don't like Yemeni food?” she asked.

“Eh, well, it's the thing you grow up with,” he replied. “Do you know Yemeni food?”

“Yes,” she said, “I like حنيذ.”

“Oh, حنيذ is good if you like meat. If you like vegetables, try سلتة.”

“Why wouldn't I like meat?” she demanded.

“You know, every place in Yemen does حنيذ differently. I like the way they do it in the west of Yemen, near Africa,” he said, and proceeded to describe the cooking process.

27 September, 2015 08:44PM

Sven Hoexter

1blu hack and the usual TLS certificate key madness

Some weeks ago the German low-cost hoster 1blu got hacked, and there was a bit of a fuss later about the TLS certificates issued by 1blu. I think they reissued all of them. Since I knew that some hosters offer to generate the complete cert + key package for the customer, I naively assumed that only the lazy and novice customers were the victims of that issue.

Today, while helping someone, I learned that 1blu forces you to use the key generated by them for certificates included in a virtual server bundle and probably other bundles. That makes those bundles a lot less attractive since the included certificate is not useful at all. One could of course argue that a virtual server is not trustworthy anyway, but I'd like to believe for now that it's more complicated to extract stuff from all running virtual servers compared to dumping the central database / key repository.

Maybe it's time to create a wrapper around openssl that is less opaque to novice users, so we can get rid of key generation by a third party one day. In the end it's a disastrous trend that only got started because of usability issues.
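
Until such a wrapper exists: generating your own key and certificate signing request locally, so that the hoster never sees the private key, is already a single command (a sketch; adjust key size, file names and subject to your needs):

openssl req -new -newkey rsa:2048 -nodes \
    -keyout www.example.org.key -out www.example.org.csr \
    -subj "/CN=www.example.org"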

27 September, 2015 03:36PM

Dominique Dumont

How to automount optical media on Debian Linux for Kodi


This problem has been bugging me for a while: how to setup my Kodi based home cinema to automatically mount an optical media ?

Turns out the solution is quite simple, now that Debian has switched to systemd. Just add the following line to /etc/fstab:

/dev/sr0 /media/bluray auto defaults,nofail,x-systemd.automount 0 2


  • /dev/sr0 is the device file. You can also use one of the symbolic links setup by udev in /dev/disk/by-id
  • /media/bluray is the mount point. You can choose another mount point
  • nofail is required to avoid failure report when booting without a disc in the optical drive
  • x-systemd.automount is the option to configure systemd to automatically mount the inserted disc

Do not specify noauto: this would prevent systemd from automatically mounting a disc, which defeats the purpose.

To test your setup:

  • Run the command journalctl -x -f in a terminal to check what is going on with systemd
  • Reload systemd configuration with sudo systemctl daemon-reload.
  • load a disc in your optical drive

Then, journalctl should show something like:

Sept. 27 16:07:01 frodo systemd[1]: Mounted /media/bluray.

And that’s it. No need to have obsolete packages like udisk-glue or autofs.

Last but not least: this blog is moderated, please do not waste your time (and mine) posting rants.

All the best.

Tagged: automount, debian, kodi, optical, systemd

27 September, 2015 02:37PM by dod

Niels Thykier

There is nothing like (missing) iptables (rules) to make you use tor

I have been fiddling with setting up both iptables and tor on my local machine.  Most of it was fairly easy to do, once I dedicated the time to actually do it. Configuring both “at the same time” also made things easier for me, but YMMV.  Regardless, it did take quite a while researching, tweaking and testing – most of that time was spent on the iptables front for me.

I ended up doing this incrementally.  The major 5 steps I went through were:

  1. Created a basic incoming (INPUT) firewall – enforcing
  2. Installed tor + torsocks and aliased a few commands to run with torsocks
  3. Created a basic outgoing (OUTPUT) firewall – permissive
  4. Make the outgoing firewall enforcing
  5. Migrate the majority of programs and services to use tor.

Some of these overlapped time-wise and I certainly revisited the configuration a couple of times. A couple of things that I learned:

  • You probably want to have a look at “netstat --listen -put --numeric” when you write your INPUT firewall.
  • The tor developers have tried a lot to make things easy.  It is scary how often “torsocks program [args]” just works(tm).
    • That said, it does not always work.
  • Tor and iptables (OUTPUT) can have a synergy effect on each other.
    • Notably, when it is easier to just “torsocks” a program than adding the necessary iptables rules.
  • Writing iptables rules becomes a lot easier once:
    • You learn how to use iptables's LOG rule
    • You use sensible-editor + iptables-restore or something like puppet's firewall module (a minimal sketch follows below)
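
To give an idea of what such rules look like, here is a minimal sketch of an enforcing INPUT policy with a rate-limited LOG rule (not my actual rule set; adapt it before use, ideally via iptables-restore or a configuration management tool):

iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# log (rate-limited) what is about to be dropped; very useful while tuning
iptables -A INPUT -m limit --limit 3/min -j LOG --log-prefix "INPUT drop: "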

Filed under: Debian

27 September, 2015 01:43PM by Niels Thykier

hackergotchi for Ben Armstrong

Ben Armstrong

Annual Bluff Hike, 2015

Here is a photo journal of our hike on the Bluff Wilderness Trail with my friend, Ryan Neily, as is our tradition at this time of year. Rather than hike all four loops, as we achieved last year, we chose to cover only the Pot Lake and Indian Hill loops. Like our meandering pace, our conversations were enjoyable and far ranging, with Nature doing her part, stimulating our minds and bodies and refreshing our spirits.

A break at the summit of Pot Lake loop.
Northern bayberry

A few showers quickly dissipated into light mist on the first leg of the hike
Ryan, enjoying one of the many beautiful views
Cormorant or shag. Hard to say from this poor, zoomed cellphone shot.
Darkened pool amongst the rugged trees
Late summer colours
A riot of life shoots up in every crevice
Large boulders and trees, forming a non-concrete alley along the trail margin
Huckleberries still plentiful on the Indian Hill loop
Sustenance to keep us going
Not at all picked over, like the Pot Lake loop
We break here for lunch
Just about ready to embark on the last half
We are surprised by the productivity of these short, scrubby huckleberries
Barely rising from the reindeer moss, each huckleberry twig provides sweet, juicy handfuls
A small pond on the trip back
A break on the home stretch
“Common” juniper, which nevertheless is not so common out here

Immature green common juniper “berries” (actually cones)


27 September, 2015 01:32PM by Ben Armstrong

hackergotchi for Lunar


Reproducible builds: week 22 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

  • Ben Hutchings uploaded linux-tools/4.2-1 which makes the tarball generated by genorig.py reproducible.

Packages fixed

The following 22 packages became reproducible due to changes in their build dependencies: breathe, cdi-api, geronimo-jpa-2.0-spec, geronimo-validation-1.0-spec, gradle-propdeps-plugin, jansi, javaparser, libjsr311-api-java, mac-widgets, mockito, mojarra, pastescript, plexus-utils2, powerline, python-psutil, python-sfml, python-tldap, pythondialog, tox, trident, truffle, zookeeper.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

  • fldigi/3.23.01-1 by Kamal Mostafa.

Patches submitted which have not made their way to the archive yet:

diffoscope development

The changes to make diffoscope run under Python 3, along with many small fixes, entered the archive with version 35 on September 21st.

Another release was made the very next day, fixing two encoding-related issues discovered when running diffoscope on more Debian packages.

strip-nondeterminism development

Version 0.12.0 now preserves file permissions on modified zip files and dh_strip_nondeterminism has been made compatible with older debhelper.

disorderfs development

Version 0.3.0 implemented a “multi-user” mode that was required to build Debian packages using disorderfs. It also added command line options to control the ordering of files in directory (either shuffled or reversed) and another to do arbitrary changes to the reported space used by files on disk.
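
For those who want to try it, using disorderfs boils down to exposing an existing tree through a FUSE mount point with the desired (mis)ordering; a quick sketch (check disorderfs --help for the exact option names in your version):

mkdir -p /tmp/disorder
# present the source tree with directory entries returned in random order
disorderfs --shuffle-dirents=yes /path/to/sources /tmp/disorder
# unmount when done
fusermount -u /tmp/disorder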

A couple days later, version 0.4.0 was released to support locks, flush, fsync, fsyncdir, read_buf, and write_buf. Almost all known issues have now been fixed.


disorderfs is now used during the second build. This makes file ordering issues very easy to identify as such. (h01ger)

Work has been done on making the distributed build setup more reliable. (h01ger)

Documentation update

Matt Kraii fixed the example on how to fix issues related to dates in Sphinx. Recent Sphinx versions should also be compatible with SOURCE_DATE_EPOCH.

Package reviews

53 reviews have been removed, 85 added and 13 updated this week.

46 packages failing to build from source have been identified by Chris Lamb, Chris West, and Niko Tyni. Chris Lamb was the lucky reporter of bug #800000 on vdr-plugin-prefermenu.

Issues related to disorderfs are being tracked with a new issue.

27 September, 2015 01:06PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Laptop Mode Tools 1.68.1

I am pleased to announce the release of Laptop Mode Tools 1.68.1.

The last release (1.68) was mostly about systemd integration, and so is this one. A couple of bugs were reported, and most of them are fixed with this release. All downstreams are requested to upgrade.

For RPM packages for Fedora and OpenSUSE (Tumbleweed), please see the homepage.

1.68.1 - Sun Sep 27 14:00:13 IST 2015

    * Update details about runtime-pm in manpage

    * Revert "Drop out reload"

    * Log error more descriptively

    * Write to common stderr. Do not hardcode a specific one

    * Call lmt-udev in lmt-poll. Don't call the laptop_mode binary directly.

      Helps in a lot of housekeeping

    * Direct stderr/stdout to journal

    * Fix stdout descriptor

    * Install the new .timer and poll service

    * Use _sbindir for RPM




27 September, 2015 08:57AM by Ritesh Raj Sarraf

September 25, 2015

hackergotchi for Clint Adams

Clint Adams

This can or cannot be copyrighted

“Honey“ Mojito

  • 12 oz. “honey”
  • 7 medium limes
  • bag crushed ice
  • small bouquet fresh mint
  • water
  • light rum
  • sparkling water

Combine 12 oz. of “honey” with 8 oz. of warm water. Stir mixture together until the “honey” has completely dissolved. Juice limes in a juicer and pour into the “honey” and water. Squeeze the bunch of mint sprigs and add to a pitcher of crushed ice. Pour the “honey”/lime mixture over the ice. Stir and top with sparkling water. Add more “honey”, water, limes, or rum to your taste. Enjoy!

Serves 2

25 September, 2015 08:45PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

WadC 2.0 released


This week I released version 2.0 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

Version 2.0 is the first version in about four years and adds a fair number of features, most notably the ability to compose textures in your code and a basic command-line interface.

For more information see the release notes and the reference.

25 September, 2015 08:20PM

Sven Hoexter

Ubuntu 14.04 php-apcu 4.0.7 backport

Looks like the php-apcu release shipped with Ubuntu 14.04 is really buggy. Since nobody at Ubuntu seems to care about packages in universe I've added a backport of php-apcu 4.0.7 to my ppa. It's just a rebuild, so no magic involved.

Update: I've used the requestbackport thingy now to request a backport the Ubuntu way.

25 September, 2015 06:16PM

getting rid of xchat

I'm lazy. So I stuck with xchat for way too long. It seems to have been dead since 2010, but luckily some good souls maintain a fork called hexchat. That's what I moved myself to a few weeks ago.

Now looking at the Debian xchat package I feel the urgent need to file a request for removal. Since I'm not a member of QA I asked for some advice, but the feedback is a bit sparse so far.

Maybe everyone still using xchat could just switch to hexchat so we can remove xchat next year and nobody would notice it? The only obvious drawback I can see at the moment is the missing Tcl plugin. The rest of the migration is more or less reconfiguring everything to your preferences.

25 September, 2015 06:03PM


If you visit your potentially new team in the office and there is no whiteboard, or only a barely used one, you might be better off looking for a different team.

25 September, 2015 05:27PM

hackergotchi for Christian Perrier

Christian Perrier

Bugs #780000 - 790000

Thorsten Glaser reported Debian bug #780000 on Saturday March 7th 2015, against the gcc-4.9 package.

Bug #770000 was reported on November 18th, so there have been 10,000 bugs in about 3.5 months, which was significantly slower than earlier.

Salvatore Bonaccorso reported Debian bug #790000 on Friday June 26th 2015, against the pcre3 package.

Thus, there have been 10,000 bugs in 3.5 months again. It seems that the bug report rate stabilized again.

Sorry for missing the bug #780000 announcement. I have been doing this since November 2007 and bug #450000, and it seems that this lack of attention is somehow significant wrt my involvement in Debian. Still, that involvement is still here and I'll try to "survive" in the project until we reach bug #1000000...:-)

See you for the bug #800000 announcement and the result of the bets we placed on the date it would happen.

25 September, 2015 10:36AM

Bug #800000 has been reported...Tomasz Muras wins a 2.5-year-old bet..:-)

Here it is.

Debian had eight hundred thousand bugs reported in its history.

Tomasz Muras guessed, more than 2 years ago, that it would be reported on September 24th, and it was reported on the 25th. Good catch!

Chris Lamb is the happy bug submitter for this release critical bug against the vdr-plugin-prefermenu package.

Of course, I will soon open the wiki page for the bug #900000 bet, which will again include a place where you can also bet for bug #1000000. Be patient, the week-end is coming..:-)

It took two years, 7 months and 18 days to report 100,000 bugs in Debian since bug #700000 was reported.

25 September, 2015 09:51AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel


A bugfix release of RcppEigen is now on CRAN and in Debian. The NEWS file entry follows.

Changes in RcppEigen version (2015-09-23)

  • Corrected use of kitten() thanks to Grant Brown (#21)

  • Applied upstream change to protect against undefined behaviour with null pointers

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 September, 2015 03:06AM

hackergotchi for Steve McIntyre

Steve McIntyre

Linaro VLANd v0.4

VLANd is a python program intended to make it easy to manage port-based VLAN setups across multiple switches in a network. It is designed to be vendor-agnostic, with a clean pluggable driver API to allow for a wide range of different switches to be controlled together.

There's more information in the README file. I've just released v0.4, with a lot of changes included since the last release:

  • Large numbers of bugfixes and code cleanups
  • Code changes for integration with LAVA:
    • Added db.find_lowest_unused_vlan_tag()
    • create_vlan() with a tag of -1 will find and allocate the first unused tag automatically
  • Add port numbers as well as names to the ports database, to give human-recognisable references. See README.port-numbering for more details.
  • Add tracking of trunks, the inter-switch connections, needed for visualisation diagrams.
  • Add a simple http-based visualisation feature:
    • Generate network diagrams on-demand based on the information in the VLANd database, colour-coded to show port configuration
    • Generate a simple website to reference those diagrams.
  • Allow more ports to be seen on Catalyst switches
  • Add a systemd service file for vland

VLANd is Free Software, released under the GPL version 2 (or any later version). For now, grab it from git; tarballs will be coming shortly.

25 September, 2015 12:44AM

September 24, 2015

hackergotchi for Lars Wirzenius

Lars Wirzenius

FUUG grant for Obnam development

I'm very pleased to say that the FUUG foundation in Finland has awarded me a grant to buy some hardware to help development of Obnam, my backup program. The announcement has more details in Finnish.

24 September, 2015 07:24PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

New GPG key

Just before I went to DebConf15 I got around to setting up my gnuk with the latest build (1.1.7), which supports 4K RSA keys. As a result I decided to generate a new certification only primary key, using a live CD on a non-networked host and ensuring the raw key was only ever used in this configuration. The intention is that in general I will use the key via the gnuk, ensuring no danger of leaking the key material.

I took part in various key signings at DebConf and the subsequent UK Debian BBQ, and finally today got round to dealing with the key slips I had accumulated. I’m sure I’ve missed some people off my signing list, but at least now the key should be embedded into the strong set of keys. Feel free to poke me next time you see me if you didn’t get mail from me with fresh signatures and you think you should have.

Key details are:

pub   4096R/0x21E278A66C28DBC0 2015-08-04 [expires: 2018-08-03]
      Key fingerprint = 3E0C FCDB 05A7 F665 AA18  CEFA 21E2 78A6 6C28 DBC0
uid                 [  full  ] Jonathan McDowell <noodles@earth.li>

I have no reason to assume my old key (0x94FA372B2DA8B985) has been compromised and for now continue to use that key. Also for the new key I have not generated any subkeys as yet, which caff handles ok but emits a warning about unencrypted mail. Thanks to those of you who sent me signatures despite this.

[Update: I was asked about my setup for the key generation, in particular how I ensured enough entropy, given that it was a fresh boot and without networking there were limited entropy sources available to the machine. I made the decision that the machine’s TPM and the use of tpm-rng and rng-tools was sufficient (i.e. I didn’t worry overly about the TPM being compromised for the purposes of feeding additional information into the random pool). Alternative options would have been flashing the gnuk with the NeuG firmware or using my Entropy Key.]

24 September, 2015 02:45PM

Petter Reinholdtsen

The life and death of a laptop battery

When I get a new laptop, the battery life time at the start is OK. But this does not last. The last few laptops gave me the feeling that within a year, the life time is just a fraction of what it used to be, and it slowly becomes painful to use the laptop without power connected all the time. Because of this, when I got a new Thinkpad X230 laptop about two years ago, I decided to monitor its battery state to have more hard facts when the battery started to fail.

First I tried to find a sensible Debian package to record the battery status, assuming that this must be a problem already handled by someone else. I found battery-stats, which collects statistics from the battery, but it was completely broken. I sent a few suggestions to the maintainer, but decided to write my own collector as a shell script while I waited for feedback from him. Via a blog post about the battery development on a MacBook Air I also discovered batlog, not available in Debian.

I started my collector on 2013-07-15, and it has been collecting battery stats ever since. Now my /var/log/hjemmenett-battery-status.log file contains around 115,000 measurements, from the time the battery was working great until now, when it is unable to charge above 7% of original capacity. My collector shell script is quite simple and looks like this:

#!/bin/sh
# Inspired by
# http://www.ifweassume.com/2013/08/the-de-evolution-of-my-laptop-battery.html
# See also
# http://blog.sleeplessbeastie.eu/2013/01/02/debian-how-to-monitor-battery-capacity/

logfile=/var/log/hjemmenett-battery-status.log

files="manufacturer model_name technology serial_number \
    energy_full energy_full_design energy_now cycle_count status"

# Write the CSV header once, when the log file does not exist yet.
if [ ! -e "$logfile" ] ; then
    (
	printf "timestamp,"
	for f in $files; do
	    printf "%s," $f
	done
	echo
    ) > "$logfile"
fi

log_battery() {
    # Print complete message in one echo call, to avoid race condition
    # when several log processes run in parallel.
    msg=$(printf "%s," $(date +%s); \
	for f in $files; do \
	    printf "%s," $(cat $f); \
	done)
    echo "$msg"
}

cd /sys/class/power_supply

for bat in BAT*; do
    (cd $bat && log_battery >> "$logfile")
done

The script is called when the power management system detects a change in the power status (power plug in or out), and when going into and out of hibernation and suspend. In addition, it collects a value every 10 minutes. This makes it possible for me to know when the battery is discharging or charging, and how the maximum charge changes over time. The code for the Debian package is now available on github.

The collected log file looks like this:


I wrote a small script to create a graph of the charge development over time. The graph depicted above shows the slow death of my laptop battery.
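
The plotting itself is not the interesting part; the main step is extracting the relevant numbers from the CSV log. A rough sketch of that step (not the exact script I use; the column numbers follow the field order written by the collector above, and the output can be fed to gnuplot or a spreadsheet):

# print "epoch-timestamp  energy_full/energy_full_design in percent"
awk -F, 'NR > 1 && $7 + 0 > 0 { print $1, 100 * $6 / $7 }' \
    /var/log/hjemmenett-battery-status.log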

But why is this happening? Why are my laptop batteries always dying in a year or two, while the batteries of space probes and satellites keep working year after year. If we are to believe Battery University, the cause is me charging the battery whenever I have a chance, and the fix is to not charge the Lithium-ion batteries to 100% all the time, but to stay below 90% of full charge most of the time. I've been told that the Tesla electric cars limit the charge of their batteries to 80%, with the option to charge to 100% when preparing for a longer trip (not that I would want a car like Tesla where rights to privacy is abandoned, but that is another story), which I guess is the option we should have for laptops on Linux too.

Is there a good and generic way with Linux to tell the battery to stop charging at 80%, unless requested to charge to 100% once in preparation for a longer trip? I found one recipe on askubuntu for Ubuntu to limit charging on Thinkpad to 80%, but could not get it to work (kernel module refused to load).

I wonder why the battery capacity was reported to be more than 100% at the start. I also wonder why the "full capacity" increases some times, and if it is possible to repeat the process to get the battery back to design capacity. And I wonder if the discharge and charge speeds change over time, or if they stay the same. I did not yet try to write a tool to calculate the derivative values of the battery level, but suspect some interesting insights might be learned from those.

Update 2015-09-24: I got a tip to install the acpi-call-dkms and tlp packages (unfortunately missing in Debian stable) instead of the tp-smapi-dkms package I had tried to use initially, and to use 'tlp setcharge 40 80' to change when charging starts and stops. I've done so now, but expect my existing battery is toast and needs to be replaced. The proposal is unfortunately Thinkpad specific.

24 September, 2015 02:00PM

hackergotchi for Joachim Breitner

Joachim Breitner

The Incredible Proof Machine

In a few weeks, I will have the opportunity to offer a weekend workshop to selected and motivated high school students1 on a topic of my choice. My idea is to tell them something about logic, proofs, the joy of searching for and finding proofs, and the gratification of irrevocable truths.

While proving things on paper is already quite nice, it is much more fun to use an interactive theorem prover, such as Isabelle, Coq or Agda: You get immediate feedback, you can experiment and play around if you are stuck, and you get lots of small successes. Someone2 once called interactive theorem proving “the world's geekiest videogame”.

Unfortunately, I don’t think one can get high school students without any prior knowledge in logic, or programming, or fancy mathematical symbols, to do something meaningful with a system like Isabelle, so I need something that is (much) easier to use. I always had this idea in the back of my head that proving is not so much about writing text (as in “normally written” proofs) or programs (as in Agda) or labeled statements (as in Hilbert-style proofs), but rather something involving facts that I have proven so far floating around freely, and way to combine these facts to new facts, without the need to name them, or put them in a particular order or sequence. In a way, I’m looking for labVIEW wrestled through the Curry-Horward-isomorphism. Something like this:

A proof of implication currying

So I set out, rounded up a few contributors (Thanks!), implemented this, and now I proudly present: The Incredible Proof Machine3

This interactive theorem prover allows you to perform proofs purely by dragging blocks (representing proof steps) onto the paper and connecting them properly. There is no need to learn syntax, and hence no frustration about getting it wrong. Furthermore, it comes with a number of example tasks to experiment with, so you can simply see it as a challenging computer game and work through them one by one, learning something about the logical connectives and how they work as you go.

For the actual workshop, my plan is to let the students first try to solve the tasks of one session on their own, let them draw their own conclusions and come up with an idea of what they just did, and then deliver an explanation of the logical meaning of what they did.

The implementation is heavily influenced by Isabelle: The software does not know anything about, say, conjunction (∧) and implication (→). At its core, everything is just an untyped lambda expression, and when two blocks are connected, it performs unification4 of the propositions present on either side. This general framework is then instantiated by specifying the basic rules (or axioms) in a descriptive manner. It is quite feasible to implement other logics or formal systems on top of this as well.
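
To give a flavour of that connection check, here is a much-simplified sketch of first-order syntactic unification (the real thing unifies lambda terms with higher-order pattern unification, see footnote 4, and this toy version even omits the occurs check):

    # Much-simplified illustration of the connection check: first-order
    # syntactic unification over terms.  ("Var", name) is a metavariable,
    # anything else is a constructor applied to arguments, e.g.
    # ("imp", ("Var", "A"), ("Var", "B")) stands for A → B.

    def walk(t, subst):
        while isinstance(t, tuple) and t[0] == "Var" and t[1] in subst:
            t = subst[t[1]]
        return t

    def unify(a, b, subst):
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            return subst
        if isinstance(a, tuple) and a[0] == "Var":
            return {**subst, a[1]: b}
        if isinstance(b, tuple) and b[0] == "Var":
            return {**subst, b[1]: a}
        if isinstance(a, tuple) and isinstance(b, tuple) \
                and a[0] == b[0] and len(a) == len(b):
            for x, y in zip(a[1:], b[1:]):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None  # the two propositions do not fit together

    # Connecting a port carrying A → B to a port expecting X → X succeeds
    # and forces A = B:
    print(unify(("imp", ("Var", "A"), ("Var", "B")),
                ("imp", ("Var", "X"), ("Var", "X")), {}))

Two ports can be connected exactly when their propositions unify, and the resulting substitution then constrains the rest of the proof graph.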

Another influence of Isabelle is the non-linear editing: You neither have to create the proof in a particular order nor have to manually manage a “proof focus”. Instead, you can edit any bit of the proof at any time, and the system checks all of it continuously.

As always, I am keen on feedback. Also, if you want to use this for your own teaching or experimentation needs, let me know. We have a mailing list for the project, and the code is on GitHub, where you can also file bug reports and feature requests. Contributions are welcome! All aspects of the logic are implemented in Haskell and compiled to JavaScript using GHCJS; the UI is plain hand-written and messy JavaScript code, using JointJS to handle the graph interaction.

Obviously, there is still plenty that can be done to improve the machine. In particular, the ability to create your own proof blocks, such as proof by contradiction, prove them to be valid and then use them in further proofs, is currently being worked on. And while the page will store your current progress, including all proofs you create, in your browser, it needs better ways to save, load and share tasks, blocks and proofs. Also, we’d like to add some gamification, e.g. achievements (“First proof by contradiction”, “50 theorems proven”), statistics, maybe a “share theorem on Twitter” button. As the UI becomes more complicated, I’d like to investigate moving more of it into the Haskell world and use Functional Reactive Programming, i.e. Ryan Trinkle’s reflex, to stay sane.

Customers who liked The Incredible Proof Machine might also like these artifacts, which I found while checking whether something like this already exists:

  • Easyprove, an interactive tool to create textual proofs by clicking on rules.
  • Domino On Acid represents natural deduction rules in propositional logic with → and ⊥ as a game of dominoes.
  • Proofscape visualizes the dependencies between proofs as graphs, i.e. it operates on a higher level than The Incredible Proof Machine.
  • Proofmood is a nice interactive interface to conduct proofs in Fitch-style.
  • Proof-Game represents proof trees in a sequent calculus with boxes of different shapes that have to match.
  • JAPE is an editor for proofs in a number of traditional proof styles. (Thanks to Alfio Martini for the pointer.)
  • Logitext, written by Edward Z. Yang, is an online tool to create proof trees in sequent style, with a slick interface, and is even backed by Coq! (Thanks to Lev Lamberov for the pointer.)
  • Carnap is similar in implementation to The Incredible Proof Machine (logical core in Haskell, generic unification-based solver). It currently lets you edit proof trees, but there are plans to create something more visual.
  • Clickable Proofs is a (non-free) iOS app that incorporates quite a few of the ideas that are behind The Incredible Proof Machine. It came out of a Bachelor’s thesis of Tim Selier and covers propositional logic.
  • Euclid the game by Kasper Peulen is a nice game to play with geometric constructions.

  1. Students with a migration background, supported by the START scholarship

  2. Does anyone know the reference?

  3. We almost named it “Proofcraft”, which would be a name our current Minecraft-crazed youth would appreciate, but it is already taken by Gerwin Klein’s blog. Also, the irony of a theorem prover being in-credible is worth something.

  4. Luckily, two decades ago, Tobias Nipkow published a nice implementation of higher order pattern unification as ML code, which I transliterated to Haskell for this project.

24 September, 2015 12:14PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Matthew Garrett

Matthew Garrett

Filling in the holes in Linux boot chain measurement, and the TPM measurement log

When I wrote about TPM attestation via 2FA, I mentioned that you needed a bootloader that actually performed measurement. I've now written some patches for Shim and Grub that do so.

The Shim code does a couple of things. The obvious one is to measure the second-stage bootloader into PCR 9. The perhaps less expected one is to measure the contents of the MokList and MokSBState UEFI variables into PCR 14. This means that if you're happy simply running a system with your own set of signing keys and just want to ensure that your secure boot configuration hasn't been compromised, you can simply seal to PCR 7 (which will contain the UEFI Secure Boot state as defined by the UEFI spec) and PCR 14 (which will contain the additional state used by Shim) and ignore all the others.

The grub code is a little more complicated because there are more ways to get it to execute code. Right now I've gone for a fairly extreme implementation. On BIOS systems, the grub stages 1 and 2 will be measured into PCR 9[1]. That's the only BIOS-specific part of things. From then on, any grub modules that are loaded will also be measured into PCR 9. The full kernel image will be measured into PCR 10, and the full initramfs will be measured into PCR 11. The command line passed to the kernel goes into PCR 12. Finally, each command executed by grub (including those in the config file) is measured into PCR 13.

That's quite a lot of measurement, and there are probably fairly reasonable circumstances under which you won't want to pay attention to all of those PCRs. But you've probably also noticed that several different things may be measured into the same PCR, and that makes it more difficult to figure out what's going on. Thankfully, the spec designers have a solution to this in the form of the TPM measurement log.

Rather than merely extending a PCR with a new hash, software can extend the measurement log at the same time. This is stored outside the TPM and so isn't directly cryptographically protected. In the simplest form, it contains a hash and some form of description of the event associated with that hash. If you replay those hashes you should end up with the same value that's in the TPM, so for attestation purposes you can perform that verification and then merely check that specific log values you care about are correct. This makes it possible to have a system perform an attestation to a remote server that contains a full list of the grub commands that it ran and for that server to make its attestation decision based on a subset of those.
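
As an illustration of that replay step (this is not code from the patches), a verifier could recompute the PCR values from a TPM 1.2 style log roughly like this, where extending a PCR means PCR := SHA1(PCR || measurement):

    # Illustrative sketch of the replay check.  The PCRs used for boot
    # measurements start out as 20 zero bytes; replaying the digests recorded
    # in the measurement log must reproduce the PCR values reported by the
    # TPM, otherwise the log cannot be trusted.
    import hashlib

    def extend(pcr_value, digest):
        return hashlib.sha1(pcr_value + digest).digest()

    def replay(log_entries):
        # log_entries: iterable of (pcr_index, sha1_digest, description)
        pcrs = {}
        for index, digest, _description in log_entries:
            pcrs[index] = extend(pcrs.get(index, b"\x00" * 20), digest)
        return pcrs

    def log_matches_quote(log_entries, quoted_pcrs):
        # quoted_pcrs: {pcr_index: value taken from the (signed) TPM quote}
        replayed = replay(log_entries)
        return all(replayed.get(i) == v for i, v in quoted_pcrs.items())

Since the log itself is stored outside the TPM, only log entries whose replay matches the quoted PCR values should be believed.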

No promises as yet about PCR allocation being final or these patches ever going anywhere in their current form, but it seems reasonable to get them out there so people can play. Let me know if you end up using them!

[1] The code for this is derived from the old Trusted Grub patchset, by way of Sirrix AG's Trusted Grub 2 tree.


24 September, 2015 01:21AM

September 23, 2015

Simon Josefsson

Cosmos – A Simple Configuration Management System

Back in early 2012 I had been helping with system administration of a number of Debian/Ubuntu-based machines, and the odd Solaris machine, for a couple of years at $DAYJOB. We had a combination of hand-written scripts, documentation notes that we cut’n’paste’d from during installation, and some locally maintained Debian packages for pulling in dependencies and providing some configuration files. As the number of people and machines involved grew, I realized that I wasn’t happy with how these machines were being administered. If one of these machines were to disappear in flames, it would take time (and more importantly, non-trivial manual labor) to get its services up and running again. I wanted a system that could automate the complete configuration of any Unix-like machine. It should require minimal human interaction. I wanted the configuration files to be version controlled. I wanted good security properties. I did not want to rely on a centralized server that would be a single point of failure. It had to be portable and be easy to get to work on new (and very old) platforms. It should be easy to modify a configuration file and get it deployed. I wanted it to be easy to start to use on an existing server. I wanted it to allow for incremental adoption. Surely this must exist, I thought.

During January 2012 I evaluated the existing configuration management systems around, like CFEngine, Chef, and Puppet. I don’t recall my reasons for rejecting each individual project, but needless to say I did not find what I was looking for. The reasons for rejecting the projects I looked at ranged from centralization concerns (single-point-of-failure central servers), bad security (no OpenPGP signing integration), to the feeling that the projects were too complex and hence fragile. I’m sure there were other reasons too.

In February I started going back to my original needs and tried to see if I could abstract something from the knowledge that was in all these notes, script snippets and local dpkg packages. I realized that the essence of what I wanted was one shell script per machine, OpenPGP signed, in a Git repository. I could check out that Git repository on every new machine that I wanted to configure, verify the OpenPGP signature of the shell script, and invoke the script. The script would do everything needed to get the machine up to an operational state again, including package installation and configuration file changes. Since I would usually want to modify configuration files on a system even after its initial installation (hey, not everyone is perfect), it was natural to extend this idea to a cron job that did ‘git pull’, verified the OpenPGP signature, and ran the script. The script would then have to be a bit more clever and not redo everything every time.

Since we had many machines, it was obvious that there would be huge code duplication between scripts. It felt natural to think of splitting up the shell script into a directory with many smaller shell scripts, and invoking each shell script in turn. Think of the /etc/init.d/ hierarchy and how it worked with System V init. This would allow re-use of useful snippets across several machines. The next realization was that large parts of the shell script would consist of creating configuration files, such as /etc/network/interfaces. It would be easier to modify the content of those files if they were stored as files in a separate directory, an “overlay” stored in a sub-directory overlay/, and copied into the file system’s hierarchy with rsync. The final realization was that it made some sense to run one set of scripts before rsync’ing in the configuration files (to be able to install packages or set things up for the configuration files to make sense), and one set of scripts after the rsync (to perform tasks that require some package to be installed and configured). These sets of scripts were called the “pre-tasks” and “post-tasks” respectively, and stored in sub-directories called pre-tasks.d/ and post-tasks.d/.
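
Cosmos itself is implemented as shell scripts, but the resulting update cycle can be sketched roughly as follows; the paths and the exact way the OpenPGP signature is checked here are illustrative assumptions, not Cosmos internals:

    # Rough sketch of the update cycle described above, meant to run from cron.
    # REPO and HOST are hypothetical names for this example.
    import subprocess

    REPO = "/etc/cosmos-repo"                    # checkout of the machine models
    HOST = REPO + "/myhost.example.org"          # this machine's model

    def run(cmd):
        subprocess.run(cmd, check=True)

    def update():
        run(["git", "-C", REPO, "pull", "--ff-only"])
        # Refuse to apply anything whose OpenPGP signature does not verify;
        # checking the HEAD commit is just one way to sketch that step.
        run(["git", "-C", REPO, "verify-commit", "HEAD"])
        # pre-tasks: prepare the system, e.g. install packages
        run(["run-parts", "--exit-on-error", HOST + "/pre-tasks.d"])
        # overlay: copy configuration files into the file system hierarchy
        run(["rsync", "-a", HOST + "/overlay/", "/"])
        # post-tasks: work that needs the configuration files in place
        run(["run-parts", "--exit-on-error", HOST + "/post-tasks.d"])

    if __name__ == "__main__":
        update()

A real implementation also has to worry about locking, logging and error handling, but the essence is the pull, verify, pre-tasks, overlay, post-tasks cycle described above.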

I started putting what would become Cosmos together during February 2012. Incidentally, I had been using etckeeper on our machines, and I had been reading its source code, and it greatly inspired the internal design of Cosmos. The git history shows well how the ideas evolved — even that Cosmos was initially called Eve but in retrospect I didn’t like the religious connotations — and there were a couple of rewrites on the way, but on the 28th of February I pushed out version 1.0. It was 778 lines of code in total, with at least 200 of those lines being the license boilerplate at the top of each file. Version 1.0 had a debian/ directory and I built the dpkg file and started to deploy it on some machines. There were a couple of small fixes in the next few days, but development stopped on March 5th 2012. We started to use Cosmos, and converted more and more machines to it, and I quickly also converted all of my home servers to use it. And even my laptops. It took until September 2014 to discover the first bug (the fix is a one-liner). Since then there haven’t been any real changes to the source code. It is in daily use today.

The README that comes with Cosmos takes a more hands-on approach to using it, which I hope will serve as a starting point if the above introduction sparked some interest. I hope to cover more about how to use Cosmos in a later blog post. Since Cosmos does so little on its own, to make sense of how to use it, you want to see a Git repository with machine models. If you want to see how the Git repository for my own machines looks, you can look at the sjd-cosmos repository. Don’t miss its README at the bottom. In particular, its global/ sub-directory contains some of the foundation, such as OpenPGP key trust handling.

23 September, 2015 10:38PM by simon

hackergotchi for Daniel Pocock

Daniel Pocock

The only way to ensure the VW scandal never happens again

The automotive industry has been in the spotlight after a massive scandal at Volkswagen, which used code hidden in the engine management software to cheat emissions tests.

What else is hidden in your new car's computer?

Every technology we use in our lives is becoming more computerized, even light bulbs and toilet seats.

In a large piece of equipment like a car, there are many opportunities for computerization. In most cases, consumers aren't even given a choice whether or not they want software in their car.

It has long been known that such software spies on the habits of the driver, and that this data is extracted from the car when it is serviced and uploaded to the car company. Car companies are building vast databases about the lifestyles, habits and employment activities of their customers.

Computers aren't going away, so what can be done?

Most people realize that computers aren't going to go away any time soon. That doesn't mean we have to put up with these deceptions and intrusions into our lives.

For years, many leading experts in the software engineering world have been promoting the benefits and principles of free software.

What we mean by free is that users, regulators and other independent experts should have the freedom to see and modify the source code in the equipment that we depend on as part of modern life. In fact, experts generally agree that there is no means other than software freedom to counter the might of corporations like Volkswagen and their potential to misuse that power, as demonstrated in the emissions testing scandal.

If governments and regulators want to be taken seriously and protect society, isn't it time they insisted that the car industry replace all hidden code with free and open source software?

23 September, 2015 07:48AM by Daniel.Pocock