February 28, 2015

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, February 2015

This was my third month working on Debian LTS, and the first where I actually uploaded packages. I also worked on userland packages for the first time.

In the middle of February I finished and uploaded a security update for the kernel package (linux-2.6 version 2.6.32-48squeeze11, DLA 155-1). I decided not to include the fix for CVE-2014-9419 and the large FPU/MMX/SSE/AVX state management changes it depends on, as they don't seem to be worth the risk.

The old patch system used in linux-2.6 in squeeze still frustrates me, but I committed a script in the kernel subversion repository to simplify adding patches to it. This might be useful to any other LTS team members working on it.

In the past week I uploaded security updates for cups (version 1.4.4-7+squeeze7, DLA 159-1) and sudo (1.7.4p4-2.squeeze.5, DLA 160-1). My work on the cups package was slowed down by its reliance on dpatch, which thankfully has been replaced in later versions. sudo is a more modern quilt/debhelper package, but upstream has an odd way of building manual pages. In the version used in squeeze the master format is Perl POD, while in wheezy it's mandoc, but in both cases the upstream source includes pre-generated manual pages and doesn't rebuild them by default. debian/rules is supposed to fix this but doesn't (#779363), so I had to regenerate them 'by hand' and fold the changes into the respective patches.

Finally, I started work on addressing the many remaining security issues in eglibc. Most of the patches applied to wheezy were usable with minimal adjustment, but I didn't have time left to perform any meaningful testing. I intend to upload what I've done to people.debian.org for testing by interested parties and then make an upload early in March (or let someone else on the LTS or glibc team do so).

Update: I sent mail about the incomplete eglibc update to the debian-lts list.

28 February, 2015 09:39PM

Petter Reinholdtsen

The Citizenfour documentary on the Snowden confirmations to Norway

Today I was happy to learn that the documentary Citizenfour by Laura Poitras will finally show up in Norway. According to the magazine Montages, a deal has finally been made for cinema distribution in Norway, and the movie will have its premiere soon. This is great news. As part of my involvement with the Norwegian Unix User Group, a friend and I had tried to get the movie to Norway ourselves, but obviously we were too late and Tor Fosse beat us to it. I am happy he did, as the movie will make its way to the public and we do not have to make it happen ourselves. The trailer can be seen on YouTube, if you are curious what kind of film this is.

The whistleblower Edward Snowden really deserves political asylum here in Norway, but I am afraid he would not be safe.

28 February, 2015 09:10PM

Mathieu Parent

Hello Planet Debian

After more than five years of being a Debian developer, here is my first post on Planet Debian!

I currently maintain 165 packages. My focus has changed since 2009, but those are still mostly sysadmin packages:

  • ctdb (under the pkg-samba umbrella), the clustered database used by Samba
  • c-icap and c-icap-modules: a c-icap server, mostly useful with squid, providing URL blacklists and antivirus filtering
  • pkg-php-tools: easy packaging of PHP packages (PEAR, PECL and Composer) as .deb
  • 124 php-horde* (Horde) packages: a groupware and webmail suite, written in PHP
  • 12 PHP PEAR, Composer, or PECL packages (those are Horde dependencies)
  • I mostly maintain the above packages alone. Any help is appreciated!
  • python-ceres, graphite-carbon and graphite-web: Graphite is a high-performance monitoring and graphing tool. Jonas Genannt is maintaining the packages well and I only do reviews
  • 20 shinken packages: a monitoring solution, compatible with Nagios configuration files and written in Python. Thibault Cohen is doing most of the packaging, and I give advice
  • svox: the TTS from Android (unfortunately non-free because of missing or outdated sources). This is now under the Debian Accessibility Team umbrella
  • kolabadmin: this is the last remaining piece from my former pkg-kolab membership (unfortunately the Kolab server won't be in jessie; you can help the team for Stretch)

Now that the first post is online, I will try to keep up!

28 February, 2015 06:05PM by sathieu

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel


A new release of RcppEigen is now on CRAN and in Debian. It synchronizes the Eigen code with the 3.2.4 upstream release, and updates the RcppEigen.package.skeleton() package creation helper to use the kitten() function from pkgKitten for enhanced package creation.

The NEWS file entry follows.

Changes in RcppEigen version (2015-02-23)

  • Updated to version 3.2.4 of Eigen

  • Update RcppEigen.package.skeleton() to use pkgKitten if available

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 February, 2015 02:34PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Welcome to the world, little ones!

Welcome to the world, little ones!

Welcome little babies!

Yesterday night, we entered the hospital. Nervous, heavy, and... Well, would we ever be ready? As ready as we could.

A couple of hours later, Alan and Elena Wolf Daichman became individuals in their own right. As is often the case with twins, they were brought into this world after a relatively short preparation (34 weeks, that's about 7.5 months). At 1.820 and 1.980 kg, they are considerably smaller than either of their parents... But we will be working on that!

Regina is recovering from the operation, the babies are under observation. As far as we were told, they seem to be quite healthy, with just minor issues to work on during neonatal care. We are waiting for our doctors to come today and allow us to spend time with them.

And as for us... It's a shocking change to finally see the so long expected babies. We are very very very happy... And the new reality is hard to grasp, to even begin understanding :)

PS- Many people have told me that my blog often errors out under load. I expect it to happen today :) So, if you cannot do it here, there are many other ways to contact us. Use them! :)

28 February, 2015 01:26PM by gwolf

February 27, 2015

Richard Hartmann

Release Critical Bug report for Week 09

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1072 (Including 181 bugs affecting key packages)
    • Affecting Jessie: 152 (key packages: 117) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 101 (key packages: 80) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 23 bugs are tagged 'patch'. (key packages: 17) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 6 bugs are marked as done, but still affect unstable. (key packages: 4) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 72 bugs are neither tagged patch, nor marked done. (key packages: 59) Help make a first step towards resolution!
      • Affecting Jessie only: 51 (key packages: 37) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 35 bugs are in packages that are unblocked by the release team. (key packages: 27)
        • 16 bugs are in packages that are not unblocked. (key packages: 10)

How do we compare to the Squeeze and Wheezy release cycles?

Week  Squeeze        Wheezy         Jessie
43    284 (213+71)   468 (332+136)  319 (240+79)
44    261 (201+60)   408 (265+143)  274 (224+50)
45    261 (205+56)   425 (291+134)  295 (229+66)
46    271 (200+71)   401 (258+143)  427 (313+114)
47    283 (209+74)   366 (221+145)  342 (260+82)
48    256 (177+79)   378 (230+148)  274 (189+85)
49    256 (180+76)   360 (216+155)  226 (147+79)
50    204 (148+56)   339 (195+144)  ???
51    178 (124+54)   323 (190+133)  189 (134+55)
52    115 (78+37)    289 (190+99)   147 (112+35)
1     93 (60+33)     287 (171+116)  140 (104+36)
2     82 (46+36)     271 (162+109)  157 (124+33)
3     25 (15+10)     249 (165+84)   172 (128+44)
4     14 (8+6)       244 (176+68)   187 (132+55)
5     2 (0+2)        224 (132+92)   175 (124+51)
6     release!       212 (129+83)   161 (109+52)
7     release+1      194 (128+66)   147 (106+41)
8     release+2      206 (144+62)   147 (96+51)
9     release+3      174 (105+69)   152 (101+51)
10    release+4      120 (72+48)
11    release+5      115 (74+41)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

27 February, 2015 03:40PM by Richard 'RichiH' Hartmann

Enrico Zini


Another day in the life of a poor developer

try:
    # After Python 3.3
    from collections.abc import Iterable
except ImportError:
    # This has changed in Python 3.3 (why, oh why?), reinforcing the idea that
    # the best Python version ever is still 2.7, simply because upstream has
    # promised that they won't touch it (and break it) for at least 5 more
    # years.
    from collections import Iterable

import shlex
if hasattr(shlex, "quote"):
    # New in version 3.3.
    shell_quote = shlex.quote
else:
    # Available since python 1.6 but deprecated since version 2.7: Prior to Python
    # 2.7, this function was not publicly documented. It is finally exposed
    # publicly in Python 3.3 as the quote function in the shlex module.
    # Except everyone was using it, because it was the only way provided by the
    # python standard library to make a string safe for shell use
    # See http://stackoverflow.com/questions/35817/how-to-escape-os-system-calls-in-python
    import pipes
    shell_quote = pipes.quote

import shutil
if hasattr(shutil, "which"):
    # New in version 3.3.
    shell_which = shutil.which
else:
    # Available since python 1.6:
    # http://stackoverflow.com/questions/377017/test-if-executable-exists-in-python
    from distutils.spawn import find_executable
    shell_which = find_executable

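Once the compatibility shims above are in place, calling code can stay version-agnostic. A quick sanity check of the shell_quote half (repeating the same fallback so the snippet is self-contained):

```python
import shlex

# Same fallback dance as above: prefer the Python 3.3+ name,
# fall back to the long-undocumented pipes.quote otherwise.
if hasattr(shlex, "quote"):
    shell_quote = shlex.quote
else:
    import pipes
    shell_quote = pipes.quote

# Either implementation wraps anything shell-special in single quotes,
# escaping embedded single quotes the POSIX way.
print(shell_quote("it's"))        # 'it'"'"'s'
print(shell_quote("plain-text"))  # unchanged: nothing to escape
```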
27 February, 2015 11:02AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.4.650.1.1 (and also 0.4.650.2.0)

A new Armadillo release 4.650.0 was released by Conrad a few days ago. Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab.

It turned out that this release had one shortcoming with respect to the C++11 RNG initialization in the R use case (where we need to protect users from the C++98 RNG deemed unsuitable by the CRAN gatekeepers). This led to upstream release 4.650.1, which we wrapped into RcppArmadillo 0.4.650.1.1. As before, this was tested against all 107 reverse dependencies of RcppArmadillo on CRAN.

This version is now on CRAN, and was just uploaded to Debian. Its changes are summarized below based on the NEWS.Rd file.

Changes in RcppArmadillo version 0.4.650.1.1 (2015-02-25)

  • Upgraded to Armadillo release Version 4.650.1 ("Intravenous Caffeine Injector")

    • added randg() for generating random values from gamma distributions (C++11 only)

    • added .head_rows() and .tail_rows() to submatrix views

    • added .head_cols() and .tail_cols() to submatrix views

    • expanded eigs_sym() to optionally calculate eigenvalues with smallest/largest algebraic values

    • fixes for handling of sparse matrices

  • Applied small correction to main header file to set up C++11 RNG whether or not the alternate RNG (based on R, our default) is used

Now, it turns out that another small fix was needed for the corner case of a submatrix within a submatrix, i.e. V.subvec(1,10).tail(5). I decided not to re-release this to CRAN given the CRAN Repository Policy preference for releases “no more than every 1–2 months”.

But fear not, for we now have drat. I created a drat package repository in the RcppCore account (to not put a larger package into my main drat repository often used via a fork to initialize a drat). So now with these two simple commands

## if needed, first install 'drat' via:   install.packages("drat")

you will get the newest RcppArmadillo via this drat package repository. And of course install.packages("RcppArmadillo") would also work, but takes longer to type :)

Lastly, courtesy of CRANberries, there is also a diffstat report for the most recent CRAN release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

27 February, 2015 02:01AM

February 26, 2015

hackergotchi for Daniel Pocock

Daniel Pocock

PostBooks accounting and ERP suite coming to Fedora

PostBooks has been successful on Debian and Ubuntu for a while now and for all those who asked, it is finally coming to Fedora.

The review request has just been submitted and the spec files have also been submitted to xTuple as pull requests so future upstream releases can be used with rpmbuild to create packages.

Can you help?

A few small things outstanding:

  • Putting a launcher icon in the GNOME menus
  • Packaging the schemas - they are in separate packages on Debian/Ubuntu. Download them here and load the one you want into your PostgreSQL instance using the instructions from the Debian package.

Community support

The xTuple forum is a great place to ask any questions and get to know the community.


Here is a quick look at the login screen on a Fedora 19 host:

26 February, 2015 09:08PM by Daniel.Pocock

hackergotchi for EvolvisForge blog

EvolvisForge blog

tomcat7 log encoding

TIL: the encoding of the catalina.out file is dependent on the system locale, using standard Debian wheezy tomcat7 package.

Fix for ‘?’ instead of umlauts in it:

cat >>/etc/default/tomcat7 <<EOF
LC_CTYPE=C.UTF-8
export LC_CTYPE
EOF

My “problem” here is that I keep the system locale set to the “C” locale, to get predictable behaviour; applications that need a locale can set one themselves. (Many don’t bother with POSIX locales and use different/separate means of determining especially the encoding, but possibly also i18n/l10n. That said, POSIX locales seem to be getting more and more use.)

Update: There is also the option of adding -Dfile.encoding=UTF-8 to $JAVA_OPTS, which seems more promising: no fiddling with locales, no breakage if someone has already defined LC_ALL, and it sets precisely what it should set (the encoding) and nothing else (since the encoding does not need to correlate with any locale setting, why should it?).

26 February, 2015 04:09PM by Thorsten Glaser

tomcat7 init script is asynchronous

TIL: the init script of tomcat7 in Debian is asynchronous.

For some piece of software, our rollout (install and upgrade) process works like this:

  • service tomcat7 stop
  • rm -rf /var/lib/tomcat7/webapps/appname{,.war}
  • cp newfile.war /var/lib/tomcat7/webapps/appname.war
  • service tomcat7 start # ← here
  • service tomcat7 stop
  • edit some config files under /var/lib/tomcat7/webapps/appname/WEB-INF/
  • service tomcat7 start

The first tomcat7 start “here” is just to unzip the *.war files. For some reason, people like to let tomcat7 do that.

This failed today; there were two webapps. Manually unzipping them also did not work, for some reason.

Re-doing it, inserting a sleep 30 after the “here”, made it work.

In a perfect world, initscripts only return when the service is running, so that the next one started in a nice sequential (not parallel!) init or manual start sequence can do what it needs to, assuming the previous command has fully finished.

In this perfect world, those who do wish for faster startup times use a different init system, one that starts things in parallel, for example. Even there, dependencies will wish for the depended-on service to be fully running when they are started; even more so, since the delays between starting things seem to be less for that other init system.

So, this is not about the init system, but about the init script; a change that would be a win-win for users of both init schemes.

Update: Someone already contacted me with feedback: they suggested to wait until the “shutdown port” is listened on by tomcat7. We’ll look at this later. In the meantime, we’re trying to also get rid of the “config (and logs) in webapps/” part…
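The wait-for-the-shutdown-port suggestion can be sketched as a small polling loop. Everything below is an assumption rather than something from the post: the helper name, the timeout, and the use of 8005 (tomcat's conventional shutdown port) are all illustrative.

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=0.5):
    """Poll until something is listening on host:port, or give up.

    Returns True once a TCP connect succeeds, False after `timeout`
    seconds of refused or timed-out connection attempts.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Not listening yet (refused) or unreachable: retry shortly.
            time.sleep(interval)
    return False

# Hypothetical use, e.g. after 'service tomcat7 start' and before
# touching webapps/:
#   wait_for_port("127.0.0.1", 8005)
```

A synchronous init script could shell out to something like this instead of relying on a fixed sleep 30.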

PS: If someone is interested in an init script (Debian/LSB sysvinit; I made the effort to finally learn that… some months before the other system came) that starts Wildfly (formerly known as JBoss AS) synchronously, waiting until all *.?ar files are fully “deployed” before returning (though with a timeout in case it never finishes), just ask; maybe it will become a dialogue in which we can improve it together. (We have two versions of it; the more actively maintained one is in a secret internal project, so I’d have to merge it and ready it for publication first, plus the older one is AGPLv3 while the newer one was relicensed to a BSDish licence.)

26 February, 2015 02:24PM by Thorsten Glaser

hackergotchi for Michael Banck

Michael Banck

26 Feb 2015

My recent Debian LTS activities

Over the past months, my employer credativ has sponsored some of my work time to keep PostgreSQL updated for squeeze-lts. Version 8.4 of PostgreSQL was declared end-of-life by the upstream PostgreSQL Global Development Group (PGDG) last summer, around the same time official squeeze support ended and squeeze-lts took over. Together with my colleagues Christoph Berg (who is on the PostgreSQL package maintainer team) and Bernd Helmle, we continued backpatching changes to 8.4. We tried our best to continue the PGDG backpatching policy and looked only at commits at the oldest still maintained branch, REL9_0_STABLE.

Our work is publicly available as a separate REL8_4_LTS branch on Github. The first release (called 8.4.22lts1) happened this month mostly coinciding with the official 9.0, 9.1, 9.2, 9.3 and 9.4 point releases. Christoph Berg has uploaded the postgresql-8.4 Debian package for squeeze-lts and release tarballs can be found on Github here (scroll down past the release notes for the tarballs).

We intend to keep the 8.4 branch updated on a best-effort community basis for the squeeze-lts lifetime. If you have not yet updated from 8.4 to a more recent version of PostgreSQL, you probably should. But if you are stuck on squeeze, you should use our LTS packages. If you have any questions or comments concerning PostgreSQL for squeeze-lts, contact me.

26 February, 2015 11:45AM

hackergotchi for Wouter Verhelst

Wouter Verhelst

Dear non-Belgian web developer,

Localization in the web context is hard, I know. To make things easier, it may seem like a good idea to use GeoIP to detect what country an IP is coming from and default your localization based on that. While I disagree with that premise, this blog post isn't about that.

Instead, it's about the fact that most of you get something wrong about this little country. I know, I know. If you're not from here, it's difficult to understand. But please get this through your head: Belgium is not a French-speaking country.

That is, not entirely. Yes, there is a large group of French-speaking people who live here, mostly in the south. But if you check the numbers, you'll find that there are, in fact, more people in Belgium who speak Dutch than French. Not by a very wide margin, mind you, but still by a wide enough margin to be significant. Wikipedia claims the split is 59%/41% Dutch/French; I don't know how accurate those numbers are, but they don't seem too far off.

So please, pretty please, with sugar on top: next time you're going to do a localized website, don't assume my French is better than my English. And if you (incorrectly) do, then at the very least make it painfully obvious to me where the "switch the interface to a different language" option in your website is. Because while it's annoying to be greeted in a language that I'm not very good at, it's even more annoying to not be able to find out how to get the correctly-localized version.
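The polite alternative to GeoIP is the browser's Accept-Language header, which already encodes the user's own ranked preferences. A deliberately minimal, hand-rolled parser to illustrate the idea (the function name and the fallback behaviour are my own; real web frameworks ship proper implementations of this):

```python
def preferred_language(accept_language, supported):
    """Pick the best match from an Accept-Language header value.

    A simplified sketch: it handles q-values and primary subtags,
    but none of RFC 4647's fancier matching rules.
    """
    prefs = []
    for item in accept_language.split(","):
        parts = item.strip().split(";")
        tag = parts[0].strip().lower()
        q = 1.0
        for p in parts[1:]:
            p = p.strip()
            if p.startswith("q="):
                try:
                    q = float(p[2:])
                except ValueError:
                    q = 0.0
        prefs.append((q, tag))
    # Walk the user's tags from most to least preferred.
    for _, tag in sorted(prefs, reverse=True):
        primary = tag.split("-")[0]
        for lang in supported:
            if lang.lower() in (tag, primary):
                return lang
    return supported[0]  # fall back to a default, not to GeoIP

# A Flemish user's browser typically sends something like:
print(preferred_language("nl-BE,nl;q=0.9,en;q=0.7,fr;q=0.3",
                         ["en", "fr", "nl"]))  # nl
```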


26 February, 2015 09:22AM

February 25, 2015

hackergotchi for Holger Levsen

Holger Levsen


Developing is a use case too

For whatever reason Ulrike's blog post about AppArmor user stories and user tags was not syndicated to planet.d.o, although it should have been, and despite the planet admins kindly having looked into it. Whatever...

As you might have guessed by now, the user stories referred to in this blog post are about developers supporting AppArmor (a kernel module for restricting capabilities of processes) in their Debian packages. So if you're maintaining packages and have always been pondering to look into this apparmor thingy, go read that blog post!

Hopefully the next post will "magically" appear on planet again ;-)

25 February, 2015 11:07PM

hackergotchi for Joachim Breitner

Joachim Breitner

DarcsWatch End-Of-Life’d

Almost seven years ago, at a time when the “VCS wars” had not even properly started, GitHub was seven days old and most Haskell-related software projects were using Darcs as their version control system of choice. When you submitted a patch, you simply ran darcs send and a mail with your changes would be sent to the right address, e.g. the maintainer or a mailing list. This was almost as convenient as Pull Requests are on GitHub now, only that it was tricky to keep track of what was happening with the patch, and it was easy to forget to follow up on it.

So back then I announced DarcsWatch: A service that you could CC in your patch submitting mail, which then would monitor the repository and tell you about the patches status, i.e. whether it was applied or obsoleted by another patch.

Since then, it quietly did its work without many hiccups. But by now, a lot of projects have moved away from Darcs, so I don’t really use it myself any more. Also, its Darcs patch parser does not like every submission from a contemporary darcs, so it is becoming more and more unreliable. I asked around on the xmonad and darcs mailing lists whether others were still using it, and nobody spoke up. Therefore, after seven years and 4660 monitored patches, I am officially ceasing to run DarcsWatch.

The code and data is still there, so if you believe this was a mistake, you can still speak up -- but be prepared to be asked to take over maintaining it.

I have a dislike for actually deleting data, so I’ll keep the static parts of the DarcsWatch web page running in their current state.

I’d like to thank the guys from spiny.org.uk for hosting DarcsWatch on urching for the last 5 years.

25 February, 2015 11:00PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Clint Adams

Clint Adams

Juliet did not show up on cue

I brought a dozen cupcakes. There were 3 carrot, 3 red velvet, 2 marble, 2 peanut butter fudge swirl, and 2 of some chocolate-chocolate-chocolate thing that I forgot the name of because it sounded so disgusting.

He had a romcom fantasy about her a year before. She did not live up to his expectations, so things went sideways.

Now she was having a romcom fantasy all by herself, waiting patiently for hours for him to do something in particular.

You could have graphed her hopes falling. In the end, she left dejected. He didn't understand why. Then he left town.

He was much more excited about the cupcakes than she was.

25 February, 2015 09:43PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

CD ripping on Linux

A few months ago I decided it would be good to re-rip my CD collection, retaining a lossless digital copy, and set about planning the project. I then realised I hadn't the time to take the project on for the time being and parked it, but not before figuring a few bits and pieces out.

Starting at the beginning, with ripping the CDs. The most widely used CD ripping software on Linux systems is still cdparanoia, which is pretty good, but it's still possible to get bad CD rips, and I've had several in a very small sample size. On Windows systems, the recommended ripper is Exact Audio Copy, or EAC for short. EAC calculates checksums of ripped CDs or tracks and compares them against an online database of rips called AccurateRip. It also calibrates your CD drive against the same database and uses the calculated offset during the rip.

I wasn't aware of any AccurateRip-supporting rippers until recently, when Mark Brown introduced me to morituri. I've done some tentative experiments and it appears to produce identical rips to EAC for some sample CDs (with different CD-reading hardware, too).

Fundamentally, AccurateRip is a proprietary database, and so I think the longer term goal in the F/OSS community should be to create an alternative, open database of rip checksums and drive offsets. The audio community has already been burned by the CDDB database going proprietary, but at least we now have the—far superior—MusicBrainz.
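An open database of rip checksums would not be technically hard; the AccurateRip v1 track checksum is reportedly just a position-weighted 32-bit sum over the track's decoded samples. A simplified sketch of that idea (ignoring AccurateRip's special handling of the first and last tracks' edge sectors, so this is an illustration, not the real algorithm):

```python
def track_checksum(samples):
    """Position-weighted 32-bit checksum over decoded audio samples.

    `samples` is an iterable of 32-bit ints (one stereo L+R 16-bit
    pair each, as stored on a CD). Weighting each sample by its
    1-based position means shifted or reordered audio changes the
    sum, unlike a plain additive checksum.
    """
    total = 0
    for i, sample in enumerate(samples, start=1):
        total = (total + i * sample) & 0xFFFFFFFF
    return total

print("%08X" % track_checksum([0x1234, 0x5678, 0x9ABC]))  # 00028F58
```

Comparing such checksums across many independent rips of the same pressing is what lets a database flag a bad rip.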

25 February, 2015 09:36PM

hackergotchi for Eddy Petrișor

Eddy Petrișor

Occasional Rsnapshot v1.3.0

It is almost exactly a year and a half since I came up with the idea of having Rsnapshot backups triggered automatically by my laptop whenever the backup media is connected, whether that means plugging a USB drive directly into the laptop or mounting an NFS/sshfs share on my home network. Today I tagged Occasional Rsnapshot v1.3.0, the first released version that makes sure that even when you connect your backup media only occasionally, your Rsnapshot backups are done if and when it makes sense to do so, according to the rsnapshot.conf file and the status of the existing backups on the backup media.

Quoting from the README, here is what Occasional Rsnapshot does:

This is a tool that allows automatic backups using rsnapshot when the external backup drive or remote backup media is connected.

Although the ideal setup would be to have periodic backups on a system that is always online, this is not always possible. But when the connection is done, the backup should start fairly quickly and should respect the daily/weekly/... schedules of rsnapshot so that it accurately represents history.

In other words, if you back up to an external drive or to some network/internet-connected storage that you don't expect to be always connected (which is the case with laptops), you can use occasional_rsnapshot to make sure your data is backed up when the backup storage is connected.

occasional_rsnapshot is appropriate for:
  • laptops backing up on:
    • a NAS on the home LAN or
    • a remote or an internet hosted storage location
  • systems making backups online (storage mounted locally somehow)
  • systems doing backups on an external drive that is not always connected to the system
The only caveat is that all of these must be mounted in the local file system tree somehow by any arbitrary tool, occasional_rsnapshot or rsnapshot do not care, as long as the files are mounted.

So if you find yourself in a similar situation, this script might help you to easily do backups in spite of the occasional availability of the backup media, instead of no backups at all. You can even trigger backups semi-automatically when you remember to, or decide it is time to back up, by simply plugging in your USB backup HDD.

But how did I end up here, you might ask?

In December 2012 I was asking about suggestions for backup solutions that would work for my very modest setup with Linux and Windows so I can backup my and my wife's system without worrying about loss of data.

One month later I was explaining my concept of a backup solution that would not trust the backup server, and leave to the clients as much as possible the decision to start the backup at their desired time. I was also pondering on the problems I might encounter.

From a security PoV, what I wanted was that:
  1. clients would be isolated from each other
  2. even in the case of a server compromise:
      • the data would not be accessible since they would be already encrypted before leaving the client
      • the clients could not be compromised

The general concept was sane, and supplemental security measures such as port knocking and initiation of backups only during specific time frames could be added.

The problem I ran into was that when I set this up in my home network, a single backup cycle would take more than a day, because I wanted to back up all of my data and my server was a humble Linksys NSLU2 with 3TB of storage attached over USB.

Even when the initial copy was done by attaching the USB media directly to the laptop, so the backup would only copy changed data, the backup with the HDD attached to the NSLU2 was not finished even after more than 6 hours.

The bottleneck was the CPU speed and the USB speed. I even tried mounting the storage media over sshfs so the tiny XScale processor in the NSLU2 would not be bothered by any of the rsync computation. This proved to be an exercise in futility; any attempt to put the NSLU2 anywhere in the loop resulted in an unacceptably, impractically long backup time.

All these attempts took time, of course, but that meant I was aware I still didn't have proper backups and wasn't getting closer to the desired result.

This brings us to August 2013, when I realized I was trying to trigger Rsnapshot backups manually from time to time, doing all sorts of mental gymnastics and manual listing to work out whether I needed monthly, weekly and daily backups, or just weekly and daily ones.

This had to stop.
Triggering a backup should happen automatically as soon as the backup media is available, without any intervention from the user.
I said.

Then I came up with the basic concept for Occasional Rsnapshot: a very silent script that would be called from cron every 5 minutes and check whether the backup media is mounted; if it is not, exit silently so as not to generate noise in cron emails, but if it is mounted, compute which backup intervals should be triggered, and trigger them if the appropriate amount of time has passed since the most recent backup in that interval.
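The interval-selection part of that concept can be sketched roughly as follows. This is an illustration only: the interval names and lengths are made up here, and the real script derives them from rsnapshot.conf rather than hard-coding them.

```python
import os
import time

# Illustrative interval lengths in seconds; occasional_rsnapshot
# reads the real ones from rsnapshot.conf.
INTERVALS = {"daily": 86400, "weekly": 7 * 86400, "monthly": 30 * 86400}

def due_intervals(backup_root, now=None, intervals=INTERVALS):
    """Return the interval names whose most recent snapshot
    (e.g. daily.0) is missing or older than the interval length."""
    now = time.time() if now is None else now
    due = []
    for name, seconds in intervals.items():
        newest = os.path.join(backup_root, "%s.0" % name)
        if not os.path.exists(newest):
            due.append(name)  # never backed up at this interval
        elif now - os.path.getmtime(newest) >= seconds:
            due.append(name)  # last snapshot old enough to rotate
    return due

# A cron wrapper called every 5 minutes would first check that the
# backup media is mounted, then run rsnapshot once per due interval.
```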

Occasional Rsnapshot version v1.3.0 is the 8th and most recent release of the script. Even though I have used Occasional Rsnapshot since day 1, v1.3.0 is the first one I can recommend to others without fearing they might lose data because of it.

The backup media can be anything, from a regular USB-mounted HDD or an sshfs-mounted backup partition on the home NAS server to remote storage such as Amazon S3, and there are even brief instructions on how to do encrypted backups for the cases where you don't trust the remote storage.

So if anything I described sounds remotely interesting, I recommend downloading the latest release of Occasional Rsnapshot, going through the README and trying it out.

Feedback and bug reports are welcome.
Patches are welcomed with a 'thank you'.
Pull requests are eagerly waited for :) .

25 February, 2015 07:41PM by eddyp (noreply@blogger.com)

Andrew Cater

Cubietruck now running Debian :)

    Following a debootstrap build of sid on one machine to complete the cross-compilation of mainline u-boot, I managed to get vanilla Debian installed on my Cubietruck

    A USB-serial cable is a must for the install and for any subsequent major reconfiguration as the stock Debian installer does not have drivers for the video / audio. Various Cubietruck derivative distributions do - but the Sunxi kernel appears flaky

    All was fine for a few days, then I decided to try to configure the WiFi by hand, editing /etc/network/interfaces and the wpasupplicant files. I managed to break network connectivity by doing things in a hurry and typing blind. I'd already put it into the appropriate closed metal case, so I was rather stuck.

    A friend carefully took the case apart by easing off the metal cover plates, removed the two screws holding the whole thing together, and precision-drilled the cover plates on one side so that four screws can be undone and the entire inner part of the case can slide out as one piece while the other metal cover plate remains captive. He will follow this procedure with his own two later.

    Very pleased with the way it's turned out. The WiFi driver needs non-free firmware, but I now have a tiny, silent machine drawing about 3W tops, and both interfaces are now working.

    25 February, 2015 06:23PM by Andrew Cater (noreply@blogger.com)

    Petter Reinholdtsen

    The Norwegian open channel Frikanalen - 24x7 on the Internet

    The Norwegian nationwide open channel Frikanalen is still going strong. It allows everyone to send the video they want on national television. It is a TV station administered completely using a web browser, running only Free Software, providing a REST API for administrators and members, and with distribution on the national DVB-T distribution network RiksTV. But only between 12:00 and 17:30 Norwegian time. This has finally changed, after many years of limited distribution. A few weeks ago, we set up an Ogg Theora stream via Icecast to allow everyone with Internet access to check out the channel the rest of the day. This is presented on the Frikanalen web site now. And since a few days ago, the channel is also available via multicast on UNINETT, available to those using IPTV TVs and set-top boxes in the Norwegian National Research and Education Network.

    If you want to see what is on the channel, point your media player to one of these sources. The first should work with most players and browsers, while, as far as I know, the multicast UDP stream only works with VLC.

    The Ogg Theora / Icecast stream is not working well, as the video and audio are slightly out of sync. We have not been able to figure out how to fix it. It is generated by recoding an internal MPEG transport stream with MPEG-4 coded video (i.e. H.264) to Ogg Theora / Vorbis, and the result is less than stellar. If you have ideas how to fix it, please let us know at frikanalen (at) nuug.no. We currently use this with ffmpeg2theora 0.29:

    ./ffmpeg2theora.linux <OBE_gemini_URL.ts> -F 25 -x 720 -y 405 \
     --deinterlace --inputfps 25 -c 1 -H 48000 --keyint 8 --buf-delay 100 \
     --nosync -V 700 -o - | oggfwd video.nuug.no 8000 <pw> /frikanalen.ogv

    If you get the multicast UDP stream working, please let me know, as I am curious how far the multicast stream reaches. It does not make it to my home network, nor to any other commercially available network in Norway that I am aware of.

    25 February, 2015 08:10AM

    February 24, 2015

    Sven Hoexter


    I recently learnt that my former coworker Jonny has turned his work on his own monitoring system, Bloonix, into self-employment.

    If you're considering outsourcing your monitoring, consider Bloonix. :) As a plus, all the code is open under the GPLv3 and available on GitHub, so if you'd rather not outsource, you can still set up an instance of your own. Since this has been a one-man show for a long time, most of the documentation is still in German. That might be a plus for some but a minus for others; if you like Bloonix, I guess documentation translations or a howto in English would be welcome. Besides that, Jonny is also the upstream author of a few Perl modules, like libsys-statistics-linux-perl.

    So another one has taken the bold step of basing his living on free and open source software, something that always has my admiration. Jonny, I hope you'll succeed with this step.

    24 February, 2015 07:48PM

    hackergotchi for EvolvisForge blog

    EvolvisForge blog

    Java™, logging and the locale

    A coworker and I debugged a fascinating problem today.

    They had a tomcat7 installation with a couple of webapps, and one of the bundled libraries was logging in German. Everything else was logging in English (the webapps themselves, and the things the other bundled libraries did).

    We searched around a bit, and eventually found what the wrongly-logging library (something jaxb/jax-ws) was using, after unravelling another few layers of “library bundling another library as convenience copy” (gah, Java!): com.sun.xml.ws.resources.WsservletMessages, which contains quite a few com.sun.istack.localization.Localizable members. Looking at the other classes in that package, in particular Localizer, showed that it defaults to the java.util.Locale.getDefault() value for the language.

    Which is set from the environment.

    Looking at /proc/pid-of-JVM-running-tomcat7/environ showed nothing, “of course”. The system locale was, properly, set to English. (We mostly use en_GB.UTF-8 for better paper sizes and the metric system (unless the person requesting the machine, or the admin creating it, still likes the system to speak German *shudder*), but that one still had en_US.UTF-8.)

    Browsing the documentation for java.util.Locale proved more fruitful: it also contains a setDefault method, which sets the new “default” locale… JVM-wide.

    Turns out another of the webapps used that for some sort of internal localisation. Clearly, the containment of tomcat7 is incomplete in this case.
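    This failure mode is not specific to Java: any component that mutates a process-wide default silently changes the behaviour of every other component that reads it. A minimal Python sketch of the same pattern, with all names invented for illustration:

```python
# Two "webapps" in one process. One sets a process-global default language;
# the other picks its message language by reading that same global, just as
# jax-ws reads java.util.Locale.getDefault(). All names here are invented.
DEFAULT_LANG = "en"   # stands in for the JVM-wide default locale

MESSAGES = {
    "en": "servlet initialized",
    "de": "Servlet initialisiert",
}

def webapp_a_init():
    """Webapp A 'localises' itself by mutating the shared default."""
    global DEFAULT_LANG
    DEFAULT_LANG = "de"

def webapp_b_log_message():
    """Webapp B logs using whatever the shared default happens to be."""
    return MESSAGES[DEFAULT_LANG]
```

    Before webapp A initialises, B logs in English; afterwards it logs in German, even though B never asked for it. That is exactly the cross-webapp leak the tomcat7 container failed to contain.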

    Documenting for the larger ’net, in case someone else runs into this. It’s not as if things like this would be showing up in the USA, where the majority of development appears to happen.

    24 February, 2015 04:02PM by Thorsten Glaser

    Sven Hoexter

    just because

    That inevitably led to this on the office wall.

    24 February, 2015 03:51PM

    February 23, 2015

    Richard Hartmann


    Even if you disregard how amazing this is, this quote blows my proverbial mind:

    The test rig is carefully designed to remove any possible sources of error. Even the lapping of waves in the Gulf of Mexico 25 miles away every three to four seconds would have showed up on the sensors, so the apparatus was floated pneumatically to avoid any influence. The apparatus is completely sealed, with power and signals going through liquid metal contacts to prevent any force being transmitted through cables.

    23 February, 2015 11:21PM by Richard 'RichiH' Hartmann

    Simon Josefsson

    Laptop Buying Advice?

    My current Lenovo X201 laptop has been with me for over four years. I’ve been looking at new laptop models over the years, thinking that I should upgrade. Every time, after checking performance numbers, I’ve reached the conclusion that it is not worth it. The most performant Intel Broadwell processor, the Core i7 5600U, offers only about 1.5 times the performance of my current Intel Core i7 620M. Meanwhile, disk performance has increased more rapidly, but changing the disk in a laptop is usually simple. Two years ago I upgraded to the Samsung 840 Pro 256GB disk, and this year I swapped that for the Samsung 850 Pro 1TB; both have been good investments.

    Recently my laptop usage patterns have changed slightly, and instead of carrying one laptop around, I have decided to aim for multiple semi-permanent laptops at different locations, coupled with a mobile device that right now is just my phone. The X201 will remain one of my normal work machines.

    What remains is to decide on a new laptop, and there begins the fun. My requirements are relatively easy to summarize. The laptop will run a GNU/Linux distribution like Debian, so it has to work well with it. I’ve decided that my preferred CPU is the Intel Core i7 5600U. The screen size, keyboard and mouse are mostly irrelevant, as I never work for longer periods of time directly on the laptop. Even though the laptop will be semi-permanent, I know there will be times when I take it with me. Thus it has to be as lightweight as possible. If there were significant advantages to going with a heavier laptop, I might reconsider, but as far as I can see the only advantages of a heavier machine are a bigger/better screen and keyboard (all of which I find irrelevant) and maximum memory capacity (which I would find useful, but not enough of an argument for me). The only sub-1.5kg laptops with the 5600U CPU on the market right now appear to be:

    Lenovo X250 1.42kg 12.5″ 1366×768
    Lenovo X1 Carbon (3rd gen) 1.44kg 14″ 2560×1440
    Dell Latitude E7250 1.25kg 12.5″ 1366×768
    Dell XPS 13 1.26kg 13.3″ 3200×1800
    HP EliteBook Folio 1040 G2 1.49kg 14″ 1920×1080
    HP EliteBook Revolve 810 G3 1.4kg 11.6″ 1366×768

    I find it interesting that Lenovo, Dell and HP each have two models that meet my 5600U/sub-1.5kg criteria. Regarding screens, possibly there exist models with other resolutions. The XPS 13, HP 810 and X1 models I looked at had touch screens, the others did not. As the screen is not important to me, I didn’t evaluate this further.

    I think all of them would suffice, and there are only subtle differences. All except the XPS 13 can be connected to peripherals using one cable, which I find convenient to avoid a cable mess. All of them have DisplayPort, but HP uses standard-size DisplayPort and the rest use miniDP. The E7250 and X1 have HDMI output. The X250 boasts a 15-pin VGA connector; none of the others have it. I’m not sure if that is an advantage or a disadvantage these days. All of them have 2 USB 3.0 ports except the E7250, which has 3. The HP 1040, XPS 13 and X1 Carbon do not have RJ45 Ethernet connectors, which is a significant disadvantage to me. Ironically, only the smallest one of these, the HP 810, can be upgraded to 12GB of memory, with the others being stuck at 8GB. HP and the E7250 support NFC, although Debian support is not certain. The E7250 and X250 have a smartcard reader, and again, Debian support is not certain. The X1, X250 and 810 have a 3G/4G card.

    Right now, I’m leaning towards rejecting the XPS 13, X1 and HP 1040 because of their lack of an RJ45 Ethernet port. That leaves me with the E7250, X250 and the 810. Of these, the E7250 seems like the winner: lightest, 1 extra USB port, HDMI, NFC, smartcard reader. However, it has no 3G/4G card and no memory upgrade options. Looking for compatibility problems, it seems you have to be careful not to end up with the “Dell Wireless” card, and the E7250 appears to come in a docking and a non-docking variant, but I’m not sure what that means.
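    The shortlisting here is essentially a filter over the comparison table. As a sketch, the same logic in Python, encoding only the weights and RJ45 facts stated above:

```python
# Data transcribed from the post: (model, weight_kg, has_rj45).
# All six ship with the Core i7 5600U; weights are as listed in the table.
LAPTOPS = [
    ("Lenovo X250",              1.42, True),
    ("Lenovo X1 Carbon 3rd gen", 1.44, False),
    ("Dell Latitude E7250",      1.25, True),
    ("Dell XPS 13",              1.26, False),
    ("HP EliteBook 1040 G2",     1.49, False),
    ("HP EliteBook 810 G3",      1.40, True),
]

def shortlist(laptops, max_weight=1.5, require_rj45=True):
    """Keep models under the weight cap, optionally requiring built-in RJ45."""
    return [name for name, kg, rj45 in laptops
            if kg < max_weight and (rj45 or not require_rj45)]
```

    Requiring RJ45 leaves exactly the three finalists named above: the X250, the E7250 and the 810.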

    Are there other models I should consider? Other thoughts?

    23 February, 2015 10:49PM by simon

    Enrico Zini


    Akonadi client example

    After many failed attempts I have managed to build a C++ Akonadi client. It has felt like one of the most frustrating programming experiences of my whole life, so I'm sharing the results, hoping to spare others all the suffering.

    First things first: the Akonadi client libraries are not in libakonadi-dev but in kdepimlibs5-dev, even though kdepimlibs5-dev does not show up in apt-cache search akonadi.

    Then, kdepimlibs is built with Qt4. If your application uses Qt5 (mine was) you need to port it back to Qt4 if you want to talk to Akonadi.

    Then, kdepimlibs does not seem to support qmake and does not ship pkg-config .pc files, so if you want to use kdepimlibs your build system needs to be cmake. I ported my code from qmake to cmake, and now qtcreator wants me to run cmake by hand every time I change the CMakeLists.txt file, and it has stopped allowing me to add, rename or delete sources.

    Finally, most of the code / build system snippets found on the internet seem flawed in one way or another, because the build toolchain of Qt/KDE applications has undergone several redesigns over time, and the network is littered with examples from different eras. The way to obtain template code to start a Qt/KDE project is to use kapptemplate. I have found no getting-started tutorial on the internet that said "do not just copy the snippets from here, run kapptemplate instead so you get them up to date".

    kapptemplate supports building an "Akonadi Resource" and an "Akonadi Serializer", but it does not support generating template code for an akonadi client. That left me with the feeling that I was dealing with some software that wants to be developed but does not want to be used.

    Anyway, now an example of how to interrogate Akonadi exists on the internet. I hope that all the tears of blood that I cried this morning have not been cried in vain.

    23 February, 2015 02:44PM


    The wonders of missing documentation

    Update: I have managed to build an example Akonadi client application.

    I'm new here. I want to make a simple C++ GUI app that pops up a QCalendarWidget showing the appointments my local Akonadi has.

    I open qtcreator, create a new app, hack away for a while, then of course I get undefined references for all Akonadi symbols, since I didn't tell the build system that I'm building with akonadi. Ok.

    How do I tell the build system that I'm building with akonadi? After 20 minutes of frantic looking around the internet, I still have no idea.

    There is a package called libakonadi-dev which does not seem to have anything to do with this. That page mentions everything about making applications with Akonadi except how to build them.

    There is a package called kdepimlibs5-dev which looks promising: it has no .a files but it does have headers and cmake files. However, qtcreator is only integrated with qmake, and I would really like the handholding of an IDE at this stage.

    I put something together naively, doing just what looked right, and I managed to get an application that segfaults before main() is even called. The source (wtf.cpp):

    /*
     * Copyright © 2015 Enrico Zini <enrico@enricozini.org>
     * This work is free. You can redistribute it and/or modify it under the
     * terms of the Do What The Fuck You Want To Public License, Version 2,
     * as published by Sam Hocevar. See the COPYING file for more details.
     */
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        qDebug() << "BEGIN";
        return 0;
    }

    And the qmake project file:

    QT       += core gui widgets
    CONFIG += c++11
    TARGET = wtf
    TEMPLATE = app
    LIBS += -lkdecore -lakonadi-kde
    SOURCES += wtf.cpp

    I didn't achieve what I wanted, but I feel like I achieved something magical and beautiful after all.

    I shall now perform some haruspicy on those obscure cmake files to see if I can figure something out. But seriously, people?

    23 February, 2015 10:36AM

    hackergotchi for Dirk Eddelbuettel

    Dirk Eddelbuettel

    drat Tutorial: Publishing a package


    The drat package was released earlier this month, and described in a first blog post. I received some helpful feedback about what works and what doesn't. For example, Jenny Bryan pointed out that I was not making a clear enough distinction between the role of using drat to publish code, and using drat to receive/install code. Very fair point, and somewhat tricky, as R aims to blur the line between being a user and a developer of statistical analyses, and hence packages. Many of us are both. But the main point is well taken, and this note aims to clarify the issue a little by focusing on the former.

    Another point made by Jenny concerns the double use of repository. And indeed, I conflated repository (in the sense of a GitHub code repository) with repository as a package store used by a package manager. The former, a GitHub repository, is something we use to implement a personal drat with: a GitHub repository happens to be uniquely identifiable just by its account name, and given an (optional) gh-pages branch also offers a stable and performant webserver we use to deliver packages for R. A (personal) package repository, on the other hand, is something we implement somewhere: possibly via drat, which supports local directories, network shares, as well as anything web-accessible, e.g. a GitHub repository. It is a little confusing, but I will aim to make the distinction clearer.

    Just once: Setting up a drat repository

    So let us for the remainder of this post assume the role of a code publisher. Assume you have a package you would like to make available, which may not be on CRAN and for which you would like to make installation by others easier via drat. The example below will use an interim version of drat which I pushed out yesterday (after fixing a bug noticed when pushing the very new RcppAPT package).

    For the following, all we assume (apart from having a package to publish) is that you have a drat directory setup within your git / GitHub repository. This is not an onerous restriction. First off, you don't have to use git or GitHub to publish via drat: local file stores and other web servers work just as well (and are documented). GitHub simply makes it easiest. Second, bootstrapping one is trivial: just fork my drat GitHub repository and then create a local clone of the fork.

    There is one additional requirement: you need a gh-pages branch. Using the fork-and-clone approach ensures this. Otherwise, if you know your way around git you already know how to create a gh-pages branch.

    Enough of the prerequisites. And on towards the real fun. Let's ensure we are in the gh-pages branch:

    edd@max:~/git/drat(master)$ git checkout gh-pages
    Switched to branch 'gh-pages'
    Your branch is up-to-date with 'origin/gh-pages'.

    Publish: Run one drat command to insert a package

    Now, let us assume you have a package to publish. In my case this was a version of drat itself, as it contains a fix for the very command I am showing here. So if you want to run this, ensure you have this version of drat, as the CRAN version is currently behind at release 0.0.1 (though I plan to correct that in the next few days).

    To publish an R package into a package repository created via drat on a drat GitHub repository, just run insertPackage(packagefile), which we show here with the optional commit=TRUE. The path to the package can be absolute or relative; the easiest is often to go up one directory from the sources to where R CMD build ... has created the package file.

    edd@max:~/git$ Rscript -e 'library(drat); insertPackage("drat_0.0.1.2.tar.gz", commit=TRUE)'
    [gh-pages 0d2093a] adding drat_0.0.1.2.tar.gz to drat
     3 files changed, 2 insertions(+), 2 deletions(-)
     create mode 100644 src/contrib/drat_0.0.1.2.tar.gz
    Counting objects: 7, done.
    Delta compression using up to 8 threads.
    Compressing objects: 100% (7/7), done.
    Writing objects: 100% (7/7), 7.37 KiB | 0 bytes/s, done.
    Total 7 (delta 1), reused 0 (delta 0)
    To git@github.com:eddelbuettel/drat.git
       206d2fa..0d2093a  gh-pages -> gh-pages

    You can equally well run this as insertPackage("drat_0.0.1.2.tar.gz"), then inspect the repo and only then run the git commands add, commit and push. Also note that future versions of drat will most likely support git operations directly by relying on the very promising git2r package. But this just affects package internals; the user-facing call of e.g. insertPackage("drat_0.0.1.2.tar.gz", commit=TRUE) will remain unchanged.

    And in a nutshell, that really is all there is to it. With the newly drat-ed package pushed to your GitHub repository with a single function call, it is available via the automatically-provided gh-pages webserver access to anyone in the world. All they need to do is point R's package management code (which is built into R itself and used for e.g. the CRAN and BioConductor R package repositories) to the new repo, and that is also just a single drat command. We showed this in the first blog post and may expand on it again in a follow-up.

    So in summary, that really is all there is to it. After a one-time setup / ensuring you are on the gh-pages branch, all it takes is a single function call from the drat package to publish your package to your drat GitHub repository.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

    23 February, 2015 02:04AM

    February 22, 2015

    hackergotchi for Rogério Brito

    Rogério Brito

    User-Agent strings and privacy

    I just had my hands on some mobile devices (a Samsung's Galaxy Tab S 8.4", an Apple's iPad mini 3, and my no-name tablet that runs Android).

    I got curious to see how the different browsers identify themselves to the world via their User agent strings and I must say that each browser's string reveals a lot about both the browser makers and their philosophies regarding user privacy.

    Here is a simple table that I compiled with the information that I collected (sorry if it gets too wide):

    Device Browser User-Agent String
    Samsung Galaxy Tab S Firefox 35.0 Mozilla/5.0 (Android; Tablet; rv:35.0) Gecko/35.0 Firefox/35.0
    Samsung Galaxy Tab S Firefox 35.0.1 Mozilla/5.0 (Android; Tablet; rv:35.0.1) Gecko/35.0.1 Firefox/35.0.1
    Samsung Galaxy Tab S Android's 4.4.2 stock browser Mozilla/5.0 (Linux; Android 4.4.2; en-gb; SAMSUNG SM-T700 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.5 Chrome/28.0.1500.94 Safari/537.36
    Samsung Galaxy Tab S Updated Chrome Mozilla/5.0 (Linux; Android 4.4.2; SM-T700 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.109 Safari/537.36
    Vanilla tablet Android's 4.1.1 stock browser Mozilla/5.0 (Linux; U; Android 4.1.1; en-us; TB1010 Build/JRO03H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Safari/534.30
    Vanilla tablet Firefox 35.0.1 Mozilla/5.0 (Android; Tablet; rv:35.0.1) Gecko/35.0.1 Firefox/35.0.1
    iPad Safari's from iOS 8.1.3 Mozilla/5.0 (iPad; CPU OS 8_1_3 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B466 Safari/600.1.4
    Notebook Debian's Iceweasel 35.0.1 Mozilla/5.0 (X11; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0 Iceweasel/35.0.1

    So, briefly looking at the table above, you can tell that the stock Android browser reveals quite a bit of information: the model of the device (e.g., SAMSUNG SM-T700 or TB1010) and even the build number (e.g., Build/KOT49H or Build/JRO03H)! This is super handy for malicious websites and I would say that it leaks a lot of possibly undesired information.

    The iPad is similar, with Safari revealing the version of the iOS that it is running. It doesn't reveal, though, the language that the user is using via the UA string (it probably does via other HTTP fields).

    Chrome is similar to the stock Android browser here, but, at least, it doesn't reveal the language of the user. It does reveal the version of Android, including the patch-level (that's a bit too much, IMVHO).

    I would say that the winner respecting user privacy among the browsers I tested is Firefox: it conveys just the bare minimum, not differentiating between a high-end tablet (Samsung's Galaxy Tab S with 8 cores) and a vanilla tablet (with 2 cores). Like Chrome, Firefox still reveals a bit too much in the form of the patch level. It should be sufficient to say that it is version 35.0 even if the user has 35.0.1 installed.
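    To show how mechanically extractable the leaked details are, here is a Python sketch of a regular expression over the UA strings from the table above. The pattern is my own rough approximation for illustration, not any standard parser:

```python
import re

# Illustrative pattern for the stock-Android-style UA strings quoted above:
# it pulls out the Android version, device model, and build tag when present.
ANDROID_UA = re.compile(
    r"Android (?P<version>[\d.]+); (?:[\w-]+; )?(?P<model>[^;)]+) "
    r"Build/(?P<build>\w+)"
)

def leaked_details(ua):
    """Return (android_version, model, build) if the UA leaks them, else None."""
    m = ANDROID_UA.search(ua)
    return m.group("version", "model", "build") if m else None
```

    Run against the strings in the table, the stock browser and Chrome UAs give up the device model and build number in one regex, while the Firefox UA yields nothing.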

    A bonus point for Firefox is that it is also available on F-Droid, in two versions: as Firefox itself and as Fennec.

    22 February, 2015 11:54PM

    Hideki Yamane

    New laptop ThinkPad E450

    I've got a new laptop, Lenovo ThinkPad E450.

    • CPU: Intel Core i5 (upgraded)
    • Mem: 8GB (upgraded, one empty slot, can up to 16GB)
    • HDD: 500GB
    • LCD: FHD (1920x1080, upgraded)
    • wifi: 802.11ac (upgraded, Intel 7265 BT ACBGN)
    Nice, it was less than $500.

    Well, you probably know about the Superfish issue with Lenovo laptops, but it didn't affect me, because the first thing I did when I got it was replace the HDD with another, empty one and do a fresh install of Debian Jessie (of course).

    22 February, 2015 05:55AM by Hideki Yamane (noreply@blogger.com)

    February 21, 2015

    Francesca Ciceri

    Dudes in dresses, girls in trousers

    "As long as people still think of people like me as "a dude in a dress" there is a lot work to do to fight transphobia and gain tolerance and acceptance."

    This line in Rhonda's most recent blogpost broke my heart a little, and sparked an interesting conversation with her about the (perceived?) value of clothes, respect and identity.

    So, guess what? Here's a pic of a "girl in trousers". Just because.

    MadameZou in her best James Dean impersonation

    (Sorry for the quality: couldn't find my camera and had to use a phone. Also, I don't own a binder, so I used a very light binding)

    21 February, 2015 06:03PM

    Dominique Dumont

    Performance improvement for ‘cme check dpkg’


    Thanks to Devel::NYTProf, I realized that Module::CoreList was used in a non-optimal way (to say the least) in Config::Model::Dpkg::Dependency when checking dependencies between Perl packages. (Note that only Perl packages with many dependencies were affected by this lack of performance.)

    After a rework, the performance is much better. Here's an example comparing check times before and after the modification of libconfig-model-dpkg-perl.

    With libconfig-model-dpkg-perl 2.059:
    $ time cme check dpkg
    Using Dpkg
    loading data
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    checking data
    check done

    real 0m10.235s
    user 0m10.136s
    sys 0m0.088s

    With libconfig-model-dpkg-perl 2.060:
    $ time cme check dpkg
    Using Dpkg
    loading data
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    checking data
    check done

    real 0m1.565s
    user 0m1.468s
    sys 0m0.092s


    All in all, roughly a 6.5x performance improvement on the dependency check (10.2s down to 1.6s).
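    The Perl diff itself isn't shown here, but the general class of fix (caching the result of an expensive per-dependency lookup so repeated checks become cheap) can be sketched in Python. All names below are invented stand-ins, not the actual Config::Model code:

```python
from functools import lru_cache

CALLS = {"count": 0}

def expensive_corelist_query(module):
    """Stand-in for an expensive lookup (think scanning Module::CoreList data).
    Counts invocations so the effect of caching is visible."""
    CALLS["count"] += 1
    return module in {"strict", "warnings", "File::Spec"}  # toy core-module set

@lru_cache(maxsize=None)
def is_core_module(module):
    """Cached front-end: each distinct module name is looked up at most once."""
    return expensive_corelist_query(module)

def check_dependencies(deps):
    """Check many dependencies; repeated names no longer repeat the lookup."""
    return {d: is_core_module(d) for d in deps}
```

    A package with hundreds of (often repeated) Perl dependencies then pays the expensive lookup only once per distinct module, which is the kind of change that turns a 10-second check into a 1.5-second one.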

    Note that, due to the freeze, the new version of libconfig-model-dpkg-perl is available only in experimental.

    All the best

    Tagged: Config::Model, debian, dpkg, package

    21 February, 2015 03:09PM by dod

    hackergotchi for Dirk Eddelbuettel

    Dirk Eddelbuettel

    RcppAPT 0.0.1

    Over the last few days I put together a new package RcppAPT which interfaces the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their GUI-based brethren.

    The package currently implements two functions: one permits searching for package information via a regular expression, and the other does a (vectorised) check by package name. More to come, and contributions would be very welcome.

    A few examples just to illustrate follow.

    R> hasPackages(c("r-cran-rcpp", "r-cran-rcppapt"))
       r-cran-rcpp r-cran-rcppapt 
              TRUE          FALSE 

    This shows that Rcpp is (of course) available as a binary, but this (very new) package is (unsurprisingly) not yet available pre-built.

    We can search by regular expression:

    R> library(RcppAPT)
    R> getPackages("^r-base-c.")
              Package      Installed       Section
    1 r-base-core-dbg 3.1.2-1utopic0 universe/math
    2 r-base-core-dbg           <NA> universe/math
    3     r-base-core 3.1.2-1utopic0 universe/math
    4     r-base-core           <NA> universe/math

    With the (default) expression catching everything, we see a lot of packages:

    R> dim(getPackages())
    [1] 104431      3

    A bit more information is on the package page here as well as on the GitHub repo.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

    21 February, 2015 02:33PM

    hackergotchi for Vasudev Kamath

    Vasudev Kamath

    Running Plan9 using 9vx - using vx32 sandboxing library

    Nowadays I'm more and more attracted to Plan9, an operating system meant to be the successor of UNIX, created by the same people who created the original UNIX. I'm always amazed by the simplicity of Plan9. Sadly, Plan9 never took off, for whatever reasons.

    I've been trying to run Plan9 for a while. I ran Plan9 on a Raspberry Pi model B using 9pi, but I couldn't experiment with it much due to some restrictions in my home setup.

    I installed the original Plan9 4th Edition from Bell Labs (now part of Alcatel-Lucent); I will write about that in a different post. But running a virtual machine on my system is again a pain, as the machine is already old (three and a half years). Then I came across 9vx, a port of Plan9 to FreeBSD, Linux and Mac OS X by Russ Cox.

    I downloaded the original 9vx version 0.9.12 from Russ's page linked above. The archive contains a Plan9 rootfs along with precompiled 9vx binaries for Linux, FreeBSD and Mac OS X. I ran the Linux binary, but it crashed.

    ./9vx.Linux -u glenda

    I was seeing an illegal instruction error in dmesg. I didn't bother to investigate further.

    A bit of googling showed me Arch Linux's wiki page on 9vx. I got errors trying to compile the original vx32 from rsc's repository, but later saw that the AUR 9vx package is built from a different repository, forked from rsc's, found here.

    I cloned the repository and compiled it. I don't really remember whether I had to install any additional packages, but if you get an error you will know what else is required. After compilation, the 9vx binary is found at src/9vx/9vx. I used this newly compiled 9vx to run the rootfs I downloaded from Russ's website.

    9vx -u glenda -r /path/to/extracted/9vx-0.9.12/

    This launches Plan9 and allows you to work inside it. The good part is that it's not resource-hungry, and it still feels like you have a VM running Plan9.

    But there seems to be a better way to do this directly from the Plan9 ISO from Bell Labs; it can be found on the 9fans list. Now I'm going to try that out too :-). And in my next post I will share my experience of using Plan9 on QEMU.

    21 February, 2015 07:02AM by copyninja

    February 20, 2015

    Richard Hartmann

    Release Critical Bug report for Week 08

    The UDD bugs interface currently knows about the following release critical bugs:

    • In Total: 1069 (Including 188 bugs affecting key packages)
      • Affecting Jessie: 147 (key packages: 114) That's the number we need to get down to zero before the release. They can be split in two big categories:
        • Affecting Jessie and unstable: 96 (key packages: 81) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
          • 23 bugs are tagged 'patch'. (key packages: 19) Please help by reviewing the patches, and (if you are a DD) by uploading them.
          • 2 bugs are marked as done, but still affect unstable. (key packages: 0) This can happen due to missing builds on some architectures, for example. Help investigate!
          • 71 bugs are neither tagged patch, nor marked done. (key packages: 62) Help make a first step towards resolution!
        • Affecting Jessie only: 51 (key packages: 33) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
          • 34 bugs are in packages that are unblocked by the release team. (key packages: 22)
          • 17 bugs are in packages that are not unblocked. (key packages: 11)

    How do we compare to the Squeeze and Wheezy release cycles?

    Week Squeeze Wheezy Jessie
    43 284 (213+71) 468 (332+136) 319 (240+79)
    44 261 (201+60) 408 (265+143) 274 (224+50)
    45 261 (205+56) 425 (291+134) 295 (229+66)
    46 271 (200+71) 401 (258+143) 427 (313+114)
    47 283 (209+74) 366 (221+145) 342 (260+82)
    48 256 (177+79) 378 (230+148) 274 (189+85)
    49 256 (180+76) 360 (216+155) 226 (147+79)
    50 204 (148+56) 339 (195+144) ???
    51 178 (124+54) 323 (190+133) 189 (134+55)
    52 115 (78+37) 289 (190+99) 147 (112+35)
    1 93 (60+33) 287 (171+116) 140 (104+36)
    2 82 (46+36) 271 (162+109) 157 (124+33)
    3 25 (15+10) 249 (165+84) 172 (128+44)
    4 14 (8+6) 244 (176+68) 187 (132+55)
    5 2 (0+2) 224 (132+92) 175 (124+51)
    6 release! 212 (129+83) 161 (109+52)
    7 release+1 194 (128+66) 147 (106+41)
    8 release+2 206 (144+62) 147 (96+51)
    9 release+3 174 (105+69)
    10 release+4 120 (72+48)
    11 release+5 115 (74+41)
    12 release+6 93 (47+46)
    13 release+7 50 (24+26)
    14 release+8 51 (32+19)
    15 release+9 39 (32+7)
    16 release+10 20 (12+8)
    17 release+11 24 (19+5)
    18 release+12 2 (2+0)

    Graphical overview of bug stats thanks to azhag:

    20 February, 2015 07:32PM by Richard 'RichiH' Hartmann

    hackergotchi for Rhonda D'Vine

    Rhonda D'Vine

    Queer-Positive Songs

    Just recently I stumbled upon one of these songs again and thought to myself: are there more out there? By these songs I mean songs that could, based on their lyrics, be considered queer-positive: lyrics that contain parts that speak about queer topics. To give you an idea of what I mean, here are three songs as examples:

    • Saft by Die Fantastischen Vier: The excerpt from the lyrics I am referring to is: "doch im Grunde sucht jeder Mann eine Frau // Wobei so mancher Mann besser mit Männern kann // und so manche Frau lässt lieber Frauen ran" ("but basically every man looks for a woman // though some men prefer men // and some women prefer women").
    • Liebe schmeckt gut by Grossstadtgeflüster: Here the lyrics go like "Manche lieben sich selber // manche lieben unerkannt // manche drei oder fünf" ("some love themselves // some love in secrecy // some three or five"). For a stereo sound version of the song watch this video instead, but I love the video. :)
    • Mein schönstes Kleid by Früchte des Zorns: This song is so much me. It starts off with "Eines Tages werd ich aus dem Haus geh'n und ich trag mein schönstes Kleid" ("One day I'll go out and I'll wear my most beautiful dress", sung by a male voice). I was made aware of it after the Poetry Night at debconf12 in Nicaragua. As long as people still think of people like me as "a dude in a dress" there is a lot of work to do to fight transphobia and gain tolerance and acceptance.

    Do you have further examples for me? I know that I already mentioned another one in my blog entry about Garbage, for a start. I am aware that there are probably dedicated bands that, given their own history, write a lot of songs in that direction, but I also want to hear about songs where the topic is only mentioned in a side note rather than made the central theme of the whole song, making it an absolutely normal, random by-note.

    Like always, enjoy—and I'm looking forward to your suggestions!

    /music | permanent link | Comments: 15 | Flattr this

    20 February, 2015 04:05PM by Rhonda

    hackergotchi for David Bremner

    David Bremner

    Dear Lenovo, it's not me, it's you.

    I've been a mostly happy Thinkpad owner for almost 15 years. My first Thinkpad was a 570, followed by an X40, an X61s, and an X220. There might have been one more in there; my archives only go back a decade. Although it's lately gotten harder to buy Thinkpads at UNB as Dell gets better contracts with our purchasing people, I've persevered, mainly because I'm used to the Trackpoint, and I like the availability of hardware service manuals. Overall I've been pleased with the engineering of the X series.

    Over the last few days I learned about the installation of the superfish malware on new Lenovo systems, and Lenovo's completely inadequate response to the revelation. I don't use Windows, so this malware would not have directly affected me (unless I had the misfortune to use this system to download installation media for some GNU/Linux distribution). Nonetheless, how can I trust the firmware installed by a company that seems to value its users' security and privacy so little?

    Unless Lenovo can show some sign of understanding the gravity of this mistake, and undertake not to repeat it, I'm afraid you will be joining Sony on my list of vendors I used to consider buying from. Sure, it's only a gross income loss of $500 a year or so, if you assume I'm alone in this reaction. I don't think I'm alone in being disgusted and angered by this incident.

    20 February, 2015 02:00PM

    hackergotchi for Wouter Verhelst

    Wouter Verhelst

    LOADays 2015

    Looks like I'll be speaking at LOADays again. This time around, at the suggestion of one of the organisers, I'll be speaking about the Belgian electronic ID card, for which I'm currently employed as a contractor to help maintain the end-user software. While this hasn't been officially confirmed yet, I've been hearing some positive signals from some of the organisers.

    So, under the assumption that my talk will be accepted, I've started working on my slides. The intent is to explain how the eID middleware works (in general terms), how the Linux support is supposed to work, and what to do when things fail.

    If my talk doesn't get rejected at the final hour, I will continue my uninterrupted "speaker at LOADays" streak, which has run since LOADays' first edition...

    20 February, 2015 10:47AM

    hackergotchi for MJ Ray

    MJ Ray

    Rebooting democracy? The case for a citizens constitutional convention.

    I’m getting increasingly cynical about our largest organisations and their voting-centred approach to democracy. You vote once, for people rather than programmes, then you’re meant to leave them to it for up to three years until they stand for re-election, and in most systems their actions aren’t compared in any way with what they said they’d do.

    I have this concern about Cooperatives UK too, but then its CEO publishes http://www.uk.coop/blog/ed-mayo/2015-02-18/rebooting-democracy-case-citizens-constitutional-convention and I think there may be hope for it yet. Well worth a read if you want to organise better groups.

    20 February, 2015 04:03AM by mjr

    February 19, 2015

    hackergotchi for Matthew Garrett

    Matthew Garrett

    It has been 0 days since the last significant security failure. It always will be.

    So blah blah Superfish blah blah trivial MITM everything's broken.

    Lenovo deserve criticism. The level of incompetence involved here is so staggering that it wouldn't be a gross injustice for the company to go under as a result[1]. But let's not pretend that this is some sort of isolated incident. As an industry, we don't care about user security. We will gladly ship products with known security failings and no plans to update them. We will produce devices that are locked down such that it's impossible for anybody else to fix our failures. We will hide behind vague denials, we will obfuscate the impact of flaws and we will deflect criticisms with announcements of new and shinier products that will make everything better.

    It'd be wonderful to say that this is limited to the proprietary software industry. I would love to be able to argue that we respect users more in the free software world. But there are too many cases that demonstrate otherwise, even where we should have the opportunity to prove the benefits of open development. An obvious example is the smartphone market. Hardware vendors will frequently fail to provide timely security updates, and will cease to update devices entirely after a very short period of time. Fortunately there's a huge community of people willing to produce updated firmware. Your phone's manufacturer is never going to fix the latest OpenSSL flaw? As long as your phone can be unlocked, there's a reasonable chance that there's an updated version on the internet.

    But this is let down by a kind of callous disregard for any deeper level of security. Almost every single third-party Android image is either unsigned or signed with the "test keys", a set of keys distributed with the Android source code. These keys are publicly available, and as such anybody can sign anything with them. If you configure your phone to allow you to install these images, anybody with physical access to your phone can replace your operating system. You've gained some level of security at the application level by giving up any real ability to trust your operating system.

    This is symptomatic of our entire ecosystem. We're happy to tell people to disable security features in order to install third-party software. We're happy to tell people to download and build source code without providing any meaningful way to verify that it hasn't been tampered with. Install methods for popular utilities often still start "curl | sudo bash". This isn't good enough.
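To make the contrast concrete, here is a minimal sketch of the alternative to "curl | sudo bash": download to a file, check it against a checksum published out-of-band, and only then run it. Everything here (the function name, URL, and file names) is a placeholder for illustration, not any particular project's install method, and a real setup would prefer a signed checksum file verified with gpg.

```shell
#!/bin/sh
# Sketch only: fetch to a file and verify it before executing, instead of
# piping curl straight into a root shell. The expected checksum must come
# from a trusted, out-of-band source (ideally a signed release announcement).
verify_artifact() {
    file=$1 expected=$2
    actual=$(sha256sum "$file" | cut -d' ' -f1)
    [ "$actual" = "$expected" ] && echo "ok: $file" && return 0
    echo "checksum mismatch for $file" >&2
    return 1
}
# Hypothetical usage:
#   curl -fsSLO https://example.org/install.sh
#   verify_artifact install.sh "<sha256 from the project's release notes>" \
#       && sh install.sh
```

This is still weaker than verifying a signature against a key obtained independently, but it at least turns "run whatever the server sent today" into "run the bytes the project published".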

    We can laugh at proprietary vendors engaging in dreadful security practices. We can feel smug about giving users the tools to choose their own level of security. But until we're actually making it straightforward for users to choose freedom without giving up security, we're not providing something meaningfully better - we're just providing the same shit sandwich on different bread.

    [1] I don't see any way that they will, but it wouldn't upset me

    comment count unavailable comments

    19 February, 2015 07:43PM

    Niels Thykier

    Partial rewrite of lintian’s reporting setup

    I had the mixed pleasure of doing a partial rewrite of lintian’s reporting framework.  It started as a problem with generating the graphs, which turned out to be “not enough memory”. On the plus side, I am actually quite pleased with the end result.  I managed to “scope-creep” myself quite a bit and I ended up getting rid of a lot of old issues.

    The major changes in summary:

    • A lot of logic was moved out of harness, meaning it is now closer to becoming a simple “dumb” task scheduler.  With the logic moved out into separate processes, harness now hogs vastly less of the memory that I cannot convince perl to release to the OS.  On lilburn.debian.org “vastly less” is on the order of reducing “700ish MB” to “32 MB”.
    • All important metadata was moved into the “harness state-cache”, which is a simple YAML file. This means that “Lintian laboratory” is no longer a data store. This change causes a lot of very positive side effects.
    • With all metadata now stored in a single file, we can now do atomic updates of the data store.  That said, this change by itself does not enable us to run multiple lintians in parallel.
    • As the lintian laboratory is no longer a data store, we can now do our processing in “throw away laboratories” like the regular lintian user does.  As the permanent laboratory is the primary source of failure, this removes an entire class of possible problems.

    There are also some nice minor “features”:

    • Packages can now be “up to date” in the generated reports.  Previously, they would always be listed as “out of date” even if they were up to date.  This is the only end user/website-visitor visible change in all of this (besides the graphs now working again \o/).
    • The size of the harness work list is no longer based on the number of changes to the archive.
    • The size of the harness work list can now be changed with a command line option and is no longer hard coded to 1024.  However, the “time limit” remains hard coded for now.
    • The “full run” (and “clean run”) now simply marks everything “out-of-date” and processes its (new) backlog over the next (many) harness runs.  Accordingly, a full-run no longer causes lintian to run 5-7 days on lilburn.d.o before getting an update to the website.  Instead we now get incremental updates.
    • The “harness.log” now features status updates from lintian as they happen with “processed X successfully” or “error processing Y” plus a little wall time benchmark.  With this little feature I filed no less than 3 bugs against lintian – 2 of which are fixed in git.  The last remains unfixed but can only be triggered in Debian stable.
    • It is now possible with throw-away labs to terminate the lintian part of a reporting run early with minimal lost processing.  Since the lintian-harness is regular fed status updates from lintian, we can now mark successfully completed entries as done even if lintian does not complete its work list.  Caveat: There may be minor inaccuracies in the generated report for the particular package lintian was processing when it was interrupted.  This will fix itself when the package is reprocessed again.
    • It is now vastly easier to collect new meta data to be used in the reports.  Previously, they had to be included in the laboratory and extracted from there.  Now, we just have to fit it into a YAML file.  In fact, I have been considering to add the “wall time” and make a “top X slowest” page.
    • It is now possible to generate the html pages with only a “state-cache” and the “lintian.log” file.  Previously, it also required a populated lintian laboratory.

    As you can probably tell, I am quite pleased with the end result.  The reporting framework lags behind in development, since it just “sits there and takes care of itself”.  With the complete lack of testing, it also suffers from the “if it is not broken, then do not fix it” paradigm (because we will not notice we broke it until it is too late).

    Of course, I managed to break the setup a couple of times in the process.  However, a bonus feature of the reporting setup is that if you break it, it simply leaves an outdated report on the website.

    Anyway, enjoy. :)


    Filed under: Debian, Lintian

    19 February, 2015 06:54PM by Niels Thykier

    hackergotchi for Michal Čihař

    Michal Čihař

    Weblate 2.2

    Weblate 2.2 has been released today. It comes with improved search, user interface cleanup and various other fixes.

    Full list of changes for 2.2:

    • Performance improvements.
    • Fulltext search on location and comments fields.
    • New SVG/javascript based activity charts.
    • Support for Django 1.8.
    • Support for deleting comments.
    • Added own SVG badge.
    • Added support for Google Analytics.
    • Improved handling of translation file names.
    • Added support for monolingual JSON translations.
    • Record component locking in a history.
    • Support for editing source (template) language for monolingual translations.
    • Added basic support for Gerrit.

    You can find more information about Weblate on http://weblate.org, the code is hosted on Github. If you are curious how it looks, you can try it out on demo server. You can login there with demo account using demo password or register your own user.

    Weblate is also being used at https://hosted.weblate.org/ as an official translation service for phpMyAdmin, Gammu, Weblate itself and other projects.

    If you are a free software project that would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.

    Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!

    PS: The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either by commenting or by providing a bounty for them.

    Filed under: English phpMyAdmin SUSE Weblate | 0 comments | Flattr this!

    19 February, 2015 05:00PM by Michal Čihař (michal@cihar.com)

    hackergotchi for Nicolas Dandrimont

    Nicolas Dandrimont

    We need your help to make GSoC and Outreachy in Debian a success this summer!

    Hi everyone,

    A quick announcement: Debian has applied to the Google Summer of Code, and will also participate in Outreachy (formerly known as the Outreach Program for Women) for the Summer 2015 round! Those two mentoring programs are a great way for our project to bootstrap new ideas, give a new impulse to some old ones, and of course to welcome an outstanding team of motivated, curious, lively new people among us.

    We need projects and mentors to sign up really soon (before February 27th, that’s next week), as our project list is what Google uses to evaluate our application to GSoC. Projects proposals should be described on our wiki page. We have three sections:

    1. Coding projects with confirmed mentors are proposed to both GSoC and Outreachy applicants
    2. Non-Coding projects with confirmed mentors are proposed only to Outreachy applicants
    3. Project ideas without confirmed mentors will only happen if a mentor appears. They are kept on the wiki page until the application period starts, as we don’t want to give applicants false hopes of being picked for a project that won’t happen.

    Once you’re done, or if you have any questions, drop us a line on our mailing-list (soc-coordination@lists.alioth.debian.org), or on #debian-soc on OFTC.

    We also would LOVE to be able to welcome more Outreachy interns. So far, and thanks to our DPL, Debian has committed to fund one internship (US$6500). If we want more Outreachy interns, we need your help :).

    If you, or your company, have some money to put towards an internship, please drop us a line at opw@debian.org and we’ll be in touch. Some of the successes of our Outreachy alumni include the localization of the Debian Installer to a new locale, improvements in the sources.debian.net service, documentation of the debbugs codebase, and a better integration of AppArmor profiles in Debian.

    Thanks a lot for your help!

    19 February, 2015 03:50PM by olasd

    February 18, 2015

    Richard Hartmann

    Listing screen sessions on login

    Given Peter Eisentraut's blog post on the same topic, I thought I would also share this Zsh function (from 2011):

    startup () {
        # info on any running screens
        if [[ -x $(which screen) ]]; then
            ZSHRC_SCREENLIST=(${${(M)${(f)"$(screen -ls)"}:#(#s)[[:space:]]#([0-9]##).*}/(#b)[[:space:]]#([0-9]##).*/$match[1]})
            if [[ $#ZSHRC_SCREENLIST -ge 1 ]]; then
                echo "There are $#ZSHRC_SCREENLIST screens running. $ZSHRC_SCREENLIST"
            fi
        fi
    }

    18 February, 2015 08:32PM by Richard 'RichiH' Hartmann

    hackergotchi for Philipp Kern

    Philipp Kern

    Caveats of the HP MicroServer Gen8

    If you intend to buy the HP MicroServer Gen8 as a home server there are a few caveats that I didn't find on the interwebs before I bought the device:
    • Even though the main chassis fan is now fixed in AHCI mode with recent BIOS versions, there is still an annoying PSU fan that's tiny and high frequency. You cannot control it and the PSU seems to be custom-built.
    • The BIOS does not support ACPI S3 (suspend-to-RAM) at all. Apparently it being a server BIOS they chose to not include the code paths in the BIOS needed to properly turn off devices and turn them back on. This means that it's not possible to simply suspend it and have it woken up when your media center boots.
    • In contrast to the older AMD-based MicroServers the Gen8 comes with iLO, which will consume quite a few watts just for being present even if you don't use it. I read figures of about ten watts. It also cannot be turned off, as it does system management like fan control.
    • The HDD cages are not vibration proof or decoupled.
    • If you try to boot FreeBSD with its zfsloader you will likely need to apply a workaround patch, because the BIOS seems to do something odd. Linux works as expected.

    18 February, 2015 06:15PM by Philipp Kern (noreply@blogger.com)

    Mark Brown

    Kernel build times for automated builders

    Over the past year or so various people have been automating kernel builds with the aim of both setting the standard that things should build reliably and using the resulting builds for automated testing. This has been having good results, it’s especially nice to compare the results for older stable kernel builds with current ones and notice how much happier everything is.

    One of the challenges with doing this is that for good coverage you really need to include allmodconfig or allyesconfig builds to ensure coverage of as much kernel code as possible but that’s fairly resource intensive given the size of the kernel, especially when you want to cover several architectures. It’s also fairly important to get prompt results, development trees are changing all the time and the longer the gap between a problem appearing and it being identified the more likely the report is to be redundant.

    Since I was looking at my own setup and I know of several people who’ve done similar benchmarking I thought I’d publish some ballpark numbers for from scratch allmodconfig builds on a single architecture:

    i7-4770 with SSD 20 minutes
    linode 2048 1.25 hours
    EC2 m3.medium 1.5 hours
    EC2 c3.large 2 hours
    Cubietruck with SSD 20 hours
    Intel Celeron N2940 1.75 hours

    All with the number of tasks spawned by make set to the number of execution threads the system has and no speedups from anything like ccache. I may keep this updated in future with further results.
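For clarity, the invocation behind these numbers is nothing exotic; a sketch, assuming you are sitting in a kernel source tree (the build steps are commented out here since they require that tree and take the times quoted above):

```shell
#!/bin/sh
# Sketch of the benchmarked build: a from-scratch allmodconfig build with
# one make job per execution thread and no ccache or other speedups.
jobs=$(nproc)
echo "building with -j$jobs"
# make allmodconfig
# time make -j"$jobs"
```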

    Obviously there’s tradeoffs beyond the time, especially for someone like me doing this at home with their own resources – my desktop is substantially faster than anything else I’ve tried but I’m also using it interactively for my work, it’s not easily accessible when not at home and the fans spin up during builds while EC2 starts to cost noticeable money to use as you add more builds.

    18 February, 2015 04:28PM by Mark Brown

    February 17, 2015

    hackergotchi for Lucas Nussbaum

    Lucas Nussbaum

    Some email organization tips and tricks

    I’d like to share a few tips that were useful to strengthen my personal email organization. Most of what follows is probably not very new nor special, but hey, let’s document it anyway.

    Many people have an inbox folder that just grows over time. It’s actually similar to a Twitter or RSS feed (except they probably agree that they are supposed to read more of their email “feed”). When I send an email to them, it sometimes happens that they don’t notice it if the email arrives at a bad time. Of course, as long as they don’t receive too many emails, and there aren’t too many people relying on them, it might just work. But from time to time, it’s super-painful for those interacting with them, when they miss an email and need to be pinged again. So let’s try not to be like them. :-)

    Tip #1: do Inbox Zero (or your own variant of it)

    Inbox Zero is an email management methodology inspired from David Allen’s Getting Things Done book. It’s best described in this video. The idea is to turn one’s Inbox into an area that is only temporary storage, where every email will get processed at some point. Processing can mean deleting an email, archiving it, doing the action described in the email (and replying to it), etc. Put differently, it basically means implementing the Getting Things Done workflow on one’s email.

    Tip #1.1: archive everything

    One of the time-consuming decisions in the original GTD workflow is to decide whether something should be eliminated (deleted) or stored for reference. Given that disk space is quite cheap, it’s much easier to never decide about that, and just archive everything (by duplicating the email to an archive folder when it is received). To retrieve archived emails when needed, I then use notmuch within mutt to easily search through recent (< 2 year) archives. I use archivemail to archive older email in compressed mboxes from time to time, and grepmail to search through those mboxes when needed.

    I don’t archive most Debian mailing lists though, as they are easy to fetch from master.d.o with the following script:

    rsync -vP master.debian.org:~debian/*/*$1/*$1.${2:-$(date +%Y%m)}* .

    Then I can fetch a specific list archive with getlist devel 201502, or a set of archives with e.g. getlist devel 2014, or the current month with e.g. getlist devel. Note that to use grepmail on XZ-compressed archives, you need libmail-mbox-messageparser-perl version 1.5002-3 (only in unstable — I was using a locally-patched version for ages, but finally made a patch last week, which Gregor kindly uploaded).
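One way that rsync one-liner might be wrapped into the getlist command used above is sketched below; the getlist_path helper is my own invention, introduced only so the remote path construction is visible (and testable) on its own:

```shell
#!/bin/sh
# Sketch: wrap the archive-fetching rsync one-liner as "getlist".
# $1 is the list name (e.g. devel), $2 an optional YYYYMM period that
# defaults to the current month, matching the original one-liner.
getlist_path() {
    printf 'master.debian.org:~debian/*/*%s/*%s.%s*\n' \
        "$1" "$1" "${2:-$(date +%Y%m)}"
}
getlist() {
    rsync -vP "$(getlist_path "$@")" .
}
# Hypothetical usage:  getlist devel 201502   or just:  getlist devel
```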

    Tip #1.2: split your inbox

    (Yes, this one looks obvious but I’m always surprised at how many people don’t do that.)

    Like me, you probably receive various kinds of emails:

    • emails about your day job
    • emails about Debian
    • personal emails
    • mailing lists about your day job
    • mailing lists about Debian
    • etc.

    Splitting those into separate folders has several advantages:

    • I can adjust my ‘default action’ based on the folder I am in (e.g. delete after reading for most mailing lists, as it’s archived already)
    • I can adjust my level of focus depending on the folder (I might not want to pay a lot of attention to each and every email from a mailing list I am only remotely interested in; while I should definitely pay attention to each email in my ‘DPL’ folder)
    • When busy, I can skip the less important folders for a few days, and still be responsive to emails sent in my more important folders

    I’ve seen some people splitting their inbox into too many folders. There’s not much point in having a per-sender folder organization (unless there’s really a recurring flow of emails from a specific person), as it increases the risk of missing an email.

    I use procmail to organize my email into folders. I know that there are several more modern alternatives, but I haven’t looked at them since procmail does the job for me.

    Resulting organization

    I use one folder for my day-job email, one for my DPL email, one for all other email directed or Cced to me. Then, I have a few folders for automated notifications of stuff. My Debian mailing list folders are auto-managed by procmail’s $MATCH:

     * ^X-Mailing-List: <.*@lists\.debian\.org>
     * ^X-Mailing-List: <debian-\/[^@]*
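Read as a sketch, a complete recipe of that shape might look like the following; the \/ operator makes procmail store everything matched after it in $MATCH, and the destination folder prefix here is my own illustration, not the actual one:

```procmail
# Illustrative only: file each debian-* list into its own folder, named
# after whatever \/ captured into $MATCH. The ".lists.debian." maildir
# prefix is a hypothetical name.
:0
* ^X-Mailing-List: <debian-\/[^@]*
.lists.debian.$MATCH/
```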

    Some other mailing lists are in their own separate folders, and there’s a catch-all folder for the remaining ones. Ah, and since I use feed2imap, I have folders for the RSS/Atom feeds I follow.

    I have two different commands to start mutt. One only shows a restricted number of (important) folders. The other one shows the full list of (non-empty) folders. This is a good trick to avoid spending time reading email when I am supposed to do something more important. :)
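A sketch of those two entry points (the function names and muttrc paths are hypothetical, not the actual setup): each wrapper points mutt at a different configuration file, whose mailboxes list would enumerate a different set of folders.

```shell
#!/bin/sh
# Hypothetical sketch: two ways to start mutt. Each muttrc would contain a
# different "mailboxes" list; the file names here are illustrative.
mutt_focus() { mutt -F "$HOME/.mutt/muttrc.important" "$@"; }  # few, important folders
mutt_full()  { mutt -F "$HOME/.mutt/muttrc.all" "$@"; }        # every non-empty folder
```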

    As for many people, my own organization is loosely based on GTD and Inbox Zero. It sometimes happens that some emails stay in my Inbox for several days or weeks, but I very rarely have more than 20 or 30 emails in one of my main inbox folders. I also review the whole content of my main inbox folders once or twice a week, to ensure that I did not miss an email that could be acted on quickly.

    A last trick is that I have a special replies folder, where procmail copies emails that are replies to a mail I sent but which do not Cc me. That’s useful to work around Debian’s “no Cc on reply to mailing list posts” policy.
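A recipe implementing that trick might look roughly like the following; the Message-ID domain and the folder name are assumptions for illustration, not the actual setup:

```procmail
# Hypothetical sketch of the "replies" folder: copy any message whose
# In-Reply-To references one of my own Message-IDs (assumed here to end in
# @example.org) and that is not already addressed or Cced to me.
:0 c
* ^In-Reply-To:.*@example\.org
* !^(To|Cc):.*me@example\.org
replies/
```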

    I receive email using offlineimap (over SSH to my mail server), and send it using nullmailer (through a SSH tunnel). The main advantage of offlineimap over using IMAP directly in mutt is that using IMAP to a remote server feels much more sluggish. Another advantage is that I only need SSH access to get my full email setup to work.

    Tip #2: tracking sent emails

    Two recurring needs I had was:

    • Get an overview of emails I sent to help me write the day-to-day DPL log
    • Easily see which emails got an answer, or did not (which might mean that they need a ping)

    I developed a short script to address that. It scans the content of my ‘Sent’ maildir and my archive maildirs, and, for each email address I use, displays (see example output in README) the list of emails sent with this address (one email per line). It also detects if an email was replied to (“R” column after the date), and abbreviates common parts in email addresses (debian-project@lists.debian.org becomes d-project@l.d.o). It also generates a temporary maildir with symlinks to all emails, so that I can just open the maildir if needed.

    17 February, 2015 05:34PM by lucas

    hackergotchi for Clint Adams

    Clint Adams

    Copyleft licenses are oppressing someone

    I go to a party, carrying two expensive bottles of liquor that I have acquired from faraway lands.

    The hosts of the party provide a variety of liquors, snacks, and mixers.

    Some neuro guy shows up, looks around, feels guilty, says that he should have brought something.

    His friend shows up, bearing hot food. The neuro guy decides to contribute $7 to the purchase of food since he didn't bring anything. The friend then proceeds to charge us each $7.

    No one else demands money for any of the other things being shared and consumed by everyone. The hosts do not retroactively charge a cover fee for entrance to the house. No one else offers to pay anyone for anything.

    The neuro guy attempts to wash some dishes before leaving, but is stopped by the hosts, because he is a guest.

    17 February, 2015 05:18PM

    John Goerzen

    “Has Linux lost its way?” comments prompt a Debian developer to revisit FreeBSD after 20 years

    I’ll admit it. I have a soft spot for FreeBSD. FreeBSD was the first Unix I ran, and it was somewhere around 20 years ago that I did so, before I switched to Debian. Even then, I still used some of the FreeBSD Handbook to learn Linux, because Debian didn’t have the great Reference that it does now.

    Anyhow, some comments in my recent posts (“Has modern Linux lost its way?” and Reactions to that, and the value of simplicity), plus a latent desire to see how ZFS fares in FreeBSD, caused me to try it out. I installed it both in VirtualBox under Debian, and in an old 64-bit Thinkpad sitting in my basement that previously ran Debian.

    The results? A mixture of amazing and disappointing. I will say that I am quite glad that both exist; there is plenty of innovation happening everywhere and neat features exist everywhere, too. But I can also come right out and say that the statement that FreeBSD doesn’t have issues like Linux does is false and misleading. In many cases, it’s running the exact same stack. In others, it’s better, but there are also others where it’s worse. Perhaps this article might dispel a bit of the FUD surrounding jessie, while also showing off some of the nice things FreeBSD does. My conclusion: Both jessie and FreeBSD 10.1 are awesome Free operating systems, but both have their warts. This article is more about FreeBSD than Debian, but it will discuss a few of Debian’s warts as well.

    The experience

    My initial reaction to FreeBSD was: wow, this feels so familiar. It reminds me of a commercial Unix, or maybe of Linux from a few years ago. A minimal, well-documented base system, everything pretty much in logical places in the filesystem, and solid memory management. I felt right at home. It was almost reassuring, even.

    Putting together a FreeBSD box is a lot of package installing and config file editing. The FreeBSD Handbook, describing how to install X, talks about editing this or that file for this or that feature. I like being able to learn directly how things fit together by doing this.

    But then you start remembering the reasons you didn’t like Linux a few years ago, or the commercial Unixes: maybe it’s that programs like apache are still not as well supported, or maybe it’s that the default vi has this tendency to corrupt the terminal periodically, or perhaps it’s that root’s default shell is csh. Or perhaps it’s that I have to do a lot of package installing and config file editing. It is not quite the learning experience it once was, either; now there are things like “paste this XML file into some obscure polkit location to make your mouse work” or something.

    Overall, there are some areas where FreeBSD kills it in a way no other OS does. It is unquestionably awesome in several areas. But there are a whole bunch of areas where it’s about 80% as good as Linux, a number of areas (even polkit, dbus, and hal) where it’s using the exact same stack Linux is (so all these comments about FreeBSD being so differently put together strike me as hollow), and frankly some areas that need a lot of work and make it hard to manage systems in a secure and stable way.

    The amazing

    Let’s get this out there: I’ve used ZFS too much to use any OS that doesn’t support it or something like it. Right now, I’m not aware of anything like ZFS that is generally stable and doesn’t cost a fortune, so pretty much: if your Unix doesn’t do ZFS, I’m not interested. (btrfs isn’t there yet, but will be awesome when it is.) That’s why I picked FreeBSD for this, rather than NetBSD or OpenBSD.

    ZFS on FreeBSD is simply awesome. They have integrated it extremely well. The installer supports root on zfs, even encrypted root on zfs (though neither is a default). top on a FreeBSD system shows a line of ZFS ARC (cache) stats right alongside everything else. The ZFS defaults for maximum cache size, readahead, etc. auto-tune themselves at boot (unless overridden) based on the amount of RAM in a system and the system type. Seriously, these folks have thought of everything and it just feels rock solid. I haven’t seen ZFS this well integrated outside the Solaris-type OSs.

    I have been using ZFSOnLinux for some time now, but it is just not as mature as ZFS on FreeBSD. ZoL, for instance, still has some memory tuning issues, and is not really suggested for 32-bit machines. FreeBSD just nails it. ZFS on FreeBSD even supports TRIM, which is not available in ZoL and I think fairly unique even among OpenZFS platforms. It also supports delegated administration of the filesystem, both to users and to jails on the system, seemingly very similar to Solaris zones.
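The delegated administration mentioned above is done with zfs allow; a sketch of what that looks like (the pool, dataset, and user names here are hypothetical):

```
# zfs allow alice snapshot,mount,destroy tank/home/alice
# zfs allow tank/home/alice
```

The second form, with no permissions given, prints the delegations currently in effect on the dataset.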

    FreeBSD also supports beadm, modeled on the Solaris tool of the same name. This lets you use ZFS snapshots to make lightweight “boot environments”, so you can select which to boot into. This is useful, say, before doing upgrades.

    Then there are jails. Linux has tried so hard to get this right, and fallen on its face so many times, a person just wants to take pity sometimes. We’ve had linux-vserver, openvz, lxc, and still none of them match what FreeBSD jails have done for a long time. Linux’s current jail-du-jour is LXC, though it is extremely difficult to configure in a secure way. Even its author comments that “you won’t hear any of the LXC maintainers tell you that LXC is secure” and that it pretty much requires AppArmor profiles to achieve reasonable security. These are still rather in flux, as I found out last time I tried LXC a few months ago. My confidence in LXC being as secure as, say, KVM or FreeBSD is simply very low.

    FreeBSD’s jails are simple and well-documented where LXC is complex and hard to figure out. Its security is fairly transparent and easy to control and they just work well. I do think LXC is moving in the right direction and might even get there in a couple years, but I am quite skeptical that even Docker is getting the security completely right.
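Part of what makes jails transparent is that a whole jail's configuration fits in a few lines of jail.conf(5); a hypothetical /etc/jail.conf entry (name, path, and address invented for illustration) looks something like:

```
web {
    path = "/usr/jail/web";
    host.hostname = "web.example.com";
    ip4.addr = "192.0.2.10";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```

after which service jail start web brings it up.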

    The simply different

    People have been throwing around the word “distribution” with respect to FreeBSD, PC-BSD, etc. in recent years. There is an analogy there, but it’s not perfect. In the Linux ecosystem, there is a kernel project, a libc project, a coreutils project, a udev project, a systemd/sysvinit/whatever project, etc. You get the idea. In FreeBSD, there is a “base system” project. This one project covers the kernel and the base userland. Some of what they use in the base system is code pulled in from elsewhere but maintained in their tree (ssh), some is completely homegrown (kernel), etc. But in the end, they have a nicely-integrated base system that always gets upgraded in sync.

    In the Linux world, the distribution makers are responsible for integrating the bits from everywhere into a coherent whole.

    FreeBSD is something of a toolkit to build up your system. Gentoo might be an analogy on the Linux side. On the other end of the spectrum, Ubuntu is a “just install it and it works, tweak later” sort of setup. Debian straddles the middle ground, offering both approaches in many cases.

    There are pros and cons to each approach. Generally, I don’t think either one is better. They are just different.

    The not-quite-there

    I said that there are a lot of things in FreeBSD that are about 80% of where Linux is. Let me touch on them here.

    Its laptop support leaves something to be desired. I installed it on a few-years-old Thinkpad — basically the best possible platform for working suspend in a Free OS. It has worked perfectly out of the box in Debian for years. In FreeBSD, suspend only works in text mode; if X is running, the video gets corrupted and the system hangs. I have not tried to debug it further, but I would also note that suspend on lid close is not automatic in FreeBSD; the somewhat obscure instructions tell you which policykit pkla file to edit to make suspend work in XFCE. (Incidentally, they also say which policykit file to edit to make the shutdown/restart options work.)

    Its storage subsystem also has some surprising misses. Its rough version of LVM, LUKS, and md-raid is called GEOM. GEOM, however, supports only RAID0, RAID1, and RAID3. It does not support RAID5 or RAID6 in software RAID configurations! Linux’s md-raid, by comparison, supports RAID0, RAID1, RAID4, RAID5, RAID6, etc. There seems to be a highly experimental RAID5 patchset floating around for many years, but it is certainly not integrated into the latest release kernel. The current documentation makes no mention of RAID5, although it seems that a dated logical volume manager supported it. In any case, RAID5 does not seem to be well-supported in software like it is in Linux.

    ZFS does have its raidz1 level, which is roughly the same as RAID5. However, that requires full use of ZFS. ZFS also does not support some common operations, like adding a single disk to an existing RAID5 group (which is possible with md-raid and many other implementations.) This is a ZFS limitation on all platforms.

    FreeBSD’s filesystem support is rather a miss. They once had support for Linux ext* filesystems using the actual Linux code, but ripped it out because it was under the GPL and rewrote it under a BSD license. The resulting driver really only works with ext2 filesystems, as it doesn’t work with ext3/ext4 in many situations. Frankly, I don’t see why they bothered; they now have something that is BSD-licensed but only works with a filesystem so old nobody uses it anymore. There are only two FreeBSD filesystems that are really usable: UFS2 and ZFS.

    Virtualization under FreeBSD is also rather limited. Although it does support the VirtualBox Open Source Edition, this is not really a full-featured or fast enough virtualization environment for a server. Its other option is bhyve, which looks to be something of a Xen clone. bhyve, however, does not support Windows guests, and requires jumping through some hoops even to boot Linux guest installers. It will be several years at least before it reaches feature-parity with where KVM is today, I suspect.

    One can run FreeBSD as a guest under a number of different virtualization systems, but their instructions for making the mouse work best under VirtualBox did not work. There may have been some X.Org reshuffle in FreeBSD that wasn’t taken into account.

    The installer can be nice and fast in some situations, but one wonders a little bit about QA. I had it lock up on me twice. It turns out this is a known bug, reported 2 months ago with no activity, in which the installer attempts to use a package manager that it hasn’t set up yet to install optional docs. I guess the devs aren’t installing the docs in testing.

    There is nothing like Dropbox for FreeBSD. Apparently this is because FreeBSD has nothing like Linux’s inotify. The Linux Dropbox does not work in FreeBSD’s Linux mode. There are sketchy reports of people getting an OwnCloud client to work, but in something more akin to rsync rather than instant-sync mode, if they get it working at all. Some run Dropbox under wine, apparently.

    The desktop environments tend to need a lot more configuration work to get them going than on Linux. There’s a lot of editing of polkit, hal, dbus, etc. config files mentioned in various places. So, not only does FreeBSD use a lot of the same components that cause confusion in Linux, it doesn’t really configure them for you as much out of the box.

    FreeBSD doesn’t support as many platforms as Linux. Only two platforms are fully supported: i386 and amd64. You’ll see people refer to a list of other platforms that are “supported”, but those lack security support, official releases, and even built packages. They include arm, ia64, powerpc, and sparc64.

    The bad: package management

    Roughly 20 years ago, this was one of the things that pulled me to Debian. Perhaps I am spoiled from running the distribution that has been the gold standard for package management for so long, but I find FreeBSD’s package management — even “pkg-ng” in 10.1-RELEASE — to be lacking in a number of important ways.

    To start with, FreeBSD actually has two different package management systems: one for the base system, and one for what they call the ports/packages collection (“ports” being the way to install from source, and “packages” being the way to install from binaries, but both related to the same tree.) For the base system, there is freebsd-update which can install patches and major upgrades. It also has a “cron” option to automate this. Sadly, it has no way of automatically indicating to a calling script whether a reboot is necessary.

    freebsd-update really manages less than a dozen packages though. The rest are managed by pkg. And pkg, it turns out, has a number of issues.

    The biggest: it can take a week to get security updates. The FreeBSD Handbook explains pkg audit -F, which will look at your installed packages (but NOT the ones in the base system) and alert you to packages that need to be updated, similar to a stripped-down version of Debian’s debsecan. I discovered the delay myself, when pkg audit -F showed a vulnerability in xorg but pkg upgrade showed my system was up-to-date. The delay is not documented in the Handbook, but people on the mailing list explained it to me. There are workarounds, but they can be laborious.
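The mismatch looked like this (a sketch of the two commands, not verbatim output):

```
# pkg audit -F        (fetch the vulnerability database, audit installed packages)
# pkg upgrade         (offers only packages already built and published)
```

So pkg audit can know about a vulnerability days before pkg upgrade has a fixed package to offer.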

    If that’s not bad enough, FreeBSD has no way to automatically install security patches for things in the packages collection. Debian has several (unattended-upgrades, cron-apt, etc.) There is “pkg upgrade”, but it upgrades everything on the system, which may be quite a bit more than you want to be upgraded. So: if you want to run Apache with PHP, and want it to just always apply security patches, FreeBSD packages are not up to the job like Debian’s are.
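For contrast, the Debian mechanism is a two-line apt configuration (the usual /etc/apt/apt.conf.d/20auto-upgrades snippet):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

which makes apt refresh its package lists and lets unattended-upgrades apply pending security updates daily.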

    The pkg tool doesn’t have very good error-handling. In fact, its error handling seems to be nonexistent at times. I noticed that some packages had failures during install time, but pkg ignored them and marked the package as correctly installed. I only noticed there was a problem because I happened to glance at the screen at the right moment during messages about hundreds of packages. In Debian, by contrast, if there are any failures, at the end of the run, you get a nice report of which packages failed, and an exit status to use in scripts.

    It also has another issue that Debian resolved about a decade ago: package scripts displaying messages that are important for the administrator, but showing so many of them that they scroll off the screen and are never seen. I submitted a bug report for this one also.

    Some of these things just make me question the design of pkg. If I can’t trust it to accurately report if the installation succeeded, or show me the important info I need to see, then to what extent can I trust it?

    Then there is the question of testing of the ports/packages. It seems that, automated tests aside, basically everyone is running off the “master” branch of the ports/packages. That’s like running Debian unstable on your servers. I am distinctly uncomfortable with this notion, though it seems FreeBSD people report it mostly works well.

    There are some other issues, too: FreeBSD ports make no distinction between development and runtime files like Debian’s packages do. So, just by virtue of wanting to run a graphical desktop, you get all of the static libraries, include files, build scripts, etc. for X.Org installed.

    For a project as concerned about licensing as FreeBSD, the packages collection does not have separate sections like Debian’s main, contrib, and non-free. It’s all in one big pot: BSD-licensed, GPL-licensed, and proprietary-without-source packages. There is /usr/local/share/licenses where you can look up a license for each package, but there is no way with FreeBSD, as there is with Debian, to say “never even show me packages that aren’t DFSG-free.” This is useful, for instance, when running in a company to make sure you never install packages that are for personal use only.

    The bad: ABI stability

    I’m used to being able to run binaries I compiled years ago on a modern system. This is generally possible in Linux, assuming you have the correct shared libraries available. In FreeBSD, this is explicitly NOT possible. After every major version upgrade, you must reinstall or recompile every binary on your system.

    This is not necessarily a showstopper for me, but it is a hassle for a lot of people.

    Update 2015-02-17: Some people in the comments are pointing out compat packages in the ports that may help with this situation. My comment was based on advice in the FreeBSD Handbook stating “After a major version upgrade, all installed packages and ports need to be upgraded”. I have not directly tried this, so if the Handbook is overstating the need, then this point may be in error.


    As I said above, I found little validation to the comments that the Debian ecosystem is noticeably worse than the FreeBSD one. Debian has its warts too — particularly with keeping software up-to-date. You can see that the two projects are designed around a different passion: FreeBSD’s around the base system, and Debian’s around an integrated whole system. It would be wrong to say that either of those is always better. FreeBSD’s approach clearly produces some leading features, especially jails and ZFS integration. Yet Debian’s approach also produces some leading features in the way of package management and security maintainability beyond the small base.

    My criticism of excessive complexity in the polkit/cgmanager/dbus area still stands. But to those people commenting that FreeBSD hasn’t “lost its way” like Linux has, I would point out that FreeBSD mostly uses these same components also, and FreeBSD has excessive complexity in its ports/package system and system management tools. I think it’s a draw. You pick the best for your use case. If you’re looking for a platform to run a single custom app then perhaps all of the Debian package management benefits don’t apply to you (you may not even need FreeBSD’s packages, or just a few). The FreeBSD ZFS support or jails may well appeal. If you’re looking to run a desktop environment, or a server with some application that needs a ton of PHP, Python, Perl, or C libraries, then Debian’s package management and security handling may well be attractive.

    I am disappointed that Debian GNU/kFreeBSD will not be a release architecture in jessie. That project had the promise to provide a best of both worlds for those that want jails or tight ZFS integration.

    17 February, 2015 04:11PM by John Goerzen

    Enrico Zini


    Setting up Akonadi

    Now that I have a CalDAV server that syncs with my phone I would like to use it from my desktop.

    It looks like akonadi is able to sync with CalDAV servers, so I'm giving it a try.

    First things first: let’s give a meaning to the arbitrary name of this thing. Wikipedia says Akonadi is the oracle goddess of justice in Ghana. That still does not hint at all at personal information servers, but it seems quite nice. Ok. I gave up on software having purpose-related names ages ago.

    # apt-get install akonadi-server akonadi-backend-postgresql

    Akonadi wants a SQL database as a backend. By default it uses MySQL, but I had enough of MySQL ages ago.

    I tried SQLite but the performance with it is terrible. Terrible as in, it takes 2 minutes between adding a calendar entry and having it show up in the calendar. I'm fascinated by how Akonadi manages to use SQLite so badly, but since I currently just want to get a job done, next in line is PostgreSQL:

    # su - postgres
    $ createuser enrico
    $ psql postgres
    postgres=# alter user enrico createdb;

    Then as enrico:

    $ createdb akonadi-enrico
    $ cat <<EOT > ~/.config/akonadi/akonadiserverrc

    I can now use kontact to connect Akonadi to my CalDAV server and it works nicely, both with calendar and with addressbook entries.

    KDE has at least two clients for Akonadi: Kontact, which is a kitchen sink application similar to Evolution, and KOrganizer, which is just the calendar and scheduling component of Kontact.

    Both work decently, and KOrganizer has a pretty decent startup time. I now have a usable desktop PIM application that is synced with my phone. W00T!

    Next step is to port my swift little calendar display tool to use Akonadi as a back-end.

    17 February, 2015 02:34PM


    Peter Eisentraut

    February 16, 2015


    Matthew Garrett

    Intel Boot Guard, Coreboot and user freedom

    PC World wrote an article on how the use of Intel Boot Guard by PC manufacturers is making it impossible for end-users to install replacement firmware such as Coreboot on their hardware. It's easy to interpret this as Intel acting to restrict competition in the firmware market, but the reality is actually a little more subtle than that.

    UEFI Secure Boot as a specification is still unbroken, which makes attacking the underlying firmware much more attractive. We've seen several presentations at security conferences lately that have demonstrated vulnerabilities that permit modification of the firmware itself. Once you can insert arbitrary code in the firmware, Secure Boot doesn't do a great deal to protect you - the firmware could be modified to boot unsigned code, or even to modify your signed bootloader such that it backdoors the kernel on the fly.

    But that's not all. Someone with physical access to your system could reflash your system. Even if you're paranoid enough that you X-ray your machine after every border crossing and verify that no additional components have been inserted, modified firmware could still be grabbing your disk encryption passphrase and stashing it somewhere for later examination.

    Intel Boot Guard is intended to protect against this scenario. When your CPU starts up, it reads some code out of flash and executes it. With Intel Boot Guard, the CPU verifies a signature on that code before executing it[1]. The hash of the public half of the signing key is flashed into fuses on the CPU. It is the system vendor that owns this key and chooses to flash it into the CPU, not Intel.

    This has genuine security benefits. It's no longer possible for an attacker to simply modify or replace the firmware - they have to find some other way to trick it into executing arbitrary code, and over time these will be closed off. But in the process, the system vendor has prevented the user from being able to make an informed choice to replace their system firmware.

    The usual argument here is that in an increasingly hostile environment, opt-in security isn't sufficient - it's the role of the vendor to ensure that users are as protected as possible by default, and in this case all that's sacrificed is the ability for a few hobbyists to replace their system firmware. But this is a false dichotomy - UEFI Secure Boot demonstrated that it was entirely possible to produce a security solution that provided security benefits and still gave the user ultimate control over the code that their machine would execute.

    To an extent the market will provide solutions to this. Vendors such as Purism will sell modern hardware without enabling Boot Guard. However, many people will buy hardware without consideration of this feature and only later become aware of what they've given up. It should never be necessary for someone to spend more money to purchase new hardware in order to obtain the freedom to run their choice of software. A future where users are obliged to run proprietary code because they can't afford another laptop is a dystopian one.

    Intel should be congratulated for taking steps to make it more difficult for attackers to compromise system firmware, but criticised for doing so in such a way that vendors are forced to choose between security and freedom. The ability to control the software that your system runs is fundamental to Free Software, and we must reject solutions that provide security at the expense of that ability. As an industry we should endeavour to identify solutions that provide both freedom and security and work with vendors to make those solutions available, and as a movement we should be doing a better job of articulating why this freedom is a fundamental part of users being able to place trust in their property.

    [1] It's slightly more complicated than that in reality, but the specifics really aren't that interesting.


    16 February, 2015 08:44PM


    Julien Danjou

    Hacking Python AST: checking methods declaration

    A few months ago, I wrote the definitive guide about Python method declaration, which was quite a success. I still fight every day in OpenStack to have developers declare their methods correctly in the patches they submit.

    Automation plan

    The thing is, I really dislike doing the same things over and over again. Furthermore, I'm not perfect either, and I miss a lot of these kinds of problems in the reviews I do. So I decided to replace myself with a program – a more scalable and less error-prone version of my brain.

    In OpenStack, we rely on flake8 to do static analysis of our Python code in order to spot common programming mistakes.

    But we are really pedantic, so we wrote some extra hacking rules that we enforce on our code. To that end, we wrote a flake8 extension called hacking. I really like these rules; I even recommend applying them in your own project, though I might be biased or a victim of Stockholm syndrome. Your call.

    Anyway, it's pretty clear that I need to add a check for method declaration in hacking. Let's write a flake8 extension!

    Typical error

    The typical error I spot is the following:

    class Foo(object):
        # self is not used, the method does not need
        # to be bound, it should be declared static
        def bar(self, a, b, c):
            return a + b - c

    That would be the correct version:

    class Foo(object):
        @staticmethod
        def bar(a, b, c):
            return a + b - c

    This kind of mistake is not a show-stopper; it's just not optimal. Why you have to declare static or class methods manually might be a language issue, but I don't want to debate Python misfeatures or design flaws.


    We could probably use some big magical regular expression to catch this problem. flake8 is based on the pep8 tool, which can do a line-by-line analysis of the code. But that method would make detecting this pattern very hard and error-prone.

    Though it's also possible to do an AST-based analysis on a per-file basis with pep8. So that's the method I picked, as it's the most robust.

    AST analysis

    I won't dive deeply into Python AST and how it works. You can find plenty of sources on the Internet, and I even talk about it a bit in my book The Hacker's Guide to Python.

    To check correctly if all the methods in a Python file are correctly declared, we need to do the following:

    • Iterate over all the statement nodes of the AST
    • Check that the statement is a class definition (ast.ClassDef)
    • Iterate over all the function definitions (ast.FunctionDef) of that class statement to check whether each is already declared with @staticmethod
    • If a method is not declared static, check whether its first argument (self) is used somewhere in the method
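Before wiring anything into flake8, the steps above can be sketched as a tiny standalone function using only the standard library ast module (the function name check_methods and the returned message strings are illustrative, not the hacking plugin's actual API):

```python
import ast

def check_methods(source):
    """Return (lineno, message) tuples for methods that could be static."""
    errors = []
    for stmt in ast.walk(ast.parse(source)):
        # Steps 1-2: only look inside class definitions
        if not isinstance(stmt, ast.ClassDef):
            continue
        for body_item in stmt.body:
            # Step 3: only consider function definitions...
            if not isinstance(body_item, ast.FunctionDef):
                continue
            # ...that are not already decorated with @staticmethod
            if any(isinstance(d, ast.Name) and d.id == 'staticmethod'
                   for d in body_item.decorator_list):
                continue
            if not body_item.args.args:
                errors.append((body_item.lineno,
                               "H905: method misses first argument"))
                continue
            # Step 4: is the first argument used anywhere in the body?
            first_arg = body_item.args.args[0].arg  # Python 3: name is in .arg
            if not any(isinstance(n, ast.Name) and n.id == first_arg
                       for n in ast.walk(body_item)):
                errors.append((body_item.lineno,
                               "H904: method should be declared static"))
    return errors
```

Running it over the faulty Foo example above reports the H904 problem on the bar method.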

    Flake8 plugin

    In order to register a new plugin in flake8 via hacking, we just need to add an entry in setup.cfg:

    flake8.extension =
        H904 = hacking.checks.other:StaticmethodChecker
        H905 = hacking.checks.other:StaticmethodChecker

    We register 2 hacking codes here. As you will notice later, we are actually going to add an extra check in our code for the same price. Stay tuned.

    The next step is to write the actual plugin. Since we are using an AST based check, the plugin needs to be a class following a certain signature:

    class StaticmethodChecker(object):
        def __init__(self, tree, filename):
            self.tree = tree

        def run(self):
            pass

    So far, so good, and pretty easy. We store the tree locally, then we just need to use it in run() and yield the problems we discover, following pep8's expected signature: a tuple of (lineno, col_offset, error_string, code).

    This AST is made for walking ♪ ♬ ♩

    The ast module provides a walk function that allows us to iterate easily over a tree. We'll use that to run through the AST. First, let's write a loop that ignores statements that are not class definitions.

    class StaticmethodChecker(object):
        def __init__(self, tree, filename):
            self.tree = tree

        def run(self):
            for stmt in ast.walk(self.tree):
                # Ignore non-class
                if not isinstance(stmt, ast.ClassDef):
                    continue

    We still don't check for anything, but we know how to ignore statements that are not class definitions. The next step is to ignore what is not a function definition. We just iterate over the attributes of the class definition.

    for stmt in ast.walk(self.tree):
        # Ignore non-class
        if not isinstance(stmt, ast.ClassDef):
            continue
        # If it's a class, iterate over its body member to find methods
        for body_item in stmt.body:
            # Not a method, skip
            if not isinstance(body_item, ast.FunctionDef):
                continue

    We're all set for checking the method, which is body_item. First, we need to check if it's already declared as static. If so, we don't have to do any further check and we can bail out.

    for stmt in ast.walk(self.tree):
        # Ignore non-class
        if not isinstance(stmt, ast.ClassDef):
            continue
        # If it's a class, iterate over its body member to find methods
        for body_item in stmt.body:
            # Not a method, skip
            if not isinstance(body_item, ast.FunctionDef):
                continue
            # Check that it has a decorator
            for decorator in body_item.decorator_list:
                if (isinstance(decorator, ast.Name)
                        and decorator.id == 'staticmethod'):
                    # It's a static function, it's OK
                    break
            else:
                # Function is not static, we do nothing for now
                pass

    Note that we use the special for/else form of Python, where the else is evaluated unless we used break to exit the for loop.
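As a tiny standalone illustration of that construct (is_static here is a hypothetical helper, not part of the plugin), the else branch runs only when the loop finishes without hitting break:

```python
def is_static(decorator_names):
    for name in decorator_names:
        if name == 'staticmethod':
            break  # found it: the else branch is skipped
    else:
        # Loop exhausted without break: no @staticmethod decorator
        return False
    return True
```

This is exactly the shape the checker uses: the "not static" handling lives under the else.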

    for stmt in ast.walk(self.tree):
        # Ignore non-class
        if not isinstance(stmt, ast.ClassDef):
            continue
        # If it's a class, iterate over its body member to find methods
        for body_item in stmt.body:
            # Not a method, skip
            if not isinstance(body_item, ast.FunctionDef):
                continue
            # Check that it has a decorator
            for decorator in body_item.decorator_list:
                if (isinstance(decorator, ast.Name)
                        and decorator.id == 'staticmethod'):
                    # It's a static function, it's OK
                    break
            else:
                try:
                    first_arg = body_item.args.args[0]
                except IndexError:
                    yield (
                        body_item.lineno,
                        body_item.col_offset,
                        "H905: method misses first argument",
                        "H905",
                    )
                    # Check next method
                    continue

    We finally added a check! We grab the first argument from the method signature. If that fails, we know there's a problem: you can't have a bound method without the self argument, therefore we yield the H905 code to signal a method that misses its first argument.

    Now you know why we registered this second pep8 code along with H904 in setup.cfg. We have here a good opportunity to kill two birds with one stone.

    The next step is to check if that first argument is used in the code of the method.

    for stmt in ast.walk(self.tree):
        # Ignore non-class
        if not isinstance(stmt, ast.ClassDef):
            continue
        # If it's a class, iterate over its body member to find methods
        for body_item in stmt.body:
            # Not a method, skip
            if not isinstance(body_item, ast.FunctionDef):
                continue
            # Check that it has a decorator
            for decorator in body_item.decorator_list:
                if (isinstance(decorator, ast.Name)
                        and decorator.id == 'staticmethod'):
                    # It's a static function, it's OK
                    break
            else:
                try:
                    first_arg = body_item.args.args[0]
                except IndexError:
                    yield (
                        body_item.lineno,
                        body_item.col_offset,
                        "H905: method misses first argument",
                        "H905",
                    )
                    # Check next method
                    continue
                for func_stmt in ast.walk(body_item):
                    if six.PY3:
                        if (isinstance(func_stmt, ast.Name)
                                and first_arg.arg == func_stmt.id):
                            # The first argument is used, it's OK
                            break
                    elif (func_stmt != first_arg
                            and isinstance(func_stmt, ast.Name)
                            and func_stmt.id == first_arg.id):
                        # The first argument is used, it's OK
                        break
                else:
                    yield (
                        body_item.lineno,
                        body_item.col_offset,
                        "H904: method should be declared static",
                        "H904",
                    )

    To that end, we iterate using ast.walk again and look for the use of the same variable name (usually self, but it could be anything, like cls for @classmethod) in the body of the function. If it is not found, we finally yield the H904 error code. Otherwise, we're good.


    I've submitted this patch to hacking, and, fingers crossed, it might be merged one day. If it's not, I'll create a new Python package with that check for flake8. The actual submitted code is a bit more complex, to take into account the use of the abc module, and includes some tests.

    As you may have noticed, the code walks over the module's AST definition several times. There might be a couple of optimizations to browse the AST in only one pass, but I'm not sure it's worth it considering the actual usage of the tool. I'll leave that as an exercise for the reader interested in contributing to OpenStack. 😉

    Happy hacking!

    A book I wrote talking about designing Python applications, state of the art, advice to apply when building your application, various Python tips, etc. Interested? Check it out.

    16 February, 2015 11:39AM by Julien Danjou

    Vincent Sanders

    To a child, often the box a toy came in is more appealing than the toy itself.

    I think Allen Klein might not have been referring to me when he said that but I do seem to like creating boxes for my toys.

    Lenovo laptop with ultrabay ejected
    My Lenovo laptop has an Ultrabay; these are a way to easily swap optical and hard drives. They allow me to carry around additional storage and, providing I remembered to pack the drive, access optical media.
    Over time I have acquired several additional hard drives housed in Ultrabay caddies. Generally I only need to access one at a time but increasingly I want to have more than one available.

    Lenovo used to sell docking stations with multiple Ultrabays but since Series 3 was introduced this is no longer the case as the docks have been reduced to port replicators.

    One solution is to buy a SATA to USB converter which lets you use the drive externally. However, once you have more than one drive this becomes somewhat untidy, not to mention all those unhoused drives on your desk become something of a hazard.

    Recently, after another close call, I decided what I needed was a proper external enclosure to house all my drives. After some extensive googling I found nothing suitable ready to buy. Most normal people would give up at this point; I appear to be an abnormal person, so I got the CAD package out.

    A few hours of design and a load of laser cutting later I came up with a four bay enclosure that now houses all my Ultrabay caddies.

    The design was slightly evolved to accommodate the features of some older caddies and to allow a pencil to be used to eject the drives (I put a square hole in the back).

    The completed unit uses about £10 of plastic and takes 30 minutes to lasercut.

    The only issue with the enclosure as manufactured is that Makespace ran out of black plastic stock and I had to use transparent to finish, so it is not in classic black as Lenovo intended.

    As usual all the design files are publicly available from my design repo.

    16 February, 2015 12:31AM by Vincent Sanders (noreply@blogger.com)

    February 15, 2015

    Antonio Terceiro

    rmail: reviving upstream maintenance

    It is always fun to write new stuff, and be able to show off that shiny new piece of code that just came out of your brilliance and/or restless effort. But the world does not spin based just on shiny things; for free software to continue making the world work, we also need the dusty, and maybe a little rusty, things that keep our systems together. Someone needs to make sure the rust does not take over, and that these venerable but useful pieces of code keep it together as the ecosystem around them evolves. As you know, Someone is probably the busiest person there is, so often you will have to take Someone’s job for yourself.

    rmail is a Ruby library able to parse, modify, and generate MIME mail messages. While handling transitions of Ruby interpreters in Debian, it was one of the packages we always had to fix for new Ruby versions, to the point where the Debian package has accumulated quite a few patches. The situation became ridiculous.

    Dropping it from the Debian archive was considered, but that would mean either also dropping feed2imap and sup, or porting both to another mail library.

    Since doing this type of port is always painful, I decided instead to do something about the sorry state in which rmail was on the upstream side.

    The reasons why it was not properly maintained upstream do not matter: people lose interest, move on to other projects, are not active users anymore; that is normal in free software projects, and instead of blaming upstream maintainers in any way we need to thank them for writing us free software in the first place, and step up to fix the stuff we use.

    I got in touch with the people listed as owner for the package on rubygems.org, and got owner permission, which means I can now publish new versions myself.

    With that, I cloned the repository where the original author had imported the latest code uploaded to rubygems and had started to receive contributions, but that repository had been inactive for more than a year. It had already received some contributions from the sup developers which never made it into a new rmail release, so the sup people started using their own fork called “rmail-sup”.

    In my own repository, I imported all the patches from the Debian repository that still made sense, did a bunch of updates, mainly to modernize the build system, and made a 1.1.0 release to rubygems.org. This release is pretty much compatible with 1.0.0, but since I did not test it with Ruby versions older than the one on my work laptop (2.1.5), I bumped the minor version number as a warning to prospective users still on older Ruby versions.

    In this release, the test suite passes 100% clean, which always gives my mind a lot of comfort:

    $ rake
    /usr/bin/ruby2.1 -I"lib:." -I"/usr/lib/ruby/vendor_ruby" "/usr/lib/ruby/vendor_ruby/rake/rake_test_loader.rb" "test/test*.rb"
    Loaded suite /usr/lib/ruby/vendor_ruby/rake/rake_test_loader
    Finished in 2.096916712 seconds.
    166 tests, 24213 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications
    100% passed
    79.16 tests/s, 11546.95 assertions/s

    And in the new release I have just uploaded to the Debian experimental suite (1.1.0-1), I was able to drop all of the patches and just use the upstream source as is.

    So that’s it: if you use rmail for anything, consider testing version 1.1.0-1 from Debian experimental, or 1.1.0 from rubygems.org if you’re into that, and report any bugs to the GitHub repository (https://github.com/terceiro/rmail). My only commitment for now is to keep it working, but if you want to add new features I will definitely review and merge them.

    15 February, 2015 03:37PM

    Jingjie Jiang

    Less is More

    Things haven’t gone well recently.

    I have been working on refactoring stuff for the past two or three weeks. At first, I found it much more exciting and interesting than merely fixing minor bugs. So I worked on most parts of the codebase, and made lots of changes.

    It took me much more time and energy to do all this stuff. But sadly, the extra energy resulted in a more broken codebase. It seems all these efforts were a waste, since the code can no longer be merged. It was very frustrating.

    Get back on track

    Well, I guess such trials and failures are just inevitable on the way to becoming an experienced developer. Despite all the divergences and dismay I have gone through, my internship must get back on track.

    I am now more realistic. My first and foremost task is to GET THINGS DONE by focusing on small and doable changes. Although the improvement seems LESS, it means MORE to have a not-so-perfect finished product (in my view) rather than a to-be-perfect mess.

    Journey continues.

    15 February, 2015 02:13PM by sophiejjj


    Sergio Talens-Oliag


    I haven't blogged for a long time, but I've decided that I'm going to try to write again, at least about technical stuff.

    My plan was to blog about the projects I've been working on lately, the main one being the setup of the latest version of Kolab with the systems we already have at work, but I'll do that on the next days.

    Today I'm just going to make a list of the tools I use on a daily basis and my plans to start using additional ones in the near future.

    Shells, Terminals and Text Editors

    I do almost all my work on Z Shell sessions running inside tmux; for terminal emulation I use gnome-terminal on X, VX ConnectBot on Android systems and iTerm2 on Mac OS X.

    For text editing I've been using Vim for a long time (even on Mobile devices) and while I'm aware I don't know half of the things it can do, what I know is good enough for my day to day needs.

    In the past I also used Emacs as a programming editor and my main tool to write HTML, SGML and XML, but since I haven't really needed an IDE for a long time and I mainly use Lightweight Markup Languages, I no longer use it (I briefly tried Org mode, but for some reason I ended up leaving it).

    Documentation formats and tools

    I've long been an advocate of Lightweight Markup Languages; I started using LaTeX and Lout, then moved to SGML/XML formats (LinuxDoc and DocBook) and finally to plain text based formats.

    I started using Wiki formats (parsewiki) and soon moved to reStructuredText; I also use other markup languages like Markdown (for this blog, aka ikiwiki) and tried MultiMarkdown to replace reStructuredText for general use, but as I never liked Markdown syntax I didn't like an extended version of it either.

    While I've been using reStructuredText for a long time, I recently found Asciidoctor and the AsciiDoc format, and I guess I'll be using it instead of rst whenever I can (I still need to try the slide backends and conversions to ODT, but if that works I guess I'll write all my new documents using AsciiDoc).

    Programming languages

    I'm not a developer, but I read and patch a lot of free software code written on a lot of different programming languages (I wouldn't be able to write whole programs on most of them, but thanks to Stack Overflow I'm usually able to fix what I need).

    Anyway, I'm able to program in some languages; I write a lot of shell scripts and I go for Python and C when I need something more complicated.

    In the near future I plan to read about JavaScript programming and Node.js (I'll probably need them at work), and I've already started looking at Haskell (I guess it was time to learn about functional programming, and after reading about it, Haskell looks like the way to go for me).

    Version Control

    For a long time I've been a Subversion user, at least for my own projects, but it seems that everything has moved to git now, and I've finally started to use it (I even opened a GitHub account). I plan to move all my personal Subversion repositories at home and at work to git, including moving all my Debian packages from svn-buildpackage to git-buildpackage.

    Further Reading

    With the previous plans in mind, I've started reading a couple of interesting books:

    Now I just need to get enough time to finish reading them ... ;)

    15 February, 2015 09:45AM


    Clint Adams

    Now with Stripe and Twitter too

    I just had the best Valentine's Day ever.

    15 February, 2015 05:17AM


    Matthew Palmer

    The Vicious Circle of Documentation

    Ever worked at a company (or on a codebase, or whatever) where it seemed like, no matter what the question was, the answer was written down somewhere you could easily find it? Most people haven’t, sadly, but they do exist, and I can assure you that it is an absolute pleasure.

    On the other hand, practically everyone has experienced completely undocumented systems and processes, where knowledge is shared by word-of-mouth, or lost every time someone quits.

    Why are there so many more undocumented systems than documented ones out there, and how can we cause more well-documented systems to exist? The answer isn’t “people are lazy”, and the solution is simple – though not easy.

    Why Johnny Doesn’t Read

    When someone needs to know something, they might go look for some documentation, or they might ask someone else or just guess wildly. The behaviour “look for documentation” is often reinforced negatively, by the result “documentation doesn’t exist”.

    At the same time, the behaviours “ask someone” and “guess wildly” are positively reinforced, by the results “I get my question answered” and/or “at least I can get on with my work”. Over time, people optimise their behaviour by skipping the “look for documentation” step, and just go straight to asking other people (or guessing wildly).

    Why Johnny Doesn’t Write

    When someone writes documentation, they’re hoping that people will read it and not have to ask them questions in order to be productive and do the right thing. Hence, the behaviour “write documentation” is negatively reinforced by the results “I still get asked questions”, and “nobody does things the right way around here, dammit!”

    Worse, though, is that there is very little positive reinforcement for the author: when someone does read the docs, and thus doesn’t ask a question, the author almost certainly doesn’t know they dodged a bullet. Similarly, when someone does things the right way, it’s unlikely that anyone will notice. It’s only the mistakes that catch the attention.

    Given that the experience of writing documentation tends to skew towards the negative, it’s not surprising that eventually, the time spent writing documentation is reallocated to other, more utility-producing activities.

    Death Spiral

    The combination of these two situations is self-reinforcing. While a suitably motivated reader might start by strictly looking for documentation, or an author might initially be enthusiastic enough to always fully document their work, over time the “reflex” will be for readers to just go ask someone, because “there’s never any documentation!”, and for authors to not write documentation because “nobody bothers to read what I write anyway!”.

    It is important to recognise that this iterative feedback loop is the “natural state” of the reader/author ecosystem, resulting in something akin to thermodynamic entropy. To avoid the system descending into chaos, energy needs to be constantly applied to keep the system in order.

    The Solution

    Effective methods for avoiding the vicious circle can be derived from the things that cause it. Change the forces that apply themselves to readers and authors, and they will behave differently.

    On the reader’s side, the most effective way to encourage people to read documentation is for it to consistently exist. This means that those in control of a project or system mustn’t consider something “done” until the documentation is in a good state. Patches shouldn’t be landed, and releases shouldn’t be made, unless the documentation is altered to match the functional changes being made. Yes, this requires discipline, which is just a form of energy application to prevent entropic decay.

    Writing documentation should be an explicit and well-understood part of somebody’s job description. Whoever is responsible for documentation needs to be given the time to do it properly. Writing well takes time and mental energy, and that time needs to be factored into the plans. Never forget that skimping on documentation, like short-changing QA or customer support, is a false economy that will cost more in the long term than it saves in the short term.

    Even if the documentation exists, though, some people are going to tend towards asking people rather than consulting the documentation. This isn’t a moral failing on their part, but only happens when they believe that asking someone is more beneficial to them than going to the documentation. To change the behaviour, you need to change the belief.

    You could change the belief by increasing the “cost” of asking. You could fire (or hellban) anyone who ever asks a question that is answered in the documentation. But you shouldn’t. You could yell “RTFM!” at everyone who asks a question. Thankfully that’s one acronym that’s falling out of favour.

    Alternately, you can reduce the “cost” of getting the answer from the documentation. Possibly the largest single productivity boost for programmers, for example, has been the existence of Google. Whatever your problem, there’s a pretty good chance that a search or two will find a solution. For your private documentation, you probably don’t have the power of Google available, but decent full-text search systems are available. Use them.
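    As a hedged illustration of how little is needed to get decent full-text search over private documentation, here is a minimal sketch using SQLite's built-in FTS5 extension (available in most stock builds of the sqlite3 module); the table layout and page titles are made up for the example:

    ```python
    import sqlite3

    # In-memory index for the example; a real setup would use a file
    # and re-index pages when they change.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
    conn.executemany(
        "INSERT INTO docs VALUES (?, ?)",
        [
            ("Deploy guide", "How to deploy the app to staging and production"),
            ("On-call handbook", "Paging, escalation and incident response"),
        ],
    )

    # MATCH does tokenized, case-insensitive search; ORDER BY rank
    # sorts by FTS5's built-in bm25 relevance score.
    for (title,) in conn.execute(
        "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank", ("deploy",)
    ):
        print(title)
    ```

    Even a throwaway indexer like this beats grepping a wiki dump by hand, and dedicated engines (Elasticsearch, Xapian, and friends) scale the same idea up.
    
    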

    Finally, authors would benefit from more positive reinforcement. If you find good documentation, let the author know! It requires a lot of effort (comparatively) to look up an author’s contact details and send them a nice e-mail. The “like” button is a more low-energy way of achieving a similar outcome – you click the button, and the author gets a warm, fuzzy feeling. If your internal documentation system doesn’t have some way to “close the loop” and let readers easily give authors a bit of kudos, fix it so it does.

    Heck, even if authors just know that a page they wrote was loaded N times in the past week, that’s better than the current situation, in which deafening silence persists, punctuated by the occasional plaintive cry of “Hey, do you know how to…?”.

    Do you have any other ideas for how to encourage readers to read, and for authors to write?

    15 February, 2015 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

    February 14, 2015

    John Goerzen

    Willis Goerzen – a good reason to live in Kansas

    From time to time, people ask me, with a bit of a disbelieving look on their face, “Tell me again why you chose to move to Kansas?” I can explain something about how people really care about their neighbors out here, how connections through time to a place are strong, how the people are hard-working, achieve great things, and would rather not talk about their achievements too much. But none of this really conveys it.

    This week, as I got word that my great uncle Willis Goerzen passed away, it occurred to me that the reason I live in Kansas is simple: people like Willis.

    Willis was a man that, through and through, simply cared. For everyone. He had hugs ready anytime. When I used to see him in church every Sunday, I’d usually hear his loud voice saying, “Well John!” Then a hug, then, “How are you doing?” When I was going through a tough time in life, hugs from Willis and Thelma were deeply meaningful. I could see how deeply he cared in his moist eyes, the way he sought me out to offer words of comfort, reassurance, compassion, and strength.

    Willis didn’t just defy the stereotypes on men having to hide their emotions; he also did so by being just gut-honest. Americans often ask, in sort of a greeting, “How are you?” and usually get an answer like “fine”. If I asked Willis “How are you?”, I might hear “great!” or “it’s hard” or “pretty terrible.” In a place where old-fashioned stoicism is still so common, this was so refreshing. Willis and I could have deep, heart-to-heart conversations or friendly ones.

    Willis also loved to work. He worked on a farm, in construction, and then for many years doing plumbing and heating work. When he retired, he just kept on doing it. Not for the money, but because he wanted to. I remember calling him up one time about 10 years ago, asking if he was interested in helping me with a heating project. His response: “I’ll hitch up the horses and be right there!” (Of course, he had no horses anymore.) When I had a project to renovate what had been my grandpa’s farmhouse (that was Willis’s brother), he did all the plumbing work. He told me, “John, it’s great to be retired. I can still do what I love to do, but since I’m so cheap, I don’t have to be fast. My old knees can move at their own speed.” He did everything so precisely, built it so sturdy, that I used to joke that if a tornado struck the house, the house would be a pile of rubble but the ductwork would still be fine.

    One of his biggest frustrations about ill health was being unable to work, and in fact he had a project going before cancer started to get the best of him. He was quite distraught that, for the first time in his life, he didn’t properly finish a job.

    Willis installed a three-zone system (using automated dampers to send heat or cool from a single furnace/AC into only the parts of the house where it was needed) for me. He had never done that before. The night Willis and his friend Bob came over to finish the setup was one to remember. The two guys, both in their 70s, were figuring it all out, and their excitement was catching. By the time the evening was over, I certainly was more excited about thermostats than I ever had been in my life.

    I heard a story about him once – he was removing some sort of noxious substance from someone’s house. I forget what it was — whatever it was, it had pretty bad long-term health effects. His comment: “Look, I’m old. It’s not going to be this that does me in.” And he was right.

    In his last few years, Willis started up a project that only Willis would dream up. He invited people to bring him all their old and broken down appliances and metal junk – air conditioners, dehumidifiers, you name it. He carefully took them apart, stripped them down, and took the metals into a metal salvage yard. He then donated all the money he got to a charity that helped the poor, and it was nearly $5000.

    Willis had a sense of humor about him that he somehow deployed at those perfect moments when you least expected it. Back in 2006, before I had moved into the house that had been grandpa’s, there was a fire there. I lost two barns (one was the big old red one with lots of character) and a chicken house. When I got out there to see what had happened, Willis was already there. It was quite the disappointment for me. Willis asked me if grandpa’s old manure spreader was still in the chicken house. (Cattle manure is sometimes used as a fertilizer.) This old manure spreader was horse-drawn. I told him it was, and so it had burned up. So Willis put his arm around me, and said, “John, do you know what we always used to call a manure spreader?” “Nope.” “Shit-slinger!” That was so surprising I couldn’t help but break out laughing. Willis was the only person that got me to laugh that day.

    In his last few years, Willis battled several health ailments. When he was in a nursing home for a while due to complications from knee surgery, I’d drop by to visit. And lately as he was declining, I tried to drop in at his house to visit with Willis and Thelma as much as possible. Willis was always so appreciative of those visits. He always tried to get in a hug if he could, even if Thelma and I had to hold on to him when he stood up. He would say sometimes, “John, you are so good to come here and visit with me.” And he’d add, “I love you.” As did I.

    Sometimes when Willis was feeling down about not being able to work more, or not finishing a project, I told him how he was an inspiration to me, and to many others. And I reminded him that I visited with him because I wanted to, and being able to do that meant as much to me as it did to him. I’m not sure if he ever could quite believe how deeply true that was, because his humble nature was a part of who he was.

    My last visit earlier this week was mostly with Thelma. Willis was not able to be very alert, but I held his hand and made sure, that time, to tell him that I love and care for him. I’m not sure if he was able to hear, but I am sure that he didn’t need to. Willis left behind a community of hundreds of people that love him and had their lives touched by his kind and inspirational presence.

    14 February, 2015 10:55PM by John Goerzen