December 01, 2015

Thorsten Alteholz

My Debian Activities in November 2015

FTP assistant

This month I marked 352 packages for accept and rejected 61 packages. I had to send only 15 emails to maintainers.

I also started to work on #796095 and #796784, but my first patch was rejected. So expect more to come here …

Squeeze LTS

This was my seventeenth month of work for the Squeeze LTS initiative, started by Raphael Hertzog at Freexian.

Due to Toshiba becoming the first platinum sponsor, I got a workload of 21.25h. This is a new and delightful record! Altogether I uploaded the following DLAs:

  • [DLA 341-1] php5 security update
  • [DLA 343-1] libpng security update
  • [DLA 355-1] libxml2 security update
  • [DLA 356-1] libsndfile security update

I also started to work on two bugs that were filed against the pseudo-package, which are somehow related to the security team: #796095 and #796784 (see above). Moreover I started to work on the next php5 upload, which will happen at the end of December.

As more and more people work at the LTS frontdesk now, this month I could chill out a bit and let the others do the work.

Other stuff

As the Advent season has started again, I would also like to draw some attention to the Debian Med Advent Calendar. It was announced here and, as in past years, the Debian Med team is running a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug will be closed. Don’t hesitate, start to squash :-) .

01 December, 2015 04:32PM by alteholz

Scott Kitterman

Debian LTS Work November 2015

This was my seventh month as a Freexian sponsored LTS contributor. I was assigned 8 hours for the month of November.

As I did last month, I worked on review and testing of the proposed MySQL 5.5 packages for squeeze-lts and did a bit more work on Quassel.  It has been suggested that maybe we ought to just EOL Quassel since backporting the necessary fixes is so complicated.  I think they may be right, but I haven’t quite given up yet.

I reviewed CVE-2015-6360 for SRTP and my assessment was that squeeze-lts was not affected (same for the other Debian releases while I was at it).

I published one security update; it was for libphp-snoopy. This resolves the outstanding security issues by updating to the newest version, as was done for all other Debian releases.

Finally, in the interest of getting better support in tools for Debian LTS, I came up with a patch for the pull-debian-source[1] script in ubuntu-dev-tools so that it will download Debian LTS packages correctly.  Although it took a bit of investigating, the patch turned out to be very simple.  I filed bug #806749.  I also started looking at the distro-info package (thinking I’d need it updated to fix pull-debian-source, which turned out not to be the case), but haven’t finished it yet.  I plan to work on that this month.

[1] Even though this is in ubuntu-dev-tools and not devscripts, there’s really nothing Ubuntu specific about it.

01 December, 2015 01:54PM by skitterman


Laura Arjona

Software Freedom Conservancy supporter

I think it’s important that organizations like the Software Freedom Conservancy exist.
They provide a non-profit home, infrastructure, and advice for FLOSS projects. They make sure that the will of the project members, who chose free software licenses, is respected by third parties. They take care of “all the rest” so free software contributors can focus on improving the software itself. They have an agreement with the Debian community to protect the freedoms that Debian Developers provide to the Debian end users (and derivative distributions).

So I decided to join as a supporter. I’m happy that this week there is a matching fund, so my donation will count double.

I hope that many others join too, so that the organization’s voice and action stay loyal to its goals and representative of all the projects (whether big or small) under its umbrella. This way (small donations from many individuals as the funding model), no single actor or small group can use big money as pressure to deviate or block the Conservancy’s action.

We free software/free knowledge contributors know very well the power that many micro-actions can provide when coordinated towards the common good, don’t we?

Filed under: My experiences and opinion Tagged: Communities, Contributing to libre software, Debian, Economic Aspects, English, Free Software, licenses, Project Management

01 December, 2015 01:47PM by larjona

Enrico Zini


When Akonadi silently fails to sync your calendar...

Bug severity: seriously ruining my life.

Try to use korganizer to create a calendar entry when the server is not reachable (say, you are offline, or you typed the wrong password), and you may find that you end up with no error messages, an entry that shows up perfectly fine, but that will never be synced to the server, ever again.

I use korganizer, radicale and caldav for important things. The practical ramifications of me inserting entries in korganizer, seeing that everything looks ok, and then not finding them on my phone while on the go, are scary.

Think of things like importing .ics files with flight schedules, entering tax deadlines, time and places for customer meetings, time and places of arrival of loved ones I'm supposed to pick up.

I spent time setting up my own infrastructure for this exactly because I care that all of this works reliably.

And now I urgently took a morning off work to find a way to detect those entries that Akonadi is refusing to update; the result is the script linked below.

The whole thing is cumbersome to run, but if you are using kdepim-based tools to manage your calendars and sync them across devices, you may want to give it a go every once in a while.

You can find the script and the notes I took so far on the issue at

01 December, 2015 01:01PM

Vincent Sanders

HTTP to screen

I recently presented a talk at the Debian miniconf in Cambridge. This was a new talk explaining what goes on in a web browser to get a web page on screen.

The presentation was filmed and my slides are also available. I think it went over pretty well despite the venue's lighting adding a strobe ambiance to part of the proceedings.

I thought the conference was a great success overall and enjoyed participating. I should like to thank Cosworth for allowing me time to attend and for providing some sponsorship.

01 December, 2015 09:39AM by Vincent Sanders


Raphaël Hertzog

My Free Software Activities in November 2015

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 21.25 hours on Debian LTS. During this time I worked on the following things:

  • From November 2nd to November 8th, I was handling the LTS frontdesk, triaging new CVEs, filing bugs, and ensuring timely answers on the mailing list. I pushed 26 commits to the security tracker. While investigating CVE-2015-7183 I discovered more embedded copies of nspr (which resulted in #804058). I also commented on the upstream fix for CVE-2015-5602, which looked insufficient.
  • Prepared and released DLA-339-1 on libhtml-scrubber-perl fixing one CVE.
  • Prepared and released DLA-350-1 on eglibc with a non-trivial backport fixing one CVE.
  • Prepared and released DLA-353-1 on imagemagick fixing two security issues that have no CVE identifiers yet (and marking one as not affecting squeeze).
  • Added a third patch after review by the upstream author on my still pending bouncycastle update. The upstream author asked me to further defer the update as they have some related fixes coming up.
  • I did preparatory work for DLA-352-1 by identifying the upstream commits that fixed the security issue.
  • I spent some time checking issues that have been assigned for a long time without any visible progress being made, in the hope of unblocking them (libvncserver, pound, quassel).

The Debian Administrator’s Handbook

Now that the English version has been finalized for Debian 8 Jessie (I uploaded the package to Debian Unstable), I concentrated my efforts on the French version. The book has been fully translated and we’re now finalizing the print version that Eyrolles will again publish.

Paris Open Source Summit

On November 18th and 19th, I was in Paris for the Paris Open Source Summit. I helped to staff a booth for Debian France for two days (with the help of François and several others).

François Vuillemin, Juliette Belin and Raphaël Hertzog

At the booth, we had a visit from Juliette Belin, who created the theme and the artwork of Debian 8 Jessie. We lacked goodies, but we organized a lottery to win 12 copies of my French book.

Debian packaging work

Django. After two weeks of preparation of the reverse dependencies, I uploaded Django 1.8 to unstable and raised the severity of the remaining bugs. Later I uploaded a new upstream point release (1.8.6). I also handled a release critical bug, first by opening a ticket upstream and then by writing a patch and submitting it upstream. I uploaded 1.8.7-2 to Debian with my patch.

I also submitted another small fix, which was rejected because the manual page is generated via Sphinx, so I had to file a bug against Sphinx (which I did). A workaround has been found in the meantime.

apt-xapian-index NMU. A long time ago, I filed a release critical bug against that package (#793681) but the maintainer did not handle it. Fortunately Sven Joachim prepared an NMU and I just uploaded his work. This resulted in another problem due to bash-completion changes, which Sven promptly fixed, and I uploaded a second NMU a few days later.

Gnome-shell-timer. I forwarded #805347 to gnome-shell-timer issue #29, but gnome-shell-timer is abandoned upstream. At the suggestion of Paul Wise, I tried to get this nice extension integrated into gnome-shell-extensions, but the request was turned down. Is there anyone with JavaScript skills who would like to adopt this project as an upstream developer? It’s a low maintenance project with a decent and loyal user base.

Misc. I fixed bug #804763 in zim, which was the result of a bad Debian-specific patch.
I sponsored pylint-plugin-utils_0.2.3-2.dsc for Joseph Herlant to fix a release critical bug. I filed #806237 against lintian. I filed more tickets upstream, related to my Kali packaging work: one against sddm and one against john.

Other Debian-related work

Distro-Tracker. I finally merged the work of Orestis Ioannou on bug #756766, which added the possibility to browse the old news of each package.

Debian Installer. I implemented two small features that we wanted in Kali: I fixed #647405 to have a way to disable “deb-src” lines in generated sources.list files. I also filed #805291 to see how to allow kernel command line preseeding to override initrd preseeding… the fix is trivial and it works in Kali. I just have to commit it in Debian; I was hoping to get an ack from someone in charge before doing so.


See you next month for a new summary of my activities.


01 December, 2015 09:00AM by Raphaël Hertzog


Michal Čihař

Time for change

It has been seven years since I joined SUSE (for the second time, but that's a different story). As everything has to come to an end, I've decided to make a change in my life, leave the safety net of being employed, and go for the new experience of freelance life.

This will give me more time to spend on free software projects where I'm involved. Of course I need to earn some money to live, so many decisions about where to spend my time will be backed by money...

First of all I will work on phpMyAdmin, where I was chosen as a contractor (one of two for this year). This will be a half-time job for me, and you will see weekly reports on my blog, similar to what Madhura is doing.

The second priority will be Weblate, especially the hosting solution. I believe this is something that can work quite well in the long term, but the tool needs some development to make it as great as I would like it to be. If you want me to extend the hosting for free software projects, you can make that happen with money :-).

And nobody knows which projects come next. There is some work to be done on Gammu and Wammu, but given that I don't have any recent device to use them with, it's sometimes hard to fix bugs there. Of course this can change if I get some money to work on that.

PS: It's not that SUSE is a bad place to work. It's actually pretty great if you're looking to work with free software. You work there on free software, with great people, and you get quite a lot of freedom. As a bonus, once or twice a year there is Hackweek, which you can spend on anything. And of course they have a lot of open positions :-).

Filed under: English Gammu phpMyAdmin SUSE Weblate | 0 comments

01 December, 2015 08:09AM by Michal Čihař


Junichi Uekawa

Already December.

Already December. 2015 is going to close.

01 December, 2015 01:15AM by Junichi Uekawa

John Goerzen

Where does a person have online discussions anymore?

Back in the day, way back in the day perhaps, there were interesting places to hang out online. FidoNet provided some discussion groups — some local, some more national or international. Then there was Usenet, with the same but on a more grand scale.

There were things I liked about both of them.

They fostered long-form, and long-term, discussion. Replies could be thoughtful, and a person could think about it for a day before replying.

Socially, you would actually get to know the people in the communities you participated in. There would be regulars, and on FidoNet at least, you might bump into them in different groups or even in real life. There was a sense of community. Moreover, there was a slight barrier to entry and that was, perhaps, a good thing; there were quite a lot of really interesting people and not so many people that just wanted answers to homework questions.

Technologically, you got to bring your own client. They were also decentralized, without any one single point of failure, and could be downloaded and used offline. You needed very little in terms of Internet connection.

They both had some downsides; Usenet, in particular, often lacked effective moderation. Not everyone wrote thoughtful posts.

Is there anything like it these days? I’ve sometimes heard people suggest Reddit. It shares some of those aspects, and even has some clients capable of offline operation. However, what it doesn’t really have is long-form discussion. I often find that if I am 6 hours late to a thread, nobody will bother to read my reply because it’s off their radar already. This happens so often that I rarely bother to participate anymore; I am not going to sit at reddit hitting refresh all day long.

There are a few web forums, but they suffer from all sorts of problems: no cohesive community, the “hot topic” vanishing issue of Reddit, the single point of failure, etc.

For a while, Google+ looked like it might head this way. But I don’t think it really has. I still feel as if there is a vacuum out there.

Any thoughts?

01 December, 2015 12:22AM by John Goerzen


Simon Richter

Debian at the 32C3

In case you are going to 32C3, you may be interested in joining us at the Debian Assembly there.

As in previous years, this is going to be fairly informal, so if you are loosely affiliated with Debian or want to become a member by the time 34C3 rolls around, you are more than welcome to show up and sit there.

01 December, 2015 12:18AM

November 30, 2015

Lunar


Reproducible builds: week 31 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

Reiner Herrmann submitted a patch against debhelper to make dh_installinit source files in a stable order.

Chris Lamb found how to make cython output deterministic by ordering the keys used to traverse a dict.

Reiner Herrmann proposed a patch for pyside-tools to remove the timestamps embedded by rcc in the generated Python code.

Mattia Rizzolo rebased our custom version of debhelper on version 9.20151126.

As no objections have been made so far, Mattia Rizzolo has filed #805872 asking for -Wdate-time to be turned on by default in dpkg-buildflags. Guillem has since sent a final warning before proceeding as such in the next dpkg upload.
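
For context, -Wdate-time makes the compiler warn whenever the __DATE__, __TIME__ or __TIMESTAMP__ macros are expanded, since they embed the build time into the resulting object. A quick way to see it in action (the file name below is just an example, and the exact warning wording may differ between compiler versions):

printf '#include <stdio.h>\nint main(void) { puts(__DATE__); return 0; }\n' > datetest.c
gcc -Wdate-time -c datetest.c   # warns that __DATE__ might prevent reproducible builds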

Russ Allbery added support for SOURCE_DATE_EPOCH in podlators 4.00, which Niko Tyni intends to backport to Perl 5.22.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: fontforge, golang-github-tinylib-msgp, libpango-perl, libparanamer-java, libxaw, sqljet, stringtemplate4, uzbl, zope-mysqlda.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

  • tiger/1:3.2.3-13 uploaded by Javier Fernández-Sanguino Peña, original patch by Daniel Kahn Gillmor.

Patches submitted which have not made their way to the archive yet:

  • #805773 on keylaunch by Chris Lamb: removes an automatically-updated copyright date from the build system.
  • #805787 on deja-dup by Reiner Herrmann: use system help2man instead of embedded copy.
  • #806321 on coreutils by Lunar: use system help2man instead of embedded copy.
  • #806434 on mboxcheck by Reiner Herrmann: set the embedded date to the last date from the changelog.
  • #806452 on lierolibre by Reiner Herrmann: set LC_ALL instead of LANG to ensure how dd output looks like.
  • #806490 on binutils by Reiner Herrmann: filter user and date from test output.
  • #806517 on onboard by Reiner Herrmann: sort the list of items parsed from pkg-config.
  • #806547 on netsniff-ng by Reiner Herrmann: set LC_ALL=C when enumerating files to link.
  • #806551 on libosmocore by Reiner Herrmann: use C locale and UTC when formatting the changelog date.
  • #806552 on remake by Reiner Herrmann: set the mtime of the texinfo source to the latest debian/changelog entry.
  • #806564 on libdigidoc by Reiner Herrmann: add support for SOURCE_DATE_EPOCH in VersionInfo.cmake.

Lunar reported two issues making xz-utils unreproducible (#806328, #806331).

A seventh armhf build node has been added (resulting in two more armhf build jobs). Thanks to Vagrant Cascadian for contributing this Raspberry Pi 2B. (h01ger) The continuous test setup has been made more robust against network and proxy failures. (h01ger)

A new 100 GB partition has been set up to prevent disk space issues. Thanks to ProfitBricks for their continuous support of our continuous test system. (h01ger)

New graphs showing usertagged bugs have been added to the dashboard to measure the progress excluding FTBFS issues. Please note that comparing the two graphs might be misleading, as more than 1300 FTBFS bugs have been inventoried. (h01ger)

Package reviews

78 reviews have been removed, 116 added and 49 updated this week.

25 new FTBFS bugs have been filed by Chris West, Chris Lamb and Santiago Vila.

New issues identified this week: timestamps_in_documentation_generated_with_libwibble, copyright_year_in_documentation_generated_by_sphinx, timestamps_in_documentation_generated_by_glib_genpod, random_order_of_tmpfiles_in_postinst, random_order_in_cython_output, timestamps_in_python_code_generated_by_pyside.

Reiner Herrmann and Lunar improved the prebuilder script: it can now be called through a symlink and run parallel builds; it calls diffoscope by its new name, makes sure its recommends are installed, and saves the text output alongside the HTML output.

Reiner also added a script to look up the last update of the notes for a given package.


Santiago Vila has recently been working on making sure that Arch:all packages are properly buildable by running dpkg-buildpackage -A. This uncovered a question that is probably not currently addressed by policy: on which architectures should architecture-independent packages be buildable?

30 November, 2015 10:57PM


Neil Williams

bashrc-git snippets

Just in case someone else finds these useful, some bash functions I’ve got into the habit of having in ~/.bashrc:

mcd(){ mkdir "$1"; cd "$1"; }    # make a directory and change into it

gum(){ git checkout "$1" && git rebase master && git checkout master; }    # rebase a feature branch onto master, then switch back to master

gsb() { LIST=`git branch|egrep -v '(release|staging|trusty|playground|stale)'|tr '\n' ' '|tr -d '*'`; git show-branch $LIST; }    # show-branch for all branches except the excluded ones

gleaf(){ git branch --merged master | egrep -v '(release|staging|trusty|playground|pipeline|review|stale)'; }    # list branches already merged into master

mcd is the oldest one and the simplest. The others are just useful git management shortcuts. I can use gum to bring a feature branch back to master and gsb to show me which branches need to be rebased on master, typically after a pull. The list of excluded branches includes branches which should not be rebased against master (I could do some processing of git branch -r to not have those hardcoded) but the odd one is stale. Sometimes, I get an idea for a feature which is too intrusive, too messy or just too incomplete to be rebased against master. Rather than losing the idea or wasting time rebasing, I’m getting into the habit of renaming the branch foo as stale-foo and gsb then leaves it alone. Equally, there are frequently times when I need to have a feature branch based on another feature branch, sometimes several feature branches deep. Identifying these branches and avoiding rebasing on the wrong branch is important to not waste time.
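
For instance, parking an idea so that gsb ignores it is just a rename away (a minimal sketch; foo is a placeholder branch name):

git branch -m foo stale-foo     # park the branch; the stale- prefix keeps it out of gsb/gleaf
git branch -m stale-foo foo     # pick the idea up again later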

gsb takes a bit of getting used to, but basically the shorter and cleaner the output, the less work needs to be done. As shown, gsb is git show-branch under the hood. What I’m looking for is multiple commits listed between a branch and master. Then I know which branches to use with gum. …

Finally, gleaf shows which feature branches can be dropped with git branch -d.

30 November, 2015 10:49PM by Neil Williams


Chris Lamb

Free software activities in November 2015

Here is my monthly update covering a large part of what I have been doing in the free software world (previously):


  • Presented at MiniDebConf Cambridge 2015 on the current status of Debian's Reproducible Builds effort.
  • Contributed initial Debian support to Red Hat Product Security's repository of certificates shipped by various vendors and Open Source Projects. (#1)
  • Wrote a proof-of-concept version of Guix's challenge command to determine if an installed binary package is reproducible or not. (code)
  • Started initial work on a b2evolution package.
  • Arranged logistics for the Reproducible Builds summit in Athens.

My work in the Reproducible Builds project was also covered in more depth in Lunar's weekly reports (#27, #28, #29, #30).


This month I have been paid to work 13 hours on Debian Long Term Support (LTS). In that time I did the following:

  • Issued DLA 349-1 for python-django, correcting a potential settings leak.
  • Issued DLA 351-1 for redmine fixing a data disclosure vulnerability.
  • Worked on multiple iterations of a fix for CVE-2011-5325 in busybox; it is not yet complete as it needs to additionally cover hardlinks.
  • Frontdesk duties.


  • redis — Addressing CVE-2015-8080, a buffer-overflow security issue.
  • python-django — Uploading the latest RC release to experimental.
  • strip-nondeterminism — Disabled stripping of Mono binaries, as it was too aggressive and prevented some package installs.
  • gunicorn — Correct Python interpreter path references in gunicorn3-debian.
  • python-redis — New upstream release.
  • ispell-lt — Making the build reproducible.

30 November, 2015 09:46PM


Pablo Lorenzzoni

Two tips to speed up APT

Sometimes you just want a bit more speed from APT downloads and cannot really change much about the client installation. Two simple tips can win you precious minutes:

Put the following line into one of the files under /etc/apt/apt.conf.d (I suggest creating /etc/apt/apt.conf.d/71parallel):

Acquire::Queue-Mode "host";

This makes APT's queue mode host-oriented rather than oriented by URL type. Depending on your sources, this speeds things up more than the default access mode.

The second tip is a hack I found a while ago on a blog, which pre-downloads the URLs that will be used by the APT operation into /var/cache/apt/archives using xargs:



(apt-get -y --print-uris $@ | egrep -o -e "http://[^\']+" | xargs -r -l${NBATCH} -P${NPARALLEL} wget -nv -P "/var/cache/apt/archives/") && apt-get $@

Adjust the NBATCH and NPARALLEL parameters and good luck.
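
A minimal sketch of how the hack above could be wrapped into a reusable shell function; the function name apt-prefetch and the default values for NBATCH and NPARALLEL are my own placeholders, not part of the original tip:

apt-prefetch() {
    local NBATCH=5 NPARALLEL=4   # tune to taste: URLs per wget call, parallel wget processes
    # Pre-download everything the requested apt-get operation would fetch,
    # then run the real apt-get, which finds the files already in the cache.
    (apt-get -y --print-uris "$@" \
        | egrep -o -e "http://[^\']+" \
        | xargs -r -l${NBATCH} -P${NPARALLEL} wget -nv -P /var/cache/apt/archives/) \
      && apt-get "$@"
}

# Example usage (as root): apt-prefetch install some-package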

30 November, 2015 09:41PM by spectra

Michael Vogt

APT 1.1 released

After 1.5 years of work we released APT 1.1 this week! I’m very excited about this milestone.

The new 1.1 has some nice new features, but it also improves a lot of stuff under the hood. With APT 1.0 we added a lot of UI improvements; this time the focus is on the reliability of the acquire system and the library.

Some of the UI highlights include:

  • apt install local-file.deb works
  • apt build-dep foo.dsc works
  • apt supports most of the common apt-get/apt-cache commands so you save some typing :)
  • apt update progress reporting much more accurate
  • apt-cache showsrc --only-source srcpkgname does the right thing
  • The --force-yes option is split into the more fine grained --allow-{downgrades, remove-essential, change-held} options
  • Documentation and help output improvements
  • apt-mark supports more states
  • Support for deb822-style sources.list.d files (see the example just after this list)
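
As an illustration of the last point, a deb822-style source entry might look roughly like this; the mirror URL, suite and file name are placeholders of mine, and such files go into /etc/apt/sources.list.d/ with a .sources extension:

# /etc/apt/sources.list.d/debian.sources (illustrative example)
Types: deb deb-src
URIs: http://httpredir.debian.org/debian
Suites: unstable
Components: main contrib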

Under the hood:

  • No more “guessing” when fetching files (we did this to support old repository formats); only download stuff that is listed in the {,In}Release file.
  • support for by-hash index downloads (once the servers support that, no more hashsum-mismatch errors because of proxies or transparent proxies)
  • we support downloading additional files that are opaque for apt itself (like apt-file or appstream data)
  • the acquire system is more atomic and more robust, no more issues with captive portals
  • protection against a class of endless-data attacks from hostile MITMs
  • disallow signed repositories from ever becoming unsigned
  • privilege dropping in the acquire methods
  • if {,In}Release did not change, do not bother checking the other indexes (a lot fewer hits on the mirrors for not-modified resources)
  • SRV record support
  • improved policy engine
  • key pinning for sources
  • deprecation of some library functions
  • support for IDN domains

What’s also very nice is that apt is now the exact same version on Ubuntu and Debian (no more delta between the packages)!

If you want to know more, there is a nice video of David Kalnischkies’ DebConf 2015 talk about apt. Julian Andres Klode also wrote about the new apt some weeks ago.

The (impressive) full changelog is available, and git has an even more detailed log if you are even more curious :)

Enjoy the new apt!

30 November, 2015 09:37PM by mvogt

Andrew Shadura

Support Software Freedom Conservancy

The Software Freedom Conservancy are desperately looking for financial support after one of their corporate supporters stopped their sponsorship. This week, there’s an anonymous pledge to match donations from new supporters.

Becoming an SFC supporter will help them fight for our software freedom. I have signed up for a monthly donation, and I suggest you do so too here.

30 November, 2015 06:26PM

Mark Brown

Unconscious biases

Matthew Garrett’s recent very good response to Eric Raymond’s recent post opposing inclusiveness efforts in free software reminded me of something I’ve been noticing more and more often: a very substantial proportion of the female developers I encounter working on the kernel are from non-European cultures where I (and, I expect, most people from western cultures) lack familiarity with the gender associations of all but the most common and familiar names. This could be happening for a lot of reasons: it could be better entry paths to kernel development in those cultures (though my experience visiting companies in the relevant countries makes me question that), or it could be that the sample sizes are so regrettably small that this really is just anecdote. But I worry that some of what’s going on is that the cultural differences happen to mask and address some of the unconscious barriers that get thrown up.

30 November, 2015 12:32PM by Mark Brown

Petter Reinholdtsen

The GNU General Public License is not magic pixie dust

A blog post from my fellow Debian developer Paul Wise titled "The GPL is not magic pixie dust" explains the importance of making sure the GPL is enforced. I quote the blog post from Paul in full here with his permission:

Become a Software Freedom Conservancy Supporter!

The GPL is not magic pixie dust. It does not work by itself.
The first step is to choose a copyleft license for your code.
The next step is, when someone fails to follow that copyleft license, it must be enforced
and its a simple fact of our modern society that such type of work
is incredibly expensive to do and incredibly difficult to do.

-- Bradley Kuhn, in FaiF episode 0x57

As the Debian Website used to imply, public domain and permissively licensed software can lead to the production of more proprietary software as people discover useful software, extend it and or incorporate it into their hardware or software products. Copyleft licenses such as the GNU GPL were created to close off this avenue to the production of proprietary software but such licenses are not enough. With the ongoing adoption of Free Software by individuals and groups, inevitably the community's expectations of license compliance are violated, usually out of ignorance of the way Free Software works, but not always. As Karen and Bradley explained in FaiF episode 0x57, copyleft is nothing if no-one is willing and able to stand up in court to protect it. The reality of today's world is that legal representation is expensive, difficult and time consuming. With in hiatus until some time in 2016, the Software Freedom Conservancy (a tax-exempt charity) is the major defender of the Linux project, Debian and other groups against GPL violations. In March the SFC supported a lawsuit by Christoph Hellwig against VMware for refusing to comply with the GPL in relation to their use of parts of the Linux kernel. Since then two of their sponsors pulled corporate funding and conferences blocked or cancelled their talks. As a result they have decided to rely less on corporate funding and more on the broad community of individuals who support Free Software and copyleft. So the SFC has launched a campaign to create a community of folks who stand up for copyleft and the GPL by supporting their work on promoting and supporting copyleft and Free Software.

If you support Free Software, like what the SFC do, agree with their compliance principles, are happy about their successes in 2015, work on a project that is an SFC member and or just want to stand up for copyleft, please join Christopher Allan Webber, Carol Smith, Jono Bacon, myself and others in becoming a supporter. For the next week your donation will be matched by an anonymous donor. Please also consider asking your employer to match your donation or become a sponsor of SFC. Don't forget to spread the word about your support for SFC via email, your blog and or social media accounts.

I agree with Paul on this topic and just signed up as a Supporter of Software Freedom Conservancy myself. Perhaps you should be a supporter too?

30 November, 2015 08:55AM


Michal Čihař

Gammu 1.36.7

Yesterday, Gammu 1.36.7 was released.

This time the list of changes got bigger, improving compatibility with many devices:

  • Support devices which do not report full network status.
  • Disable Huawei unsolicited messages on startup.
  • Various improvements for Huawei modems.
  • Fixed compilation on Windows.
  • Fixed regression with Siemens AX75.
  • Improved decoding of USSD responses.
  • Properly decode emojis to console or files backend.
  • Added support for proxying the connection through arbitrary command.
  • SMSD now tracks the retry count per message.

You can support further Gammu development at Bountysource Salt.

Filed under: English Gammu Wammu | 0 comments

30 November, 2015 08:09AM by Michal Čihař

Stein Magnus Jodal

November contributions

The following is a short summary of my open source work in November. My hope is that keeping better track of what I’m doing will help me reflect on how I spend my time, and help me to focus my efforts better.



  • Released Mopidy-Spotify 2.2.0: Fixes related to duplicate “Starred” playlists and albums from year 0.
  • Moved Mopidy’s Travis CI testing from Ubuntu 12.04 to Ubuntu 14.04, to prepare for GStreamer 1.x, and eventually testing with Python 3.4. PR #1341
  • Worked on porting Mopidy from GStreamer 0.10 to PyGI and GStreamer 1.x. PR #1339
  • Briefly looked at what remains to get Mopidy running on both Python 2.7 and 3.4+ when we’ve landed the port to GStreamer 1.x. Doesn’t look too bad, except that ConfigParser doesn’t want to work with bytes in Python 3, so there’s no easy way to read a config file referring to a path on a non-UTF-8 file system.


  • Fixed two old crawlers. Added two new crawlers.
  • Needs to upgrade to Django 1.8 before the Django 1.7 security support ends this December.

30 November, 2015 12:00AM

November 29, 2015


Dirk Eddelbuettel

gtrends 1.3.0 now on CRAN: Google Trends in R

Sometime earlier last year, I started to help Philippe Massicotte with his gtrendsR package---which was then still "hiding" in relative obscurity on BitBucket. I was able to assist with a few things related to internal data handling as well as package setup and package builds--but the package is really largely Philippe's. But then we both got busy, and it wasn't until this summer at the excellent useR! 2015 conference that we met and concluded that we really should finish the package. And we both remained busy...

Lo and behold, following a recent transfer to this GitHub repository, we finalised a number of outstanding issues. And Philippe was even kind enough to label me a co-author. And now the package is on CRAN as of yesterday. So install.packages("gtrendsR") away and enjoy!

Here is a quick demo:

## load the package, and if options() are set appropriately, connect
## alternatively, also run   gconnect("someuser", "somepassword")
library(gtrendsR)

## using the default connection, run a query for three terms
res <- gtrends(c("nhl", "nba", "nfl"))

## plot (in default mode) as time series
plot(res)

## plot via googleVis to browser
## highlighting regions (probably countries) and cities
plot(res, type = "region")
plot(res, type = "cities")

The time series (default) plot for this query came out as follows a couple of days ago:

Example of gtrendsR query and plot

One really nice feature of the package is the rather rich data structure. The result set for the query above is actually stored in the package and can be accessed. It contains a number of components:

R> data(sport_trend)
R> names(sport_trend)
[1] "query"     "meta"      "trend"     "regions"   "topmetros"
[6] "cities"    "searches"  "rising"    "headers"  

So not only can one look at trends, but also at regions, metropolitan areas, and cities --- even plot this easily via package googleVis which is accessed via options in the default plot method. Furthermore, related searches and rising queries may give leads to dynamics within the search.

Please use the standard GitHub issue system for bug reports, suggestions and the like.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 November, 2015 08:35PM


Steve McIntyre

Supporting the Software Freedom Conservancy too!

Software Freedom Conservancy Supporter

I'm happy to chip in and help the awesome folks at the Software Freedom Conservancy with funding for their work, standing up for copyleft. It's clear that there are a lot of people who will ignore the terms of Free Software licensing, whether by oversight or deliberately, so the SFC are doing an important job working to defend the Freedoms that lots of us are using every day as developers and users.

It seems that some of SFC's corporate supporters have stopped sponsorship since the beginning of their lawsuit in Germany against VMware for GPL violations. Maybe some folks are happy to support SFC, but not when they really push things like this. Let's hope that we can find many more individual supporters to cover SFC's funding needs instead. This week, there's an anonymous pledge to match donations from new supporters, so right now is an even better time to sign up!

Thanks to Paul Wise for posting about this and indirectly prodding me to sign up too.

29 November, 2015 08:19PM


Matthew Garrett

What is hacker culture?

Eric Raymond, author of The Cathedral and the Bazaar (an important work describing the effectiveness of open collaboration and development), recently wrote a piece calling for "Social Justice Warriors" to be ejected from the hacker community. The primary thrust of his argument is that by calling for a removal of the "cult of meritocracy", these SJWs are attacking the central aspect of hacker culture - that the quality of code is all that matters.

This argument is simply wrong.

Eric's been involved in software development for a long time. In that time he's seen a number of significant changes. We've gone from computers being the playthings of the privileged few to being nearly ubiquitous. We've moved from the internet being something you found in universities to something you carry around in your pocket. You can now own a computer whose CPU executes only free software from the moment you press the power button. And, as Eric wrote almost 20 years ago, we've identified that the "Bazaar" model of open collaborative development works better than the "Cathedral" model of closed centralised development.

These are huge shifts in how computers are used, how available they are, how important they are in people's lives, and, as a consequence, how we develop software. It's not a surprise that the rise of Linux and the victory of the bazaar model coincided with internet access becoming more widely available. As the potential pool of developers grew larger, development methods had to be altered. It was no longer possible to insist that somebody spend a significant period of time winning the trust of the core developers before being permitted to give feedback on code. Communities had to change in order to accept these offers of work, and the communities were better for that change.

The increasing ubiquity of computing has had another outcome. People are much more aware of the role of computing in their lives. They are more likely to understand how proprietary software can restrict them, how not having the freedom to share software can impair people's lives, how not being able to involve themselves in software development means software doesn't meet their needs. The largest triumph of free software has not been amongst people from a traditional software development background - it's been the fact that we've grown our communities to include people from a huge number of different walks of life. Free software has helped bring computing to under-served populations all over the world. It's aided circumvention of censorship. It's inspired people who would never have considered software development as something they could be involved in to develop entire careers in the field. We will not win because we are better developers. We will win because our software meets the needs of many more people, needs the proprietary software industry either can not or will not satisfy. We will win because our software is shaped not only by people who have a university degree and a six figure salary in San Francisco, but because our contributors include people whose native language is spoken by so few people that proprietary operating system vendors won't support it, people who live in a heavily censored regime and rely on free software for free communication, people who rely on free software because they can't otherwise afford the tools they would need to participate in development.

In other words, we will win because free software is accessible to more of society than proprietary software. And for that to be true, it must be possible for our communities to be accessible to anybody who can contribute, regardless of their background.

Up until this point, I don't think I've made any controversial claims. In fact, I suspect that Eric would agree. He would argue that because hacker culture defines itself through the quality of contributions, the background of the contributor is irrelevant. On the internet, nobody knows that you're contributing from a basement in an active warzone, or from a refuge shelter after escaping an abusive relationship, or with the aid of assistive technology. If you can write the code, you can participate.

Of course, this kind of viewpoint is overly naive. Humans are wonderful at noticing indications of "otherness". Eric even wrote about his struggle to stop having a viscerally negative reaction to people of a particular race. This happened within the past few years, so before then we can assume that he was less aware of the issue. If Eric received a patch from someone whose name indicated membership of this group, would there have been part of his subconscious that reacted negatively? Would he have rationalised this into a more critical analysis of the patch, increasing the probability of rejection? We don't know, and it's unlikely that Eric does either.

Hacker culture has long been concerned with good design, and a core concept of good design is that code should fail safe - ie, if something unexpected happens or an assumption turns out to be untrue, the desirable outcome is the one that does least harm. A command that fails to receive a filename as an argument shouldn't assume that it should modify all files. A network transfer that fails a checksum shouldn't be permitted to overwrite the existing data. An authentication server that receives an unexpected error shouldn't default to granting access. And a development process that may be subject to unconscious bias should have processes in place that make it less likely that said bias will result in the rejection of useful contributions.

When people criticise meritocracy, they're not criticising the concept of treating contributions based on their merit. They're criticising the idea that humans are sufficiently self-aware that they will be able to identify and reject every subconscious prejudice that will affect their treatment of others. It's not a criticism of a desirable goal, it's a criticism of a flawed implementation. There's evidence that organisations that claim to embody meritocratic principles are more likely to reward men than women even when everything else is equal. The "cult of meritocracy" isn't the belief that meritocracy is a good thing, it's the belief that a project founded on meritocracy will automatically be free of bias.

Projects like the Contributor Covenant that Eric finds so objectionable exist to help create processes that (at least partially) compensate for our flaws. Review of our processes to determine whether we're making poor social decisions is just as important as review of our code to determine whether we're making poor technical decisions. Just as the bazaar overtook the cathedral by making it easier for developers to be involved, inclusive communities will overtake "pure meritocracies" because, in the long run, these communities will produce better output - not just in terms of the quality of the code, but also in terms of the ability of the project to meet the needs of a wider range of people.

The fight between the cathedral and the bazaar came from people who were outside the cathedral. Those fighting against the assumption that meritocracies work may be outside what Eric considers to be hacker culture, but they're already part of our communities, already making contributions to our projects, already bringing free software to more people than ever before. This time it's Eric building a cathedral and decrying the decadent hordes in their bazaar, Eric who's failed to notice the shift in the culture that surrounds him. And, like those who continued building their cathedrals in the 90s, it's Eric who's now irrelevant to hacker culture.

(Edited to add: for two quite different perspectives on why Eric's wrong, see Tim's and Coraline's posts)


29 November, 2015 06:43PM


Steve Kemp

Spent the weekend improving the internet

This weekend I've mostly been tidying up some personal projects and things.

This was updated to use recaptcha on the sign-up page, which is my attempt to cut down on the 400+ spam-registrations it receives every day.

I've purged a few thousand bogus-accounts, which largely existed to point to spam-sites in their profile-pages. I go through phases where I do this, but my heuristics have always been a little weak.

This site offers free dynamic DNS for a few hundred users. I closed fresh signups due to it being abused by spammers, but it does have some users and I sometimes add new people who ask politely.

Unfortunately some users hammer it, trying to update their DNS records every 60 seconds or so. (One user has spent the past few months updating their IP address every 30 seconds; ironically, their external IP hadn't changed in all that time!)

So I suspended a few users, and implemented a minimum-update threshold: Nobody can update their IP address more than once every fifteen minutes now.

Literate Emacs Configuration File

Working towards my stateless home-directory I've been tweaking my dotfiles, and the last thing I did today was move my Emacs configuration over to a literate fashion.

My main emacs configuration file is now a markdown file, which contains inline code. The inline code is parsed at runtime and executed when Emacs launches. The init.el file which parses/evals it is pretty simple, and I'm quite pleased with it. Over time I'll extend the documentation and move some of the small snippets into it.

Offsite backups

My home system(s) always had a local backup, maintained on an external 2Tb disk-drive, along with a remote copy of some static files which were maintained using rsync. I've now switched to having a virtual machine host the external backups with proper incrementals - via attic, which beats my previous "only one copy" setup.
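
For the curious, the kind of attic workflow this implies looks roughly like the following; the repository path and archive naming are placeholders of mine, not the actual setup:

attic init /backups/home.attic                      # one-time repository initialisation
attic create /backups/home.attic::$(date +%F) ~/    # each run only stores what changed
attic list /backups/home.attic                      # inspect the accumulated archives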

Virtual Machine Backups

On a whim a few years ago I registered which I use to maintain backups of my personal virtual machines. That still works, though I'll probably drop the domain and use or similar in the future.

FWIW the external backups are hosted on BigV, which gives me a 2Tb "archive" disk for £40 a month. Perfect.

29 November, 2015 02:00PM

Russ Allbery

podlators 4.00

podlators is the distribution that includes the Pod::Man and Pod::Text modules for Perl, plus the pod2man and pod2text driver scripts (among a few other, more minor things).

I've been working on a new release of this for a couple of years and got trapped in a cycle of always wanting to finish up one more thing before making a release. (Really need to fix Unicode handling once and for all! Oh, I have a much better idea for how to do testing! I should really revise all of this code for my current coding style!) But some discussions elsewhere reminded me of the merits of release early and often, so I decided to finally put something out.

There are quite a few accumulated changes, although not as many as the major version increase would indicate. I did that so that I could standardize on the same version number in all of the modules and switch to a much simpler versioning scheme. That required increasing the major version to something higher than all the component modules.

Other than that, there are mostly a bunch of bug fixes, but also a lot of changes to Pod::Man to support Debian's reproducible build effort. The code is now more predictable and reliable about how it generates dates, and supports two new ways of forcing the date in generated documentation to a particular value so that builds are more predictable.
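
As a concrete sketch (my example, not from the release notes): in a Debian package build, the date embedded in a generated man page can be pinned to the last changelog entry via SOURCE_DATE_EPOCH, which Pod::Man now honours; myscript.pod is a placeholder input file:

# run from the package source tree, where debian/changelog lives
export SOURCE_DATE_EPOCH=$(date -d "$(dpkg-parsechangelog -SDate)" +%s)
pod2man --section=1 myscript.pod > myscript.1   # the embedded date now comes from the changelog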

I also changed the build system over to Module::Build, although it also provides a Makefile.PL file so that it can be built as part of Perl core.

You can get the latest version from the podlators distribution page.

29 November, 2015 03:31AM

November 28, 2015


Thorsten Glaser

FixedMisc [MirOS] for GNU GRUB2

If you install the xfonts-base package from my APT repository you not only get the FixedMisc [MirOS] type from The MirOS Project type foundry for the X Window System, but now also for GNU GRUB2:

(read more…)

28 November, 2015 06:42PM by MirOS Developer tg

Andreas Metzler

Dual boot Debian (stretch) and Windows 10 (UEFI)

I have just assembled and upgraded to a new computer (a c't 11-watt PC with a Skylake i5-6500) and chose to go from BIOS/MBR to UEFI/GPT. The computer needs to also run Windows (in this case 10), and I had previously googled in vain for a matching dual-boot howto.

It turned out to be unnecessary; it is straightforward.

  1. Install Windows and leave some space for Linux. (I used rufus to build a UEFI USB install image).
  2. Install Debian. The Stretch Alpha 4 d-i just worked and adds an entry for Windows to the GRUB menu.
  3. Reboot, enter the UEFI setup, and select the Debian installation as the boot option (or adjust the boot order from Linux, as sketched below).
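
If you would rather adjust the boot order from within Debian than from the firmware setup screen, efibootmgr can do it; a rough sketch (run as root), where the entry numbers are examples that will differ on your system:

efibootmgr -v            # list the EFI boot entries and the current BootOrder
efibootmgr -o 0003,0001  # e.g. put entry 0003 (debian) ahead of 0001 (Windows Boot Manager)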

28 November, 2015 06:36PM by Andreas Metzler


Daniel Pocock

Disabling Dynamic Currency Conversion (DCC) in Airbnb

In many travel-related web sites for airlines and hotels, there is some attempt to sting the customer with an extra fee by performing a currency conversion at an inflated exchange rate. Sometimes it is only about five percent and this may not appear to be a lot but in one case a hotel was trying to use a rate that increased the cost of my booking by 30%. This scheme/scam is referred to as Dynamic Currency Conversion (DCC). Sometimes the website says that they are making it "easy" for you by giving you a "guaranteed" exchange rate that "might" be better than the rate from your bank. Sometimes a hotel or restaurant in a tourist location insists that you have to pay in a currency that is not the same as the currency on your booking receipt or their menu card, this is also a DCC situation.

Reality check: these DCC rates are universally bad. Last time I checked, my own credit card only has a 0.9% fee for currency conversion. Credit card companies have become a lot more competitive but the travel industry hasn't.

Airbnb often claims that they want to help the little guy and empower people, at least that is the spin they were using when New York city authorities were scrutinizing their business model. Their PR blog tries to boast about the wonderful economic impact of Airbnb.

But when it comes to DCC, the economic impact is universally bad for the customer and good for Airbnb's bosses. Most sites just turn on DCC by default and add some little opt-out link or checkbox that you have to click every time you book. Airbnb, however, is flouting regulations and deceiving people by trying to insist that you can't manually choose the currency you'll use for payment.

Fortunately, Visa and Mastercard have insisted that customers do have the right to know the DCC exchange rate and choose not to use DCC.

What are the rules?

Looking at the Visa system, the Visa Product and Service Rules, page 371, s5.9.7.4 include the statement that the merchant (Airbnb) must "Inform the Cardholder that Dynamic Currency Conversion is optional".

The same section also says that Airbnb must "Not use any language or procedures that may cause the Cardholder to choose Dynamic Currency Conversion by default". When you read the Airbnb help text about currencies, do you think the language and procedures there comply with Visa's regulations?

What does Airbnb have to say about it?

I wrote to Airbnb to ask about this. A woman called Eryn H replied "As it turns out we cannot provide our users with the option to disable currency conversion."

She went on to explain "When it comes to currency converting, we have to make sure that the payments and payouts equal to be the same amount, this is why we convert it as well as offer to convert it for you. We took it upon ourselves to do this for our users as a courtesy, not so that we can inconvenience any users.". That, and the rest of Eryn's email, reads like a patronizing copy-and-paste response that we've all come to dread from some poorly trained customer service staff these days.

Miss H's response also includes this little gem: "Additionally, if you pay in a currency that’s different from the denominated currency of your payment method, your payment company (for example, your credit or bank card issuer) or third-party payment processor may apply a currency conversion rate or fees to your payment. Please contact your provider for information on what rates and fees may apply as these are not controlled by or known to Airbnb." and what this really means is that if Airbnb forces you to use a particular currency, with their inflated exchange rate and that is not the currency used by your credit card then you will have another currency conversion fee added by your bank, so you suffer the pain of two currency conversions. This disastrous scenario comes about because some clever person at Airbnb wanted to show users a little "courtesy", as Miss H describes it.

What can users do?

As DCC is optional and as it is not clear on the booking page, there are other things a user can do.

At the bottom of the Airbnb page you can usually find an option to view prices in a different currency. You can also change your country of residence in the settings to ensure you view prices in the host currency. This allows you to see the real price, without the DCC steal.

People have been able to email or call Airbnb and have DCC disabled for their account. Not all their telephone staff seem to understand these requests and apparently it is necessary to persist and call more than once. In the long term, the cost savings outweigh the time it may take even if you spend 20 minutes on the phone getting it fixed.

Whatever you do, with any travel site, print a copy of the information page showing the price in host currency. After doing that for an Airbnb booking and before making any payment, send a message to the host quoting the total price in their currency and stating DCC is not authorized. If Airbnb does wrongly convert the currency, send a letter to the credit card company asking for a full refund/chargeback on the basis that the transaction in the wrong currency was not an authorized transaction. It is important to ensure that you do not agree to the payment using Verified-by-Visa or Mastercard Securecode and do not pay with a debit card as these things can undermine your chances of a successful chargeback.

The chargeback rules are very clear about this. On the Visa website, the Guide for the Lodging Industry describes all the chargeback reason codes. On page 46, reason code 76 is described for cases such as these:

  • Cardholder was not advised that Dynamic Currency Conversion (DCC) would occur
  • Cardholder was refused the choice of paying in the merchant’s local currency

If you feel that Airbnb's web site was not operating in compliance with these rules, while many other web sites have made the effort to do so, why shouldn't you demand a correction by your bank? Once enough people do this, don't be surprised if Airbnb fixes their site.

28 November, 2015 05:08PM by Daniel.Pocock

November 27, 2015

Russ Allbery

remctl 3.10

remctl is a simple and secure remote command execution protocol using GSS-API. Essentially, it's the thinnest and simplest possible way to deploy remote network APIs for commands using Kerberos authentication and encryption.

Most of the work in this release is around supporting anonymous authentication for an upcoming project of mine. This included cleaning up ACL handling so that clients that authenticated with anonymous PKINIT didn't count as ANYUSER (it's not likely this would have been a security problem for existing users, since you would have had to enable anonymous service tickets in your KDC), and adding new anyuser:auth and anyuser:anonymous ACLs that are explicit about whether anonymous users are included.

With this change, it's possible, using a KDC with anonymous service tickets enabled, to use anonymous PKINIT to make entirely unauthenticated remctl calls. I plan on using this with wallet to support initial system key bootstrapping using external validation of whether a system is currently allowed to bootstrap keys. Note that you need to be very careful when enabling anonymous service tickets, since many other Kerberos applications (including remctl prior to this release) assume that any client that can get a service ticket is in some way authenticated.

Also new in this release, the server now sets the REMOTE_EXPIRES environment variable to the time when the authenticated remote session will expire. This is usually the expiration time of the user's credentials. I'm planning on using this as part of a better kx509 replacement to issue temporary X.509 certificates from Kerberos tickets, limited in lifetime to the lifetime of the Kerberos ticket.

This release also includes some portability fixes, a bug fix for the localgroup ACL for users who are members of lots of local groups, and some (mildly backwards-incompatible) fixes for the Python RemctlError exception class.

You can get the latest release from the remctl distribution page.

27 November, 2015 11:42PM

Simon Josefsson

Automatic Replicant Backup over USB using rsync

I have been using Replicant on the Samsung SIII I9300 for over two years. I have written before on taking a backup of the phone using rsync but recently I automated my setup as described below. This work was prompted by a screen accident with my phone that caused it to die, and I noticed that I hadn’t taken regular backups. I did not lose any data this time, since typically all content I create on the device is immediately synchronized to my clouds. Photos are uploaded by the ownCloud app, SMS Backup+ saves SMS and call logs to my IMAP server, and I use DAVDroid for synchronizing contacts, calendar and task lists with my instance of ownCloud. Still, I strongly believe in regular backups of everything, so it was time to automate this.

For my use-case, taking backups of the phone whenever I connect it to one of my laptops is sufficient. I typically connect it to my laptops for charging at least every other day. My laptops are all running Debian, but this should be applicable to most modern GNU/Linux systems. This is not Replicant-specific, although you need a rooted phone. I thought that automating this would be simple, but I got to learn the ins and outs of systemd and udev in the process and this ended up taking the better part of an evening.

I started out adding a udev rule and a small script, thinking I could invoke the backup process from the udev rule. However rsync would magically die after running for a few seconds. After an embarrassingly long debugging session, I finally found someone with a similar problem, which led me to a nice writeup on the topic of running long-running services on udev events. I created a file /etc/udev/rules.d/99-android-backup.rules with the following content:

ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="323048a5ae82918b", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"
ACTION=="add", SUBSYSTEMS=="usb", ENV{ID_SERIAL_SHORT}=="4df9e09c25e75f63", TAG+="systemd", ENV{SYSTEMD_WANTS}+="android-backup@$env{ID_SERIAL_SHORT}.service"

The serial numbers correspond to the device serial numbers of the two devices I wish to backup. The adb devices command will print them for you, and you need to replace my values with the values from your phones. Next I created a systemd service to describe a oneshot service. The file /etc/systemd/system/android-backup@.service has the following content:

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/android-backup %I

The at-sign (“@”) in the service filename signals that this is a service that takes a parameter. I'm not enough of a udev/systemd person to explain these two files using the proper terminology, but at least you can pattern-match and follow the basic idea of them: the udev rule matches the devices that I'm interested in (I don't want this to happen to all random Android devices I attach, hence matching against known serial numbers), and it causes a systemd service with a parameter to be started. The systemd service file describes the script to run, and passes on the parameter.
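
When setting this up, it can help to reload the udev rules and watch what happens as the phone is plugged in. Something along these lines should do (the serial number is one of the two from the rules above):

# pick up the new rules without rebooting
sudo udevadm control --reload-rules
# watch the add events (and the SYSTEMD_WANTS property) while plugging the phone in
sudo udevadm monitor --environment --udev
# afterwards, check that the per-device service was actually started
systemctl status android-backup@323048a5ae82918b.service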

Now for the juicy part, the script. I have /usr/local/sbin/android-backup with the following content.

#!/bin/bash

export ANDROID_SERIAL="$1"

exec 2>&1 | logger

# base directory for the backups; the per-device directory must already exist
DIRBASE=/var/backups/android

if ! test -d "$DIRBASE-$ANDROID_SERIAL"; then
    echo "could not find directory: $DIRBASE-$ANDROID_SERIAL"
    exit 1
fi

set -x

adb wait-for-device
adb root
adb wait-for-device
# rsyncd config on the phone: listen on localhost, run as root, export / as [root]
adb shell printf "address = 127.0.0.1\nuid = root\ngid = root\n[root]\n\tpath = /\n" \> /mnt/secure/rsyncd.conf
adb shell rsync --daemon --no-detach --config=/mnt/secure/rsyncd.conf &
adb forward tcp:6010 tcp:873
sleep 2
rsync -av --delete --exclude /dev --exclude /acct --exclude /sys --exclude /proc rsync://localhost:6010/root/ $DIRBASE-$ANDROID_SERIAL/
: rc $?
adb forward --remove tcp:6010
adb shell rm -f /mnt/secure/rsyncd.conf
This script warrants more detailed explanation. Backups are placed under, e.g., /var/backups/android-323048a5ae82918b/ for later off-site backup (you do back up your laptop, right?). You have to manually create this directory, as a safety catch to not wildly rsync data into non-existing directories. The script logs everything using syslog, so run a tail -F /var/log/syslog& when setting this up. You may want to reduce verbosity of rsync if you prefer (replace rsync -av with rsync -a). The script runs adb wait-for-device which you rightly guessed will wait for the device to settle. Next adb root is invoked to get root on the device (reading all files from the system naturally requires root). It takes some time to switch, so another wait-for-device call is needed. Next a small rsyncd configuration file is created in /mnt/secure/rsyncd.conf on the phone. The file tells rsync to listen on localhost, run as root, and use / as the path. By default, rsyncd is read-only so the host will not be able to upload any data over rsync, just read data out. Next rsync is started on the phone. The adb forward command forwards port 6010 on the laptop to port 873 on the phone (873 is the default rsyncd port). Unfortunately, setting up the TCP forward appears to take some time, and adb wait-for-device will not wait for that to complete, hence an ugly sleep 2 at this point. Next is the rsync invocation itself, which just pulls in everything from the phone to the laptop, excluding some usual suspects. The somewhat cryptic : rc $? merely logs the exit code of the rsync process into syslog. Finally we clean up the TCP forward and remove the rsyncd.conf file that was temporarily created.

This setup appears stable to me. I can plug in a phone and a backup will be taken. I can even plug in both my devices at the same time, and they will run at the same time. If I unplug a device, the script or rsync will error out and systemd cleans up.

If anyone has ideas on how to avoid the ugly temporary rsyncd.conf file or the ugly sleep 2, I’m interested. It would also be nice to not have to do the ‘adb root’ dance, and instead have the phone start the rsync daemon when connecting to my laptop somehow. TCP forwarding might be troublesome on a multi-user system, but my laptops aren’t. Killing rsync on the phone is probably a good idea too. If you have ideas on how to fix any of this, other feedback, or questions, please let me know!

27 November, 2015 11:33PM by simon

Russ Allbery

rra-c-util 5.9

A minor release of my C utility library, including some changes required for the previous release of pam-afs-session and the upcoming release of remctl.

The Kerberos portability layer now correctly defines the strings for dealing with anonymous principals when built with Heimdal, and adds KRB5_ANON_REALM (required for doing the authentication). The PAM testing framework has some improvements for handling pam_modutil_getpwnam and supports testing against PAM_SESSION_ERR.

You can get the latest version from the rra-c-util distribution page.

27 November, 2015 10:42PM

hackergotchi for Erich Schubert

Erich Schubert

ELKI 0.7.0 on Maven and GitHub

Version 0.7.0 of our data mining toolkit ELKI is now available on the project homepage, GitHub and Maven.
You can also clone this example project to get started easily.
What is new in ELKI 0.7.0? Too much, see the release notes, please!
What is ELKI exactly?
ELKI is a Java based data mining toolkit. We focus on cluster analysis and outlier detection, because there are plenty of tools available for classification already. But there is a kNN classifier, and a number of frequent itemset mining algorithms in ELKI, too.
ELKI is highly modular. You can combine almost everything with almost everything else. In particular, you can combine algorithms such as DBSCAN, with arbitrary distance functions, and you can choose from many index structures to accelerate the algorithm. But because we separate them well, you can add a new index, or a new distance function, or a new data type, and still benefit from the other parts. In other tools such as R, you cannot easily add a new distance function into an arbitrary algorithm and get good performance - all the fast code in R is written in C and Fortran; and cannot be easily extended this way. In ELKI, you can define a new data type, new distance function, new index, and still use most algorithms. (Some algorithms may have prerequisites that e.g. your new data type does not fulfill, of course).
ELKI is also very fast. Of course a good C code can be faster - but then it usually is not as modular and easy to extend anymore.
ELKI is documented. We have JavaDoc, and we annotate classes with their scientific references (see a list of all references we have). So you know which algorithm a class is supposed to implement, and can look up details there. This makes it very useful for science.
ELKI is not: a turnkey solution. It aims at researchers, developers and data scientists. If you have a SQL database, and want to do a point-and-click analysis of your data, please get a business solution instead with commercial support.

27 November, 2015 05:27PM

hackergotchi for Gergely Nagy

Gergely Nagy

Feeding Emacs

For the past fifteen years, I have been tweaking my ~/.emacs continuously, most recently by switching to Spacemacs. With that switch done, I started to migrate a few more things to Emacs, an Atom/RSS reader being one that's been in the queue for years - ever since Google Reader shut down. Since March 2013, I have been a Feedly user, but I wanted to migrate to something better for a long time. I wanted to use Free Software, for one.

I saw a mention of Elfeed somewhere a little while ago, and in the past few days, I decided to give it a go. The results are pretty amazing.


27 November, 2015 04:00PM by Gergely Nagy

hackergotchi for Norbert Preining

Norbert Preining

Slick Google Map 0.3 released

I just have pushed a new version of the Slick Google Map plugin for WordPress to the servers. There are not many changes, but a crucial fix for parsing coordinates in DMS (degree-minute-second) format.


The documentation described that all kinds of DMS formats can be used to specify a location, but these DMS encoded locations were sent to Google for geocoding. Unfortunately it seems Google is incapable of handling DMS formats, and returns slightly off coordinates. By using a library for DMS conversion, which I adapted slightly, it is now possible to use a wide variety of location formats. Practically everything that can reasonably be interpreted as a location will be properly converted to decimal coordinates.
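
For illustration only (this is just the underlying arithmetic, not the plugin's code): converting DMS to decimal degrees means dividing the minutes by 60 and the seconds by 3600, and negating southern latitudes and western longitudes. A quick check with bc, using an arbitrary coordinate:

echo 'scale=5; 35 + 41/60 + 22.2/3600' | bc     # latitude  -> 35.68949
echo 'scale=5; 139 + 41/60 + 30.1/3600' | bc    # longitude -> 139.69169

These decimal values are the kind of coordinates that can then be handed to Google instead of the raw DMS string.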

Plans for the next release are:

  • prepare for translation via the WordPress translation team
  • add html support for the marker text via uuencoding

Please see the dedicated page or the WordPress page for more details and downloads.


27 November, 2015 07:17AM by Norbert Preining

Paul Wise

The GPL is not magic pixie dust

Become a Software Freedom Conservancy Supporter!

The GPL is not magic pixie dust. It does not work by itself.
The first step is to choose a copyleft license for your code.
The next step is, when someone fails to follow that copyleft license, it must be enforced
and its a simple fact of our modern society that such type of work
is incredibly expensive to do and incredibly difficult to do.

-- Bradley Kuhn, in FaiF episode 0x57

As the Debian Website used to imply, public domain and permissively licensed software can lead to the production of more proprietary software as people discover useful software, extend it and/or incorporate it into their hardware or software products. Copyleft licenses such as the GNU GPL were created to close off this avenue to the production of proprietary software, but such licenses are not enough. With the ongoing adoption of Free Software by individuals and groups, the community's expectations of license compliance are inevitably violated, usually out of ignorance of the way Free Software works, but not always. As Karen and Bradley explained in FaiF episode 0x57, copyleft is nothing if no-one is willing and able to stand up in court to protect it. The reality of today's world is that legal representation is expensive, difficult and time consuming. With gpl-violations.org in hiatus until some time in 2016, the Software Freedom Conservancy (a tax-exempt charity) is the major defender of the Linux project, Debian and other groups against GPL violations. In March the SFC supported a lawsuit by Christoph Hellwig against VMware for refusing to comply with the GPL in relation to their use of parts of the Linux kernel. Since then two of their sponsors pulled corporate funding and conferences blocked or cancelled their talks. As a result they have decided to rely less on corporate funding and more on the broad community of individuals who support Free Software and copyleft. So the SFC has launched a campaign to create a community of folks who stand up for copyleft and the GPL by supporting their work on promoting and supporting copyleft and Free Software.

If you support Free Software, like what the SFC do, agree with their compliance principles, are happy about their successes in 2015, work on a project that is an SFC member, and/or just want to stand up for copyleft, please join Christopher Allan Webber, Carol Smith, Jono Bacon, myself and others in becoming a supporter. For the next week your donation will be matched by an anonymous donor. Please also consider asking your employer to match your donation or become a sponsor of SFC. Don't forget to spread the word about your support for SFC via email, your blog and/or social media accounts.

27 November, 2015 03:48AM

November 26, 2015

hackergotchi for Olivier Berger

Olivier Berger

Handling video files produced for a MOOC on Windows with git and git-annex

This post is intended to document some elements of workflow that I’ve setup to manage videos produced for a MOOC, where different colleagues work collaboratively on a set of video sequences, in a remote way.

We are a team of several schools working on the same course, and we have an incremental process, so we need collaboration among many remote authors, over quite a long period, on a set of video sequences.

We're probably going to review some of the videos and make changes, so we need to monitor changes, and submit versions to colleagues on remote sites so they can criticize them and get the later edits. We may have more than one site doing video production. Thus we need to share videos along the flow of production, editing and revision of the course contents, in a way that is manageable by power users (we're all computer scientists, used to SVN or Git).

I’ve decided to start an experiment with Git and Git-Annex to try and manage the videos like we use to do for slides sources in LaTeX. Obviously the main issue is that videos are big files, demanding in storage space and bandwidth for transfers.

We want to keep track of everything which is done during the production of the videos, so that we can later re-do some of the video editing, for instance if we change the graphic design elements (logos, subtitles, frame dimensions, additional effects, etc.) when we improve the classes over the seasons. On the other hand, not all colleagues want to have to download a full copy of all rushes on their laptop if they just want to review one particular sequence of the course: they only need to download the final edited MP4. Still, they may be interested in being able to fetch all the rushes, should they want to try and improve the videos.

Git-Annex brings us the ability to decouple the presence of files in directories, managed by regular Git commands, from the presence of the file contents (the big stuff), which is managed by Git-Annex.

Here’s a quick description of our setup :

  • we do screen capture and video editing with Camtasia on a Windows 7 system. Camtasia (although proprietary) is quite manageable without being a video editing expert, and suits quite well our needs in terms of screen capture, green background shooting and later face insertion over slides capture, additional “motion design”-like enhancement, etc.
  • the rushes captured (audio, video) are kept on that machine
  • the MP4 rendering of the edits are performed on that same machine
  • all these files are stored locally on that computer, but we perform regular backups, on demand, on a remote system, with rsync+SSH. We have installed git for Windows so we use bash and rsync and ssh from git’s install. SSH happens using a public key without a passphrase, to connect easily to the Linux remote, but that isn’t mandatory.
  • the mirrored files appear on a Linux filesystem on another host (running Debian), where the target is actually managed with git and git-annex.
  • there we handle all the files added, removed or modified with git-annex.
  • we have 2 more git-annex remote repos, accessed through SSH (again using a passphrase-less public key), run by GitoLite, to which git-annex rsyncs copies of all the file contents. These repos are on different machines keeping backups in case of crashes. git-annex is setup to mandate keeping at least 2 copies of files (numcopies).
  • colleagues in turn clone from either of these repos and git-annex get to download the video contents, only for files which they are interested in (for instance final edits, but not rushes), which they can then play locally on their preferred OS and video player.

Why didn't we use git-annex directly on the Windows host, which is the source of the files?

We tried, but it didn't work out. The Git-Annex assistant somehow crashed on us, causing the Git history to become strange, so that became unmanageable. More importantly, we need robust backups, so we can't afford to rely on something we don't fully trust: shooting a video again is really costly (setting up the shooting set again, with lighting, cameras, and a professor who has to repeat the speech!).

The rsync (with --delete on the destination) from Windows to Linux is robust. Git-Annex on Linux seems robust so far. That's enough for now :-)

The drawback is that we need manual intervention for starting the rsync, and also that we must make sure that the rsync target is ready to get a backup.

The target of the rsync on Linux is a git-annex clone using the default “indirect” mode, which handles the files as symlinks to the actual copies managed by git-annex inside the .git/ directory. But that ain’t suitable to be compared to the origin of the rsync mirror which are plain files on the Windows computer.

We must then do a “git-annex edit” on the whole target of the rsync mirror before the rsync, so that the files are there as regular video files. This is costly, in terms of storage, and also copying time (our repo contains around 50 GB, and the Linux host is a rather tiny laptop).

After the rsync, all the files need to be compared to the SHA256 checksums known to git-annex so that only modified files are taken into account in the commit. We perform a “git-annex add” on all the files (for new files having appeared at rsync time), and then a “git-annex sync”. That takes a lot of time, since the SHA256 computations are quite long for such a set of big files (the video rushes and edited videos are in HD).

So the process needs to be the following, on the target Linux host (a small wrapper sketch is shown below):

  1. git annex add .
  2. git annex sync
  3. git annex copy . --to server1
  4. git annex copy . --to server2
  5. git annex edit .
  6. only then : rsync

Iterate ad lib 😉
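
A minimal wrapper for the Linux side, simply chaining the steps above (the repository path is a placeholder; server1 and server2 are the git-annex remotes mentioned earlier):

#!/bin/sh
set -e
cd /srv/mooc-videos          # the git-annex clone that receives the rsync mirror
git annex add .              # register new files that appeared during the last rsync
git annex sync               # commit and sync the git-annex metadata
git annex copy . --to server1
git annex copy . --to server2
git annex edit .             # turn symlinks back into plain files for the next rsync
# the tree is now ready to receive the next rsync from the Windows host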

I would have preferred to have a working git-annex on Windows, but this is a much more manageable process for me for now, and until we have more videos in our repo than our Linux laptop can hold, we're quite safe.

Next steps will probably involve gardening the contents of the repo on the Linux host so we’re only keeping copies of current files, and older copies are only kept on the 2 servers, in case of later need.

I hope this can be useful to others, and I’d welcome suggestions on how to improve our process.

26 November, 2015 09:51AM by Olivier Berger

Tiago Bortoletto Vaz

Birthday as in the good old days

This year I got zero happy birthday spam messages from phone, post, email, and from random people on that Internet social thing. In these days, that's a WOW, yes it is.

On the other hand, full of love and simple celebrations together with local ones. A few emails and phone calls from close friends/family who are physically distant.

I'm happier than ever with my last years' choices of caring about my privacy, not spending time with fake relationships and keeping myself an unimportant one for the $SYSTEM. That means a lot for me.

26 November, 2015 12:43AM by Tiago Bortoletto Vaz

November 25, 2015

hackergotchi for Steve Kemp

Steve Kemp

A transient home-directory?

For the past few years all my important work has been stored in git repositories. Thanks to the mr tool I have a single configuration file that allows me to pull/maintain a bunch of repositories with ease.

Having recently wiped & reinstalled a pair of desktop systems I'm now wondering if I can switch to using a totally transient home-directory.

The basic intention is that:

  • Every time I login "rm -rf $HOME/*" will be executed.

I see only three problems with this:

  • Every time I login I'll have to reclone my "dotfiles", passwords, bookmarks, etc.
  • Some programs will need their configuration updated, post-login.
  • SSH key management will be a pain.

My dotfiles contain my bookmarks, passwords, etc. But they don't contain setup for GNOME, etc.

So there might be some configuration that will become annoying - for example I like "Ctrl-Alt-t" to open a new gnome-terminal, and that has to be configured the first time I login to each new system.

My images/videos/books are all stored beneath /srv and not in my home directory - so the only thing I'll be losing is program configuration, caches, and similar.

Ideally I'd be using a smartcard for my SSH keys - but I don't have one - so for the moment I might just have to rsync them into place, but that's grossly bad.
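
Pulled together, the idea might boil down to something like this in a login hook (a sketch only; the repository URL, the install helper, and the key host are all made up):

# run once per login, before anything else touches $HOME
rm -rf "$HOME"/*
rsync -a keyhost:.ssh/ "$HOME/.ssh/" && chmod 700 "$HOME/.ssh"   # grossly bad, see above
git clone git@git.example.com:dotfiles.git "$HOME/dotfiles"
"$HOME/dotfiles/install"      # symlink the dotfiles into place
mr -d "$HOME" checkout        # clone the other repositories listed in ~/.mrconfig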

It'll be interesting to see how well this works out, but I see a potential gain in portability and discipline at the very least.

25 November, 2015 02:00PM

hackergotchi for Daniel Pocock

Daniel Pocock

Introducing elfpatch, for safely patching ELF binaries

I recently had a problem with a program behaving badly. As a developer familiar with open source, my normal strategy in this case would be to find the source and debug or patch it. Although I was familiar with the source code, I didn't have it on hand and would have faced significant inconvenience having it patched, recompiled and introduced to the runtime environment.

Conveniently, the program has not been stripped of symbol names, and it was running on Solaris. This made it possible for me to whip up a quick dtrace script to print a log message as each function was entered and exited, along with the return values. This gives a precise record of the runtime code path. Within a few minutes, I could see that just changing the return value of a couple of function calls would resolve the problem.

On the x86 platform, functions set their return value by putting the value in the EAX register. This is a trivial thing to express in assembly language and there are many web-based x86 assemblers that will allow you to enter the instructions in a web-form and get back hexadecimal code instantly. I used the bvi utility to cut and paste the hex code into a copy of the binary and verify the solution.

All I needed was a convenient way to apply these changes to all the related binary files, with a low risk of error. Furthermore, it needed to be clear for a third-party to inspect the way the code was being changed and verify that it was done correctly and that no other unintended changes were introduced at the same time.

Finding or writing a script to apply the changes seemed like the obvious solution. A quick search found many libraries and scripts for reading ELF binary files, but none offered a patching capability. Tools like objdump on Linux and elfedit on Solaris show the raw ELF data, such as virtual addresses, which must be converted manually into file offsets, which can be quite tedious if many binaries need to be patched.
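
For a single symbol the arithmetic is simple enough to do by hand: the file offset is the section's file offset plus the symbol's address minus the section's load address. A rough sketch with readelf (the binary name, symbol name and numbers are made up):

readelf -s ./myprog | grep ' broken_check$'     # symbol value and section index
readelf -S ./myprog | grep -A1 '\.text'         # section address and file offset
# file offset = sh_offset + (st_value - sh_addr)
printf '0x%x\n' $(( 0x1000 + (0x401234 - 0x401000) ))

Doing that for every symbol across many binaries is exactly the tedium a script should take away.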

My initial thought was to develop a concise C/C++ program using libelf to parse the ELF headers and then calculating locations for the patches. While searching for an example, I came across pyelftools and it occurred to me that a Python solution may be quicker to write and more concise to review.

elfpatch (on github) was born. As input, it takes a text file with a list of symbols and hexadecimal representations of the patch for each symbol. It then reads one or more binary files and either checks for the presence of the symbols (read-only mode) or writes out the patches. It can optionally backup each binary before changing it.

25 November, 2015 10:30AM by Daniel.Pocock

Drone strikes coming to Molenbeek?

The St Denis siege last week and the Brussels lockdown this week provide all of us in Europe with an opportunity to reflect on why over ten thousand refugees per day have been coming here from the middle east, especially Syria.

At this moment, French warplanes and American drones are striking cities and villages in Syria, killing whole families in their effort to shortcut the justice system and execute a small number of very bad people without putting them on trial. Some observers estimate air strikes and drones kill twenty innocent people for every one bad guy. Women, children, the sick, elderly and even pets are most vulnerable. The leak of the collateral murder video simultaneously brought Wikileaks into the public eye and demonstrated how the crew of a US attack helicopter had butchered unarmed civilians and journalists like they were playing a video game.

Just imagine that the French president had sent the fighter jets to St Denis and Molenbeek instead of using law enforcement. After all, how are the terrorists there any better or worse than those in Syria, don't they deserve the same fate? Or what if Obama had offered to help out with a few drone strikes on suburban Brussels? After all, if the drones are such a credible solution for Syria's future, why won't they solve Brussels' (perceived) problems too?

If the aerial bombing "solution" had been attempted in a western country, it would have led to chaos. Half the population of Paris and Brussels would find themselves camping at the migrant camps in Calais, hoping to sneak into the UK in the back of a truck.

Over a hundred years ago, Russian leaders proposed a treaty agreeing never to drop bombs from balloons and the US and UK happily signed it. Sadly, the treaty wasn't updated after the invention of fighter jets, attack helicopters, rockets, inter-continental ballistic missiles, satellites and drones.

The reality is that asymmetric warfare hasn't worked and never will work in the middle east and as long as it is continued, experts warn that Europe may continue to face the consequences of refugees, terrorists and those who sympathize with their methods. By definition, these people can easily move from place to place and it is ordinary citizens and small businesses who will suffer a lot more under lockdowns and other security measures.

In our modern world, people often look to technology for shortcuts. The use of drones in the middle east is a shortcut from a country that spent enormous money on ground invasions of Iraq and Afghanistan and doesn't want to do it again. Unfortunately, technological shortcuts can't always replace the role played by real human beings, whether it is bringing law and order to the streets or in any other domain.

Aerial bombardment - by warplane or by drone - carries an implicitly racist message, that the people abused by these drone attacks are not equivalent to the rest of us, they can't benefit from the normal procedures of justice, they don't have rights, they are not innocent until proven guilty and they are expendable.

The French police deserve significant credit for the relatively low loss of life in the St Denis siege. If their methods and results were replicated in Syria and other middle eastern hotspots, would it be more likely to improve the situation in the long term than drone strikes?

25 November, 2015 07:28AM by Daniel.Pocock

hackergotchi for Ben Armstrong

Ben Armstrong

Debian Live After Debian Live

Get involved

After this happened, my next step was to get re-involved in Debian Live to help it carry on after the loss of Daniel. Here’s a quick update on some team progress, notes that could help people building Stretch images right now, and what to expect next.

Team progress

  • Iain uploaded live-config, incorporating an important fix (#bc8914bc) for a bug that prevented images from booting.
  • I want to get live-images ready for an upload, including #8f234605 to fix wrong config/bootloaders that prevented images from building.

Test build notes

  • As always, build Stretch images with latest live-build from Sid (i.e. 5.x).
  • Build Stretch images, not Sid, as there’s less of a chance of dependency issues spoiling the build, and that’s the default anyway.
  • To make build iterations faster, make sure the config is modified to not build source & not include the installer (edit auto/config before ‘lb config’; see the sketch after this list) and use an apt caching proxy.
  • Don’t forget to inject fixed packages (e.g. live-config) into each config. Use apt pinning as per live-manual, or drop the debs into config/packages.chroot.
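
For instance, an auto/config along these lines should do the trick (a sketch only; the proxy address is whatever apt-cacher-ng or similar listens on, and the option names are as I recall them from live-build 5.x):

#!/bin/sh
lb config noauto \
    --distribution stretch \
    --source false \
    --debian-installer false \
    --apt-http-proxy http://localhost:3142/ \
    "${@}"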

Test boot notes

  • Use kvm, giving it enough ram (-m 1024 works for me); see the example command after this list.
  • For gnome-desktop and kde-desktop, use -vga qxl, or else the desktop will crash and restart repeatedly.
  • When using qxl, edit boot params to add qxl.modeset=1 (workaround for #779515, which will be fixed in kernel >= 4.3).
  • My gnome image test was spoiled by #802929. The mouse doesn’t work (pointer moves, but no buttons work). Waiting on a new kernel to fix this. This is a test environment related bug only, i.e. should work fine on hardware. (Test pending.)
  • The Stretch standard, lxde-desktop, cinnamon-desktop, xfce-desktop, and gnome-desktop images all built and booted fine (except for the gnome issue noted above).
  • The Stretch kde-desktop and mate-desktop images are next on my list to test, along with Jessie images.
  • I’ve only tested on the standard and lxde-desktop images that if the installer is included, booting from the Install boot menu option starts the installer (i.e. didn’t do an actual install).
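
Putting those options together, a test boot of a desktop image looks roughly like this (the image filename is whatever your build produced; live-image-amd64.hybrid.iso is what live-build 5.x names it by default, if I recall correctly):

kvm -m 1024 -vga qxl -cdrom live-image-amd64.hybrid.iso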

Coming soon

See the TODO in the wiki. We’re knocking these off steadily. It will be faster with more people helping (hint, hint).


25 November, 2015 01:00AM by Ben Armstrong

November 24, 2015

hackergotchi for Bernd Zeimetz

Bernd Zeimetz online again

Finally, my website is back online and I'm planning to start blogging again! Part of the reason why I became inactive was the usage of ikiwiki, which is great, but in the end unnecessarily complicated. So I've migrated my page to Hugo - a static website generator, written in Go. Hugo has an active community and it is easy to create themes for it or to enhance it. Also it uses plain Markdown syntax instead of special ikiwiki syntax mixed into it, which should make it easy to migrate away again if necessary.

In case somebody else would like to convert from ikiwiki to Hugo, here is the script I’ve hacked together to migrate my old blog posts.


#!/bin/bash

find . -type f -name '*.mdwn' | while read i; do
    tmp=$(mktemp)    # temporary file for the converted post
    {
        echo '+++'
        slug="$(echo $i | sed 's,.*/,,;s,\.mdwn$,,')"
        echo "slug = \"${slug}\""
        echo "title = \"$(echo $i | sed 's,.*/,,;s,\.mdwn$,,;s,_, ,g;s/\b\(.\)/\u\1/;s,debian,Debian,g')\""
        # take the date from the ikiwiki meta tag if present, otherwise from git
        if grep -q 'meta updated' $i; then
            echo -n 'date = '
            sed '/meta updated/!d;/.*meta updated.*/s,.*=",,;s,".*,,;s,^,",;s,$,",' $i
        else
            echo -n 'date = '
            git log --diff-filter=A --follow --format='"%aI"' -1 -- $i
        fi
        if grep -q '\[\[!tag' $i; then
            echo -n 'tags ='
            sed '/\[\[!tag/!d;s,[^ ]*tag ,,;s,\]\],,;s,\([^ ]*\),"\1",g;s/ /,/g;s,^,[,;s,$,],' $i
        fi
        echo 'categories = ["linux"]'
        echo 'draft = false'
        echo '+++'
        echo ''

        # strip ikiwiki directives and convert links/images to plain Markdown
        sed -e '/\[\[!tag/d' \
            -e '/meta updated/d' \
            -e '/\[\[!plusone *\]\]/d' \
            -e 's,\[\[!img files[0-9/]*/\([^ ]*\) alt="\([^"]*\).*,![\2](../\1),g' \
            -e 's,\[\([^]]*\)\](\([^)]*\)),[\1](\2),g' \
            -e 's,\[\[\([^|]*\)|\([^]]*\)\]\],[\1](\2),g' \
            $i
    } > $tmp
    #cat $tmp; rm $tmp
    mv $tmp `echo $i | sed 's,\.mdwn,.md,g'`
done

For the planet Debian readers - only Linux related posts will show up on the planet. If you are interested in my mountain activities and other things I post, please follow my blog directly.

24 November, 2015 07:41PM

Carl Chenet

db2twitter: Twitter out of the browser

You have a database, a tweet pattern and want to automatically tweet on a regular basis? No need for RSS, fancy tricks, 3rd party website to translate RSS to Twitter or whatever. Just use db2twitter.

db2twitter is pretty easy to use!  First define your Twitter credentials:


Then your database information:


Then the pattern of your tweet, a Python-style formatted string:

tweet={} hires a {}{}

Add db2twitter in your crontab:

*/10 * * * * db2twitter db2twitter.ini

And you’re all set! db2twitter will generate and tweet the following tweets:

MyGreatCompany hires a web developer
CoolStartup hires a devops skilled in Docker

db2twitter is developed by and run for the job board of the French-speaking Free Software and Open Source community.


db2twitter also has cool options like:

  • only tweet during user-specified times (e.g. 9AM-6PM)
  • use a user-specified SQL filter in order to get data from the database (e.g. only fetch rows where status == « edited »)

db2twitter is coded in Python 3.4, uses SQLAlchemy (see supported database types) and Tweepy. The official documentation is available on readthedocs.

24 November, 2015 06:00PM by Carl Chenet

hackergotchi for Rhonda D'Vine

Rhonda D'Vine

Salut Salon

I don't really remember where or how I stumbled upon these four women, so I'm sorry that I can't give credit where credit is due; I even believe that I already started writing a blog entry about them somewhere. Anyway, I want to present you today Salut Salon. They might play classical instruments, but not in a classical way. But see and hear for yourself:

  • Wettstreit zu viert: This is the first that I stumbled upon that did catch my attention. Lovely interpretation of classic tunes and sweet mixup.
  • Ievan Polkka: I love the catchy tune—and their interpretation of the song.
  • We'll Meet Again: While the history of the song might not be so laughable, their giggling is just contagious. :)

So like always, enjoy!

/music | permanent link | Comments: 1 | Flattr this

24 November, 2015 08:26AM by Rhonda

hackergotchi for Michal Čihař

Michal Čihař

Wammu 0.40

Yesterday, Wammu 0.40 was released.

The list of changes is not really huge:

  • Correctly escape XML output.
  • Make error message selectable.
  • Fixed spurious D-Bus error message.
  • Translation updates.

I will not make any promises for future releases (if there will be any) as the tool is not really in active development.

Filed under: English Gammu Wammu | 0 comments

24 November, 2015 08:09AM by Michal Čihař

November 23, 2015

hackergotchi for Riku Voipio

Riku Voipio

Using ser2net for serial access.

Is your table a mess of wires? Do you have multiple devices connected via serial and can't remember which /dev/ttyUSBX is connected to what board? Unless you are an embedded developer, you are unlikely to deal with serial much anymore - in that case you can just jump to the next post in your news feed.

Introducting ser2net

Usually people start with minicom for serial access. There are better tools - picocom, screen, etc. But to easily map multiple serial ports, use ser2net. Ser2net makes serial ports available over telnet.

Persistent usb device names and ser2net

To remember which usb-serial adapter is connected to what, we use the /dev/serial tree created by udev, in /etc/ser2net.conf:

# arndale
7004:telnet:0:'/dev/serial/by-path/pci-0000:00:1d.0-usb-0:1.8.1:1.0-port0':115200 8DATABITS NONE 1STOPBIT
# cubox
7005:telnet:0:/dev/serial/by-id/usb-Prolific_Technology_Inc._USB-Serial_Controller_D-if00-port0:115200 8DATABITS NONE 1STOPBIT
# sonic-screwdriver
7006:telnet:0:/dev/serial/by-id/usb-FTDI_FT230X_96Boards_Console_DAZ0KA02-if00-port0:115200 8DATABITS NONE 1STOPBIT
The by-path syntax is needed if you have many identical usb-to-serial adapters. In that case a patch from the BTS is needed to support quoting in the serial path. Ser2net doesn't seem very actively maintained upstream - a sure sign that a project is stagnant is a homepage still sitting at SourceForge. This patch, among other interesting features, can also be found in various ser2net forks on github.
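
To see which stable names udev has created for the attached adapters, just list the tree (the output will of course differ per machine):

ls -l /dev/serial/by-id/ /dev/serial/by-path/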

Setting easy to remember names

Finally, unless you want to memorize the port numbers, set TCP port to name mappings in /etc/services:

# Local services
arndale 7004/tcp
cubox 7005/tcp
sonic-screwdriver 7006/tcp
Now finally:
telnet localhost sonic-screwdriver
Mandatory picture of serial port connection in action

23 November, 2015 07:55PM by Riku Voipio

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Regarding fdupes

Dear readers,

There is a very useful tool for finding duplicate files and merging them so they share permanent storage, and its name is fdupes. There was a terrible occurrence in the software after version 1.51, however: they removed the -L argument because too many people were complaining about lost data. It sounds like user error to me, and so I continue to use that version. I have to build from source, since the newer versions do not have the -L option.

And so there you are. I recommend using it, even though this most useful feature has been deprecated and removed from the software. Perhaps there should be a fdupes-danger package in Debian?
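
For the record, a typical invocation with the old option looks like this (-r recurses into subdirectories, -L hardlinks the duplicates it finds; the path is just an example):

fdupes -r -L /srv/photos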

23 November, 2015 06:04PM by C.J. Adams-Collier

hackergotchi for Lunar

Lunar
Reproducible builds: week 30 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

  • Markus Koschany uploaded antlr3/3.5.2-3 which includes a fix by Emmanuel Bourg to make the generated parser reproducible.
  • Markus Koschany uploaded maven-bundle-plugin/2.4.0-2 which includes a fix by Emmanuel Bourg to use the date in the DEB_CHANGELOG_DATETIME variable in the file embedded in the jar files.
  • Niels Thykier uploaded debhelper/9.20151116 which makes the timestamp of directories created by dh_install, dh_installdocs, and dh_installexamples reproducible. Patch by Niko Tyni.

Mattia Rizzolo uploaded a version of perl to the “reproducible” repository including the patch written by Niko Tyni to add support for SOURCE_DATE_EPOCH in Pod::Man.

Dhole sent an updated version of his patch adding support for SOURCE_DATE_EPOCH in GCC to the upstream mailing list. Several comments have been made in response which have been quickly addressed by Dhole.

Dhole also forwarded his patch adding support for SOURCE_DATE_EPOCH in libxslt upstream.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: antlr3/3.5.2-3, clusterssh, cme, libdatetime-set-perl, libgraphviz-perl, liblingua-translit-perl, libparse-cpan-packages-perl, libsgmls-perl, license-reconcile, maven-bundle-plugin/2.4.0-2, siggen, stunnel4, systemd, x11proto-kb.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Vagrant Cascadian has set up a new armhf node using a Raspberry Pi 2. It should soon be added to the Jenkins infrastructure.

diffoscope development

diffoscope version 42 was released on November 20th. It adds a missing dependency on python3-pkg-resources and, to prevent similar regressions, another autopkgtest to ensure that the command line is functional when Recommends are not installed. Two more encoding related problems have been fixed (#804061, #805418). A missing Build-Depends on binutils-multiarch has also been added to make the test suite pass on architectures other than amd64.

Package reviews

180 reviews have been removed, 268 added and 59 updated this week.

70 new “fail to build from source” bugs have been reported by Chris West, Chris Lamb and Niko Tyni.

New issue this week: randomness_in_ocaml_preprocessed_files.


Jim MacArthur started to work on a system to rebuild and compare packages built on using .buildinfo and

On December 1-3rd 2015, a meeting of about 40 participants from 18 different free software projects will be held in Athens, Greece with the intent of improving the collaboration between projects, helping new efforts to be started, and brainstorming on end-user aspects of reproducible builds.

23 November, 2015 04:43PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

CDs should come with download codes

boxes of CDs & the same data on MicroSD


There's a Vinyl resurgence going on, with vinyl record sales growing year-on-year. Many of the people buying records don't have record players. Many records are sold including a download code, granting the owner an (often one-time) opportunity to download a digital copy of the album they just bought.

Some may be tempted to look down upon those buying vinyl records, especially those who don't have a means to play them. The record itself is, now more than ever, a physical totem rather than a media for the music. But is this really that different to how we've treated audio CDs this century?

For at least 15 years, I've ripped every CD I've bought and then stored it in a shoebox. (I'm up to 10 shoeboxes). The ripped copy is the only thing I listen to. The CD is little more than a totem, albeit one which I have to use in a relatively inconvenient ritual in order to get something I can conveniently listen to.

The process of ripping CDs has improved a lot in this time, but it's still a pain. CD-ROM drives are also becoming a lot more scarce. Ripping is not necessarily reliable, either. The best tool to verify a rip is AccurateRip, a privately-owned database of track checksums. The private status is a problem for the community (remember what happened to CDDB?) and it is only useful if other people using an AccurateRip-supported ripper have already successfully ripped the CD.

Then there are things like CD pre-emphasis. It turns out that the Red Book standard defines a rarely-used flag that means the CD (or individual tracks) have had pre-emphasis applied to the treble end of the frequency spectrum. The CD player is supposed to apply de-emphasis on playback. This doesn't happen if you fetch the audio data digitally, so it becomes the CD ripper's responsibility to handle this. CD rippers have only relatively recently grown support for it. Awareness has been pretty low, so low that nobody has a good idea about how many CDs actually have pre-emphasis set: it's thought to be very rare, but (as far as I know) MusicBrainz doesn't (yet) track it.

So some proportion of my already-ripped CDs may have actually been ripped incorrectly, and I can't easily determine which ones without re-ripping them all. I know that at least my Quake computer game CD has it set, and I have suspicions about some other releases.

Going forward, this could be avoided entirely if CDs were treated more like totems, as vinyl records are, than the media delivering the music itself, and if record labels routinely included download cards with audio CDs. For just about anyone, no matter how the music was obtained, media-less digital is the canonical form for engaging with it. Attention should also be paid to make sure that digital releases are of a high quality: but that's a topic for another blog post.

23 November, 2015 04:06PM

On BBC 6 Music

Back in July I had a question of mine read out on the Radcliffe and Maconie programme on BBC 6 Music. The pair were interviewing Stephen Morris of New Order and I took the opportunity to ask a question about backing vocals on the 1989 song "Run2". Here's the question and answer (318K MP3, 21s):

23 November, 2015 11:02AM

hackergotchi for Gergely Nagy

Gergely Nagy

Keyboard updates

Last Friday, I compiled a list of keyboards I'm interested in, and received a lot of incredible feedback, thank you all! This allowed me to shorten the list considerably, to basically two pieces. I'm reasonably sure by now which one I want to buy (both), but will spend this week calming down to avoid impulse-buying. My attention was also brought to a few keyboards originally not on my list, and I'll take this opportunity to present my thoughts on those too.

The Finalists

ErgoDox

Pros:
  • Great design, by the looks of it.
  • Mechanical keys.
  • Open source hardware and firmware, thus programmable.
  • Thumb keys.
  • Available as an assembled product, from multiple sources.

Cons:
  • Primarily a kit, but assembled available.
  • Assembled versions aren't as nice as home-made variants.


The keyboard looks interesting, primarily due to the thumb keys. From the ErgoDox EZ campaign, I'm looking at $270. That's friendly, and makes ErgoDox a viable option! (Thanks @miffe!)

There's also another option, FalbaTech, which ships sooner, I can customize the keyboard to some extent, and Poland is much closer to Hungary than the US. With this option, I'm looking at $205 + shipping, a very low price for what the keyboard has to offer. (Thanks @pkkolos for the suggestion!)

Keyboardio M01

Keyboardio Model 01

Pros:
  • Mechanical keyboard.
  • Hardwood body.
  • Blank and dot-only keycaps option.
  • Open source: firmware, hardware, and so on. Comes with a screwdriver.
  • The physical key layout has much in common with my TypeMatrix.
  • Numerous thumb-accessible keys.
  • A palm key, that allows me to use the keyboard as a mouse.
  • Fully programmable LEDs.
  • Custom macros, per-application even.

Cons:
  • Fairly expensive.
  • Custom keycap design, thus rearranging them physically is not an option, which leaves me with the blank or dot-only keycap options only.
  • Available late summer, 2016.


With shipping cost and whatnot, I'm looking at something in the $370 ballpark, which is on the more expensive side. On the other hand, I get a whole lot of bang for my buck: LEDs, two center bars (tripod mounting sounds really awesome!), hardwood body, and a key layout that is very similar to what I came to love on the TypeMatrix.

I also have a thing for wooden stuff. I like the look of it, the feel of it.

The Verdict

Right now, I'm seriously considering the Model 01, because even if it is about twice the price of the ErgoDox, it also offers a lot more: hardwood body (I love wood), LEDs, palm key. I also prefer the layout of the thumb keys on the Model 01.

The Model 01 also comes pre-assembled, looks stunning, while the ErgoDox pales a little in comparison. I know I could make it look stunning too, but I do not want to build things. I'm not good at it, I don't want to be good at it, I don't want to learn it. I hate putting things together. I'm the kind of guy who needs three tries to put together a set of IKEA shelves, and I'm not exaggerating. I also like the shape of the keys better on the Model 01.

Nevertheless, the ErgoDox is still an option, due to the price. I'd love to buy both, if I could. Which means that once I'm ready to replace my keyboard at work, I will likely buy an ErgoDox. But for home, Model 01 it is, unless something even better comes along before my next pay.

The Kinesis Advantage was also a strong contender, but I ended up removing it from my preferred options, because it doesn't come with blank keys, and is not a split keyboard. And similar to the ErgoDox, I prefer the Model 01's thumb-key layout. Despite all this, I'm very curious about the key wells, and want to try it someday.

Suggested options

Yogitype

Suggested by Andred Carter, a very interesting keyboard with a unique design.

Pros:
  • Portable, foldable.
  • Active support for forearm and hand.
  • Hands never obstruct the view.

Cons:
  • Not mechanical.
  • Needs a special inlay.
  • Best used for word processing, programmers may run into limitations.


I like the idea of the keyboard, and if it wouldn't need a special inlay, but used a small screen or something to show the keys, I'd like it even more. Nevertheless, I'm looking for a mechanical keyboard right now, which I can also use for coding.

But I will definitely keep the Yogitype in mind for later!

Matias Ergo Pro

Pros:
  • Mechanical keys.
  • Simple design.
  • Split keyboard.

Cons:
  • Doesn't seem to come with a blank keys option, nor in Dvorak.
  • No thumb key area.
  • Neither open source, nor open hardware.
  • I have no need for the dedicated undo, cut, paste keys.
  • Does not appear to be programmable.


This keyboard hardly meets any of my desired properties, and doesn't have anything standing out in comparison with the others. I had a quick look at it when compiling my original list, but it was quickly discarded. Nevertheless, people asked me why, so I'm including my reasoning here:

While it is a split keyboard, with a fairly simple design, it doesn't come in the layout I'd prefer, nor with blank keys. It lacks the thumb key area that ErgoDox and the Model 01 have, and which I developed an affection for.

Microsoft Sculpt Ergonomic Keyboard

Pros:
  • Numpad is a separate unit.
  • Reverse tilt.
  • Well positioned, big Alt keys.
  • Cheap.

Cons:
  • Not a split keyboard.
  • Not mechanical.
  • No blank or Dvorak option as far as I see.


This keyboard does not buy me much over my current TypeMatrix 2030. If I were looking for the cheapest possible ergonomic keyboard, this would be my choice - but only because of the price.

Truly Ergonomic Keyboard

Pros:
  • Mechanical.
  • Detachable palm rest.
  • Programmable firmware.

Cons:
  • Not a split keyboard.
  • Layouts are virtual only, the printed keycaps stay QWERTY, as far as I see.
  • Terrible navigation key setup.


Two important factors for me are physical layout and splittability. This keyboard fails both. While it is a portable device, that's not a priority for me at this time.

23 November, 2015 11:00AM by Gergely Nagy

hackergotchi for Thomas Goirand

Thomas Goirand

OpenStack Liberty and Debian

Long over due post

It’s been a long time I haven’t written here. And lots of things happened in the OpenStack planet. As a full time employee with the mission to package OpenStack in Debian, it feels like it is kind of my duty to tell everyone about what’s going on.

Liberty is out, uploaded to Debian

Since my last post, OpenStack Liberty, the 12th release of OpenStack, was released. In late August, Debian was the first platform which included Liberty, as I proudly outran both RDO and Canonical. So I was the first to make the announcement that Liberty passed most of the Tempest tests with the beta 3 release of Liberty (the Beta 3 is always kind of the first pre-release, as this is when feature freeze happens). Though I never made the announcement that Liberty final was uploaded to Debian, it was done just a single day after the official release.

Before the release, all of Liberty was living in Debian Experimental. Following the upload of the final packages in Experimental, I uploaded all of it to Sid. This represented 102 packages, so it took me about 3 days to do it all.

Tokyo summit

I had the pleasure to be in Tokyo for the Mitaka summit. I was very pleased with the cross-project sessions during the first day. Lots of these sessions were very interesting for me. In fact, I wish I could have attended them all, but of course, I can’t split myself in 3 to follow all of the 3 tracks.

Then there were the two sessions about Debian packaging on upstream OpenStack infra. The goal is to set up the OpenStack upstream infrastructure to allow packaging using Gerrit, and gating each git commit using the usual tools: building the package and checking there's no FTBFS, running checks like lintian, piuparts and such. I already knew the overview of what was needed to make it happen. What I didn't know were the implementation details, which I hoped we could figure out during the 1:30 slot. Unfortunately, this didn't happen as I expected, and we discussed more general things than I wished. I was told that just reading the docs from the infra team would be enough, but in reality, it was not. What currently needs to happen is building a Debian based image, using disk-image-builder, which would include the usual tools to build packages: git-buildpackage, sbuild, and so on. I'm still stuck at this stage, which would be trivial if I knew a bit more about how the upstream infra works, since I already know how to set up all of that on a local machine.

I've been told by Monty Taylor that he would help. Though he's always a very busy man, and to date, he still hasn't found enough time to give me a hand. Nobody replied to my request for help on the openstack-dev list either. Hopefully, with a bit of insistence, someone will help.

Keystone migration to Testing (aka: Debian Stretch) blocked by python-repoze.who

Absolutely all of OpenStack Liberty, as of today, has migrated to Stretch. All? No. Keystone is blocked by a chain of dependencies. Keystone depends on python-pysaml2, itself blocked by python-repoze.who. The latter I upgraded to version 2.2, but python-repoze.what depends on version <= 1.9, which is blocking the migration. Since python-repoze.who-plugins, python-repoze.what and python-repoze.what-plugins aren't used by any package anymore, I asked for them to be removed from Debian (see #805407). Until this request is processed by the FTP masters, Keystone, which is the most important piece of OpenStack (it does the authentication), will be blocked from migrating to Stretch.

New OpenStack server packages available

On my presentation at Debconf 15, I quickly introduced new services which were released upstream. I have since packaged them all:

  • Barbican (Key management as a Service)
  • Congress (Policy as a Service)
  • Magnum (Container as a Service)
  • Manila (Filesystem share as a Service)
  • Mistral (Workflow as a Service)
  • Zaqar (Queuing as a Service)

Congress, unfortunately, was not accepted to Sid yet, because of some licensing issues, especially with the doc of python-pulp. I will correct this (remove the non-free files) and reattempt an upload.

I hope to make them all available in jessie-backports (see below). For the previous release of OpenStack (ie: Kilo), I skipped the uploads of services which I thought were not really critical (like Ironic, Designate and more). But from the feedback of users, they would really like to have them all available. So this time, I will upload them all to the official jessie-backports repository.

Keystone v3 support

For those who don't know about it, Keystone API v3 means that, on top of users and tenants, there's a new entity called a “domain”. All of Liberty now comes with Keystone v3 support. This includes the automated Keystone catalog registration done using debconf for all *-api packages. As much as I could tell by running tempest on my CI, everything still works pretty well. In fact, Liberty is, in my experience, the first release of OpenStack to support Keystone API v3.

Uploading Liberty to jessie-backports

I have rebuilt all of Liberty for jessie-backports on my laptop using sbuild. This is more than 150 packages (166 packages currently). It took me about 3 days to rebuild them all, including unit tests run at build time. As soon as #805407 is closed by the FTP masters, all that's remaining will be available in Stretch (mostly Keystone), and the upload will be possible. As there will be a lot of NEW packages (from the point of view of backports), I do expect that the approval will take some time. Also, I have to warn the original maintainers of the packages that I don't maintain (for example, those maintained within the DPMT) that, because of the big number of packages, I will not be able to send the usual communication to tell them that I'm uploading to backports. If you see a package below that you maintain, and that you wish to upload the backport of yourself, please let me know. Here's the hopefully exhaustive list of packages that I will upload to jessie-backports, and that I don't maintain myself:

alabaster contextlib2 kazoo python-cachetools python-cffi python-cliff python-crank python-ddt python-docker python-eventlet python-git python-gitdb python-hypothesis python-ldap3 python-mock python-mysqldb python-pathlib python-repoze.who python-setuptools python-smmap python-unicodecsv python-urllib3 requests routes ryu sphinx sqlalchemy turbogears2 unittest2 zzzeeksphinx.
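
For the curious, the rebuild loop itself is nothing fancy; it is roughly the sketch below, where the chroot name, mirror and source package are only examples, and jessie-backports needs to be enabled inside the chroot so that already-backported build-dependencies can be picked up:

# one-time setup: create a jessie build chroot
sudo sbuild-createchroot jessie /srv/chroot/jessie-amd64-sbuild http://httpredir.debian.org/debian
# then rebuild each source package against it, targeting jessie-backports
sbuild -d jessie-backports -c jessie-amd64-sbuild python-pysaml2_*.dsc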

More than ever, I wish I could just upload these to a PPA^W Bikeshed, to minimize the disruption for the backports FTP masters, the other maintainers, and our OpenStack users. Hopefully, Bikesheds will be available soon. I am sorry to create that much approval work for the backports FTP masters; however, running the latest release on the latest stable system is what most OpenStack users really want to do. All other major distributions have specific repositories too (i.e. RDO for CentOS / Red Hat, and the Cloud Archive for Ubuntu), and stable-backports is currently the only place where I can upload support for the stable release.

Debian listed as a supported distribution on the OpenStack website

Good news! If you look at the list of supported distributions on the OpenStack website, you will see that Debian is now listed there. I am proud to say that, after 6 months of lobbying from my side, Debian made it onto that list. The process of getting Debian there included talking with folks from the OpenStack foundation, and having Bdale sign an agreement so that the Debian logo could be reproduced there. Thanks to Bdale Garbee, Neil McGovern, Jonathan Bryce, and Danny Carreno, without whom this wouldn’t have happened.

23 November, 2015 08:30AM by Goirand Thomas

November 21, 2015

hackergotchi for Bálint Réczey

Bálint Réczey

Wireshark 2.0 switched default UI to Qt in unstable

With the latest release, the Wireshark project decided to make the Qt GUI the default interface. In line with Debian’s Policy, the packages shipped by Debian also switched the default GUI to minimize the difference from upstream. The GTK+ interface, which was the previous default, is still available from the wireshark-gtk package.
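
If you want the old interface back, something along these lines should do the trick (both package names are the ones mentioned above; use your preferred package manager):

# the wireshark package now defaults to the Qt front-end
sudo apt-get install wireshark
# the previous GTK+ front-end is still packaged separately
sudo apt-get install wireshark-gtk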

You can read more about the new 2.0.0 release in the release notes or on the Wireshark Blog featuring some of the improvements.

Happy sniffing!

Update: Wireshark 2.0.0 will be available from testing and jessie-backports in a week. Ubuntu users can already download binary packages from the Wireshark stable releases PPA maintained by the Wireshark project (including me :-)).

21 November, 2015 10:54PM by Réczey Bálint

hackergotchi for Jonathan McDowell

Jonathan McDowell

Updating a Brother HL-3040CN firmware from Linux

I have a Brother HL-3040CN networked colour laser printer. I bought it 5 years ago and I kinda wish I hadn’t. I’d done the appropriate research to confirm it worked with Linux, but I didn’t realise it only worked via a 32-bit binary driver. It’s the only reason I have 32-bit enabled on my house server, and I really wish I’d either bought a GDI printer that had an open driver (Samsung were great for this in the past) or something that did PCL or PostScript (my parents have a Xerox Phaser that Just Works). However, I don’t print much (still just on my first set of toner) and once set up, the driver hasn’t needed much kicking.

A more major problem comes with firmware updates. Brother only ship update software for Windows and OS X. I have a Windows VM but the updater wants the full printer driver setup installed and that seems like overkill. I did a bit of poking around and found reference in the service manual to the ability to do an update via USB and a firmware file. Further digging led me to a page on resurrecting a Brother HL-2250DN, which discusses recovering from a failed firmware flash. It provided a way of asking the Brother site for the firmware information.

First I queried my printer details:

$ snmpwalk -v 2c -c public hl3040cn.local iso.
iso. = STRING: "MODEL=\"HL-3040CN series\""
iso. = STRING: "SPEC=\"0001\""
iso. = STRING: "FIRMID=\"MAIN\""
iso. = STRING: "FIRMVER=\"1.11\""
iso. = STRING: "FIRMVER=\"1.02\""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""

I used that to craft an update file which I sent to Brother via curl:

curl -X POST -d @hl3040cn-update.xml -H "Content-Type:text/xml" --sslv3

This gave me back some XML with a URL for the latest main firmware, version 1.19, filename LZ2599_N.djf. I downloaded that and took a look at it, discovering it looked like a PJL file. I figured I’d see what happened if I sent it to the printer:

cat LZ2599_N.djf | nc hl3040cn.local 9100

The LCD on the front of printer proceeded to display something like “Updating Program” and eventually the printer re-DHCPed and indicated the main firmware had gone from 1.11 to 1.19. Great! However the PCLPS firmware was still at 1.02 and I’d got the impression that 1.04 was out. I didn’t manage to figure out how to get the Brother update website to give me the 1.04 firmware, but I did manage to find a copy of LZ2600_D.djf which I was then able to send to the printer in the same way. This led to:

$ snmpwalk -v 2c -c public hl3040cn.local iso.
iso. = STRING: "MODEL=\"HL-3040CN series\""
iso. = STRING: "SPEC=\"0001\""
iso. = STRING: "FIRMID=\"MAIN\""
iso. = STRING: "FIRMVER=\"1.19\""
iso. = STRING: "FIRMVER=\"1.04\""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""

Cool, eh?

[Disclaimer: This worked for me. I’ve no idea if it’ll work for anyone else. Don’t come running to me if you brick your printer.]

21 November, 2015 01:27PM

November 20, 2015

hackergotchi for Jonathan Dowland

Jonathan Dowland


It's been at least a year since I last did any work on Debian, but this week I finally uploaded a new version of squishyball, an audio sample comparison tool, incorporating a patch from Thibaut Girka which fixes the X/X/Y test method. Shamefully, Thibaut's patch is nearly a year old too. Better late than never...

I've also uploaded a new version of smartmontools which updates the package to the new upstream version. I'm not the regular maintainer for this package, but it is in the set of packages covered by the collab-maint team. To be polite I uploaded it to DELAYED-7, so it will take a week to hit unstable. I've temporarily put a copy of the package here in the meantime.
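
For reference, a rough sketch of what such a delayed upload looks like with dput; the host alias comes from the stock Debian dput configuration, the .changes file name is only illustrative, and you should check that your dput version supports --delayed:

# DELAYED/7 is the 7-day deferred queue; the upload reaches unstable a week later
dput --delayed 7 ftp-master smartmontools_6.4-1_amd64.changes
# (if needed, a deferred upload can still be cancelled with dcut before it lands)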

20 November, 2015 09:08PM

John Goerzen

I do not fear

I am so saddened by the news this week. The attacks in Paris, Beirut, and Mali. The reaction of fear, anger, and hate. Governors racing to claim they will keep out refugees, even though they lack the power to do so. Congress voting to keep out refugees.

Emotions are a powerful thing. They can cause people to rise up and accomplish stunning things that move humanity forward. And they can move us back. Fear, and the manipulation of it, is one of those.

What have I to fear?

Even if the United States accepted half a million Syrian refugees tomorrow, I would be far more likely to die in a car accident than at the hands of a Syrian terrorist. I am a careful and cautious person, but I understand that life is not lived unless risk is balanced. I know there is a risk of being in a car crash every time I drive somewhere — but if that kept me at home, I would never see my kids’ violin concert, the beautiful “painted” canyon of Texas, or the Flint Hills of Kansas. So I drive smart and carefully, but I still drive without fear. I accept this level of risk as necessary to have a life worth living in this area (where there are no public transit options and the nearest town is miles away).

I have had pain in my life. I’ve seen grandparents pass away, I’ve seen others with health scares. These things are hard to think about, but they happen to us all at some point.

What have I to fear?

I do not fear giving food to the hungry, shelter to the homeless, comfort to those that have spent the last years being shot at. I do not fear helping someone that is different than me. If I fail to do these things for someone because of where they come from or what their holy book is, then I have become less human. I have become consumed by fear. I have let the terrorists have control over my life. And I refuse to do that.

If governors really wanted to save lives, they would support meaningful mass transit alternatives that would prevent tens of thousands of road deaths a year. They would support guaranteed health care for all. They would support good education, science-based climate change action, clean water and air, mental health services for all, and above all, compassion for everyone.

By supporting Muslim registries, we look like Hitler to them. By discriminating against refugees based on where they’re from or their religion, we support the terrorists, making it easy for them to win hearts and minds. By ignoring the fact that entering the country as a refugee takes years, as opposed to entering as a tourist taking only minutes, we willfully ignore the truth about where dangers lie.

So what do I have to fear?

Only, as the saying goes, fear. Fear is making this country turn its backs on the needy. Fear is making not just the US but much of Europe turn its backs on civil liberties and due process. Fear gives the terrorists control, and that helps them win.

I refuse. I simply refuse to play along. No terrorist, no politician, no bigot gets to steal MY humanity.

Ultimately, however, I know that the long game is not one of fear. The arc of the universe bends towards justice, and ultimately, love wins. It takes agonizingly long sometimes, but in the end, love wins.

So I do not fear.

20 November, 2015 07:22PM by John Goerzen

hackergotchi for Daniel Pocock

Daniel Pocock

Databases of Muslims and homosexuals?

One US presidential candidate has said a lot recently, but the comments about making a database of Muslims may qualify as the most extreme.

Of course, if he really wanted to, somebody with this mindset could find all the Muslims anyway. A quick and easy solution would involve tracing all the mobile phone signals around mosques on a Friday. Mr would-be President could compel Facebook and other social networks to disclose lists of users who identify as Muslim.

Databases are a dangerous side-effect of gay marriage

In 2014 there was significant discussion about Brendan Eich's donation to the campaign against gay marriage.

One fact that never ranked very highly in the debate at the time is that not all gay people actually support gay marriage. Even where these marriages are permitted, not everybody who can marry now is choosing to do so.

The reasons for this are varied, but one key point that has often been missed is that there are two routes to marriage equality: one involves permitting gay couples to visit the register office and fill in a form just as other couples do. The other route to equality is to remove all the legal artifacts around marriage altogether.

When the government does issue a marriage certificate, it is not long before other organizations start asking for confirmation of the marriage. Everybody from banks to letting agents and Facebook wants to know about it. Many companies outsource that data into cloud CRM systems such as Salesforce. Before you know it, there are numerous databases that somebody could mine to make a list of confirmed homosexuals.

Of course, if everybody in the world was going to live happily ever after none of this would be a problem. But the reality is different.

While discrimination, whether against Muslims or homosexuals, is prohibited and can even lead to criminal sanctions in some countries, this attitude is not shared globally. Once gay people have their marriage status documented in a frequent flyer or hotel loyalty program, or in the public part of their Facebook profile, there are various countries where they are going to be at much higher risk of prosecution/persecution. The equality to marry in the US or UK may mean they have less equality when choosing travel destinations.

Those places are not as obscure as you might think: even in Australia, regarded as a civilized and laid-back western democracy, the state of Tasmania fought tooth-and-nail to retain the criminalization of virtually all homosexual conduct until 1997 when the combined actions of the federal government and high court compelled the state to reform. Despite the changes, people with some of the most offensive attitudes are able to achieve and retain a position of significant authority. The same Australian senator who infamously linked gay marriage with bestiality has successfully used his position to set up a Senate inquiry as a platform for conspiracy theories linking Halal certification with terrorism.

There are many ways a database can fall into the wrong hands

Ironically, one of the most valuable lessons about the risk of registering Muslims and homosexuals was an injustice against the very same Tea Party supporters a certain presidential candidate is trying to woo. In 2013, it was revealed that IRS employees had started applying a different process to discriminate against groups with "Tea Party" in their name.

It is not hard to imagine other types of rogue or misinformed behavior by people in positions of authority when they are presented with information that they don't actually need about somebody's religion or sexuality.

Beyond this type of rogue behavior by individual officials and departments, there is also the more sinister prospect that somebody truly unpleasant is elected into power and can immediately use things like a Muslim database, surveillance data or the marriage database for a program of systematic discrimination. France had a close shave with this scenario in the 2002 presidential election, when Jean-Marie Le Pen, who has at least six convictions for racism or inciting racial hatred, made it to the final round in a two-candidate run-off with Jacques Chirac.

The best data security

The best way to be safe, wherever you go, both now and in the future, is not to have data about yourself in any database. When filling out forms, think need-to-know. If some company doesn't really need your personal mobile number, your date of birth, your religion or your marriage status, don't give it to them.

20 November, 2015 06:02PM by Daniel.Pocock

hackergotchi for Gergely Nagy

Gergely Nagy

Looking for a keyboard

Even though I spend more time staring at the screen than typing, there are times when I - after lots and lots of prior brain work - sit down and start typing, a lot. A couple of years ago, I started to feel pain in my wrists, and there were multiple occasions when I had to completely stop writing for longer periods of time. These were situations I obviously did not want repeated, so I started to look for remedies. First, I bought a new keyboard, a TypeMatrix 2030, which, while not ergonomic, was a huge relief for my hands and wrists. I also started to learn Dvorak, but that's still something that is kind-of in progress: my left hand can write Dvorak reasonably fast, but my right one seems to be Qwerty-wired, even after a month of typing Dvorak almost exclusively.

This keyboard has served me well for the past five years or so. But recently, I started to look for a replacement, partly triggered by a Clojure/conj talk I watched. I got as far as assembling a list of keyboards I'm interested in, but I have a hard time choosing. This blog post serves two purposes then: first, to make a clear pros/cons list for myself; second, to solicit feedback from others who may have more experience with any of the options below.

Update: There is a follow-up post, with a few more keyboards explored, and a semi-final verdict. Thanks everyone for the feedback and help, much appreciated!

Let's start with the current keyboard!

TypeMatrix 2030

Pros:
  • The Matrix architecture, with straight vertical key columns has been incredibly convenient.
  • Enter and Backspace in the middle, both large: loving it.
  • Skinnable (easier to clean, and aids in learning a new layout).
  • Optional dvorak skin, and a hardware Dvorak switch.
  • The layout (cursor keys, home/end, page up/down, etc) is something I got used to very fast.
  • Multimedia keys close by with Fn.
  • Small, portable, lightweight - ideal for travel.

Cons:
  • Small: while also a feature, this is a downside too. Shoulder position is not ideal.
  • Skins: while they are a terrific aid when learning a new layout, and make cleaning a lot easier, they wear off quickly. Sometimes fingernails are left to grow too long, and that doesn't do the skin any good. One of my two QWERTY skins has a few holes already, sadly.
  • Not a split keyboard, which is starting to feel undesirable.


All in all, this is a keyboard I absolutely love, and am very happy with. Yet, I feel I'm ready to try something different. With my skins aging, and the aforementioned Clojure/conj talk, the desire to switch has been growing for a while now.

Desired properties

There are a few desired properties of the keyboard I want next. The perfect keyboard need not have all of these, but the more the merrier.

  • Ergonomic design.
  • Available in Dvorak, or with blank keys.
  • Preferably a split keyboard, so I can position the two parts as I see fit.
  • Ships to Hungary, or Germany, in a reasonable time frame. (If all else fails, shipping to the US may work too, but I'd rather avoid going through extra hoops.)
  • Mechanical keys preferred. But not the loud clicky type: I work in an office; and at home, I don't want to wake my wife either.

I plan to buy one keyboard for a start, but may end up buying another to bring to work (like I did with the TypeMatrix, except my employer at the time bought the second one for me). At work, I will continue using the TypeMatrix, most likely, but I'm not sure yet.

Anyhow, there are a number of things I do with my computer that require a keyboard:

  • I write code, a considerable amount.
  • I write prose, even more than code. Usually in English, sometimes in Hungarian.
  • I play games. Most of them, with a dedicated controller, but there are some where I use the keyboard a lot.
  • I browse the web, listen to music, and occasionally edit videos.
  • I multi-task all the time.
  • 90% of my time is spent within Emacs (recently switched to Spacemacs).
  • I hate the mouse, with a passion. Trackballs, trackpoints and touchpads even more. If I can use my keyboard to do mouse-y stuff well enough to control the browser, and do some other things that do not require precise movement (that is, not games), I'll be very happy.

I am looking for a keyboard that helps me do these things. A keyboard that will stay with me not for five years or a decade, but pretty much forever.

The options

Ultimate Hacking Keyboard

Pros:
  • Split keyboard.
  • Mechanical keys (with a quiet option).
  • Ships to Hungary. Made in Hungary!
  • Optional addons: three extra buttons and a small trackball for the left side, and a trackball for the right side. While I'm not a big fan of the mouse, the primary reason is that I have to move my hand. If it's in the middle, that sounds much better.
  • Four layers of the factory keymap: I love the idea of these layers, especially the mouse layer.
  • Programmable, so I can define any layout I want.
  • Open source firmware, design and agent!
  • An optional palm rest is available as well.
  • Blank option available.

Cons:
  • Likely not available before late summer, 2016.
  • No thumb keys.
  • Space/Mod arrangement feels alien.
  • The LED area is useless to me, and bothers my eye. Not a big deal, but still.
  • While thumb keys are available for the left side, not so for the right one. I'd rather have keys there than a trackball. The only reason I'd want the $50 addon set, is the left thumb-key module (which also seems to have a trackpoint, another pointless gadget).


The keyboard looks nice, has a lot of appealing features. It is programmable, so much so that by the looks of it, I could emulate the hardware dvorak switch my TypeMatrix has. However, I'm very unhappy with the addons, so there's that too.

All in all, this would cost me about $304 (base keyboard, modules, palm rest and shipping). Not too bad, certainly a strong contender, despite the shortcomings.

ErgoDox

Pros:
  • Great design, by the looks of it.
  • Mechanical keys.
  • Open source hardware and firmware, thus programmable.
  • Thumb keys.
  • Available via ErgoDox EZ as an assembled product.

Cons:
  • Primarily a kit, but assembled available.
  • Not sure when it'd ship (December shipments are sold out).


The keyboard looks interesting, primarily due to the thumb keys. From the ErgoDox EZ campaign, I'm looking at $270. That's friendly, and makes ErgoDox a viable option! (Thanks @miffe!)

Kinesis Advantage

Pros:
  • Mechanical keys, Cherry-MX brown.
  • Separate thumb keys.
  • Key wells look interesting.
  • Available right now.
  • QWERTY/Dvorak layout available.

Cons:
  • Not a split keyboard.
  • Not open source, neither hardware, nor firmware.
  • Shipping to Hungary may be problematic.
  • The QWERTY/Dvorak layout is considerably more expensive.
  • Judging by some of the videos I saw, keys are too loud.


The key wells look interesting, but it's not a split keyboard, nor is it open source. The cost comes out at about $325 plus shipping and VAT and so on, so I'm probably looking at something closer to $400. Nah. I'm pretty sure I can rule this out.

Kinesis FreeStyle2

Pros:
  • Split keyboard.
  • Available right now.
  • Optional accessory, to adjust the slope of the keyboard.

Cons:
  • Not open source, neither hardware, nor firmware.
  • Doesn't seem to be mechanical.
  • Shipping to Hungary may be problematic.
  • No Dvorak layout.
  • No thumb keys.


While a split keyboard, at a reasonably low cost ($149 + shipping + VAT), it lacks too many things to be considered a worthy contender.

Maltron

Pros:
  • Mechanical keyboard.
  • Key wells.
  • Thumb keys.
  • Built in palm rest.
  • Available in Dvorak too.

Cons:
  • Not a split keyboard.
  • The center numeric area looks weird.
  • Not sure about programmability.
  • Not open source.
  • Expensive.


Without shipping, I'm looking at £450. That's a very steep price. I love the wells, and the thumb keys, but it's not split, and customisability is a big question here.

Atreus

Pros:
  • Sleek, compact design.
  • No keycaps.
  • Mechanical keyboard.
  • Open source firmware.
  • More keys within thumbs reach.
  • Available right now.

Cons:
  • Ships as a DIY kit.
  • Not a split keyboard.


While not a split keyboard, it does look very interesting, and the price is much lower than the rest: $149 + shipping ($50 or so). It is similar - in spirit - to my existing TypeMatrix. It wouldn't take much to get used to, and is half the price of the alternatives. A strong option, for sure.

Keyboardio Model 01

Pros:
  • Mechanical keyboard.
  • Hardwood body.
  • Blank and dot-only keycaps option.
  • Open source: firmware, hardware, and so on. Comes with a screwdriver.
  • The physical key layout has much in common with my TypeMatrix.
  • Numerous thumb-accessible keys.
  • A palm key, that allows me to use the keyboard as a mouse.
  • Fully programmable LEDs.
  • Custom macros, per-application even.

Cons:
  • Fairly expensive.
  • Custom keycap design, thus rearranging them physically is not an option, which leaves me with the blank or dot-only keycap options only.
  • Available late summer, 2016.


With shipping cost and whatnot, I'm looking at something in the $370 ballpark, which is on the more expensive side. On the other hand, I get a whole lot of bang for my buck: LEDs, two center bars (tripod mounting sounds really awesome!), hardwood body, and a key layout that is very similar to what I came to love on the TypeMatrix.

I also have a thing for wooden stuff. I like the look of it, the feel of it.

The Preference List

After writing this all up, I think I prefer the Model 01, but the UHK and ErgoDox come close too!

The UHK is cheaper, but not by a large margin. It lacks the thumb keys and the palm key the M01 has. It also looks rather dull (sorry). They'd both ship at about the same time, but the M01 is already funded, while the UHK is not (mind you, there's a pretty darn high chance it will be).

The ErgoDox has thumb keys, is a split keyboard, and is open source. Compared to the UHK, we get the thumb keys, and less distraction, for a better price. But the case is not as nice. Compared to the Model 01: no LEDs or center bar, and an inferior case. But a much better price, which is an important factor too.

Then, there's the Atreus. While it's a DIY kit, it is much more affordable than the rest, and I could have it far sooner. Yet... it doesn't feel like a big enough switch from my current keyboard. I might as well continue using the TypeMatrix then, right?

The rest, I ruled out earlier, while I was reviewing them anyway.

So, the big question is: should I invest close to $400 into a keyboard that looks stunning, and will likely grow old with me? Or should I give up some of the features, and settle for one of the $300 ones, which will also grow old with me? Or is there an option I did not consider, that may match my needs and preferences better?

If you, my dear reader, got this far, and have a suggestion, please either tweet at me, or write an email, or reach me over any other medium I am reachable at (including IRC, hanging out as algernon on FreeNode and OFTC).

Thank you in advance, to all of you who contact me, and help me choose a keyboard!

20 November, 2015 05:45PM by Gergely Nagy

Sylvain Beucler

Rebuilding Android proprietary SDK binaries

Going back to Android recently, I saw that all tool binaries from the Android project are now click-wrapped by a quite ugly proprietary license, which includes, among other things, an anti-fork clause (details below). Apparently those T&C are years old, but the click-wrapping is newer.

This applies to the SDK, the NDK, Android Studio, and all the essentials you download through the Android SDK Manager.

Since I keep my hands clean of smelly EULAs, I'm working on rebuilding the Android tools I need.
We're talking about hours-long, quad-core + 8GB-RAM + 100GB-disk-eating builds here, so I'd like to publish them as part of a project that cares.
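
To give an idea of what such a rebuild involves, here is a rough sketch of a from-source SDK build out of an AOSP checkout; the branch name and job counts are only examples, and the exact build targets differ between SDK versions:

# fetch the AOSP source tree for the release to rebuild (branch is an example)
repo init -u https://android.googlesource.com/platform/manifest -b android-4.2.2_r1
repo sync -j4
# set up the build environment and build the SDK
source build/envsetup.sh
lunch sdk-eng
make -j4 sdk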

As a proof-of-concept, the Replicant project ships a 4.2 SDK and I contributed build instructions for ADT and NDK (which I now use daily).

(Replicant is currently stuck to a 2013 code base though.)

I also have in-progress instructions on my hard-drive to rebuild various newer versions of the SDK/API levels, and for the NDK whose releases are quite hard to reproduce (no git tags, requires fixes committed after the release, updates are partial rebuilds, etc.) - not to mention that Google doesn't publish the source code until after the official release (closed development) :/ And in some cases like Android Support Repository [not Library] I didn't even find the proper source code, only an old prebuilt.

Would you be interested in contributing, and would you recommend a structure that would promote Free, rebuilt Android *DK?

The legalese

Anti-fork clause:

3.4 You agree that you will not take any actions that may cause or result in the fragmentation of Android, including but not limited to distributing, participating in the creation of, or promoting in any way a software development kit derived from the SDK.

So basically the source is Apache 2 + GPL, but the binaries are non-free. By the way this is not a GPL violation because right after:

3.5 Use, reproduction and distribution of components of the SDK licensed under an open source software license are governed solely by the terms of that open source software license and not this License Agreement.

Still, AFAIU, by clicking "Accept" to get the binary, you accept the non-free "Terms and Conditions".

(Incidentally, if Google wanted SDK forks to spread and increase fragmentation, introducing an obnoxious EULA is probably the first thing I'd have recommended. What was its legal team thinking?)

Indemnification clause:

12.1 To the maximum extent permitted by law, you agree to defend, indemnify and hold harmless Google, its affiliates and their respective directors, officers, employees and agents from and against any and all claims, actions, suits or proceedings, as well as any and all losses, liabilities, damages, costs and expenses (including reasonable attorneys fees) arising out of or accruing from (a) your use of the SDK, (b) any application you develop on the SDK that infringes any copyright, trademark, trade secret, trade dress, patent or other intellectual property right of any person or defames any person or violates their rights of publicity or privacy, and (c) any non-compliance by you with this License Agreement.

Usage restriction:

3.1 Subject to the terms of this License Agreement, Google grants you a limited, worldwide, royalty-free, non-assignable and non-exclusive license to use the SDK solely to develop applications to run on the Android platform.

3.3 You may not use the SDK for any purpose not expressly permitted by this License Agreement. Except to the extent required by applicable third party licenses, you may not: (a) copy (except for backup purposes), modify, adapt, redistribute, decompile, reverse engineer, disassemble, or create derivative works of the SDK or any part of the SDK; or (b) load any part of the SDK onto a mobile handset or any other hardware device except a personal computer, combine any part of the SDK with other software, or distribute any software or device incorporating a part of the SDK.

If you know the URLs, you can still direct-download some of the binaries which don't embed the license, but all this feels fishy. GNU licensing didn't answer me (yet). Maybe debian-legal has an opinion?

In any case, the difficulty to reproduce the *DK builds is worrying enough to warrant an independent rebuild.

Did you notice this?

20 November, 2015 02:18PM

No to ACTA - Paris

Today, there were events all around Europe to block ACTA.

In Paris, the protest started at Place de la Bastille.

APRIL was present, notably with its president Lionel Allorge and two members who wore the traditional anti-DRM suit.

Jérémie Zimmermann from La Quadrature du Net gave a speech and urged people to contact their representatives, in addition to protesting in the street.

The protest was cheerful and free of violence.

It got decent media coverage.

Notable places it crossed include Place des Victoires and Palais Royal, where it ended.

The next protest is in two weeks, on March 10th. Update your agenda!

20 November, 2015 02:18PM