July 01, 2016

Thorsten Alteholz

My Debian Activities in June 2016

FTP assistant

This month I marked 233 packages for accept and rejected 29. I also sent 11 emails to maintainers asking questions. Currently there are 33 packages in NEW and the minimum this week has been as low as 24 packages. Come on you fellow developers, where are your packages? I am sure you can do better :-) .

Debian LTS

This was my twenty-fourth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 18.75h. This resulted in patches for 13 CVEs and the following uploads:

  • [DLA 522-1] python2.7 security update
  • [DLA 533-1] php5 security update
  • [DLA 534-1] libgd2 security update
  • [DLA 536-1] wget security update

I also looked at mxml and libstruts1.2-java and marked their CVEs as “no-dsa”. I reviewed a patch by Salvatore for an embargoed CVE in xerces-c. Last but not least I looked at the remaining two CVEs for asterisk, but was not really able to create working patches …

This month I called again for testing php5. Thanks a lot to Stefan and anybody else who sent in their reports! As there are already new CVEs for php5 available, I am afraid I need your support again in July …

This month I also had another stint of frontdesk work, answering questions and triaging CVEs to determine which are important for Wheezy LTS and which can be ignored.

Other stuff

I made some progress with the Alljoyn framework. Up to now the following packages are available:

  • alljoyn-core-1504
  • alljoyn-core-1509
  • alljoyn-core-1604
  • alljoyn-gateway-1504
  • alljoyn-services-1504
  • alljoyn-services-1509
  • alljoyn-thin-client-1504
  • alljoyn-thin-client-1509
  • alljoyn-thin-client-1604
  • duktape

Unfortunately, as some of these modules have not yet been released upstream in their current versions, there are some gaps.

Anyway, the next uploads will include an XMPP connector to bridge a local AllJoyn bus to a remote AllJoyn bus over XMPP. Further, with the lighting module, real lamps can be switched on and off, and much more. The Home Appliances and Entertainment Service Framework also looks interesting.

In the Javascript world I uploaded some new packages …

  • node-strip-ansi
  • node-lodash-compat
  • node-has-flag
  • node-errs
  • node-ejs
  • node-absolute-path

… and uploaded new versions for the following packages:

  • node-base62
  • node-array-flatten
  • node-eventsource
  • node-xmlhttprequest-ssl
  • node-wrappy

01 July, 2016 08:45PM by alteholz

hackergotchi for Joachim Breitner

Joachim Breitner

When to reroll a six

This is a story about counterintuitive probabilities and how a small bit of doubt turned out to be very justified.

It begins with the game “To Court the King” (German: „Um Krone und Kragen“). It is a nice game with dice and cards, where you start with a few dice, and use your dice rolls to buy additional cards, which give you extra dice or special powers to modify the dice that you rolled. You can actually roll your dice many times, but every time, you have to set aside at least one die, which you can no longer change or reroll, until eventually all dice have been set aside.

A few years ago I played this game a lot, both online (on yucata.de) and in real life. It soon became apparent that it is almost always better to go for the cards that give you an extra die than for those that let you modify the dice. Intuitively, this is because every additional die allows you to re-roll your dice once more.

I concluded that if I have a certain number of dice (say, n) and want as high a sum as possible at the end, it may make sense to reroll as many dice as possible, setting aside only those showing a 6 (because that is the best you can get) or, if no die shows a 6, a single die with the best score. Except for small numbers of dice (2 or 3), where even a 4 or 5 is worth keeping, this seemed to be a simple, obvious and correct strategy to maximize the expected outcome of this simplified game.

It is definitely simple and obvious. But some doubt about whether it was correct remained. Having one more die still in the game (i.e. not set aside) definitely improves your expected score, because you can reroll the dice more often. How large is this advantage? What if it ever exceeds 6? Then it would make sense to reroll a 6. The thought was strange, but I could not dismiss it.

So I did what one does these days if one has a question: I posed it on the mathematics site of StackExchange. That was January 2015, and nothing happened.

I tried to answer it myself a month later, or at least work towards an answer, and did so by brute force. Using a library for probabilistic calculations in Haskell, I could write some code that simply calculated the various expected values for n dice, up to n = 9 (beyond that, my unoptimized code would take too long):

1:  3.50000 (+3.50000)
2:  8.23611 (+4.73611)
3: 13.42490 (+5.18879)
4: 18.84364 (+5.41874)
5: 24.43605 (+5.59241)
6: 30.15198 (+5.71592)
7: 35.95216 (+5.80018)
8: 41.80969 (+5.85753)
9: 47.70676 (+5.89707)

The result supported the hypothesis that there is no point in rerolling a 6: the value of an additional die grows and approaches 6 from below, but – judging from these numbers – is never going to reach it.

Then again nothing happened. Until 14 months later, when one Byron Schmuland came along, found this an interesting puzzle, and set out a 500-point bounty for whoever solved the problem. This attracted a bit of attention, and a few not very successful attempts at solving it. Eventually it reached Twitter, where Roman Cheplyaka linked to it.

I do not know if the tweet made a difference, but a day later one joriki came along with a very good idea: why not make our lives easier and think about dice with fewer sides, looking at 3 instead of 6? This way, and using a more efficient implementation, he could do a similar calculation for up to 50 dice. And it was very lucky that he went to 50 and not just 25, because up to 27 the results were very much as expected, approaching the value of +3 from below. But then it surpassed +3 and became +3.000000008463403.

In other words: if you roll 28 dice and have exactly two dice showing a 3, you get a better expected score if you set aside only one of them, not both. The advantage is minuscule, but that does not matter – it is there.

From then on, the results behaved strangely. Between 28 and 34 dice, the additional value was larger than 3; from 35 on, it was again lower than 3. It oscillated. Something similar could be observed when the game is played with coins.
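These numbers (and joriki's fewer-sided variant) can be reproduced with a small dynamic programme. The only state needed is the number of dice still in play, because if you decide to set aside k dice from a roll, it is always best to set aside the k highest. Below is a sketch in Python (my own reconstruction, not the original Haskell code):

```python
from collections import Counter
from itertools import combinations_with_replacement
from math import factorial

def expected_values(max_n, sides=6):
    """E[n] = expected final sum of n dice under optimal play:
    after each roll, keep the k highest dice for the best k >= 1."""
    E = [0.0]
    for n in range(1, max_n + 1):
        total = 0.0
        for roll in combinations_with_replacement(range(1, sides + 1), n):
            weight = factorial(n)              # multinomial weight of this
            for c in Counter(roll).values():   # unordered roll
                weight //= factorial(c)
            vals = sorted(roll, reverse=True)
            acc, best = 0, float("-inf")
            for k in range(1, n + 1):          # keep the k highest dice,
                acc += vals[k - 1]             # reroll the remaining n - k
                best = max(best, acc + E[n - k])
            total += weight * best
        E.append(total / sides ** n)
    return E
```

`expected_values(9)` reproduces the table above; `expected_values(35, sides=3)` lets you watch the increment E[n] − E[n−1] creep above 3 around n = 28 and drop back below it later.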

Eventually, joriki improved his code and applied enough tricks so that he could solve it for the 6-sided die as well: the difference between the expected values of having 198 dice and having 199 dice is larger than 6 (by 10⁻²¹…)!

The optimizations that allowed him to calculate these numbers in a reasonable amount of time unfortunately relied on assuming my original hypothesis (that never rerolling a 6 is optimal), which holds only for n < 199. This meant that for n ≥ 199 the code did not yield correct results.

What is the moral of the story? Don’t trust common sense when it comes to statistics; don’t judge a sequence just from a few initial numbers; and if you have an interesting question, post it online and wait 16 months.

01 July, 2016 07:47PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Leo 'costela' Antunes

Leo 'costela' Antunes

Yet another letsencrypt (ACME) client

Well, I apparently joined the hordes of people writing ACME (the protocol behind Let’s Encrypt) clients.

Like Goldilocks in the fairy tale, I couldn’t find a client in the sweet spot between minimalistic and full-featured for my needs: acme-tiny was too bare-bones; the official letsencrypt client (now called certbot) too huge; and simp_le came very close, but its support for pluggable certificate formats made it just a bit too big for me.

So, wile (named after another famous user of ACME products) was born. Maybe it will fill someone else’s very subjective needs.

01 July, 2016 06:35PM by Leo Antunes

Kevin Avignon

Elena 'valhalla' Grandi

Busy/idle status indicator

Busy/idle status indicator

About one year ago, during my first DebConf (http://debconf15.debconf.org/), I felt the need for some way to tell people whether I was busy on my laptop doing stuff that required concentration, or just passing some time between talks and available for interruptions, socialization or context switches.

One easily available method of course would have been to ping me on IRC (and then probably go on chatting on it while being in the same room, of course :) ), but I wanted to try something that allowed for less planning and worked even in places with less connectivity.

My first idea was a base laptop sticker with two statuses and then a removable one used to cover the wrong status and point to the correct one, and I still think it would be nice, but having it printed is probably going to be somewhat expensive, so I shelved the project for the time being.


Lately, however, I've been playing with hexagonal stickers (https://terinjokes.github.io/StickerConstructorSpec/) and decided to design something on this topic, with the result shown in the figure above: the “hacking” sticker is my first choice, and the “concentrating” alternative is probably useful while surrounded by people who may misunderstand the term “hacking”.

While idly looking around for sticker printing prices, I realized that it didn't necessarily have to be a sticker and started to consider alternatives.

One format I'm trying is inspired by "do not disturb" door signs: I've used some laminating pouches I already had around, which are slightly bigger than credit-card format (but credit-card size would also work, of course), and cut a notch so that they can be attached to the open lid of a laptop.


They seem to fit well on my laptop lid, and apart from a bad tendency to attract every bit of lint in a radius of a few meters the form factor looks good. I'll try to use them at the next conference to see if they actually work for their intended purpose.

SVG sources (and a PDF) are available on my website http://www.trueelena.org/computers/projects/busy_idle_indicator.html under the CC-BY-SA license.

01 July, 2016 04:24PM by Elena ``of Valhalla''

Free Software dreams

Free Software dreams

Tonight I dreamt I was inside Widelands (https://wl.widelands.org/), as a barbarian being invaded by the Atlanteans.

I've had the same thing happen to me a few times with Battle for Wesnoth (http://wesnoth.org/).

Mayyybe it is a sign that lately I've been playing it too much, but I'm quite happy with the fact that free software / culture is influencing my dreams.

Thanks to everybody who is involved in Free Culture for creating enough content that this can happen.

01 July, 2016 01:58PM by Elena ``of Valhalla''

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

My Free Software Activities in June 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian packaging

Django and Python. I uploaded Django 1.9.7 and filed an upstream ticket (#26755) for a failure seen in its DEP-8 tests.

I packaged/sponsored python-django-modeltranslation and python-paypal. I opened a pull request on model-translation to fix failing tests in the Debian package build.

I packaged a new python-django-jsonfield (1.0.0), filed a bug, and discovered a regression in its PostgreSQL support. I helped on the upstream ticket and have been granted commit rights. I used this opportunity to do some bug triage and push a few fixes. I also discussed the future of the module and ended up starting a discussion on Django’s developer list about the possibility of adding a JSONField to the core.

CppUTest. I uploaded a new upstream version (3.8) containing more than a year of work. I found out that make install does not install a required header, so I opened a ticket with a patch. The package ended up not compiling on quite a few architectures, so I opened a ticket and prepared a fix for some of those failures with the help of the upstream developers. I also added DEP-8 tests after having uploaded a broken (untested) package…

systemd support in net-snmp and postfix. I worked on adding native systemd service units to net-snmp (#782243) and postfix (#715188). In both cases, the maintainers have not been very responsive so far, so I uploaded my changes as delayed NMUs.

pkg-security team. The team that I quietly started a few months ago is now growing, with both new members and new packages. I created the required Teams/pkg-security wiki page. I sponsored xprobe and hydra, and made an upload of medusa to merge Kali changes into Debian (at the same time submitting the patch upstream).

fontconfig. After having read Jonathan McDowell’s analysis of a bug that I experienced multiple times (and that many Kali users had too), I opened bug #828037 to get it fixed once and for all. Unfortunately, nothing has happened yet.

DebConf 16

I spent some time preparing the two talks and the BoF that I will give/manage in Cape Town next week:

  • Kali Linux’s Experience https://debconf16.debconf.org/talks/39/
  • 2 Years of Work of Paid Contributors in the Debian LTS Project https://debconf16.debconf.org/talks/40/
  • Using Debian Money to Fund Debian Projects https://debconf16.debconf.org/talks/41/

Distro Tracker

I continued to mentor Vladimir Likic who managed to finish his first patch. He is now working on documentation for new contributors based on his recent experience.

I enhanced the tox configuration to run the tests with Django 1.8 LTS with fatal warnings (python -Werror), so as to ensure that I’m not relying on any deprecated feature and can be sure that the codebase will work on the next Django LTS release (1.11). Thanks to this, I discovered quite a few places where I had been using deprecated APIs, and I fixed them all (the JSONField update to 1.0.0 mentioned above was precisely to fix such a warning).
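As an illustration, such a tox environment can look roughly like this (a hypothetical sketch, not distro-tracker's actual configuration; the environment name and test command are made up):

```ini
# Run the test suite under Django 1.8 LTS with deprecation
# warnings promoted to fatal errors.
[testenv:django18]
deps =
    Django >= 1.8, < 1.9
commands = python -W error {toxinidir}/manage.py test
```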

I also fixed a few more issues with folded mail headers that you can’t inject back into a new Message object, and with messages lacking the Subject field. All of those were caught through real (spam) emails generating exceptions, which are then mailed to me.

Kali related work

I uploaded a new live-boot (5.20160608) to Debian to fix a bug where the boot process was blocking on some timeout.

I forwarded a Kali bug against libatk-wrapper-java (#827741) which turned out to be an OpenJDK bug.

I filed #827749 against reprepro to request a way to remove selected internal file references. This is required if you want to be able to make a file disappear while that file is part of a snapshot you want to keep. But in truth, my real need is to be able to replace the .orig.tar.gz used by Kali with the .orig.tar.gz used by Debian… such conflicts break the mirroring/import script.


I have been using salt to deploy a new service, and I developed patches for a few issues in salt formulas. I also created a new letsencrypt-sh formula to manage TLS certificates with the letsencrypt.sh ACME client.


See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

01 July, 2016 01:14PM by Raphaël Hertzog

Kevin Avignon

Paul Wise

DebCamp16 day 7

Usual spam reporting. Review wiki RecentChanges. Provide feedback for the staging site of the new codebase for screenshots.d.n. Redirect bugs-search.d.o complaint to the BTS maintainers. Point out pastebinit already supports fpaste.org. Polish chromium-bsu, make a new upstream release to fix Debian RC bug #822711. Upload screenshot of chromium-bsu menu. Notify chromium-bsu package maintainers in other distros (hug whohas). Avoid checking WAV files for spelling errors in cats. Make the old PTS download i18n data over https. File #829092 to get the per-package i18n data to use https for links. Point someone on mentors to the Debian PHP group wiki page.

01 July, 2016 05:38AM

June 30, 2016

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in June 2016

Here is my monthly update covering a large part of what I have been doing in the free software world (previously):


My work in the Reproducible Builds project was covered in our weekly reports. (#58, #59 & #60)

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Extended the lts-cve-triage.py script to ignore packages that are not subject to Long Term Support.

  • Issued DLA 512-1 for mantis fixing an XSS vulnerability.
  • Issued DLA 513-1 for nspr correcting a buffer overflow in a sprintf utility.
  • Issued DLA 515-1 for libav patching a memory corruption issue.
  • Issued DLA 524-1 for squidguard fixing a reflected cross-site scripting vulnerability.
  • Issued DLA 525-1 for gimp correcting a use-after-free vulnerability in the channel and layer properties parsing process.


  • redis (2:3.2.1-1) — New upstream bugfix release, plus subsequent upload to the backports repository.
  • python-django (1.10~beta1-1) — New upstream experimental release.
  • libfiu (0.94-5) — Misc packaging updates.

RC bugs

I also filed 170 FTBFS bugs against a7xpg, acepack, android-platform-dalvik, android-platform-frameworks-base, android-platform-system-extras, android-platform-tools-base, apache-directory-api, aplpy, appstream-generator, arc-gui-clients, assertj-core, astroml, bamf, breathe, buildbot, cached-property, calf, celery-haystack, charmtimetracker, clapack, cmake, commons-javaflow, dataquay, dbi, django-celery, django-celery-transactions, django-classy-tags, django-compat, django-countries, django-floppyforms, django-hijack, django-localflavor, django-markupfield, django-model-utils, django-nose, django-pipeline, django-polymorphic, django-recurrence, django-sekizai, django-sitetree, django-stronghold, django-taggit, dune-functions, elementtidy, epic4-help, fcopulae, fextremes, fnonlinear, foreign, fort77, fregression, gap-alnuth, gcin, gdb-avr, ggcov, git-repair, glance, gnome-twitch, gnustep-gui, golang-github-audriusbutkevicius-go-nat-pmp, golang-github-gosimple-slug, gprbuild, grafana, grantlee5, graphite-api, guacamole-server, ido, jless, jodreports, jreen, kdeedu-data, kdewebdev, kwalify, libarray-refelem-perl, libdbusmenu, libdebian-package-html-perl, libdevice-modem-perl, libindicator, liblrdf, libmail-milter-perl, libopenraw, libvisca, linuxdcpp, lme4, marble, mgcv, mini-buildd, mu-cade, mvtnorm, nose, octave-epstk, onioncircuits, opencolorio, parsec47, phantomjs, php-guzzlehttp-ringphp, pjproject, pokerth, prayer, pyevolve, pyinfra, python-asdf, python-ceilometermiddleware, python-django-bootstrap-form, python-django-compressor, python-django-contact-form, python-django-debug-toolbar, python-django-extensions, python-django-feincms, python-django-formtools, python-django-jsonfield, python-django-mptt, python-django-openstack-auth, python-django-pyscss, python-django-registration, python-django-tagging, python-django-treebeard, python-geopandas, python-hdf5storage, python-hypothesis, python-jingo, python-libarchive-c, python-mhash, python-oauth2client, 
python-proliantutils, python-pytc, python-restless, python-tidylib, python-websockets, pyvows, qct, qgo, qmidinet, quodlibet, r-cran-gss, r-cran-runit, r-cran-sn, r-cran-stabledist, r-cran-xml, rgl, rglpk, rkt, rodbc, ruby-devise-two-factor, ruby-json-schema, ruby-puppet-syntax, ruby-rspec-puppet, ruby-state-machine, ruby-xmlparser, ryu, sbd, scanlogd, signond, slpvm, sogo, sphinx-argparse, squirrel3, sugar-jukebox-activity, sugar-log-activity, systemd, tiles, tkrplot, twill, ucommon, urca, v4l-utils, view3dscene, xqilla, youtube-dl & zope.interface.

FTP Team

As a Debian FTP assistant I ACCEPTed 186 packages: akonadi4, alljoyn-core-1509, alljoyn-core-1604, alljoyn-gateway-1504, alljoyn-services-1504, alljoyn-services-1509, alljoyn-thin-client-1504, alljoyn-thin-client-1509, alljoyn-thin-client-1604, apertium-arg, apertium-arg-cat, apertium-eo-fr, apertium-es-it, apertium-eu-en, apertium-hbs, apertium-hin, apertium-isl, apertium-kaz, apertium-spa, apertium-spa-arg, apertium-tat, apertium-urd, arc-theme, argus-clients, ariba, beast-mcmc, binwalk, bottleneck, colorfultabs, dh-runit, django-modeltranslation, dq, dublin-traceroute, duktape, edk2, emacs-pdf-tools, eris, erlang-p1-oauth2, erlang-p1-sqlite3, erlang-p1-xmlrpc, faba-icon-theme, firefox-branding-iceweasel, golang-1.6, golang-defaults, golang-github-aelsabbahy-gonetstat, golang-github-howeyc-gopass, golang-github-oleiade-reflections, golang-websocket, google-android-m2repository-installer, googler, goto-chg-el, gr-radar, growl-for-linux, guvcview, haskell-open-browser, ipe, labplot, libalt-alien-ffi-system-perl, libanyevent-fcgi-perl, libcds-savot-java, libclass-ehierarchy-perl, libconfig-properties-perl, libffi-checklib-perl, libffi-platypus-perl, libhtml-element-library-perl, liblwp-authen-oauth2-perl, libmediawiki-dumpfile-perl, libmessage-passing-zeromq-perl, libmoosex-types-portnumber-perl, libmpack, libnet-ip-xs-perl, libperl-osnames-perl, libpodofo, libprogress-any-perl, libqtpas, librdkafka, libreoffice, libretro-beetle-pce-fast, libretro-beetle-psx, libretro-beetle-vb, libretro-beetle-wswan, libretro-bsnes-mercury, libretro-mupen64plus, libservicelog, libtemplate-plugin-datetime-perl, libtext-metaphone-perl, libtins, libzmq-ffi-perl, licensecheck, link-grammar, linux, linux-signed, lua-busted, magics++, mkalias, moka-icon-theme, neutron-vpnaas, newlisp, node-absolute-path, node-ejs, node-errs, node-has-flag, node-lodash-compat, node-strip-ansi, numba, numix-icon-theme, nvidia-graphics-drivers, nvidia-graphics-drivers-legacy-304xx, 
nvidia-graphics-drivers-legacy-340xx, obs-studio, opencv, pacapt, pgbackrest, postgis, powermock, primer3, profile-sync-daemon, pyeapi, pypandoc, pyssim, python-cutadapt, python-cymruwhois, python-fisx, python-formencode, python-hkdf, python-model-mommy, python-nanomsg, python-offtrac, python-social-auth, python-twiggy, python-vagrant, python-watcherclient, python-xkcd, pywps, r-bioc-deseq2, r-bioc-dnacopy, r-bioc-ensembldb, r-bioc-geneplotter, r-cran-adegenet, r-cran-adephylo, r-cran-distory, r-cran-fields, r-cran-future, r-cran-globals, r-cran-htmlwidgets, r-cran-listenv, r-cran-mlbench, r-cran-mlmrev, r-cran-pheatmap, r-cran-pscbs, r-cran-r.cache, refind, relatorio, reprotest, ring, ros-ros-comm, ruby-acts-as-tree, ruby-chronic-duration, ruby-flot-rails, ruby-numerizer, ruby-u2f, selenium-firefoxdriver, simgrid, skiboot, smtpping, snap-confine, snapd, sniffles, sollya, spin, subuser, superlu, swauth, swift-plugin-s3, syncthing, systemd-bootchart, tdiary-theme, texttable, tidy-html5, toxiproxy, twinkle, vmtk, wait-for-it, watcher, wcslib & xapian-core.

30 June, 2016 08:32PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RMarkdown and Metropolis/Mtheme

Nick Tierney asked on Twitter whether folks had used RMarkdown-driven LaTeX Beamer presentations with metropolis/mtheme. And the answer is a firm hell yeah. I have been using mtheme (and/or a local variant I called 'm2'), as well as its newer (renamed) release metropolis, for the last year or two for all my RMarkdown-based presentations, as you can see from my presentations page.

Earlier this year I cleaned this up and built myself local Ubuntu packages, which are on Launchpad. I also have two GitHub repos for the underlying .deb package code:

  • pkg-latex-metropolis for the LaTeX part (an older version of which is also in TeXlive)
  • pkg-fonts-fira for the underlying (free) font (which sadly cannot build on Launchpad, as it needs a download step)

To round things up, I have now also created a public 'sample' repo on GitHub. It is complete for all but the custom per-presentation header.tex that modifies colours, adds local definitions etc. as needed for each presentation.
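For anyone wanting to try this, the YAML front matter of such an RMarkdown deck can be as small as the following sketch (the metropolis theme needs XeLaTeX or LuaLaTeX for the Fira fonts; header.tex stands for the custom per-presentation header mentioned above):

```yaml
---
title: "Example Talk"
output:
  beamer_presentation:
    theme: metropolis
    latex_engine: xelatex
    includes:
      in_header: header.tex   # per-presentation colours, definitions, ...
---
```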

With that, Happy Canada Day (tomorrow, though) -- never felt better to be part of something Glorious and Free, and also free of Brexit, Drumpf and other nonsense.

30 June, 2016 05:43PM

Russell Coker


In Australia we are about to have a federal election, so we inevitably have a lot of stupid commentary and propaganda about politics.

One thing that always annoys me is the claim that we shouldn’t have small parties. We have two large parties, Liberal (right-wing, somewhat between the Democrats and Republicans in the US) and Labor which is somewhat similar to Democrats in the US. In the US the first past the post voting system means that votes for smaller parties usually don’t affect the outcome. In Australia we have Instant Runoff Voting (sometimes known as “The Australian Ballot”) which has the side effect of encouraging votes for small parties.

The Liberal party almost never wins enough seats to form government on its own; it forms a coalition with the National party. Election campaigns are often based on the term “The Coalition” being used to describe a Liberal-National coalition, and the expected result if “The Coalition” wins the election is that the leader of the Liberal party will be Prime Minister and the leader of the National party will be the Deputy Prime Minister. Liberal party representatives and supporters often try to convince people that they shouldn’t vote for small parties and that small parties are somehow “undemocratic”, seemingly unaware of the irony of advocating for “The Coalition” while opposing the idea of a coalition.

If the Liberal and Labor parties wanted to form a coalition they could do so in any election where no party has a clear majority, and do it without even needing the National party. Some people claim that it’s best to have the major parties take turns in having full control of the government without having to make a deal with smaller parties and independent candidates but that’s obviously a bogus claim. The reason we have Labor allying with the Greens and independents is that the Liberal party opposes them at every turn and the Liberal party has a lot of unpalatable policies that make alliances difficult.

One thing that would be a good development in Australian politics is to have the National party actually represent rural voters rather than big corporations. Liberal policies on mining are always opposed to the best interests of farmers and the Liberal policies on trade aren’t much better. If “The Coalition” wins the election then the National party could insist on a better deal for farmers in exchange for their continued support of Liberal policies.

If Labor wins more seats than “The Coalition” but not enough to win government directly then a National-Labor coalition is something that could work. I think that the traditional interest of Labor in representing workers and the National party in representing farmers have significant overlap. The people who whinge about a possible Green-Labor alliance should explain why they aren’t advocating a National-Labor alliance. I think that the Labor party would rather make a deal with the National party, it’s just a question of whether the National party is going to do what it takes to help farmers. They could make the position of Deputy Prime Minister part of the deal so the leader of the National party won’t miss out.

30 June, 2016 03:26PM by etbe

hackergotchi for Steve Kemp

Steve Kemp

So I've been busy.

The past few days I've been working on my mail client which has resulted in a lot of improvements to drawing, display and correctness.

Since then I've been working on adding GPG support. My naive attempt was to extract the signature and the appropriate body part from the message, write them both to disk, and then validate via:

gpg --verify msg.sig msg

However that failed, and it took me a long time to work out why. I downloaded the source to mutt, which can correctly verify an attached signature, then hacked lib.c to neuter the mutt_unlink function. That left me with a bunch of files inside $TEMPFILE, one of which provided the epiphany.

A message which is to be validated is indeed written out to disk, just as I would have done, as is the signature. Ignoring the signature, the message itself is interesting:

Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On Mon, 27 Jun 2016 08:08:14 +0200


Bob Smith

The reason I'd failed to validate my message body was that I'd already decoded the text of the MIME part, and I'd also lost the two prefixed header lines "Content-Type: ..." and "Content-Transfer-Encoding: ...". I'm currently trying to work out whether it is possible to get access to the raw MIME-part text in GMime.
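To illustrate the point, here is a rough sketch using Python's email module (a hypothetical helper, not the mail client's actual code): the bytes handed to gpg must be the raw, still-encoded MIME part, headers included, in canonical CRLF form as RFC 3156 requires.

```python
import email

def raw_signed_part(msg_bytes):
    """Extract the bytes that gpg --verify should see: the first part of a
    multipart/signed message, with its Content-Type: and
    Content-Transfer-Encoding: headers kept and the body left undecoded."""
    msg = email.message_from_bytes(msg_bytes)
    part = msg.get_payload(0)   # the signed body part (index 1 would be
                                # the application/pgp-signature part)
    raw = part.as_bytes()       # headers + still-encoded body
    # RFC 3156: the signature is computed over canonical CRLF text
    return raw.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
```

Writing these bytes and the detached signature to disk, gpg --verify can then succeed where the pre-decoded body failed.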

Anyway that learning aside I've made a sleazy hack which just shells out to mimegpg, and this allows me to validate GPG signatures! That's not the solution I'd prefer, but that said it does work, and it works with inline-signed messages as well as messages with application/pgp-signature MIME-parts.

Changing the subject now. I wonder how many people read to the end anyway?

I've been in Finland for almost a year now. Recently I was looking over websites and I saw that the domain steve.fi was going to expire in a few weeks. So I started obsessively watching it. Today I claimed it.

So I'll be slowly moving things from beneath steve.org.uk to use the new home steve.fi.

I also setup a mini-portfolio/reference site at http://steve.kemp.fi/ - which was a domain I registered while I was unsure if I could get steve.fi.

Finally now is a good time to share more interesting news:

  • I've been reinstated as a Debian developer.
  • We're having a baby.
    • Interesting times.

30 June, 2016 06:52AM

hackergotchi for Sean Whitton

Sean Whitton


This summer I’m living in a flat five minutes walk from Bucheon station, near Seoul. Today there is a threat of rain and it’s very humid, which tends to make one feel that time has stopped: it’s as if everyone and everything is waiting for the rain to fall before getting on with their lives. There are two other reasons why one might think that time has stopped. There is a household goods shop outside the station that has a poster up which says “last day of business”, but of course it says this every day. A few weeks ago it said “last three days of business” instead, but they must have decided that was starting to look implausible or something. They do various things to look like they’re struggling to get rid of their wares. The other day they just piled everying up in a huge pile on the street outside the shop. They have a guy with a megaphone shouting all day about how cheap everything is in an urgent tone.

The other reason to think time has stopped is that “today’s coffee” in Starbucks is always the same coffee. On the little blackboard that all Starbucks branches have they have written: “now brewing: hot: iced coffee blend. iced: iced coffee blend.” Every time I order a cup of today’s coffee I have to wait five minutes while they actually brew it because it seems like no-one else is ordering it. And it tastes exactly the same as yesterday’s coffee.

30 June, 2016 03:50AM

June 29, 2016

Paul Wise

DebCamp16 day 6

Redirect one person contacting the Debian sysadmin and web teams to Debian user support. Review wiki RecentChanges. Usual spam reporting. Check and fix a derivatives census issue. Suggest sending the titanpad maintenance issue to a wider audience. Update check-all-the-things and copyright review tools wiki page for licensecheck/devscripts split. Ask if debian-debug could be added to mirror.dc16.debconf.org. Discuss more about the devscripts/licensecheck split. Yesterday I grrred at Debian perl bug #588017 that causes vulnerabilities in check-all-the-things, tried to figure out the scope of the issue and work around all of the issues I could find. (Perls are shiny and Check All The thingS can be abbreviated as cats) Today I confirmed with the reporter (Jakub Wilk) that the patch mitigates this. Release check-all-the-things to Debian unstable (finally!!). Discuss with the borg about syncing cats to Ubuntu. Notice autoconf/automake being installed as indirect cats build-deps (via debhelper/dh-autoreconf) and poke relevant folks about this. Answer question about alioth vs debian.org LDAP.

29 June, 2016 08:49PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Batch of the Next Thing Co.'s C.H.I.P. computers on its way to DebConf!

Hello world!

I'm very happy to inform you that the Next Thing Co. has shipped us a pack of 50 C.H.I.P. computers to be given away at DebConf! What is the C.H.I.P.? As their tagline says, it's the world's first US$9 computer. Further details:


All in all, it's a nice small ARM single-board computer; I won't bore you on this mail with tons of specs; suffice to say they are probably the most open ARM system I've seen to date.

So, I agreed with Richard, our contact at the company, that I would distribute the machines among the DebConf speakers interested in one. Of course, not every DebConf speaker wants to fiddle with an adorable tiny piece of beautiful engineering, so I'm sure I'll have some spare computers to give out to other interested DebConf attendees. We are supposed to receive the C.H.I.P.s by Monday 4; if you want to track the package shipment, the DHL tracking number is 1209937606. Don't DDoS them too hard!

So, please do mail me telling me why you want one and what your projects are with it. My conditions for this giveaway are:

  • I will hand out the computers by Thursday 7.
  • Preference goes to people giving a talk. I will "line up" requests on two queues, "speaker" and "attendee", and will announce who gets one in a mail+post to this list on the said date.
  • With this in mind, I'll follow a strict "first come, first served".

To sign up for yours, please mail gwolf+chip@gwolf.org - I will capture mail sent to that alias ONLY.

29 June, 2016 08:28PM by gwolf

Olivier Grégoire

Fifth week at GSoC: push information from the daemon!

*Last week I worked to create a window for the gnome client to display information.*

This week I worked on linking the D-Bus with the GNOME client.
To do that I needed to modify the LRC:
  • Create a Qt slot to catch the signal from the D-Bus
  • Create a signal connected with a lambda function on the client

Unfortunately, I can only push a single variable at a time, so I chose to use a map to contain all my information. After changing this type in the daemon, D-Bus, LRC, and the GNOME client, everything finally works!
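The pattern can be shown in miniature with plain Python (the real code uses a Qt slot receiving a D-Bus signal in the LRC; all names below are illustrative): instead of one signal per variable, a single signal carries a map of keys to values.

```python
# Toy stand-in for the signal/slot machinery: connect() mimics hooking a
# lambda slot on the client, emit() mimics the D-Bus signal firing.
def make_emitter():
    handlers = []
    def connect(fn):
        handlers.append(fn)
    def emit(info_map):
        for fn in handlers:
            fn(info_map)
    return connect, emit

connect, emit = make_emitter()
received = {}
connect(lambda info: received.update(info))  # lambda slot on the client

# One emission pushes several variables at once via the map:
emit({"call_id": "42", "codec": "opus", "bitrate": "64000"})
print(received["codec"])  # -> opus
```

The map keeps the D-Bus signature stable (one argument) even as new fields are added.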

29 June, 2016 04:57PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

Debcamp NBD work

I had planned to do some work on NBD while here at debcamp. Here's a progress report:

Tasks (each tracked through Concept → Code → Tested):

  • Change init script so it uses /etc/nbdtab rather than /etc/nbd-client for configuration
  • Change postinst so it converts existing /etc/nbd-client files to /etc/nbdtab
  • Change postinst so it generates /etc/nbdtab files from debconf
  • Create systemd unit for nbd based on /etc/nbdtab
  • Write STARTTLS support for client and/or server

The first four are needed to fix Debian bug #796633, of which "writing the systemd unit" was the one that seemed hardest. The good thing about debcamp, however, is that experts are aplenty (thanks Tollef), so that part's done now.

What's left:

  • Testing the init script modifications that I've made, so as to support those users who dislike systemd. They're fairly straightforward, and I don't anticipate any problems, but it helps to make sure.
  • Migrating the /etc/nbd-client configuration file to an nbdtab(5) one. This should be fairly straightforward, it's just a matter of Writing The Code(TM).
  • Changing the whole debconf setup so it writes (and/or updates) an nbdtab(5) file rather than a /etc/nbd-client shell snippet. This falls squarely into the "OMFG what the F*** was I thinking when I wrote that debconf stuff 10 years ago" area. I'll probably deal with it somehow. I hope. Not so sure how to do so yet, though.

If I manage to get all of the above to work and there's time left, I'll have a look at implementing STARTTLS support into nbd-client and nbd-server. A spec for that exists already, there's an alternative NBD implementation which has already implemented it, and preliminary patches exist for the reference implementation, so it's known to work; I just need to spend some time slapping the pieces together and making it work.

Ah well. Good old debcamp.

29 June, 2016 01:07PM

hackergotchi for Michal Čihař

Michal Čihař

PHP shapefile library

For quite a long time, phpMyAdmin has embedded the bfShapeFiles library for importing geospatial data. Over time we had to apply fixes to keep it compatible with newer PHP versions, but there was no real development. Unfortunately, it seems to be the only usable PHP library that can read and write ESRI shapefiles.

With phpMyAdmin's recent switch to dependency handling using Composer, I wondered if we should get rid of the last embedded PHP library, which was this one - bfShapeFiles. As I couldn't find a live library which would work well for us, I resisted that for quite a long time, until a pull request to improve it came in. At that point I realized that it was probably better to separate it out and start to improve it outside our codebase.

That's when phpmyadmin/shapefile was started. The code is based on bfShapeFiles, applies all fixes which were used in phpMyAdmin, and adds improvements from the pull request. On top of that it has a brand new test suite (the coverage is still much lower than I'd like it to be), and while writing the tests several parsing issues were discovered and fixed. Anyway, you can now get the source from GitHub or install it using Composer from Packagist.

PS: While fixing parser bugs I've looked at other parsers as well to see how they handle some situations unclear in the specs and I had to fix Python pyshp on the way as well :-).

Filed under: Debian English phpMyAdmin | 0 comments

29 June, 2016 08:00AM

June 28, 2016

Reproducible builds folks

First steps towards getting containers working

Author: ceridwen

The 0.1 alpha release of reprotest has been accepted into Debian unstable and is available for install at packages.debian.org or through apt.

I've been working on redesigning reprotest so that it runs commands through autopkgtest's adt_testbed interface. For the most part, I needed to replace explicit calls to Python standard library functions for copying files and directories with calls to adt_testbed.Testbed.command() with copyup and copydown, and to use Testbed.execute() and Testbed.check_exec() to run commands instead of subprocess.

To test reprotest on the actual containers requires having containers constructed for this purpose. autopkgtest has a test that builds a minimal chroot. I considered doing something like this approach or using BusyBox. However, I have a Python script that mocks a build process, which requires having Python available in the container, and while I looked into busybox-python and MicroPython to keep the footprint small, I decided that for now this would take too much work, and went straight to the autopkgtest recommendations for building containers, mk-sbuild and vmdebootstrap. (I also ended up discovering a bug in debootstrap.) This means that getting the tests to run requires some manual setup at the moment. In the long run, I'd like to improve that, but it's not an immediate priority. While working on adding tests for the other containers supported by autopkgtest, I also converted to py.test so that I could use fixtures and parametrization to run the Cartesian product of each variation with each container.
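The Cartesian-product setup looks roughly like the following sketch (the variation and container names here are illustrative, not reprotest's real lists):

```python
import itertools

# Hypothetical variation and container names for illustration.
VARIATIONS = ["captures_environment", "time", "filesystem_ordering"]
CONTAINERS = ["null", "chroot", "schroot", "qemu"]

# py.test expands this into one test case per (variation, container) pair:
MATRIX = list(itertools.product(VARIATIONS, CONTAINERS))

# In the suite this would be used roughly as:
#   @pytest.mark.parametrize("variation,container", MATRIX)
#   def test_reproducible(variation, container):
#       ...
print(len(MATRIX))  # 3 variations x 4 containers -> 12 cases
```

Parametrizing over the product keeps each combination an independent test, so a failure in one container doesn't mask the others.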

With tests written, I started trying to verify that my new code worked. One problem I encountered while trying to debug was that I wasn't getting full error output. In VirtSubproc.check_exec(), execute_timeout() acts something like a Popen() call:

(status, out, err) = execute_timeout(None, timeout, real_argv,
                                     stdout=stdout, stderr=subprocess.PIPE)

if status:
    bomb("%s%s failed (exit status %d)" %
         ((downp and "(down) " or ""), argv, status))
if err:
    bomb("%s unexpectedly produced stderr output `%s'" %
         (argv, err))

The problem with this is that if the call returns a non-zero exit code, which is typical for program failures, stderr doesn't get included in the error message.

I changed the first if-block to:

if status:
    bomb("%s%s failed (exit status %d)\n%s" %
         ((downp and "(down) " or ""), argv, status, err))

Another example is that autopkgtest calls schroot with the --quiet flag, which in one case was making schroot fail without any output due to a misconfiguration. I'm still trying to find and eliminate more places where errors are silenced.

autopkgtest was designed to be installed with Debian's packaging system, which handles arbitrary files and directory layouts. Unfortunately, setuptools is completely different in a way that doesn't work well with autopkgtest's design. (I'm sure this is partly because setuptools has to support all the different major OSes that run Python, including Windows.) As I discussed last week, autopkgtest has Python scripts in virt/ that are executed by subprocess calls in adt_testbed. Because these scripts import files from lib/, there needs to be an __init__.py in virt/ to make it into a package and a sys.path hack in each script to allow it to find modules in lib/. Unfortunately, setuptools will not install this structure. First, setuptools will not install any file without a .py extension into a package. Theoretically, this is fixable: the files in virt/ are Python scripts, so I could rename them. (Theoretically, there's supposed to be some workaround involving MANIFEST.in or package_data in setup.py, but I have yet to find any documentation or explanation giving a method for installing non-Python files inside a Python package.) Second, however, setuptools does not preserve the executable bit when installing package files. The obvious workaround, changing the subprocess calls so that they invoke python virt/foo.py rather than virt/foo.py, requires changing all the internal calls in the autopkgtest code, which I'm loath to do for fear of breaking it. (It's not clear to me that I can easily find all of the calls, for starters.)

There are about three solutions to this I see at the moment, all of them difficult. The first involves using either the scripts keyword or console_scripts entry point in setup.py, as explained here. The scripts keyword is supposed to preserve the executable bit according to this StackExchange question, but I haven't verified this myself, and like everything to do with setuptools I don't trust anything anyone says about it without testing it myself. It also has the disadvantage of dumping them all into the common scripts directory. Using console_scripts involves rewriting all of them to have an executable function I can refer to in setup.py. I worry that this would be both fragile and break existing expectations in the rest of the autopkgtest code, but it might be the best solution. The third solution involves refactoring of all the autopkgtest code to import the code in the scripts rather than running it through subprocess calls. I'm reluctant to do this because I think it's almost certain to break things that will require significant work to fix.

Getting setuptools to install the autopkgtest code correctly is one blocker for the next release. Another is that autopkgtest's handling of errors during the build process involves closing the adt_testbed.Testbed so it won't take further commands. Unfortunately, this handling runs before any cleanup code I write to run outside it, which means that at the moment errors during the build will result in things like disorderfs being left mounted.

The last release blocker is that adt_testbed doesn't have any way to set a working directory when running commands. For instance, the virt/schroot script always calls schroot with --directory=/. I thought about trying to use absolute paths, but decided this was unintuitive and impractical. For the user, this would mean that instead of running something simple like make in the correct directory, they would have to run make --file=/absolute/path/to/Makefile or something similar, making all paths absolute. I worry that some build scripts wouldn't handle this correctly, either: for instance, running python setup.py from a different directory can have different effects because Python's path is initialized to contain the current directory. Changing this is going to require going deeper into the autopkgtest code than I'd hoped.

I intend to try to resolve these three issues over the next week and then prepare the next release, though how much progress I make depends on how thorny they turn out to be.

28 June, 2016 11:19PM

Jose M. Calhariz

at daemon 3.1.20, with 3 fixes

From the Debian bug system I incorporated 3 fixes. One of them is experimental: it fixes broken code but may have side effects. Please test it.

  • New release 3.1.20:
   * Add option b to getopt, (Closes: #812972).
   * Comment a possible broken code, (Closes: #818508).
   * Add a fflush to catch more errors during writes, (Closes: #801186).

You may download it from here: at_3.1.20.orig.tar.gz.

28 June, 2016 09:14PM by Jose M. Calhariz

Paul Wise

DebCamp16 day 5

Beat head against shiny cats (no animals were harmed). Discuss the spice of silliness. Forward a wiki bounce to the person. Mention my gobby git mail cron job. Start adopting the adequate package. Discuss cats vs licensecheck with Jonas. Usual spam reporting. Review wiki RecentChanges. Whitelisted one user in the wiki anti-spam system. Finding myself longing for a web technology. Shudder and look at the twinklies.

28 June, 2016 07:14PM

John Goerzen

A great day for a flight with the boys

I tend to save up my vacation time to use in summer for family activities, and today was one of those days.

Yesterday, Jacob and Oliver enjoyed planning what they were going to do with me. They ruled out all sorts of things nearby, but they decided they would like to fly to Ponca City, explore the oil museum there, then eat at Enrique’s before flying home.

Of course, it is not particularly hard to convince me to fly somewhere. So off we went today for some great father-son time.

The weather on the way was just gorgeous. We cruised along at about a mile above ground, which gave us pleasantly cool air through the vents and a smooth ride. Out in the distance, a few clouds were trying to form.


Whether I’m flying or driving, a pilot is always happy to pass a small airport. Here was the Winfield, KS airport (KWLD):


This is a beautiful time of year in Kansas. The freshly-cut wheat fields are still a vibrant yellow. Other crops make a bright green, and colors just pop from the sky. A camera can’t do it justice.

They enjoyed the museum, and then Oliver wanted to find something else to do before we returned to the airport for dinner. A little exploring yielded the beautiful and shady Garfield Park, complete with numerous old stone bridges.


Of course, the hit of any visit to Enrique’s is their “ice cream tacos” (sopapillas with ice cream). Here is Oliver polishing off his.


They had both requested sightseeing from the sky on our way back, but both fell asleep so we opted to pass on that this time. Oliver slept through the landing, and I had to wake him up when it was time to go. I always take it as a compliment when a 6-year-old sleeps through a landing!


Most small airports have a bowl of candy sitting out somewhere. Jacob and Oliver have become adept at finding them, and I will usually let them “talk me into” a piece of candy at one of them. Today, after we got back, they were intent on exploring the small gift shop back home, and each bought a little toy helicopter for $1.25. They may have been too tired to enjoy it, though.

They’ve been in bed for a while now, and I’m still smiling about the day. Time goes fast when you’re having fun, and all three of us were. It is fun to see them inheriting my sense of excitement at adventure, and enjoying the world around them as they go.

The lady at the museum asked how we had heard about them, and noticed I drove up in an airport car (most small airports have an old car you can borrow for a couple hours for free if you’re a pilot). I told the story briefly, and she said, “So you flew out to this small town just to spend some time here?” “Yep.” “Wow, that’s really neat. I don’t think we’ve ever had a visitor like you before.” Then she turned to the boys and said, “You boys are some of the luckiest kids in the world.”

And I can’t help but feel like the luckiest dad in the world.

28 June, 2016 03:57AM by John Goerzen

June 27, 2016

hackergotchi for Jonathan McDowell

Jonathan McDowell

Hire me!

It’s rare to be in a position to be able to publicly announce you’re looking for a new job, but as the opportunity is currently available to me I feel I should take advantage of it. That’s especially true given the fact I’ll be at DebConf 16 next week and hope to be able to talk to various people who might be hiring (and will, of course, be attending the job fair).

I’m coming to the end of my Masters in Legal Science and although it’s been fascinating I’ve made the decision that I want to return to the world of tech. I like building things too much it seems. There are various people I’ve already reached out to, and more that are on my list to contact, but I figure making it more widely known that I’m in the market can’t hurt with finding the right fit.

  • Availability: August 2016 onwards. I can wait for the right opportunity, but I’ve got a dissertation to write up so can’t start any sooner.
  • Location: Preferably Belfast, Northern Ireland. I know that’s a tricky one, but I’ve done my share of moving around for the moment (note I’ve no problem with having to do travel as part of my job). While I prefer an office environment I’m perfectly able to work from home, as long as it’s as part of a team that is tooled up for dispersed workers - in my experience being the only remote person rarely works well. There’s a chance I could be persuaded to move to Dublin for the right role.
  • Type of role: I sit somewhere on the software developer/technical lead/architect spectrum. I expect to get my hands dirty (it’s the only way to learn a system properly), but equally if I’m not able to be involved in making high level technical decisions then I’ll find myself frustrated.
  • Technology preferences: Flexible. My background is backend systems programming (primarily C in the storage and networking spaces), but like most developers these days I’ve had exposure to a bunch of different things and enjoy the opportunity to learn new things.

I’m on LinkedIn and OpenHUB, which should give a bit more info on my previous experience and skill set. I know I’m light on details here, so feel free to email me to talk about what I might be able to specifically bring to your organisation.

27 June, 2016 10:21PM

Paul Wise

DebCamp16 day 4

Usual spam reporting. Review wiki RecentChanges. Rain glorious rain! Err... Update a couple of links on the debtags team page. Report Debian bug #828718 against tracker.debian.org. Update links to debtags on DDPO and the old PTS. Report minor Debian bug #828722 against debtags.debian.org. Update the debtags for check-all-the-things. More code and check fixes for check-all-the-things. Gravitate towards the fireplace and beat face against annoying access point, learn of wpa_cli blacklist & wpa_cli bssid from owner of devilish laptop. Ask stakeholders for feedback/commits before the impending release of check-all-the-things to Debian unstable. Meet developers of the One^WGNU Ring, discuss C++ library foo. Contribute some links to an open hardware thread. Point out the location of the Debian QA SVN repository. Clear skies at night, twinkling delight.

27 June, 2016 08:25PM

Scarlett Clark

Debian: Reproducible builds update

A quick update to note that I completed extra-cmake-modules and was given the green light to push upstream and into Debian, which I will do as soon as possible. Due to circumstances out of my control, I am moving a few states over and will have to continue my efforts when I arrive at my new place of residence in a few days. Thanks for understanding.


27 June, 2016 05:58PM by Scarlett Clark

John Goerzen

I’m switching from git-annex to Syncthing

I wrote recently about using git-annex for encrypted sync, but due to a number of issues with it, I’ve opted to switch to Syncthing.

I’d been using git-annex with real but noncritical data. Among the first issues I noticed were occasional but persistent high CPU usage spikes which, once started, would persist apparently forever. I had an issue where git-annex tried to replace files I’d removed from its repo with broken symlinks, but the real final straw was a number of issues with the gcrypt remote repos. git-remote-gcrypt appears to have a number of issues with possible race conditions on the remote, and at least one of them somehow caused encrypted data to appear in a packfile on a remote repo. Why there was data in a packfile there, I don’t know, since git-annex is supposed to keep the data out of packfiles.

Anyhow, git-annex is still an awesome tool with a lot of use cases, but I’m concluding that live sync to an encrypted git remote isn’t quite there yet for me.

So I looked for alternatives. My main criteria were supporting live sync (via inotify or similar) and not requiring the files to be stored unencrypted on a remote system (my local systems all use LUKS). I found Syncthing met these requirements.

Syncthing is pretty interesting in that, like git-annex, it doesn’t require a centralized server at all. Rather, it forms basically a mesh between your devices. Its concept is somewhat similar to the proprietary Bittorrent Sync — basically, all the nodes communicate about what files and chunks of files they have, and the changes that are made, and immediately propagate as much as possible. Unlike, say, Dropbox or Owncloud, Syncthing can actually support simultaneous downloads from multiple remotes for optimum performance when there are many changes.

Combined with syncthing-inotify or syncthing-gtk, it has immediate detection of changes and therefore very quick propagation of them.

Syncthing is particularly adept at figuring out ways for the nodes to communicate with each other. It begins by broadcasting on the local network, so known nearby nodes can be found directly. The Syncthing folks also run a discovery server (though you can use your own if you prefer) that lets nodes find each other on the Internet. Syncthing will attempt to use UPnP to configure firewalls to let it out, but if that fails, the last resort is a traffic relay server — again, a number of volunteers host these online, but you can run your own if you prefer.

Each node in Syncthing has an RSA keypair, and what amounts to part of the public key is used as a globally unique node ID. The initial link between nodes is accomplished by pasting the globally unique ID from one node into the “add node” screen on the other; the user of the first node then must accept the request, and from that point on, syncing can proceed. The data is all transmitted encrypted, of course, so interception will not cause data to be revealed.
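A simplified sketch of how such a device ID can be derived from key material: hash the certificate and base32-encode the digest. (This is my illustration of the general idea; real Syncthing additionally inserts check digits and dash grouping, and the "certificate" bytes below are a stand-in.)

```python
import base64
import hashlib

def device_id(cert_der: bytes) -> str:
    """Derive a Syncthing-style identifier: SHA-256 of the certificate,
    base32-encoded, padding stripped.  Simplified for illustration."""
    digest = hashlib.sha256(cert_der).digest()
    return base64.b32encode(digest).decode("ascii").rstrip("=")

fake_cert = b"-----not a real certificate-----"  # stand-in bytes
print(device_id(fake_cert))  # 52 base32 characters
```

Because the ID is a hash of the node's own key material, it is self-certifying: a peer presenting the matching certificate proves ownership of the ID without any central registry.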

Really my only complaint about Syncthing so far is that, although it binds to localhost, the web GUI does not require authentication by default.

There is an ITP open for Syncthing in Debian, but until then, their apt repo works fine. For syncthing-gtk, the trusty version of the webupd8 PPA works in Jessie (though be sure to pin it to a low priority if you don’t want it replacing some unrelated Debian packages).

27 June, 2016 01:02PM by John Goerzen

hackergotchi for Alessio Treglia

Alessio Treglia

A – not exactly United – Kingdom


Island of Ventotene – Roman harbour

There once was a Kingdom strongly United, built on the honours of the people of Wessex, Mercia, Northumbria and East Anglia, who knew how to deal with the invasions of the Vikings from the east and of the Normans from the south, and came to unify the territory under an umbrella of common intents. Today, 48% of them, while keeping solid traditions, still know how to look forward to the future, joining horizons and commercial developments along with the rest of Europe. The remaining 52%, however, look back and can see nothing in front of them but a desire for isolation, breaking the European dream born on the shores of the island of Ventotene in 1944 by Altiero Spinelli, Ernesto Rossi and Ursula Hirschmann through the “Manifesto for a free and united Europe”. An incurable fracture in the country was born in the referendum of 23 June, in which just over half of the population asked to terminate its marriage to the great European family, setting the UK back by 43 years of history.

<Read More…[by Fabio Marzocca]>

27 June, 2016 07:54AM by Fabio Marzocca

Bits from Debian

DebConf16 schedule available

DebConf16 will be held this and next week in Cape Town, South Africa, and we're happy to announce that the schedule is already available. Of course, it is still possible for some minor changes to happen!

The DebCamp Sprints already started on 23 June 2016.

DebConf will open on Saturday, 2 July 2016 with the Open Festival, where events of interest to a wider audience are offered, ranging from topics specific to Debian to a wider appreciation of the open and maker movements (and not just IT-related). Hackers, makers, hobbyists and other interested parties are invited to share their activities with DebConf attendees and the public at the University of Cape Town, whether in form of workshops, lightning talks, install parties, art exhibition or posters. Additionally, a Job Fair will take place on Saturday, and its job wall will be available throughout DebConf.

The full schedule of the Debian Conference throughout the week is published. After the Open Festival, the conference will continue with more than 85 talks and BoFs (informal gatherings and discussions within Debian teams), covering not only software development and packaging but also areas like translation, documentation, artwork, testing, specialized derivatives, maintenance of the community infrastructure, and others.

There will also be a plethora of social events, such as our traditional cheese and wine party, our group photo and our day trip.

DebConf talks will be broadcast live on the Internet when possible, and videos of the talks will be published on the web along with the presentation slides.

DebConf is committed to a safe and welcoming environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf16, particularly our Platinum Sponsor Hewlett Packard Enterprise.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise actively participates in open source. Thousands of developers across the company are focused on open source projects, and HPE sponsors and supports the open source community in a number of ways, including: contributing code, sponsoring foundations and projects, providing active leadership, and participating in various committees.

27 June, 2016 07:00AM by Laura Arjona Reina

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Hello, Sense!

A while back, I saw a Kickstarter for one of the most well designed and pretty sleep trackers on the market. I fell in love with it, and it has stuck with me since.

A few months ago, I finally got my hands on one and started to track my data. Naturally, I now want to store this new data with the rest of the data I have on myself in my own databases.

I went in search of an API, but I found that the Sense API hasn't been published yet, and is being worked on by the team. Here's hoping it'll land soon!

After some subdomain guessing, I hit on api.hello.is. So, naturally, I went to take a quick look at their Android app and its network traffic, and lo and behold, there was a pretty nicely designed API.

This API is clearly an internal API, and as such, it's something that should not be considered stable. However, I'm OK with a fragile API, so I've published a quick and dirty API wrapper for the Sense API to my GitHub.

I've published it because I've found it useful, but I can't promise the world, (since I'm not a member of the Sense team at Hello!), so here are a few ground rules of this wrapper:

  • I make no claims to its stability or completeness.
  • I have no documentation or assurances.
  • I will not provide the client secret and ID. You'll have to find them on your own.
  • This may stop working without any notice, and there may even be really nasty bugs that result in your alarm going off at 4 AM.
  • Send PRs! This is a side-project for me.

This module is currently Python 3 only. If someone really needs Python 2 support, I'm open to minimally invasive patches to the codebase using six to support Python 2.7.

Working with the API:

First, let's go ahead and log in using python -m sense.

$ python -m sense
Sense OAuth Client ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense OAuth Client Secret: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense email: paultag@gmail.com
Sense password: 
Attempting to log into Sense's API
Attempting to query the Sense API
The humidity is **just right**.
The air quality is **just right**.
The light level is **just right**.
It's **pretty hot** in here.
The noise level is **just right**.

Now, let's see if we can pull up information on my Sense:

>>> from sense import Sense
>>> sense = Sense()
>>> sense.devices()
{'senses': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '11a1', 'last_updated': 1466991060000, 'state': 'NORMAL', 'wifi_info': {'rssi': 0, 'ssid': 'Pretty Fly for a WiFi (2.4 GhZ)', 'condition': 'GOOD', 'last_updated': 1462927722000}, 'color': 'BLACK'}], 'pills': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '2', 'last_updated': 1466990339000, 'battery_level': 87, 'color': 'BLUE', 'state': 'NORMAL'}]}

Neat! Pretty cool. Look, you can even see my WiFi AP! Let's try some more and pull some trends out.

>>> values = [x.get("value") for x in sense.room_sensors()["humidity"]][:10]
>>> min(values)
>>> max(values)

I plan to keep maintaining it as long as it's needed, so I welcome co-maintainers, and I'd love to see what people build with it! So far, I'm using it to dump my room data into InfluxDB, pulling information on my room into Grafana. Hopefully more to come!

Happy hacking!

27 June, 2016 01:42AM by Paul Tagliamonte

June 26, 2016

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Nageru 1.3.0 released

I've just released version 1.3.0 of Nageru, my live software video mixer.

Things have been a bit quiet on the Nageru front recently, for two reasons: First, I've been busy with moving (from Switzerland to Norway) and associated job change (from Google to MySQL/Oracle). Things are going well, but these kinds of changes tend to take, well, time and energy.

Second, the highlight of Nageru 1.3.0 is encoding of H.264 streams meant for end users (using x264), not just the Quick Sync Video streams from earlier versions, which work more as a near-lossless intermediate format meant for transcoding to something else later. Like with most things video, hitting such features really hard (I've been doing literally weeks of continuous stream testing) tends to expose weaknesses in upstream software.

In particular, I wanted x264 speed control, where the quality is tuned up and down live as the content dictates. This is mainly because the content I want to stream this summer (demoscene competitions) varies from the very simple to downright ridiculously complex (as you can see, YouTube just basically gives up and creates gray blocks). If you have only one static quality setting, you will have the choice between something that looks like crap for everything, and one that drops frames like crazy (or, if your encoding software isn't all that, like e.g. using ffmpeg(1) directly, just gets behind and all your clients' streams just stop) when the tricky stuff comes. There was an unofficial patch for speed control, but it was buggy, not suitable for today's hardware and not kept at all up to date with modern x264 versions. So to get speed control, I had to work that patch pretty heavily (including making it so that it could work in Nageru directly instead of requiring a patched x264)… and then it exposed a bug in x264 proper that would cause corruption when changing between some presets, and I couldn't release 1.3.0 before that fix had at least hit git.
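The core idea of speed control can be sketched in a few lines (this is my own toy simplification, not Nageru's or the x264 patch's actual logic): move to a faster preset when the encoder falls behind real time, and back to a slower, higher-quality one when it has headroom.

```python
# Toy speed-control loop, fastest presets at the end of the list.
PRESETS = ["veryslow", "slower", "slow", "medium", "fast", "faster",
           "veryfast", "superfast", "ultrafast"]

def adjust_preset(index, encode_time, frame_interval, margin=0.9):
    """Return a new preset index given how long the last frame took."""
    if encode_time > frame_interval:           # falling behind: speed up
        return min(index + 1, len(PRESETS) - 1)
    if encode_time < margin * frame_interval:  # headroom: improve quality
        return max(index - 1, 0)
    return index                               # close to budget: hold

idx = PRESETS.index("medium")
# A 50 ms encode against a 40 ms (25 fps) frame budget forces a step up:
idx = adjust_preset(idx, encode_time=0.050, frame_interval=0.040)
print(PRESETS[idx])  # -> fast
```

Real speed control smooths over many frames and accounts for the encoder's internal queue rather than reacting to a single frame time, but the feedback loop is the same shape.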

Similarly, debugging this exposed an issue with how I did streaming with ffmpeg and the MP4 mux (which you need to be able to stream H.264 directly to HTML5 <video> without any funny and latency-inducing segmenting business); to know where keyframes started, I needed to flush the mux before each one, but this messes up interleaving, and if frames were ever dropped right in front of a keyframe (which they would on the most difficult content, even at speed control's fastest presets!), the “duration” field of the frame would be wrong, causing the timestamps to be wrong and even having pts < dts in some cases. (VLC has to deal with flushing in exactly the same way, and thus would have exactly the same issue, although VLC generally doesn't transcode variable-framerate content so well to begin with, so the heuristics would be more likely to work. Incidentally, I wrote the VLC code for this flushing back in the day, to be able to stream WebM for some Debconf.) I cannot take credit for the ffmpeg/libav fixes (that was all done by Martin Storsjö), but again, Nageru had to wait for the new API they introduce (that just signals to the application when a keyframe is about to begin, removing the need for flushing) to get into git mainline. Hopefully, both fixes will get into releases soon-ish and from there make their way into stretch.

Apart from that, there's a bunch of fixes as always. I'm still occasionally (about once every two weeks of streaming or so) hitting what I believe is a bug in NVIDIA's proprietary OpenGL drivers, but it's nearly impossible to debug without some serious help from them, and they haven't been responding to my inquiries. Every two weeks means that you could be hitting it in a weekend's worth of streaming, so it would be nice to get it fixed, but it also means it's really really hard to make a reproducible test case. :-) But the fact that this is currently the worst stability bug (and that you can work around it by using e.g. Intel's drivers) also shows that Nageru is pretty stable these days.

26 June, 2016 10:00PM

Iustin Pop

Random things of the week - brexit and the pretzel

Random things of the week

In no particular order (mostly).

Coming back from the US, it was easier dealing with the jet-lag this time; doing sports in the morning or at noon and eating light in the evening helps a lot.

The big thing of the week, that has everybody talking, is of course brexit. My thoughts, as written before in a facebook comment: Direct democracy doesn't really work if it's done once in a blue moon. Wikipedia says there have been thirteen referendums in the UK since 1975, but most of them (10) were on devolution issues in individual countries, and only three were UK-wide referendums (quoting from the above page): the first on membership of the European Economic Community in 1975, the second on adopting the Alternative Vote system in parliamentary elections in 2011, and the third is the current one. Which means that a UK-wide referendum happens every 13 years or so.

At this frequency, people are a) not used to informing themselves on the actual issues, b) not believing that their vote will actually change things, and most likely c) not taking the "direct-democracy" aspect seriously (thinking beyond the issue at hand and how it will play together with all the rest of the political decisions). The result is what we've seen: leave politicians already backpedalling on issues, and confusion that yes, leave votes actually counted.

My prognosis for what's going to happen:

  • one option, this gets magically undone, and there will be rejoicing at the barely avoided big damage (small damage already done).
  • otherwise, UK will lose significantly from the economy point of view, enough that they'll try being out of the EU officially but "in" the EU from the point of view of trade.
  • in any case, large external companies will be very wary of investing in production in UK (e.g. Japanese car manufacturers), and some will leave.
  • most of the 52% who voted leave will realise that this was a bad outcome, in around 5 years.
  • hopefully, politicians (both in the EU and in the UK) will try to pay more attention to inequality (here I'm optimistic).

We'll see what happens though. Reading comments on various websites still makes me cringe at how small some people think: "independence" from the EU when the real issue is EU versus the other big blocks—US, China, in the future India; and "versus" not necessarily in a conflict sense, but simply as negotiating power, economic treaties, etc.

Back to more down-to-earth things: this week was quite a good week for me. Including commutes, my calendar turned out quite nice:

Week calendar

The downside was that most of those were short runs or bike sessions. My runs are now usually 6.5K, and I'll try to keep to that for a few weeks, in order to be sure that bones and ligaments have adjusted, and hopefully keep injuries away.

On the bike front, the only significant thing was that I also did the Zwift Canyon Ultimate Pretzel Mission, on the last day of the contest (today): 73.5Km in total, in 3h27m. I've done 60K rides on Zwift before, so the first 60K were OK, but the last ~5K were really hard. My legs felt like logs of wood and I was only pushing a very weak output by the end, although I did hydrate and fuel up during the ride. But I was proud of the fact that on the last sprint (about 2K before the end of the ride), I had ~34s, compared to my all-time best of 29.2s. Not bad after ~3h20m of riding and 1300 virtual meters of ascent. Strava also tells me I got 31 PRs on various segments, but that's because I rode on some parts of Watopia that I had never ridden before (mostly the reverse ones).

Overall, stats for this week: ~160Km in total (virtual and real, biking and running), ~9 hours spent doing sports. Still much lower than the amount of time I was playing computer games, so it's a net win ☺

Have a nice start of the week everyone, and keep doing what moves you forward!

26 June, 2016 07:59PM

Paul Wise

DebCamp16 day 3

Review, approve chromium, gnome-terminal and radeontop screenshots. Disgusted to see the level of creativity GPL violators have. Words of encouragement on #debian-mentors. Pleased to see Tails reproducible builds funding by Mozilla. Point out build dates in versions leads to non-reproducible builds. Point out apt-file search to someone looking for a binary of kill. Review wiki RecentChanges. Alarmingly windy. Report important Debian bug #828215 against unattended-upgrades. Clean up some code in check-all-the-things and work on fixing Debian bug #826089. Wind glorious wind! Much clearer day, nice view of the mountain. More check-all-the-things code clean up and finish up fixing Debian bug #826089. Twinkling city lights and more wind. Final code polish during dinner/discussion. Wandering in the wind amongst the twinklies. Whitelisted one user in the wiki anti-spam system. Usual spam reporting.

26 June, 2016 07:31PM

Michal Čihař

Troja bridge in Prague

I think it's time to renew the tradition of photography posts on this blog. I will start with pictures taken a few weeks ago on the Troja bridge, which is the newest bridge over the Vltava river in Prague.

Filed under: Debian English Photography | 0 comments

26 June, 2016 04:00PM

Vasudev Kamath

Integrating Cython extension with setuptools and unit testing

I was reviewing changes for indic-trans as part of GSoC 2016. The module is an improvement over our original transliteration module, which did its job by simple substitution.

This new module uses machine learning of some sort and utilizes Cython, numpy and scipy. The student had kept pre-compiled shared libraries in the git tree to make sure the package built and passed its tests, but this was not the correct way. I started looking at how to build these files during the package build and remove them from the code base.

There is Cython documentation for distutils, but none for setuptools. It is probably similar to integrating other Python extensions into setuptools, but this was a first for me, so after a bit of searching and trial and error, below is what I did.

We need to use the Extension class from setuptools and give it the paths to the modules we want to build. In my case beamsearch and viterbi are the 2 modules. So I added the following lines to setup.py:

from setuptools.extension import Extension
from Cython.Build import cythonize
import numpy

extensions = [
    # module paths are from the post; the .pyx file names are assumed
    Extension("indictrans._decode.beamsearch",
              ["indictrans/_decode/beamsearch.pyx"],
              include_dirs=[numpy.get_include()]),
    Extension("indictrans._decode.viterbi",
              ["indictrans/_decode/viterbi.pyx"],
              include_dirs=[numpy.get_include()]),
]

The first argument to Extension is the module name and the second is a list of files used to build the module. The additional include_dirs argument is not normally necessary unless you are working in a virtualenv. On my system the build used to work without it, but it was failing in Travis CI, so I added it to fix the CI builds. OTOH, it did work without this on Circle CI.

Next, provide these extensions to the ext_modules argument of setup, as shown below:

setup(
    # ... other setup() arguments as before ...
    ext_modules=cythonize(extensions),
)

And for reference, here is the full setup.py after the modifications.

#!/usr/bin/env python

from setuptools import setup
from setuptools.extension import Extension
from Cython.Build import cythonize

import numpy

extensions = [
    # module paths are from the post; the .pyx file names are assumed
    Extension("indictrans._decode.beamsearch",
              ["indictrans/_decode/beamsearch.pyx"],
              include_dirs=[numpy.get_include()]),
    Extension("indictrans._decode.viterbi",
              ["indictrans/_decode/viterbi.pyx"],
              include_dirs=[numpy.get_include()]),
]

setup(
    # remaining project metadata arguments omitted
    ext_modules=cythonize(extensions),
)


So now we can build the extensions (shared libraries) using the following command.

python setup.py build_ext

Another challenge I faced was a missing extension when running the tests. We use pbr in the above project, and testrepository with subunit for running the tests. It looks like the test runner does not build extensions by default, so I modified the Makefile to build the extensions in place before running the tests. The travis target of my Makefile is as follows.

travis:
	[ ! -d .testrepository ] || \
		find .testrepository -name "times.dbm*" -delete
	python setup.py build_ext -i
	python setup.py test --coverage
	flake8 --max-complexity 10 indictrans

I had to build the extensions in place using the -i switch, because otherwise the tests won't find the indictrans._decode.beamsearch and indictrans._decode.viterbi modules. What the -i switch basically does is, after building the shared library, place it in the module directory, in our case indictrans._decode.

The test for the existence of the .testrepository folder is to work around this bug in testrepository, which results in test failures when running tests under tox.

26 June, 2016 02:24PM by copyninja

Kevin Avignon

Tech questions 1-9 : LINQ questions

Hey guys, This is a new series I will try to maintain to the best of my capabilities. I have this awesome blogger who happens to be also a Microsoft MVP called Iris Classon. After her first year of programming, she started to ask and get answers for what she’d call “stupid questions”. Why would … Continue reading Tech questions 1-9 : LINQ questions

26 June, 2016 12:10PM by KevinAvignon

Clint Adams

A local script for local people

This isn't actually answering the question, but it's close. It's also horrible, so whoever adopts Enrico's script should also completely rewrite this or burn it along with the stack of pizza boxes and the grand piano.



#!/bin/zsh
set -e

# NB: the original variable definitions were lost from this listing; the
# following placeholders (names taken from later in the script) are one
# plausible, hypothetical reconstruction.
keyring=${1:?path to keyring}
keyserver=${2:?keyserver URL}
myfpr=${3:?your key fingerprint}
NEWKEYS=$(mktemp)
NEWKEYRING=$(mktemp)
PATHS=$(mktemp)
FARTHEST_TEN=$(mktemp)


# this doesn't handle hokey fetch failures
#(for fpr in $(hkt list --keyring ${keyring} --output-format JSON | jq '.[].publickey.fpr')
#  hokey fetch --keyserver "${keyserver}" --validation-method MatchPrimaryKeyFingerprint "${(Q)fpr}"
#done) >${NEWKEYS}
#gpg2 --no-default-keyring --keyring ${NEWKEYRING} --import ${NEWKEYS}

cp "${keyring}" "${NEWKEYRING}"
gpg2 --no-default-keyring --keyring ${NEWKEYRING} --refresh

hkt findpaths --keyring ${NEWKEYRING} '' '' '' > ${PATHS}
id=$(awk -F, "/${myfpr})\$/ {sub(/\(/,BLANKY,\$1);print \$1;}" ${PATHS})
grep -e ",\[${id}," -e ",${id}\]" ${PATHS} | sort -n | tail -n 10 > ${FARTHEST_TEN}
targetids=(${(f)"${$((sed 's/^.*\[//;s/,.*$//;' ${FARTHEST_TEN}; sed 's/\])$//;s/.*,//;' ${FARTHEST_TEN}) | sort -n -u | grep -v "^${id}$")}"})
targetfprs=($(for i in ${targetids}; do awk -F, "/\(${i},[^[]/ {sub(/\)/,BLANKY,\$2); print \$2}" ${PATHS}; done))
gpg2 --no-default-keyring --keyring ${NEWKEYRING} --list-keys ${targetfprs}


pub   rsa4096/0x664F1238AA8F138A 2015-07-14 [SC]
      Key fingerprint = 3575 0B8F B6EF 95FF 16B8  EBC0 664F 1238 AA8F 138A
uid                   [ unknown] Daniel Lange <dl.ml1@usrlocal.de>
sub   rsa4096/0x03BEE1C11DB1954B 2015-07-14 [E]

pub   rsa4096/0xDF23DA3396978EB3 2014-09-05 [SC]
      Key fingerprint = BBBC 58B4 5994 CF9C CC56  BCDA DF23 DA33 9697 8EB3
uid                   [  undef ] Michael Meskes <michael@fam-meskes.de>
uid                   [  undef ] Michael Meskes <meskes@postgresql.org>
uid                   [  undef ] Michael Meskes <michael.meskes@credativ.com>
uid                   [  undef ] Michael Meskes <meskes@debian.org>
sub   rsa4096/0x85C3AFFECF0BF9B5 2014-09-05 [E]
sub   rsa4096/0x35D857C0BBCB3B25 2014-11-04 [S]

pub   rsa4096/0x1E953E27D4311E58 2009-07-12 [SC]
      Key fingerprint = C2FE 4BD2 71C1 39B8 6C53  3E46 1E95 3E27 D431 1E58
uid                   [  undef ] Chris Lamb <chris@chris-lamb.co.uk>
uid                   [  undef ] Chris Lamb <lamby@gnu.org>
uid                   [  undef ] Chris Lamb <lamby@debian.org>
sub   rsa4096/0x72B3DBA98575B3F2 2009-07-12 [E]

pub   rsa4096/0xDF6D76C44D696F6B 2014-08-15 [SC] [expires: 2017-06-03]
      Key fingerprint = 1A6F 3E63 9A44 67E8 C347  6525 DF6D 76C4 4D69 6F6B
uid                   [ unknown] Sven Bartscher <sven.bartscher@weltraumschlangen.de>
uid                   [ unknown] Sven Bartscher <svenbartscher@yahoo.de>
uid                   [ unknown] Sven Bartscher <kritzefitz@debian.org>
sub   rsa4096/0x9E83B071ED764C3A 2014-08-15 [E]
sub   rsa4096/0xAEB25323217028C2 2016-06-14 [S]

pub   rsa4096/0x83E33BD7D4DD4CA1 2015-11-12 [SC] [expires: 2017-11-11]
      Key fingerprint = 0B5A 33B8 A26D 6010 9C50  9C6C 83E3 3BD7 D4DD 4CA1
uid                   [ unknown] Jerome Charaoui <jerome@riseup.net>
sub   rsa4096/0x6614611FBD6366E7 2015-11-12 [E]
sub   rsa4096/0xDB17405204ECB364 2015-11-12 [A] [expires: 2017-11-11]

pub   rsa4096/0xF823A2729883C97C 2014-08-26 [SC]
      Key fingerprint = 8ED6 C3F8 BAC9 DB7F C130  A870 F823 A272 9883 C97C
uid                   [ unknown] Lucas Kanashiro <kanashiro@debian.org>
uid                   [ unknown] Lucas Kanashiro <kanashiro.duarte@gmail.com>
sub   rsa4096/0xEE6E5D1A9C2F5EA6 2014-08-26 [E]

pub   rsa4096/0x2EC0FFB3B7301B1F 2014-08-29 [SC] [expires: 2017-04-06]
      Key fingerprint = 76A2 8E42 C981 1D91 E88F  BA5E 2EC0 FFB3 B730 1B1F
uid                   [ unknown] Niko Tyni <ntyni@debian.org>
uid                   [ unknown] Niko Tyni <ntyni@cc.helsinki.fi>
uid                   [ unknown] Niko Tyni <ntyni@iki.fi>
sub   rsa4096/0x129086C411868FD0 2014-08-29 [E] [expires: 2017-04-06]

pub   rsa4096/0xAA761F51CC10C92A 2016-06-20 [SC] [expires: 2018-06-20]
      Key fingerprint = C9DE 2EA8 93EE 4C86 BE73  973A AA76 1F51 CC10 C92A
uid                   [ unknown] Roger Shimizu <rogershimizu@gmail.com>
sub   rsa4096/0x2C2EE1D5DBE7B292 2016-06-20 [E] [expires: 2018-06-20]
sub   rsa4096/0x05C7FD79DD03C4BB 2016-06-20 [S] [expires: 2016-09-18]

Note that this completely neglects potential victims who are unconnected within the KSP set.

26 June, 2016 10:05AM

Niels Thykier

Anti-declarative packaging – top 15 build-helpers inserting maintscripts

Debian packages can run arbitrary code via “maintainer scripts” (sometimes shortened into “maintscripts”) during installation/removal etc. While they certainly have their use cases, their failure modes causes “exciting” bugs like “fails to install” or the dreaded “fails to remove”.

They also have other undesirable effects such as:

  • Bugs in/Updates to auto-generated snippets require a rebuild of all packages (not to mention the obvious code-duplication in all packages).
  • In case of circular dependencies[1] all having “postinst” scripts, dpkg will have to guess which package to configure first.
  • They require forking a shell at least once for each maintscript.
  • They complicate the implementations of e.g. detached chroot creation.

Accordingly, I think we should aim for a more declarative packaging style.  To help facilitate this, I have implemented 3 tracking tags in Lintian.

With these, we were able to learn that 73.5% of all packages do not have any of these scripts.  But I can now also produce a list of helpers that insert the most maintainer script snippets. The current top 15 is:

  1. “dhpython” with 3775 instances
    • This is an umbrella for all helpers using dh-python’s python module, see #827774.
  2. dh_installmenu with 1861 instances
  3. dh_makeshlibs with 1396 remaining instances
  4. dh_installinit with 1224 instances
  5. dh_python2 with 1168 instances
  6. dh_installdebconf with 772 instances
  7. dh_installdeb with 754 instances
    • These are the dpkg-maintscript-helper snippets for “rm_conffile”, “mv_conffile” etc.  Hopefully in the near future, dpkg will support these directly.
  8. dh_systemd_enable with 447 instances
  9. dh_installemacsen with 179 instances
  10. dh_icons with 165 instances
  11. dh_installtex with 137 instances
  12. dh_apache2 with 117 instances
  13. dh_installudev with 98 instances
  14. dh_installxfonts with 87 instances
  15. dh_systemd_start with 79 instances
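For comparison, the dh_installdeb snippets counted in item 7 are already driven by a declarative input: a debian/<package>.maintscript file that lists dpkg-maintscript-helper commands, one per line. A hypothetical example (the paths and versions are invented for illustration):

```
rm_conffile /etc/foo/obsolete.conf 1.2-3~
mv_conffile /etc/foo/old.conf /etc/foo/new.conf 1.2-3~
```

dh_installdeb expands each line into the matching preinst/postinst/postrm snippets, which is why native dpkg support for this file would make those generated scripts unnecessary.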

With this list, it seems to me that some obvious focus areas would be:

  • Replacing the python scripts (I presume it is the byte-code handling, but I have not looked at this at all)
  • Migrating away from menu files
  • Support enabling + starting/stopping/restarting a service declaratively.
    • This might have a “hidden” requirement on declaratively creating service users if we want these packages to become truly “maintscript-less”.

Eventually we will also have to dig through all the “manual” maintainer scripts. But I think we have got plenty to start with. :)


[1] For some, circular dependencies in itself is an issue. I can certainly appreciate them as being suboptimal, but most of the issues we have are probably caused by insufficient tooling rather than a theoretical issue (that is, if we remove all postinst scripts).

Filed under: Debhelper, Debian, Lintian

26 June, 2016 07:51AM by Niels Thykier

Kevin Avignon

Shaping your professional skills structure

Hey guys, So, professionally shaped skills… What’s that? Basically, it’s the form your skills take concerning your expertise in your individual field(s). This form will depend on both depth and broadness. Trying to learn as many things as possible will lead to little depth and a large broadness of skills. The exact opposite leads to … Continue reading Shaping your professional skills structure

26 June, 2016 12:58AM by KevinAvignon

June 25, 2016

Dimitri John Ledkov

Post-Brexit - The What Now?

Out of an electorate of 46,500,001, 17,410,742 voted to leave, which is a mere 37.4%, or just over a third [source]. In my books this is not a clear expression of the UK's wishes.

The reaction that the results have caused is devastating. The Scottish First Minister has announced plans for a 2nd Scottish independence referendum [source]. Londoners are filing petitions calling for an independent London [source, source]. The Prime Minister announced his resignation [source]. Things are not stable.

I do not believe that a supermajority of the electorate is in favor of leaving the EU. I don't even believe that those who voted to leave considered the break-up of the UK as the inevitable outcome of the leave vote. There are numerous videos on the internet about that, impossible to quantify or reliably cite, but see for example this [source].

So What Now?


I urge everyone to start protesting the outcome of the mistake that happened last Thursday. The 4th of July is a good symbolic date to show your discontent with the UK government and the tiny minority who are about to cause the country to fall apart with no other benefits. Please stand up and make yourself heard.
  • General Strikes 4th & 5th of July
There are 64,100,000 people living in the UK according to the World Bank; maybe the government should fear and listen to the unheard third. The current "majority" parliament was only elected by 24% of the electorate.

It is time for people to actually take control: we can fix our parliament, we can stop austerity, we can prevent the break-up of the UK, and we can stay in the EU. Over to you.

ps. How to elect next PM?

Electing the next PM will be done within the Conservative Party, and that's kind of a bummer, given the desperate state the country is currently in. It is not that hard to predict that Boris Johnson is a front-runner. If you wish to elect a different PM, I urge you to splash out 25 quid and register as a member of the Conservative Party just for one year =) this way you will get a chance to directly elect the new Leader of the Conservative Party and thus the new Prime Minister. You can backdoor the Conservative election here.

25 June, 2016 07:24PM by Dimitri John Ledkov (noreply@blogger.com)

June 24, 2016

Joey Hess

twenty years of free software -- part 5 pristine-tar

I've written retrospectively about pristine-tar before, when I stopped maintaining it. So, I'll quote part of that here:

[...] a little bit about the reason I wrote pristine-tar in the
first place. There were two reasons:

1. I was once in a talk where someone mentioned that Ubuntu had/was
   developing something that involved regenerating orig tarballs
   from version control.
   I asked the obvious question: How could that possibly be done?
   The (slightly hung over) presenter did not have a satisfactory
   response, so my curiosity was piqued to find a way to do it.
   (I later heard that Ubuntu has been using pristine-tar..)

2. Sometimes code can be subversive. It can change people's perspective
   on a topic, nudging discourse in a different direction. It can even
   point out absurdities in the way things are done. I may or may not
   have accomplished the subversive part of my goals with pristine-tar.

Code can also escape its original intention. Many current uses of
pristine-tar fall into that category. So it seems likely that some
people will want it to continue to work even if it's met the two goals
above already.

For me, the best part of building pristine-tar was finding an answer to the question "How could that possibly be done technically?" It was also pretty cool to be able to use every tarball in Debian as the test suite for pristine-tar.

I'm afraid I kind of left Debian in the lurch when I stopped maintaining pristine-tar.

"Debian has probably hundreds, if not thousands of git repositories using pristine-tar. We all rely now on an unmaintained, orphaned, and buggy piece of software." -- Norbert Preining

So I was relieved when it finally got a new maintainer just recently.

Still, I don't expect I'll ever use pristine-tar again. It's the only software I've built in the past ten years that I can say that about.

Next: twenty years of free software -- part 6 moreutils

24 June, 2016 01:38PM

Kevin Avignon

Tech questions 10-17: FP questions

Hey guys, Today’s post is to make you understand that even if object-oriented programming (OOP) now feels natural and exquisite, there are better ways to design and implement your solutions to make them better and of course, safer. My goal today is to make you want to adopt a functional mindset when creating software … Continue reading Tech questions 10-17: FP questions

24 June, 2016 12:07PM by KevinAvignon

Norbert Preining

Rest in peace UK

I am mourning for the UK. I feel so much pain and pity for all my good friends over there. Stupidity has won again. Good bye UK, your long reign has found its end. The rest is silence.




(Graphic from The Guardian – EU referendum results in full)

24 June, 2016 04:22AM by Norbert Preining

Debian/TeX Live 2016.20160623-1

About one month has passed since we released TeX Live 2016, and more than a month since the last Debian packages, so it is high time to ship a new checkout of upstream. Nothing spectacularly new here, just lots and lots of updates since the freeze.


I am dedicating this release to those intelligent beings who voted against the stupid Brexit and for remaining in the EC! – I am still an optimist!

New packages

aucklandthesis, autobreak, cquthesis, getargs, hustthesis, ietfbibs, linop, markdown, olsak-misc, optidef, sanitize-umlaut, umbclegislation, wordcount, xcntperchap.

Updated packages

academicons, achemso, acmart, acro, animate, apa6, arabluatex, archaeologie, babel-hungarian, beamertheme-epyt, beebe, biblatex-abnt, biblatex-anonymous, biblatex-bookinother, biblatex-caspervector, biblatex-chicago, biblatex-manuscripts-philology, biblatex-morenames, biblatex-opcit-booktitle, biblatex-philosophy, biblatex-realauthor, biblatex-source-division, biblatex-subseries, bidi, bookcover, bxjscls, caption, chemformula, chemmacros, circuitikz, cloze, cochineal, context, csplain, cstex, datetime2, denisbdoc, dvipdfmx-def, epstopdf, erewhon, exsol, fbb, fibeamer, fithesis, fontawesome, fontspec, fonts-tlwg, geschichtsfrkl, getmap, glossaries, glossaries-extra, graphics, graphics-cfg, gregoriotex, gzt, he-she, hook-pre-commit-pkg, hyperref, ifluatex, keyvaltable, koma-script, l3build, latex, latex-bin, limap, lollipop, lshort-chinese, luaotfload, luatex85, luatex-def, luatexja, lua-visual-debug, marginnote, mcf2graph, media9, minted, mptopdf, msu-thesis, musixtex, navigator, nwejm, oberdiek, patchcmd, pdfcomment, pdftex-def, pdfx, pkuthss, platex, pstricks, ptex, ptex2pdf, ptex-base, ptex-ng, reledmac, repere, scheme-xml, sduthesis, showlabels, tableaux, tcolorbox, tex4ht, texinfo, texlive-scripts, tex-overview, textpos, tools, translations, tudscr, unicode-data, uplatex, uptex, xassoccnt, xcharter, xetex, xindy, yathesis, ycbook.


24 June, 2016 02:33AM by Norbert Preining

June 23, 2016

Jaminy Prabaharan

GSoC-Journey till Mid term

Hi readers,

Here comes my journey till the mid-term (June 21st) as a blog to share my experience.

I have previously worked on some socially relevant projects such as “smart guidance for blind” and “sensor based wireless controller”. I was selected as a speaker for FOSSASIA-16 (Asia’s premier open technology event) to talk on the project “smart guidance for blind” (see the FOSSASIA speakers list). It was a great experience participating in the event at the Singapore Science Centre. I got an opportunity to meet open source contributors from all over the world (even though it is an Asian event, participation came from all over the world). There were pre-meetups for FOSSASIA on the day before the three-day event; I attended the one organised by RedHat, Singapore, and discussed many topics related to open source.

The three days of the FOSSASIA event were a great experience. It was my second time as a speaker at an international conference; my talk was on the second day. Sharing is the best way to increase your knowledge. The talks and workshops were brainstorming sessions; I learnt many new things and got the courage to contribute to open source. I met Daniel Pocock at the Debian exhibition table; meeting awesome people can be the turning point of a life. We had a discussion about Debian projects, and it motivated me towards open source software. We also discussed Real Time Communication, and I was encouraged to apply for GSoC (Google Summer of Code). As per our discussion, I prepared a project proposal on “improving voice, video and chat communication with free software” and submitted it for GSoC. I was selected to contribute to Debian with a stipend from Google.

This was my first application for GSoC, and I was selected to contribute to open source and free software. I would like to thank Google and Debian for this amazing experience.

Learning and coding have begun. I updated my laptop to Jessie, the latest version of Debian, and got acquainted with the new platform. I got to learn many things about Real Time Communication, and learnt more about SIP, XMPP and peer-to-peer technology to work on my project. It’s always better to be clear with the theory before coding. When it comes to voice and video over IP, most people nowadays are quick to use Skype, WhatsApp, or Viber. The main goals of my project are helping people avoid proprietary communications tools like Skype, Viber and WhatsApp, and simplifying the setup of free alternatives like Jitsi, Linphone, Ekiga, Tox (qTox) and Mumble. I downloaded some of the already available open source VoIP clients to find the problems behind them and improve them further. Bootstrapping any business-relevant network based on these free alternatives is still hard.

Would you like to list the senders, receivers and dates of the messages in your mail inbox? Python’s imaplib library can be used to connect to an email account, examine every message in every folder and look at the “To”, “From” and “CC” headers of every email message in the folder.
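That idea can be sketched with Python's standard imaplib and email modules; the host, account and password below are placeholder values, so adjust them (and the folder name) for your own account:

```python
import imaplib
import email
from email.utils import parseaddr

# Placeholder credentials -- substitute your own account details.
HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "secret"

conn = imaplib.IMAP4_SSL(HOST)
conn.login(USER, PASSWORD)
conn.select("INBOX", readonly=True)  # read-only: don't mark messages seen

# Fetch just the headers of every message and print the interesting ones.
typ, data = conn.search(None, "ALL")
for num in data[0].split():
    typ, msg_data = conn.fetch(num, "(BODY.PEEK[HEADER])")
    msg = email.message_from_bytes(msg_data[0][1])
    sender = parseaddr(msg.get("From", ""))[1]
    print(msg.get("Date"), sender, "->", msg.get("To"), "cc:", msg.get("CC"))

conn.logout()
```

The same loop can be pointed at other folders by passing their names to select().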

Do you have phone numbers and other contact details in old emails? Would you like a quick way to data-mine your inbox to find them and help migrate them to your address book? I got help from the phonenumbers library for parsing, formatting, and validating international phone numbers. I would like to share how I imported this library into my code. Download the library, open its directory in the terminal, and type

$ python setup.py install

to install the library. Now you can call its functions by importing phonenumbers.

You can go through the code in my GitHub profile here. (I recently started committing my projects to GitHub.)

Iain R. Learmonth joined my journey as a mentor and helped in solving some issues in my code through GitHub.

It has been a wonderful journey so far. I will keep working to improve voice, video and chat communication with free software. Stay connected to know more about my further journey through GSoC.


23 June, 2016 03:04PM by Jaminy.P

Jonathan McDowell

Fixing missing text in Firefox

Every now and again I get this problem where Firefox won’t render text correctly (on a Debian/stretch system). Most websites are fine, but the odd site just shows up with blanks where the text should be. Initially I thought it was NoScript, but turning that off didn’t help. Daniel Silverstone gave me a pointer today that the pages in question were using webfonts, and that provided enough information to dig deeper. The sites in question were using Cantarell, via:

src: local('Cantarell Regular'), local('Cantarell-Regular'), url(cantarell.woff2) format('woff2'), url(cantarell.woff) format('woff');

The Firefox web dev inspector didn’t show it trying to fetch the font remotely, so I removed the local() elements from the CSS. That fixed the page, letting me pinpoint the problem as a local font issue. I have fonts-cantarell installed so at first I tried to remove it, but that breaks gnome-core. So instead I did an fc-list | grep -i cant to ask fontconfig what it thought was happening. That gave:

/usr/share/fonts/opentype/cantarell/Cantarell-Regular.otf.dpkg-tmp: Cantarell:style=Regular
/usr/share/fonts/opentype/cantarell/Cantarell-Bold.otf.dpkg-tmp: Cantarell:style=Bold
/usr/share/fonts/opentype/cantarell/Cantarell-Bold.otf: Cantarell:style=Bold
/usr/share/fonts/opentype/cantarell/Cantarell-Oblique.otf: Cantarell:style=Oblique
/usr/share/fonts/opentype/cantarell/Cantarell-Regular.otf: Cantarell:style=Regular
/usr/share/fonts/opentype/cantarell/Cantarell-Bold-Oblique.otf: Cantarell:style=Bold-Oblique
/usr/share/fonts/opentype/cantarell/Cantarell-Oblique.otf.dpkg-tmp: Cantarell:style=Oblique
/usr/share/fonts/opentype/cantarell/Cantarell-BoldOblique.otf: Cantarell:style=BoldOblique

Hmmm. Those .dpkg-tmp files looked odd, and sure enough they didn’t actually exist. So I did a sudo fc-cache -f -v to force a rebuild of the font cache and restarted Firefox (it didn’t seem to work before doing so) and everything works fine now.

It seems that fc-cache must have been run at some point when dpkg had not yet completed installing an update to the fonts-cantarell package. That seems like a bug - fontconfig should probably ignore .dpkg* files, but equally I wouldn’t expect it to be run before dpkg had finished its unpacking stage fully.

23 June, 2016 02:23PM


Joey Hess

twenty years of free software -- part 4 ikiwiki-hosting

ikiwiki-hosting is a spin-off from ikiwiki. I wrote it to manage many ikiwiki instances for Branchable, and made it free software out of principle.

While Branchable has not reached the point of providing much income, it's still running after 6 years. Ikiwiki-hosting makes it pretty easy to maintain it, and I host all of my websites there.

A couple of other people have also found ikiwiki-hosting useful, which is not only nice, but led to some big improvements to it. Mostly though, releasing the software behind the business as free software caused us to avoid shortcuts and build things well.

Next: twenty years of free software -- part 5 pristine-tar

23 June, 2016 12:26PM

June 22, 2016

Scarlett Clark

KDE: Debian: *ubuntu snappy: Reproducible builds, Randa! and much more…

#Randa2016 KDE Sprint

#Randa2016 KDE Sprint


I am very late with this post due to travel, flu and jetlag; sorry!


For this I was able to come up with a patch for kconfig_compiler to encode generated files as UTF-8.
Review request is here:
This has been approved, and I will push it as soon as I have patched the Qt 5 Frameworks version as well.

Both the kde4libs and KF5 kconfig patches have been pushed upstream to KDE.


WIP. This has been a steep learning curve: according to the notes it was an easy embedded kernel version, but that was not the case! After grueling hours of trying to sort out randomness in the debug output, I finally narrowed it down to cases where QStringLiteral was used with non-letter characters, e.g. (" <"). These were causing debug symbols to be generated with ( lambda() ), which made the symbol/debug files unreproducible. Fixing all of these in the code to use QString::fromUtf8 seems to resolve it, so I started working on a mega patch for upstream. This last week I spent a large portion making my way through a mega patch for kxmlgui, when it was suggested to me to write a small Qt app to test QStringLiteral in isolation, and sure enough two builds were byte-for-byte identical. So QStringLiteral may not be the issue at all. With some more assistance I am now expanding my test app with several QStringLiterals of varying lengths; we suspect it is a padding issue, which complicates things.

I am still fighting with this one and will set it aside to simmer for now, as I have no idea how to fix padding issues.

I am testing a patch to fix umask issues for anyone that uses the kapptemplate generation macro. Thank you Simon for pointing me to this.
known affected:

The kapptemplate generation users/groups and umask patch has been pushed upstream.

KDE Randa!:
Despite catching a terrible flu, I accomplished more than I would have at home, thanks to the awesome devs who helped me out!

  • I have delegated the windows backend to Hannah and Kevin, if emerge is successful with Windows we will implement it on OSX as well.
  • Android docker image is up and running.
  • Several snappy packages done. Improved the snapcraft.yaml creation automation scripts started by Harald. Got help from
    David ( he even made a patch! ) with some issues we were facing with kio.
  • KDE CI DSL adjustments for 5 new platforms
  • Port tools/* python scripts to python3


  • Python automation scripts can no longer find any projects except qt5… I need to get help from Ben, as these scripts are originally his.
  • Finish yaml CI files

Randa as usual was an amazing experience. Yes it is very hard work, but you have the beauty of the Swiss Alps at your fingertips! Not to mention all the
friendly faces and collaboration. A big thank you to all supporters and the Randa team!

Please help make KDE better by supporting the very important Randa Sprint:

Have a great day.

22 June, 2016 04:47PM by Scarlett Clark


Joey Hess

twenty years of free software -- part 3 myrepos

myrepos is really just an elaborated foreach (@myrepos) loop, but its configuration and extension mechanism, a sort of hybrid between an .ini file and a shell script, is quite nice, and plenty of other people have found it useful.
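
That .ini/shell hybrid looks roughly like this (a hypothetical ~/.mrconfig; the repository paths and URLs are invented for illustration):

```
[src/dotfiles]
checkout = git clone git://example.com/dotfiles.git
update = git pull --rebase

[src/oldproject]
checkout = svn checkout svn://example.com/oldproject/trunk oldproject
```

Running mr update at the top level then updates every configured repository using whatever version control system it happens to use.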

I had to write myrepos when I switched from subversion to git, because git's submodules are too limited to meet my needs, and I needed a tool to check out and update many repositories, not necessarily all using the same version control system.

It was called "mr" originally, but I renamed the package because it's impossible to google for "mr". This is the only software I've ever renamed.

Next: twenty years of free software -- part 4 ikiwiki-hosting

22 June, 2016 04:24PM

Andrew Cater

Why share / why collaborate? - Some useful sources outside Debian.

"We will encourage you to develop the three great virtues of a programmer: laziness, impatience, and hubris."
[Larry Wall, Programming Perl, O'Reilly Assoc. (and expanded at http://c2.com/cgi/wiki?LazinessImpatienceHubris) ]

Because "A mind is a terrible thing to waste"
 [The above copyright Young and Rubicam, advertisers, for UNC Fund, 1960s]

"Why I Must Write GNU

I consider that the Golden Rule requires that if I like a program I must share it with other people who like it. Software sellers want to divide the users and conquer them, making each user agree not to share with others. I refuse to break solidarity with other users in this way. I cannot in good conscience sign a nondisclosure agreement or a software license agreement. ... "
[rms, GNU Manifesto copyright 1985-2014 Free Software Foundation Inc. https://www.gnu.org/gnu/manifesto.html]

"La pédagogie, l’information, la culture et le débat d’opinion sont le seul fait des utilisateurs, des webmestres indépendants et des initiatives universitaires et associatives."
 Education, information, culture and debate can only come from users, independent webmasters, academic or associative organizations.
[le minirézo http://www.uzine.net/article60.html]

We value:
  1. Contributors and facilitators over ‘editors’ and ‘authors’
  2. Collaboration over individualised production
  3. Here and now production over sometime soon production
  4. Meaningful credit for all contributors over single author attribution
 [https://github.com/greyscalepress/manifestos - from which much of the above quotations were abstracted - Manifestos for the Internet Age,
Greyscale Press ISBN-13: 978-2-940561-02-5]

[Note] Github repository is marked with licence of CC-Zero but explicitly states that licences of the individual pieces of writing should be respected

So: collaboration matters. Not repeating needless make-work that someone else has already done matters. Giving due credit, sharing, doing and "do-ocracy" matter above all.

Perversely, acknowledging prior work and prior copyright correctly is the beginning and end of the law. Only by doing this conscientiously, and sharing in giving due credit, can any of us truly participate.

It seems clear to me, at least, that only by contributing openly and freely, allowing others to make use of your expertise, opinions and prior experience, can anyone progress in good conscience.

Accordingly, I recommend to my work colleagues and those I advise that they only consider FLOSS licences, that they do not make use of code snippets or random, unlicensed code culled from GitHub, and that they contribute back.

22 June, 2016 03:56PM by Andrew Cater (noreply@blogger.com)

"But I'm a commercial developer / a government employee"

Following on:

Having seen some posts about this elsewhere on the 'Net:

  • Your copyright remains your own unless you assign it
  • Establish what you are being paid for: are you being paid for:
  1. Your specific area of FLOSS expertise (or)
  2. Your time / hours in an area unrelated to your FLOSS expertise (or)
  3. A job that has no impact or bearing on your FLOSS expertise (or)
  4. Your time / hours only - and negotiate accordingly
Your employer may be willing to negotiate / grant you an opt-out clause to protect your FLOSS expertise / accept an additional non-exclusive licence to your FLOSS code / be prepared to sign an assignment, e.g.

"You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright
interest in the program `Gnomovision'
(which makes passes at compilers) written
by James Hacker.

signature of Ty Coon, 1 April 1989
Ty Coon, President of Vice"

If none of the above is feasible: don't contribute anything that crosses the streams and mingles commercial and FLOSS expertise, however much you're offered to do so.

Patents / copyrights

"In the 1980s I had not yet realized how confusing it was to speak of “the issue” of “intellectual property”. That term is obviously biased; more subtle is the fact that it lumps together various disparate laws which raise very different issues. Nowadays I urge people to reject the term “intellectual property” entirely, lest it lead others to suppose that those laws form one coherent issue. The way to be clear is to discuss patents, copyrights, and trademarks separately. See further explanation of how this term spreads confusion and bias."
 [http://www.gnu.org/gnu/manifesto.en.html - footnote 8.]

If you want to assert a patent - it's probably not FLOSS. Go away :)

If you want to assert a trademark of your own - it's probably not FLOSS. Go away :)
 [Trademarks may ordinarily be outside the scope of normal FLOSS legal considerations - but should be acknowledged wherever they occur both as a matter of law and as a matter of courtesy]

Copyright gives legal standing (locus standi in the terminology of English common law) to sue for infringement - that's the basis of licence enforcement actions.

Employees of governments and those doing government work
  • Still have the right to own authorship and copyrights and to negotiate accordingly
  • May need to establish more clearly what they're being paid for
  • May be able to advise, influence or direct policy towards FLOSS in their own respective national jurisdiction
  • Should, ideally, be primarily acknowledged as individuals, holding and maintaining an individual reputation, and only secondarily as contractors/employees/others associated with government work.
  • Contribution to national / international standards, international agreements and shared working practices should be informed in the light of FLOSS work.
This is complex: some FLOSS contributors see a significant amount of this as immaterial to them, in the same way that some indigenous populations do not acknowledge imposed colonial legal structures as valid - but both value systems can co-exist.

22 June, 2016 03:48PM by Andrew Cater (noreply@blogger.com)

How to share collaboratively

Following on:

When contributing to mailing lists and fora:
  • Contribute constructively - no one likes to be told "You've got a REALLY ugly baby there" or equivalent.
  • Think through what you post: check references and check that it reads clearly and is spelled correctly
  • Add value
 When contributing bug reports:
  • Provide details of your hardware and software, as fully as you can
  • Answer questions carefully: Ask questions the smart way: http://www.catb.org/esr/faqs/smart-questions.html
  • Be prepared to follow up queries / provide sufficient evidence to reproduce behaviour or provide pathological test cases 
  • Provide a patch if possible: even if it's only pseudocode
When adding to / modifying FLOSS software:
  • Keep pristine sources that you have downloaded
  • Maintain patch series against pristine source
  • Talk to the originators of the software / current maintainers elsewhere
  • Follow upstream style if feasible / a consistent house style if not
  • Be generous in what you accept: be precise in what you put out
  • Don't produce licence conflicts - check and check again that your software can be distributed.
  • Don't apply inconsistent copyrights
When writing new FLOSS software / "freeing" prior commercial/closed code under a FLOSS licence:
  • Make permissions explicit and publish under a well established FLOSS licence 
  • Be generous to potential contributors and collaborators: render them every assistance so that they can help you better
  • Be generous in what you accept: be precise in what you put out
  • Don't produce licence conflicts - check and check again that your software can be distributed.
  • Don't apply inconsistent copyrights: software you write is your copyright at the outset until you assign it elsewhere
  • Contribute documentation / examples
  • Maintain a bugtracker and mailing lists for your software
If you are required to sign a contributor license agreement [CLA]:
  • Ensure that you have the rights you purport to assign
  • Assign the minimum of rights necessary - if you can continue to allow full and free use of your code, do so
  • Meet any required code of conduct [CoC] stipulations in addition to the CLA
Always remember in all of this: just because you understand your code and your working practices doesn't mean that anyone else will.
There is no automatic right to contribution nor any necessary assumption or precondition that collaborators will come forward.
Just because you love your own code doesn't mean that it merits anyone else's interest or that anyone else should value it thereby
"Just because it scratches your itch doesn't mean that it scratches anyone else's - or that it's actually any good / any use to anyone else"

22 June, 2016 03:19PM by Andrew Cater (noreply@blogger.com)

Satyam Zode

GSoC 2016 Week 4 and 5: Reproducible Builds in Debian

This is a brief report on my last week work with Debian Reproducible Builds.

In week 4 I mostly worked on designing interfaces and tackling different issues related to the argument-completion feature of diffoscope, and in week 5 I worked on hiding .buildinfo files from .changes files.

Update for last week’s activities

  • I researched different diffoscope outputs. In the reproducible-builds testing framework only differences between .buildinfo files are given, but I needed diffoscope output for .changes files. Hence I had to build packages locally using our experimental toolchain. My goal was to generate different outputs and see how I could hide .buildinfo files from .changes files.
  • I updated argument completion patch as per suggestions given by Paul Wise (pabs). Patch has been reviewed by Mattia Rizzolo, Holger Levsen and merged by Reiner Herrmann (deki) into diffoscope master. This patch closes #826711. Thanks all for support.

  • For ignoring .buildinfo files when comparing .changes files, we finally decided to enable this by default, without any command-line option to control the hiding.

  • Last week I researched more on .changes and .buildinfo files. After getting guidance from Lunar, I was able to understand the need for this feature. I am in the middle of implementing it.
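
Conceptually, hiding .buildinfo from the .changes comparison amounts to filtering those entries out of the file lists before diffing. A minimal sketch of the idea (my own illustration, not the actual diffoscope implementation):

```python
def filter_buildinfo(changes_text: str) -> str:
    """Drop file-list lines referencing .buildinfo files from a .changes body."""
    kept = []
    for line in changes_text.splitlines():
        # Entries in the Files/Checksums sections are indented one space
        if line.startswith(" ") and line.rstrip().endswith(".buildinfo"):
            continue
        kept.append(line)
    return "\n".join(kept)

sample = """Files:
 deadbeef 1234 admin optional foo_1.0-1_amd64.deb
 cafebabe 5678 admin optional foo_1.0-1_amd64.buildinfo"""
print(filter_buildinfo(sample))
```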

Goal for upcoming week:

  • Finish the implementation of hiding .buildinfo from .changes
  • Start thinking about interfaces and discuss different use cases.

I am thankful to Shirish Agarwal for helping me through the visa process. Unfortunately, I won’t get my visa until 5th July, so I don’t think I will make it to DebConf this year. I will certainly attend DebConf 2017. The good news for me is that I have passed the mid-term evaluations of Google Summer of Code 2016. I will continue my work to improve Debian; I even have post-GSoC plans ready for the Debian project ;)

Have a nice day :)

22 June, 2016 10:47AM

Andrew Cater

Why I must use Free Software - and why I tell others to do so

My work colleagues know me well as a Free/Libre software zealot, constantly pointing out to them how people should behave, how FLOSS software trumps commercial software and how this is the only way forward, and this for the last 20-odd years. It's a strain to argue this repeatedly: at various times, I have been asked to set out more clearly why I use FLOSS, what the advantages are, and why and how to contribute to FLOSS software.

"We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here
 In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish."
[John Perry Barlow - Declaration of the independence of cyberspace  1996  https://www.eff.org/cyberspace-independence]

That's some of it right there: I was seduced by a modem and the opportunities it gave. I've lived in this world since 1994, come to appreciate it and never really had the occasion to regret it.

I'm involved in the Debian community - which is very much a "do-ocracy" - and I've lived with Debian GNU/Linux since 1995 and not had much cause to regret that either, though I do regret that force of circumstance has meant that I can't contribute as much as I'd like. Pretty much every machine I touch ends up running Debian, one way or the other, or would do if I had my way.
Digging through my emails since then on the various mailing lists: some of them are deeply technical, though fewer these days; some are Debian-political; most are trying to help people with problems, report successes or, occasionally, offer thanks and social chit-chat. Most people in the project have never met me - though that's not unusual in an organisation with a thousand developers spread worldwide - and so the occasional chance to talk to people in real life is invaluable.

The crucial thing is that there is common purpose and common intelligence - however crazy mailing list flame wars can get sometimes - and committed, caring people. Some of us may be crazy zealots, some picky and argumentative - Debian is what we have in common, pretty much.

It doesn't depend on physical ability. Espy (Joel Klecker) was one of our best and brightest until his death at age 21: almost nobody knew he was dying until after his death. My own physical limitations are pretty much irrelevant provided I can type.

It does depend on collaboration and the strange, dysfunctional family that is our community and the wider FLOSS community in which we share and in which some of us have multiple identities in working with different projects.
This is going to end up too long for Planet Debian - I'll end this post here and then continue with some points on how to contribute and why employers should let their employees work on FLOSS.

22 June, 2016 09:43AM by Andrew Cater (noreply@blogger.com)


Martin-Éric Racine

Batch photo manipulation via free software tools?

I have a need for batch-processing pictures. My requirements are fairly simple:

  • Resize the image to fit Facebook's preferred 960 pixel box.
  • Insert Copyright, Byline and Bylinetitle into the EXIF data.
  • Optionally, paste my watermark onto a predefined corner of the image.
  • Optionally, adjust the white balance.
  • Rename the file according to a specific syntax.
  • Save the result to a predefined folder.
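
The first requirement boils down to a fit-inside-a-box computation that any replacement tool will have to do; a tool-agnostic sketch of the arithmetic:

```python
def fit_960(width: int, height: int, box: int = 960):
    """Scale (width, height) to fit inside a box-by-box square,
    preserving aspect ratio and never upscaling."""
    scale = min(box / width, box / height, 1.0)
    return round(width * scale), round(height * scale)

print(fit_960(4000, 3000))  # landscape shot scaled down
print(fit_960(800, 600))    # already fits; left untouched
```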

Until recently, I was using Phatch to perform all of this. Unfortunately, it cannot edit the EXIF data of my current Lumix camera, whose JPEG files it claims are MPO. I am thus forced to look for other options. Ideally, I would do this via a script inside gThumb (which is my main photo-editing software), but I cannot seem to find adequate documentation on how to achieve this.

I am thus very interested in hearing about other options to achieve the same result. Ideas, anyone?

22 June, 2016 08:12AM by Martin-Éric (noreply@blogger.com)


Clint Adams

Only in San Francisco would one brag about this

“I dated Appelbaum!” she said.

“I gotta go,” I said.

22 June, 2016 06:46AM


Gunnar Wolf

Answering to a CACM «Viewpoint»: on the patent review process

I am submitting a comment to Wen Wen and Chris Forman's Viewpoint on the Communications of the ACM, titled Economic and business dimensions: Do patent commons and standards-setting organizations help navigate patent thickets?. I believe my comment is worth sharing a bit more openly, so here it goes. Nevertheless, please refer to the original article; it makes very interesting and valid points, and my comment should be taken as an extra note on a great text only!

I was very happy to see an article with this viewpoint published. This article, however, mentions some points I believe should be further stressed as problematic and important. Namely, still in the introduction, after mentioning that patents «are intended to provide incentives for innovation by granting to inventors temporary monopoly rights», the next paragraph continues, «The presence of patent thickets may create challenges for ICT producers. When introducing a new product, a firm must identify patents its product may infringe upon.»

The authors continue by explaining the required process, but this simple statement should be enough to show how the patent system is broken and needs repair.

A requisite for patenting an invention was originally the «inventive» and «non-obvious» characteristics. Anything worth being granted a patent should be inventive enough, it should be non-obvious to an expert in the field.

When we see huge bodies of awarded (and upheld) patents falling into the situation the authors describe, it becomes clear that the patent applications were not thoroughly researched prior to the grant. Sadly, long gone are the days when the United States Patent and Trademark Office employed minds such as Albert Einstein's; nowadays, the office is more a rubber-stamping bureaucracy where most patents are awarded, and this very important requisite is left open to litigation: if somebody is found in breach of a patent, they might choose to argue that the patent was obvious to an expert. But, of course, that will probably cost more in legal fees than settling for an agreement with the patent holder.

The fact that in our line of work we must take care to search for patents before releasing any work speaks volumes about the process. Patents are too easily granted. They should be far stricter; the occurrence of an independent developer mistakenly (and innocently!) breaching a patent should be most unlikely, as patents should only be awarded to truly non-obvious solutions.

22 June, 2016 04:40AM by gwolf

June 21, 2016


Matthew Garrett

I've bought some more awful IoT stuff

I bought some awful WiFi lightbulbs a few months ago. The short version: they introduced terrible vulnerabilities on your network, they violated the GPL and they were also just bad at being lightbulbs. Since then I've bought some other Internet of Things devices, and since people seem to have a bizarre level of fascination with figuring out just what kind of fractal of poor design choices these things frequently embody, I thought I'd oblige.

Today we're going to be talking about the KanKun SP3, a plug that's been around for a while. The idea here is pretty simple - there are lots of devices that you'd like to be able to turn on and off in a programmatic way, and rather than rewiring them the simplest thing to do is just to insert a control device between the wall and the device, and now you can turn your foot bath on and off from your phone. Most vendors go further and also allow you to program timers and even provide some sort of remote tunneling protocol so you can turn off your lights from the comfort of somebody else's home.

The KanKun has all of these features and a bunch more, although when I say "features" I kind of mean the opposite. I plugged mine in and followed the install instructions. As is pretty typical, this took the form of the plug bringing up its own Wifi access point, the app on the phone connecting to it and sending configuration data, and the plug then using that data to join your network. Except it didn't work. I connected to the plug's network, gave it my SSID and password and waited. Nothing happened. No useful diagnostic data. Eventually I plugged my phone into my laptop and ran adb logcat, and the Android debug logs told me that the app was trying to modify a network that it hadn't created. Apparently this isn't permitted as of Android 6, but the app was handling this denial by just trying again. I deleted the network from the system settings, restarted the app, and this time the app created the network record and could modify it. It still didn't work, but that's because it let me give it a 5GHz network and it only has a 2.4GHz radio, so one reset later and I finally had it online.

The first thing I normally do to one of these things is run nmap with the -O argument, which gives you an indication of what OS it's running. I didn't really need to in this case, because if I just telnetted to port 22 I got a dropbear ssh banner. Googling turned up the root password ("p9z34c") and I was logged into a lightly hacked (and fairly obsolete) OpenWRT environment.

It turns out that there's a whole community of people playing with these plugs, and it's common for people to install CGI scripts on them so they can turn them on and off via an API. At first this sounds somewhat confusing, because if the phone app can control the plug then there clearly is some kind of API, right? Well ha yeah ok that's a great question and oh good lord do things start getting bad quickly at this point.

I'd grabbed the apk for the app and a copy of jadx, an incredibly useful piece of code that's surprisingly good at turning compiled Android apps into something resembling Java source. I dug through that for a while before figuring out that before packets were being sent, they were being handed off to some sort of encryption code. I couldn't find that in the app, but there was a native ARM library shipped with it. Running strings on that showed functions with names matching the calls in the Java code, so that made sense. There were also references to AES, which explained why when I ran tcpdump I only saw bizarre garbage packets.

But what was surprising was that most of these packets were substantially similar. There were a load that were identical other than a 16-byte chunk in the middle. That plus the fact that every payload length was a multiple of 16 bytes strongly indicated that AES was being used in ECB mode. In ECB mode each plaintext is split up into 16-byte chunks and encrypted with the same key. The same plaintext will always result in the same encrypted output. This implied that the packets were substantially similar and that the encryption key was static.
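
The tell-tale repetition is easy to reproduce with a toy cipher; a minimal sketch (this is not the plug's actual AES code, just an illustration of the ECB property - any deterministic block cipher behaves the same way in ECB mode):

```python
# Toy demonstration of ECB mode's structural leak: equal plaintext
# blocks always produce equal ciphertext blocks.

def toy_block_cipher(block: bytes, key: int) -> bytes:
    # Stand-in keyed permutation; real AES would show the same effect.
    return bytes(b ^ key for b in block)

def ecb_encrypt(plaintext: bytes, key: int, bs: int = 16) -> bytes:
    assert len(plaintext) % bs == 0, "ECB needs whole blocks"
    return b"".join(toy_block_cipher(plaintext[i:i + bs], key)
                    for i in range(0, len(plaintext), bs))

msg = b"SAME_BLOCK_16BY!" * 2 + b"DIFFERENT_BLOCK!"
ct = ecb_encrypt(msg, key=0x5A)

print(ct[0:16] == ct[16:32])   # repeated plaintext leaks: True
print(ct[0:16] == ct[32:48])   # different block differs: False
```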

Some more digging showed that someone had figured out the encryption key last year, and that someone else had written some tools to control the plug without needing to modify it. The protocol is basically ASCII and consists mostly of the MAC address of the target device, a password and a command. This is then encrypted and sent to the device's IP address. The device then sends a challenge packet containing a random number. The app has to decrypt this, obtain the random number, create a response, encrypt that and send it before the command takes effect. This avoids the most obvious weakness around using ECB - since the same plaintext always encrypts to the same ciphertext, you could just watch encrypted packets go past and replay them to get the same effect, even if you didn't have the encryption key. Using a random number in a challenge forces you to prove that you actually have the key.

At least, it would do if the numbers were actually random. It turns out that the plug is just calling rand(). Further, it turns out that it never calls srand(). This means that the plug will always generate the same sequence of challenges after a reboot, which means you can still carry out replay attacks if you can reboot the plug. Strong work.

But there was still the question of how the remote control works, since the code on github only worked locally. tcpdumping the traffic from the server and trying to decrypt it in the same way as local packets worked fine, and showed that the only difference was that the packet started "wan" rather than "lan". The server decrypts the packet, looks at the MAC address, re-encrypts it and sends it over the tunnel to the plug that registered with that address.

That's not really a great deal of authentication. The protocol permits a password, but the app doesn't insist on it - some quick playing suggests that about 90% of these devices still use the default password. And the devices are all based on the same wifi module, so the MAC addresses are all in the same range. The process of sending status check packets to the server with every MAC address wouldn't take that long and would tell you how many of these devices are out there. If they're using the default password, that's enough to have full control over them.

There are some other failings. The github repo mentioned earlier includes a script that allows arbitrary command execution - the wifi configuration information is passed to the system() command, so leaving a semicolon in the middle of it will result in your own commands being executed. Thankfully this doesn't seem to be true of the daemon that's listening for the remote control packets, which seems to restrict its use of system() to data entirely under its control. But even if you change the default root password, anyone on your local network can get root on the plug. So that's a thing. It also downloads firmware updates over http and doesn't appear to check signatures on them, so there's the potential for MITM attacks on the plug itself. The remote control server is on AWS unless your timezone is GMT+8, in which case it's in China. Sorry, Western Australia.
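
The semicolon trick is classic shell command injection. A hypothetical reconstruction of the pattern (not the plug's actual code) alongside the standard mitigation of quoting untrusted input:

```python
import shlex

# Attacker-controlled wifi configuration data spliced straight into a
# shell command line, as a system() call would see it:
ssid = "home; touch /tmp/pwned"

naive = "iwconfig wlan0 essid " + ssid  # ';' ends one command, starts another
safe = "iwconfig wlan0 essid " + shlex.quote(ssid)  # metacharacters neutralised

print(naive)
print(safe)
```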

It's running Linux and includes Busybox and dnsmasq, so plenty of GPLed code. I emailed the manufacturer asking for a copy and got told that they wouldn't give it to me, which is unsurprising but still disappointing.

The use of AES is still somewhat confusing, given the relatively small amount of security it provides. One thing I've wondered is whether it's not actually intended to provide security at all. The remote servers need to accept connections from anywhere and funnel decent amounts of traffic around from phones to switches. If that weren't restricted in any way, competitors would be able to use existing servers rather than setting up their own. Using AES at least provides a minor obstacle that might encourage them to set up their own server.

Overall: the hardware seems fine, the software is shoddy and the security is terrible. If you have one of these, set a strong password. There's no rate-limiting on the server, so a weak password will be broken pretty quickly. It's also infringing my copyright, so I'd recommend against it on that point alone.


21 June, 2016 11:11PM

Ian Wienand

Zuul and Ansible in OpenStack CI

In a prior post, I gave an overview of the OpenStack CI system and how jobs were started. In that I said

(It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

Well, some recent security issues with Jenkins and other changes have led to a roll-out of what is being called Zuul 2.5, which has indeed removed Jenkins and makes extensive use of Ansible as the basis for running CI tests in OpenStack. Since I already had the diagram, it seems worth updating it for the new reality.

OpenStack CI Overview

While the previous post was really focused on the image-building components of the OpenStack CI system, the overview here is the same but concentrates on the launchers that run the tests.

[Figure: Overview of OpenStack CI with Zuul and Ansible]
  1. The process starts when a developer uploads their code to gerrit via the git-review tool. There is no further action required on their behalf and the developer simply waits for results of their jobs.

  2. Gerrit provides a JSON-encoded "fire-hose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.
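For a concrete picture, each event on that fire-hose is one line of JSON. A minimal consumer sketch (the `parse_event` helper is mine for illustration; in production the stream arrives over SSH from `gerrit stream-events`):

```python
import json

def parse_event(line):
    """Decode one newline-delimited JSON event from gerrit stream-events."""
    event = json.loads(line)
    return event.get("type"), event

# In the real system this line would arrive over SSH, e.g.:
#   ssh -p 29418 review.openstack.org gerrit stream-events
sample = '{"type": "patchset-created", "change": {"project": "openstack/nova"}}'
event_type, event = parse_event(sample)
print(event_type)                  # patchset-created
print(event["change"]["project"])  # openstack/nova
```

A scheduler like Zuul then dispatches on the event type (patchset-created, comment-added, change-merged and so on) to decide which pipelines, if any, the change enters.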

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a launcher to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.
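As a sketch, that submission can be thought of as a gearman function name plus a small parameter payload (Zuul actually uses the `gear` Python library and its own payload format; the encoding below is illustrative only):

```python
import json

def make_job_request(job_name, node_type):
    """Build an illustrative (job-name, node-type) gearman submission.

    The gearman function name identifies the job; the JSON payload
    carries the parameters, here just the requested node type.
    """
    return job_name, json.dumps({"node_type": node_type})

task, payload = make_job_request("gate-nova-python27", "ubuntu-trusty")
print(task)     # gate-nova-python27
print(payload)  # {"node_type": "ubuntu-trusty"}
```

A launcher registered for that function name with the gearman server would then receive the payload when it grabs the job.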

  4. A group of Zuul launchers are subscribed to gearman as workers. It is these Zuul launchers that will consume the job requests from the queue and actually get the tests running. However, a launcher needs two things to be able to run a job — a job definition (what to actually do) and a worker node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files. The Zuul launcher knows how to process these files (with some help from Jenkins Job Builder, which despite the name is not outputting XML files for Jenkins to consume, but is being used to help parse templates and macros within the generically defined job definitions). Each Zuul launcher gets these definitions pushed to it constantly by Puppet, thus each launcher knows about all the jobs it can run automatically. Of course Zuul also knows about these same job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customized management tool called nodepool (you can see the details of this capacity at any given time by checking the nodepool configuration). Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at node-type of jobs in the queue (i.e. what platform the job has requested to run on) and decides what types of nodes need to start and which cloud providers have capacity to satisfy demand.

    Nodepool will start fresh virtual machines (from images built daily as described in the prior post), monitor their start-up and, when they're ready, put a new "assignment job" back into gearman with the details of the fresh node. One of the active Zuul launchers will pick up this assignment job and register the new node to itself.
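The scheduling decision nodepool makes can be pictured as matching queued demand against per-provider capacity. A toy model (real nodepool also tracks quotas, image availability and min-ready counts; the function and data shapes here are mine):

```python
def plan_launches(demand, capacity):
    """Greedily assign requested nodes to providers with free slots.

    demand:   {node_type: number of queued jobs wanting that type}
    capacity: {provider: free slots}
    Returns a list of (provider, node_type) launch decisions.
    """
    launches = []
    free = dict(capacity)
    for node_type, count in demand.items():
        for _ in range(count):
            for provider, slots in free.items():
                if slots > 0:
                    free[provider] -= 1
                    launches.append((provider, node_type))
                    break
    return launches

print(plan_launches({"ubuntu-trusty": 2, "centos-7": 1},
                    {"rax": 2, "hpcloud": 1}))
# [('rax', 'ubuntu-trusty'), ('rax', 'ubuntu-trusty'), ('hpcloud', 'centos-7')]
```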

  6. At this point, the Zuul launcher has what it needs to actually get jobs started. With a fresh node registered to it and waiting for something to do, the Zuul launcher can advertise its ability to consume one of the waiting jobs from the gearman queue. For example, if a ubuntu-trusty node is provided to the Zuul launcher, the launcher can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty node type. If you're looking at the launcher code, this is driven by the NodeWorker class — you can see this being created in response to an assignment via LaunchServer.assignNode.

    To actually run the job — where the "job hits the metal" as it were — the Zuul launcher will dynamically construct an Ansible playbook to run. This playbook is a concatenation of common setup and teardown operations along with the actual test scripts the job wants to run. Using Ansible to run the job means all the flexibility an orchestration tool provides is now available to the launcher. For example, there is a custom console streamer library that allows us to live-stream the console output for the job over a plain TCP connection, and there is the possibility to use projects like ARA for visualisation of CI runs. In the future, Ansible will allow for better coordination when running multiple-node testing jobs — after all, this is what orchestration tools such as Ansible are made for! While the Ansible run can be fairly heavyweight (especially when you're talking about launching thousands of jobs an hour), the system scales horizontally with more launchers able to consume more work easily.

    When checking your job results on logs.openstack.org you will see a _zuul_ansible directory now which contains copies of the inventory, playbooks and other related files that the launcher used to do the test run.
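The playbook concatenation can be sketched as building one task list out of common setup steps, the job's own script, and common teardown steps (a simplified model; the real launcher writes out full Ansible playbooks with an inventory, and these task names are mine):

```python
# Illustrative common fragments every job shares.
SETUP_TASKS = [
    {"name": "prepare workspace",
     "file": {"path": "/home/jenkins/workspace", "state": "directory"}},
]
TEARDOWN_TASKS = [
    {"name": "collect logs",
     "synchronize": {"src": "/home/jenkins/workspace/logs", "dest": "/tmp/logs"}},
]

def build_playbook(job_script):
    """Concatenate setup, the job-specific test script, and teardown
    into a single list of Ansible-style task dicts."""
    test_task = {"name": "run test script", "shell": job_script}
    return SETUP_TASKS + [test_task] + TEARDOWN_TASKS

playbook = build_playbook("tox -e py27")
print([task["name"] for task in playbook])
# ['prepare workspace', 'run test script', 'collect logs']
```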

  7. Eventually, the test will finish. The Zuul launcher will put the result back into gearman, which Zuul will consume (log copying is interesting but a topic for another day). The testing node will be released back to nodepool, which destroys it and starts all over again — nodes are not reused and also have no sensitive details on them, as they are essentially publicly accessible. Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but that is also a topic for another day).

Work will continue within OpenStack Infrastructure to further enhance Zuul, including better support for multi-node jobs and "in-project" job definitions (similar to the https://travis-ci.org/ model); for full details see the spec.

21 June, 2016 10:16PM by Ian Wienand


Gunnar Wolf

Relax and breathe...

Time passes. I had left several (too many?) pending things to be done in the quiet weeks between the end of the teaching semester and the beginning of my Summer trip to Winter. But Saturday gets closer every moment... And our long trip to the South begins.

Among many other things, I wanted to advance with some Debian stuff - both packaging and WRT keyring analysis. I want to contact some people I left pending interactions with, but honestly, that will only come face to face in Cape Town.

As to "real life", I have too many pending issues at work to even begin with; I hope to get some time in South Africa to do some decent UNAM sysadmining. Also, I want to play with the idea of using Git for my students' workflow (handing in projects and assignments, at least)... This could be interesting to talk about with the Debian colleagues, actually.

As a Masters student, I'm making good advances, and will probably finish my class work next semester, six months ahead of schedule, but my thesis work so far has progressed way slower than I'd like. I have at least a better defined topic and approach, so I'll start the writing phase soon.

And the personal life? Family? I am more complete and happy than ever before. My life is completely different from two years ago. Yes, that was obvious. But it's also the only thing I can come up with. Having twin babies (when will they make the transition from "babies" to "kids"? No idea... We will find out as it comes) is more than beautiful, more than great. Our life has changed in every possible aspect. And yes, I admire my loved Regina for all of the energy and love she puts into the babies... Life is asymmetric, I am out for most of the day... Mommy is always there.

As I said, happier than ever before.

21 June, 2016 07:30PM by gwolf