January 23, 2017

Sam Hartman

Kudos to GDB Contributors

I recently diagnosed a problem in Debian's pam-p11 package. This package allegedly permits logging into a computer using a smart card or USB security token containing an ssh key. If you know the PIN and have the token, then your login attempt is authorized against the ssh authorized keys file. This seems like a great way to permit console logins as root to machines without having a shared password. Unfortunately, the package didn't work very well for me. It worked once, then all future attempts to use it segfaulted. I'm familiar with how PAM works. I understand the basic ideas behind PKCS11 (the API used for this type of smart card), but was completely unfamiliar with this particular PAM module and the PKCS11 library it used. The segfault was in an area of code I didn't even expect that this PAM module would ever call. Back in 1994, that would have been a painful slog. Gdb has improved significantly since then, and I'd really like to thank all the people over the years who made that possible. I was able to isolate the problem in just a couple of hours of debugging. Here are some of the cool features I used:
  • "target record-full" which allows you to track what's going on so you can go backwards and potentially bisect where in a running program something goes wrong. It's not perfect; it seems to have trouble with memset and a few other functions, but it's really good.
  • Hardware watch points. Once you know what memory is getting clobbered, have the hardware report all changes so you can see who's responsible.
  • Hey, wait, what? I really wish I had placed a breakpoint back there. With "target record-full" and "reverse-continue," you can. Set the breakpoint and then reverse continue, and time runs backwards until your breakpoint gets hit.
  • I didn't need it for this session, but "set follow-fork-mode" is very handy for certain applications. There's even a way to debug both the parent and child of a fork at the same time, although I always have to go look up the syntax. It seems like it ought to be "set follow-fork-mode both," and there was once a debugger that used that syntax, but Gdb uses different syntax for the same concept.
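For anyone who wants to try these out, a minimal session combining the first three features might look roughly like this (the watchpoint expression is purely illustrative; you would use the address that is actually being clobbered):

```
(gdb) target record-full        # start recording execution
(gdb) continue                  # run until the segfault
(gdb) watch -l *(void **)$addr  # hardware watchpoint on the clobbered word
(gdb) reverse-continue          # run time backwards to the last write
```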
Anyway, with just a couple of hours and no instrumentation of the code, I managed to track down how a bunch of structures were being freed as an unexpected side effect of one of the function calls. Neither I nor the author of the pam-p11 module expected that (although it is documented and does make sense in retrospect). Good tools make life easier.

23 January, 2017 02:32PM

hackergotchi for Matthew Garrett

Matthew Garrett

Android permissions and hypocrisy

I wrote a piece a few days ago about how the Meitu app asked for a bunch of permissions in ways that might concern people, but which were not actually any worse than many other apps. The fact that Android makes it so easy for apps to obtain data that's personally identifiable is of concern, but in the absence of another stable device identifier this is the sort of thing that capitalism is inherently going to end up making use of. Fundamentally, this is Google's problem to fix.

Around the same time, Kaspersky, the Russian anti-virus company, wrote a blog post that warned people about this specific app. It was framed somewhat misleadingly - "reading, deleting and modifying the data in your phone's memory" would probably be interpreted by most people as something other than "the ability to modify data on your phone's external storage", although it ends with some reasonable advice that users should ask why an app requires some permissions.

So, to that end, here are the permissions that Kaspersky request on Android:
  • android.permission.READ_CONTACTS
  • android.permission.WRITE_CONTACTS
  • android.permission.READ_SMS
  • android.permission.WRITE_SMS
  • android.permission.READ_PHONE_STATE
  • android.permission.CALL_PHONE
  • android.permission.SEND_SMS
  • android.permission.RECEIVE_SMS
  • android.permission.RECEIVE_BOOT_COMPLETED
  • android.permission.WAKE_LOCK
  • android.permission.WRITE_EXTERNAL_STORAGE
  • android.permission.SUBSCRIBED_FEEDS_READ
  • android.permission.READ_SYNC_SETTINGS
  • android.permission.WRITE_SYNC_SETTINGS
  • android.permission.WRITE_SETTINGS
  • android.permission.INTERNET
  • android.permission.ACCESS_COARSE_LOCATION
  • android.permission.ACCESS_FINE_LOCATION
  • android.permission.READ_CALL_LOG
  • android.permission.WRITE_CALL_LOG
  • android.permission.RECORD_AUDIO
  • android.permission.SET_PREFERRED_APPLICATIONS
  • android.permission.WRITE_APN_SETTINGS
  • android.permission.READ_CALENDAR
  • android.permission.WRITE_CALENDAR
  • android.permission.KILL_BACKGROUND_PROCESSES
  • android.permission.RESTART_PACKAGES
  • android.permission.MANAGE_ACCOUNTS
  • android.permission.GET_ACCOUNTS
  • android.permission.MODIFY_PHONE_STATE
  • android.permission.CHANGE_NETWORK_STATE
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.CHANGE_WIFI_STATE
  • android.permission.VIBRATE
  • android.permission.READ_LOGS
  • android.permission.GET_TASKS
  • android.permission.EXPAND_STATUS_BAR
  • com.android.browser.permission.READ_HISTORY_BOOKMARKS
  • com.android.browser.permission.WRITE_HISTORY_BOOKMARKS
  • android.permission.CAMERA
  • com.android.vending.BILLING
  • android.permission.SYSTEM_ALERT_WINDOW
  • android.permission.BATTERY_STATS
  • android.permission.MODIFY_AUDIO_SETTINGS
  • com.kms.free.permission.C2D_MESSAGE
  • com.google.android.c2dm.permission.RECEIVE
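For anyone who wants to check this sort of thing themselves, a permission list like the one above can be dumped from any APK with the aapt tool that ships with the Android SDK build tools (the filename here is illustrative):

```
$ aapt dump permissions kaspersky.apk
```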

Every single permission that Kaspersky mention Meitu having? They require it as well. And a lot more. Why does Kaspersky want the ability to record audio? Why does it want to be able to send SMSes? Why does it want to read my contacts? Why does it need my fine-grained location? Why is it able to modify my settings?

There's no reason to assume that they're being malicious here. The reason these permissions exist at all is that there are legitimate uses for them, and Kaspersky may well have good reason to request them. But they don't explain that, and they do literally everything that their blog post criticises (including explicitly requesting the phone's IMEI). Why should we trust a Russian company more than a Chinese one?

The moral here isn't that Kaspersky are evil or that Meitu are virtuous. It's that talking about application permissions is difficult and we don't have the language to explain to users what our apps are doing and why they're doing it, and Google are still falling far short of where they should be in terms of making this transparent to users. But the other moral is that you shouldn't complain about the permissions an app requires when you're asking for even more of them because it just makes you look stupid and bad at your job.

comment count unavailable comments

23 January, 2017 07:58AM

January 22, 2017

hackergotchi for Shirish Agarwal

Shirish Agarwal

Debian contributions and World History

Beware, this would be slightly longish.

Debian Contributions

In the last couple of weeks, I was lucky to put up a patch against debian-policy for an issue which had been bothering me for a long, long time.

The problem statement is simple: man-pages were historically written by software engineers for software engineers. The idea back then, probably, was that you give the user some idea of what the software does and the engineer would garner the rest from reading the source code. But over time the audience has changed. While there are still software engineers who use GNU/Linux for its technical excellence, man-pages have not kept up with a new audience who are either not technically sound enough, or do not want to take the trouble, to read the source code to understand how things flow. An ‘example’ or ‘examples’ section in a man-page gives us (the lesser mortals) some insight into how the command works and how its logic flows.

A good example of a man-page is the ufw man-page –

Deny all access to port 53:

ufw deny 53

Allow all access to tcp port 80:

ufw allow 80/tcp

Allow all access from RFC1918 networks to this host:

ufw allow from 10.0.0.0/8
ufw allow from 172.16.0.0/12
ufw allow from 192.168.0.0/16

Deny access to udp port 514 from host 1.2.3.4:

ufw deny proto udp from 1.2.3.4 to any port 514

Allow access to udp port 5469 from host 1.2.3.4 port 5469:

ufw allow proto udp from 1.2.3.4 port 5469 to any port 5469

Now if we had man-pages like the above, which give examples, the user can at least try to accomplish whatever s/he is trying to do. I truly believe not having examples in a man-page turns away half of your audience and of the people who could potentially use your tool.

Personal wishlist – the only gripe (and this might be my failure) is that we need a good way to search through a man-page. The only way I know is using ‘/’ and trying a pattern. Lots of times it fails because I, the user, don't know the exact keyword the documenter used. What would be nice, great even, is some sort of parser where I tell it ‘$this is what I'm looking for’, and it tries the pattern plus all its synonyms and returns whatever seems to be the most relevant passage from the content (in this case, a man-page). It would make my life a lot easier, while at the same time encouraging people to document more and more.
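As a very rough illustration of what such a synonym-aware search might look like (the synonym table is a made-up stand-in; a real tool would want a proper thesaurus behind it), a sketch in Python:

```python
# Hypothetical sketch: search a man-page for a query term plus its
# synonyms, so the user need not guess the documenter's exact word.
import re

# Made-up synonym table; a real tool would use a thesaurus or WordNet.
SYNONYMS = {
    "delete": ["delete", "remove", "erase", "unlink", "purge"],
    "show": ["show", "display", "list", "print"],
}

def search_manpage(text, query):
    """Return the lines of `text` matching `query` or any of its synonyms."""
    terms = SYNONYMS.get(query.lower(), [query])
    pattern = re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)
    return [line for line in text.splitlines() if pattern.search(line)]

page = """\
rm - remove files or directories
rm removes each specified file.
-i  prompt before every removal
"""

# Searching for "delete" also surfaces lines that only say "remove".
print(search_manpage(page, "delete"))
```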

I dunno if there has been any research or study of the relationship between good documentation and the popularity of a program. I know there are lots of different tiny bits which make or break a program, one of which would definitely be documentation, and within that the man-page, if it's a command-line tool.

A query on Quora gives some indication https://www.quora.com/How-comprehensible-do-you-find-Unix-Linux-Man-pages although the low response rate tells its own story.

There have been projects like man2html and man2pdf, and others, which try to make the content more accessible to people who are not used to the man-page interface, but until you have ‘Examples’ those things can only go so far. Also, any X project which claims to solve this problem will have to compete with man-pages, which have been around more or less forever.

As can be seen in the patch, I made some rookie mistakes. I also filed a lintian bug at the same time. I hope the patch does get merged into debian-policy at some point and that a check is then introduced in lintian in some future release. I do agree with anarcat's assertion that it should be at the same severity as a missing manpage.

I am no coder but finding 14,000 binary packages without a manpage left me both shocked and surprised.

I came to know about manpage-alert from the devscripts package, which reports which of the installed binary packages do not have man-pages.

I hope to contribute a manpage or two if I come across a package I'm somewhat comfortable with. I have made a beginning of sorts by running manpage-alert and putting the output in a .txt file, which I will grep through manually to see if something interesting jumps out at me.

The learning garnered from putting together the patch to the Debian package resulted in another patch, this time for an upstream project altogether. As can be seen, these are just baby steps that even a non-coder can take.

Another couple of bugs I filed, and which were fixed, were against a sim called ‘unknown-horizons’, a 2D realtime strategy simulation. I had filed three bugs, two of which were fixed within 2 days; I hope the third is fixed soonish as well.

Lastly, I spent most of the weekend poring over packages which have left files in /etc/bash_completion.d/. It took almost 4-5 odd hours, as for each entry found in /etc/bash_completion.d/$filename I first had to find which package it belonged to –

[$] dpkg -S /etc/bash_completion.d/git-prompt

git: /etc/bash_completion.d/git-prompt

I know that dpkg-query also does the same –

[$] dpkg-query -S /etc/bash_completion.d/git-prompt

git: /etc/bash_completion.d/git-prompt

But I am used to plain dpkg, although I do know that dpkg-query can do a lot more intimate searching in various ways than dpkg can.

Once the package name was established, I first simulated the purge –

[$] sudo aptitude -s purge git

[sudo] password for shirish:
The following packages will be REMOVED:
0 packages upgraded, 0 newly installed, 1 to remove and 14 not upgraded.
Need to get 0 B of archives. After unpacking 29.5 MB will be freed.
The following packages have unmet dependencies:
libgit-wrapper-perl : Depends: git but it is not going to be installed
git-extras : Depends: git (>= 1.7.0) but it is not going to be installed
bup : Depends: git but it is not going to be installed
git-remote-gcrypt : Depends: git but it is not going to be installed
git-svn : Depends: git (> 1:2.11.0) but it is not going to be installed
Depends: git ( 1:2.11.0) but it is not going to be installed
Depends: git (= 1:1.8.1) but it is not going to be installed
git-core : Depends: git (> 1: but it is not going to be installed
The following actions will resolve these dependencies:

Remove the following packages:
1) bup [0.29-2 (now, testing, unstable)]
2) fdroidserver [0.7.0-1 (now, testing, unstable)]
3) git-annex [6.20161012-1 (now, testing)]
4) git-core [1:2.11.0-2 (now, testing, unstable)]
5) git-extras [4.2.0-1 (now, testing, unstable)]
6) git-remote-gcrypt [1.0.1-1 (now, testing, unstable)]
7) git-repair [1.20151215-1 (now, unstable)]
8) git-svn [1:2.11.0-2 (now, testing, unstable)]
9) gitk [1:2.11.0-2 (now, testing, unstable)]
10) libgit-wrapper-perl [0.047-1 (now, testing, unstable)]
11) python3-git [2.1.0-1 (now, testing, unstable)]
12) svn2git [2.4.0-1 (now, testing, unstable)]

Leave the following dependencies unresolved:
13) devscripts recommends libgit-wrapper-perl
14) dh-make-perl recommends git
15) fdroidserver recommends git
16) git-annex recommends git-remote-gcrypt (>= 0.20130908-6)
17) gplaycli recommends fdroidserver
18) python-rope recommends git-core

Accept this solution? [Y/n/q/?] q
Abandoning all efforts to resolve these dependencies.

Then I made a note of all the packages being affected, checked that purging all of them wouldn't drag in others (the package dependency hell), did the purge and then reinstalled anew.

The reason I did this is that many a time during an update/upgrade the correct action doesn't happen. To take the git example itself, there were two files, git-extras and git-prompt, in /etc/bash_completion.d/, both of which were showing their source as git. Purging git and installing it afresh removed the git-extras file; git-prompt is the only one remaining.

While blogging about the package, I did try to grep through changelog.Debian.gz and changelog.gz in git –

┌─[shirish@debian] - [/usr/share/doc/git] - [10046]
└─[$] zless changelog.gz

and similarly –

┌─[shirish@debian] - [/usr/share/doc/git] - [10046]
└─[$] zless changelog.Debian.gz

But I failed to find any mention of the now-gone git-extras. Doing this with all the packages took considerable time, as I didn't want to deal with any potential fallout later on. For instance, ufw (uncomplicated firewall) also had an entry in /etc/bash_completion.d/, hence before purging ufw I took a backup of all the rules I had made and did a successful simulation

[$] sudo aptitude -s purge ufw gufw

The following packages will be REMOVED:
gufw{p} ufw{p}
0 packages upgraded, 0 newly installed, 2 to remove and 14 not upgraded.
Need to get 0 B of archives. After unpacking 4,224 kB will be freed.

Note: Using 'Simulate' mode.
Do you want to continue? [Y/n/?] y
Would download/install/remove packages.

then purged the packages, reinstalled them and re-added all the rules. Doing this for various sundry packages had to be done manually, as there is no one-size-fits-all solution.

A sensitive one was grub, which still has an entry in /etc/bash_completion.d/grub. Doing it wrong could have resulted in a non-bootable system. There are workarounds for that, but it would have taken quite a bit of time, energy, notes and recalling what I did the last time something like that happened. Doing it manually and being present meant I could do it right the first time.

So, was it worth it? It will be if the package maintainers do the needful and the remaining entries are moved out of /etc/bash_completion.d/ to /usr/share/bash-completion/completions/, and some to my favourite /usr/share/zsh/vendor-completions/ – for instance –

┌─[shirish@debian] - [/usr/share/zsh/vendor-completions] - [10064]
└─[$] ll -h _youtube-dl

-rw-r--r-- 1 root root 3.2K 2016-12-01 08:48 _youtube-dl

But trying to get all, or even the major, packages to use zsh completions would be hard work and take oodles of time; this concerns upstream stuff and is very much outside what I was sharing.

World History

Before, during and even after my South African experience, I was left wondering why India and South Africa, two countries with similar histories for at least the last couple of hundred years, ended up with such different results after Independence. It took me quite some time to articulate that in the form of a question, and while the answers were interesting, from what little I know of India itself, if I were an Englishman I would never leave ‘Hindustan’. What the people answering failed to take into account was that in that era it was ‘Hindustan’, or un-divided India.

Pre-partition map of India

This map can be found at https://commons.wikimedia.org/wiki/File:British_Indian_Empire_1909_Imperial_Gazetteer_of_India.jpg and is part of quite a few Indian articles. I would urge people to look at the map in-depth. Except for the Central India Agency and Central India Provinces, most of the other regions were quite comfortable weather-wise.

Hence I can’t help but find the assertion that the Britishers didn’t like India (as a place to live) slightly revolting. See the review of Dane Kennedy’s The Magic Mountains: Hill Stations and the British Raj (Berkeley: University of California Press, 1996, xv + 264 pp.). A look at the list of hill stations of divided India is enough to tell that there were a lot of places which either were founded by the Britishers or where they chose to live. And this is not all; there are supposed to be a lot of beautiful places even in Pakistan, especially in the old North-West Frontier – Swat, for instance. While today it’s infamous for the Taliban and Islamic terrorism, there was a time it was known for its beauty.

The second most difficult mountain in the world – K2, Pakistan

Trivia – After Everest, K2 is the second-highest mountain, although from whatever I have read of people’s accounts, most people who have ascended all fourteen 8,000-metre peaks say K2 is technically tougher than Everest and has a far higher casualty rate.

Also, places like the disputed northern half of Pakistan-occupied Kashmir, Gilgit–Baltistan, the extreme north of Pakistani Punjab, the northern half of Khyber-Pakhtunkhwa province and northern Balochistan would all have been more than conducive to the Britishers, as the climate is close to Britain’s (snow and pleasant weather all year round). It really is a pity that Pakistan chose to become a terrorist state when it could have become one of the most toured places in Asia. I really feel nauseous and sad at the multiple chances Pakistan frittered away; it could have been something else.

Filed under: Miscellenous Tagged: #bash-completion, #British Raj, #contributions, #debian, #debian-policy, #Debian-QA, #Hill Stations, #India Independance Movement, #lintian, #obsolete-conffiles, #unknown-horizons, #weather, #World History, adequate, Pakistan, tourism

22 January, 2017 11:06PM by shirishag75

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Nageru loopback test

Nageru, my live video mixer, is in the process of gaining HDMI/SDI output for bigscreen use, and in that process, I ran some loopback tests (connecting the output of one card into the input of another) to verify that I had all the colorspace parameters right. (This is of course trivial if you are only sending one input on bit-by-bit, but Nageru is much more flexible, so it really needs to understand what each pixel means.)

It turns out that if you mess up any of these parameters ever so slightly, you end up with something like this, this or this. But thankfully, I got this instead on the very first try, so it really seems it's been right all along. :-)

(There's a minor first-generation loss in that the SDI chain is 8-bit Y'CbCr instead of 10-bit, but I really can't spot it with the naked eye, and it doesn't compound through generations. I plan to fix that for those with spare GPU power at some point, possibly before 1.5.0 release.)

22 January, 2017 10:47PM


My first (public) blog ever

Hi there! Welcome to the first (public) blog I've ever had.

22 January, 2017 08:23PM

hackergotchi for Lars Wirzenius

Lars Wirzenius

Improving debugging via email, followup

Half a year ago I wrote a blog post about debugging over email. This is a follow-up.

The blog post summarised:

  • Have an automated way to collect all the usual information needed for debugging: versions, config and log files, etc.

  • Improve error messages so the users can solve their issues themselves.

  • Give users better automated diagnostics tools.

Based on further thinking and feedback, I add:

  • When a program notices a problem that may indicate a bug in it, it should collect the necessary information itself, automatically, in a way that the user just needs to send to the developers / support.

  • The primary goal should be to help people solve their own problems.

  • A secondary goal is to make the problem reproducible by the developers, or otherwise make it easy to fix bugs without access to the original system where the problem was manifested.

I've not written any code to help with this remote debugging, but it's something I will start experimenting with in the near future.
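As a sketch of the kind of automatic collection I have in mind (everything here, from the field names to the JSON format and the config paths, is illustrative rather than an actual tool):

```python
# Illustrative sketch: gather the usual debugging information
# (versions, platform, config files) into one JSON blob that a
# user could attach to a bug report or support email.
import json
import platform
import sys

def collect_report(config_paths=()):
    report = {
        "python": sys.version,
        "platform": platform.platform(),
        "argv": sys.argv,
        "configs": {},
    }
    for path in config_paths:
        try:
            with open(path) as f:
                report["configs"][path] = f.read()
        except OSError as e:
            # Record the failure instead of crashing the reporter itself.
            report["configs"][path] = "unreadable: %s" % e
    return json.dumps(report, indent=2)

print(collect_report())
```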

Further ideas welcome.

22 January, 2017 03:50PM

Elena 'valhalla' Grandi

New pajama

New pajama

I may have been sewing myself a new pajama.


It was plagued with issues: one of the sleeves is wrong side out and I only realized it when everything was almost done (luckily the pattern is symmetric and it is barely noticeable), the swirl moved while I was sewing it on (and the sewing machine got stuck multiple times: next time I'm using interfacing, full stop), and it's a bit deformed, but it's done.

For the swirl, I used Inkscape to Simplify (Ctrl-L) the original Debian Swirl a few times, removed the isolated bits, adjusted some spline nodes by hand and printed on paper. I've then cut, used water soluble glue to attach it to the wrong side of a scrap of red fabric, cut the fabric, removed the paper and then pinned and sewed the fabric on the pajama top.
As mentioned above, the next time I'm doing something like this, some interfacing will be involved somewhere, to keep me sane and the sewing machine happy.

Blogging, because it is somewhat relevant to Free Software :) and there are even sources https://www.trueelena.org/clothing/projects/pajamas_set.html#downloads, under a DFSG-Free license :)

22 January, 2017 03:43PM by Elena ``of Valhalla''

January 21, 2017

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Updated Example Repo for RMarkdown and Metropolis/Mtheme

During useR! 2016, Nick Tierney had asked on Twitter about rmarkdown and metropolis and whether folks had used RMarkdown-driven LaTeX Beamer presentations. My firm hell yeah answer, based on having used mtheme outright or in local mods for quite some time (see my talks page), led to this blog post of mine describing this GitHub repo I had quickly set up during breaks at useR! 2016. The corresponding blog post and the repo have some more details on how I do this, in particular about local packages (also with sources on GitHub) for the non-standard fonts I use.

This week I got around to updating the repo / example a little: I made the default colours (in my example) a little less awful, added a page on blocks and, most importantly, turned the example into the animated gif below:

And thanks to the beautiful tint package (see its repo and CRAN page), I now know how to create a template package. So if there is interest (and spare time), we could build a template package for RStudio too.

With that, may I ask a personal favour of anybody still reading this post? Please do not hit my Twitter handle with questions for support. All my code is on GitHub, and issue tickets there are much preferred. Larger projects like Rcpp also have their own mailing lists, and it is much better to use those. And if you like neither, maybe ask on StackOverflow. But please don't spam my Twitter handle. Thank you.

21 January, 2017 07:23PM

hackergotchi for Clint Adams

Clint Adams

January 20, 2017

hackergotchi for Joachim Breitner

Joachim Breitner

Global almost-constants for Haskell

More than five years ago I blogged about the “configuration problem” and a proposed solution for Haskell, which turned into some Template Haskell hacks in the seal-module package.

With the new GHC proposal process in place, I am suddenly much more inclined to write up my weird wishes for the Haskell language in proposal form, to make them more coherent, get feedback, and maybe (maybe) actually get them implemented. But even if the proposal is rejected, it is still a nice forum to discuss these ideas.

So I turned my Template Haskell hack into a proposed new syntactic feature. The idea is shamelessly stolen from Isabelle, including some of the keywords, and would allow you to write

context fixes progName in
  foo :: Maybe Int -> Either String Int
  foo Nothing  = Left $ progName ++ ": no number given"
  foo (Just i) = bar i

  bar :: Int -> Either String Int
  bar 0 = Left $ progName ++ ": zero no good"
  bar n = Right $ n + 1

instead of

foo :: String -> Maybe Int -> Either String Int
foo progName Nothing  = Left $ progName ++ ": no number given"
foo progName (Just i) = bar progName i

bar :: String -> Int -> Either String Int
bar progName 0 = Left $ progName ++ ": zero no good"
bar progName n = Right $ n + 1

when you want to have an “almost constant” parameter.

I am happy to get feedback at the GitHub pull request.

20 January, 2017 06:03PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Michal Čihař

Michal Čihař

Weblate 2.10.1

This is the first security bugfix release for Weblate. It had to come at some point; fortunately the issue is not really severe. But Weblate got its first CVE ID today, so it's time to address it in a bugfix release.

Full list of changes:

  • Do not leak account existence on password reset form (CVE-2017-5537).

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!

Filed under: Debian English SUSE Weblate | 0 comments

20 January, 2017 02:00PM

January 19, 2017

hackergotchi for Matthew Garrett

Matthew Garrett

Android apps, IMEIs and privacy

There's been a sudden wave of people concerned about the Meitu selfie app's use of unique phone IDs. Here's what we know: the app will transmit your phone's IMEI (a unique per-phone identifier that can't be altered under normal circumstances) to servers in China. It's able to obtain this value because it asks for a permission called READ_PHONE_STATE, which (if granted) means that the app can obtain various bits of information about your phone including those unique IDs and whether you're currently on a call.

Why would anybody want these IDs? The simple answer is that app authors mostly make money by selling advertising, and advertisers like to know who's seeing their advertisements. The more app views they can tie to a single individual, the more they can track that user's response to different kinds of adverts and the more targeted (and, they hope, more profitable) the advertising towards that user. Using the same ID between multiple apps makes this easier, and so using a device-level ID rather than an app-level one is preferred. The IMEI is the most stable ID on Android devices, persisting even across factory resets.

The downside of using a device-level ID is, well, whoever has that data knows a lot about what you're running. That lets them tailor adverts to your tastes, but there are certainly circumstances where that could be embarrassing or even compromising. Using the IMEI for this is even worse, since it's also used for fundamental telephony functions - for instance, when a phone is reported stolen, its IMEI is added to a blacklist and networks will refuse to allow it to join. A sufficiently malicious person could potentially report your phone stolen and get it blocked by providing your IMEI. And phone networks are obviously able to track devices using them, so someone with enough access could figure out who you are from your app usage and then track you via your IMEI. But realistically, anyone with that level of access to the phone network could just identify you via other means. There's no reason to believe that this is part of a nefarious Chinese plot.

Is there anything you can do about this? On Android 6 and later, yes. Go to Settings, hit Apps, hit the gear menu in the top right, choose "App permissions" and scroll down to Phone. Under there you'll see all apps that have permission to obtain this information, and you can turn them off. Doing so may cause some apps to crash or otherwise misbehave, whereas newer apps may simply ask you to grant the permission again and refuse to work if you don't.
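If you have adb and USB debugging available, the same runtime-permission toggle can also be flipped from the command line on Android 6 and later (the package name here is illustrative):

```
$ adb shell pm revoke com.example.app android.permission.READ_PHONE_STATE
```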

Meitu isn't especially rare in this respect. Over 50% of the Android apps I have handy request your IMEI, although I haven't tracked what they all do with it. It's certainly something to be concerned about - there are big-name apps that do exactly the same thing. There's a legitimate question over whether Android should be making it so easy for apps to obtain this level of identifying information without more explicit informed consent from the user, but until Google do anything to make it more difficult, apps will continue making use of this information. Let's turn this into a conversation about user privacy online rather than blaming one specific example.

comment count unavailable comments

19 January, 2017 11:36PM

hackergotchi for Daniel Pocock

Daniel Pocock

Which movie most accurately forecasts the Trump presidency?

Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

As it turns out, several movies provide a thought-provoking insight into what could eventuate. What's more, two of them bear a creepy resemblance to the Trump phenomenon and many of the problems in the world today.

Countdown to Looking Glass

On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far more scary to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the middle east, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become speaker of the house. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary but then you probably won't be able to sleep after watching it.


The Omen

Another classic is The Omen. Damien Thorn, the star of this series of four horror movies, has a history that is eerily reminiscent of Trump: he is born into a wealthy family; a series of disasters befalls every honest person he comes into contact with; he comes to control a vast business empire acquired by inheritance; and as he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice that Damien Thorn and Donald Trump even share the same initials, DT?

19 January, 2017 07:31PM by Daniel.Pocock

January 18, 2017

Reproducible builds folks

Reproducible Builds: week 90 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday January 8 and Saturday January 14 2017:

Upcoming Events

  • The Reproducible Build Zoo will be presented by Vagrant Cascadian at the Embedded Linux Conference in Portland, Oregon, on February 22nd.

  • Dennis Gilmore and Holger Levsen will present "Reproducible Builds and Fedora" at Devconf.cz on February 27th.

  • An introduction to Reproducible Builds will be presented by Vagrant Cascadian at Scale15x in Pasadena, California, on March 5th.

Reproducible work in other projects

Reproducible Builds have been mentioned in the FSF high-priority project list.

The F-Droid Verification Server has been launched. It rebuilds apps from source that were built by f-droid.org and checks that the results match.

Bernhard M. Wiedemann did some more work on reproducibility for openSUSE.

Bootstrappable.org (unfortunately no HTTPS yet) was launched after the initial work was started at our recent summit in Berlin. This is another topic related to reproducible builds and both will be needed in order to perform "Diverse Double Compilation" in practice in the future.

Toolchain development and fixes

Ximin Luo researched data formats for SOURCE_PREFIX_MAP and explored different options for encoding a map data structure in a single environment variable. He also continued to talk with the rustc team on the topic.

Daniel Shahaf filed #851225 ('udd: patches: index by DEP-3 "Forwarded" status') to make it easier to track our patches.

Chris Lamb forwarded #849972 upstream to yard, a Ruby documentation generator. Upstream has fixed the issue as of release 0.9.6.

Alexander Couzens (lynxis) has made mksquashfs reproducible and is looking for testers. It compiles on BSD systems such as FreeBSD, OpenBSD and NetBSD.

Bugs filed

Chris Lamb:

Lucas Nussbaum:

Nicola Corna:

Reviews of unreproducible packages

13 package reviews have been added and 13 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been added:

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:

  • Chris Lamb (3)
  • Lucas Nussbaum (11)
  • Nicola Corna (1)

diffoscope development

Bugs in diffoscope in the last year

Many bugs were opened in diffoscope during the past few weeks, which probably is a good sign as it shows that diffoscope is much more widely used than a year ago. We have been working hard to squash many of them in time for Debian stable, though we will see how that goes in the end…

reproducible-website development


  • Ximin Luo and Holger Levsen worked on stricter tests to check that /dev/shm and /run/shm are both mounted with the correct permissions. Some of our build machines currently still fail this test, and the problem is probably the root cause of the FTBFS of some packages (which fail with errors involving sem_open). The proper fix is still being discussed in #851427.

  • Valerie Young worked on creating and linking autogenerated schema documentation for our database used to store the results.

  • Holger Levsen added a graph with diffoscope crashes and timeouts.

  • Holger also further improved the daily mail notifications about problems.


This week's edition was written by Ximin Luo, Chris Lamb and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

18 January, 2017 09:52PM

hackergotchi for Jan Wagner

Jan Wagner

Migrating Gitlab non-packaged PostgreSQL into omnibus-packaged

With the release of Gitlab 8.15 it was announced that PostgreSQL needs to be upgraded. As I had migrated from a source installation, I was still using an external PostgreSQL database instead of the one shipped with the omnibus package.
So I decided to do the data migration into the omnibus PostgreSQL database now, which I had skipped before.

Let's have a look into the databases:

$ sudo -u postgres psql -d template1
psql (9.2.18)  
Type "help" for help.

template1=# \l  
                                             List of databases
         Name          |       Owner       | Encoding | Collate |  Ctype  |        Access privileges
 gitlabhq_production   | git               | UTF8     | C.UTF-8 | C.UTF-8 |
 gitlab_mattermost     | git               | UTF8     | C.UTF-8 | C.UTF-8 |
template1=# \q  

Dump the databases and stop PostgreSQL. You may need to adjust database names and users for your setup.

$ su postgres -c "pg_dump gitlabhq_production -f /tmp/gitlabhq_production.sql" && \
su postgres -c "pg_dump gitlab_mattermost -f /tmp/gitlab_mattermost.sql" && \  
/etc/init.d/postgresql stop

Activate PostgreSQL shipped with Gitlab Omnibus

$ sed -i "s/^postgresql\['enable'\] = false/#postgresql\['enable'\] = false/g" /etc/gitlab/gitlab.rb && \
sed -i "s/^#mattermost\['enable'\] = true/mattermost\['enable'\] = true/" /etc/gitlab/gitlab.rb && \  
gitlab-ctl reconfigure  

Testing if the connection to the databases works

$ su - git -c "psql --username=gitlab  --dbname=gitlabhq_production --host=/var/opt/gitlab/postgresql/"
psql (9.2.18)  
Type "help" for help.

gitlabhq_production=# \q  
$ su - git -c "psql --username=gitlab  --dbname=mattermost_production --host=/var/opt/gitlab/postgresql/"
psql (9.2.18)  
Type "help" for help.

mattermost_production=# \q  

Ensure pg_trgm extension is enabled

$ sudo gitlab-psql -d gitlabhq_production -c 'CREATE EXTENSION IF NOT EXISTS "pg_trgm";'
$ sudo gitlab-psql -d mattermost_production -c 'CREATE EXTENSION IF NOT EXISTS "pg_trgm";'

Adjust ownership in the database dumps. Please verify whether the users and database names need to be adjusted for your setup, too.

$ sed -i "s/OWNER TO git;/OWNER TO gitlab;/" /tmp/gitlabhq_production.sql && \
sed -i "s/postgres;$/gitlab-psql;/" /tmp/gitlabhq_production.sql  
$ sed -i "s/OWNER TO git;/OWNER TO gitlab_mattermost;/" /tmp/gitlab_mattermost.sql && \
sed -i "s/postgres;$/gitlab-psql;/" /tmp/gitlab_mattermost.sql  
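Before running these substitutions on the real dumps, it can be worth dry-running the pattern on a sample line first (the table name below is made up for illustration):

```shell
# Dry-run the ownership rewrite on a made-up sample line before
# touching the real dump files.
echo 'ALTER TABLE projects OWNER TO git;' > /tmp/owner-check.sql
sed -i "s/OWNER TO git;/OWNER TO gitlab;/" /tmp/owner-check.sql
cat /tmp/owner-check.sql   # ALTER TABLE projects OWNER TO gitlab;
```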

(Re)import the data

$ sudo gitlab-psql -d gitlabhq_production -f /tmp/gitlabhq_production.sql
$ sudo gitlab-psql -d gitlabhq_production -c 'REVOKE ALL ON SCHEMA public FROM "gitlab-psql";' && \
sudo gitlab-psql -d gitlabhq_production -c 'GRANT ALL ON SCHEMA public TO "gitlab-psql";'  
$ sudo gitlab-psql -d mattermost_production -f /tmp/gitlab_mattermost.sql
$ sudo gitlab-psql -d mattermost_production -c 'REVOKE ALL ON SCHEMA public FROM "gitlab-psql";' && \
sudo gitlab-psql -d mattermost_production -c 'GRANT ALL ON SCHEMA public TO "gitlab-psql";'  

Make use of the shipped PostgreSQL

$ sed -i "s/^gitlab_rails\['db_/#gitlab_rails\['db_/" /etc/gitlab/gitlab.rb && \
sed -i "s/^mattermost\['sql_/#mattermost\['sql_/" /etc/gitlab/gitlab.rb && \  
gitlab-ctl reconfigure  

Now you should be able to connect to all the Gitlab services again.

Optionally remove the external database

apt-get remove postgresql postgresql-client postgresql-9.4 postgresql-client-9.4 postgresql-client-common postgresql-common  

Maybe you also want to purge the old database content

apt-get purge postgresql-9.4  

18 January, 2017 09:08PM by Jan Wagner

Michael Stapelberg

manpages.debian.org has been modernized

https://manpages.debian.org has been modernized! We have just launched a major update to our manpage repository. What used to be served via a CGI script is now a statically generated website, and therefore blazingly fast.

While we were at it, we have restructured the paths so that we can serve all manpages, even those whose name conflicts with other binary packages (e.g. crontab(5) from cron, bcron or systemd-cron). Don’t worry: the old URLs are redirected correctly.

Furthermore, the design of the site has been updated and now includes navigation panels that allow quick access to the manpage in other Debian versions, other binary packages, other sections and other languages. Speaking of languages, the site serves manpages in all their available languages and respects your browser’s language when redirecting or following a cross-reference.

Much like the Debian package tracker, manpages.debian.org includes packages from Debian oldstable, oldstable-backports, stable, stable-backports, testing and unstable. New manpages should make their way onto manpages.debian.org within a few hours.

The generator program (“debiman”) is open source and can be found at https://github.com/Debian/debiman. In case you would like to use it to run a similar manpage repository (or convert your existing manpage repository to it), we’d love to help you out; just send an email to stapelberg AT debian DOT org.

This effort is standing on the shoulders of giants: check out https://manpages.debian.org/about.html for a list of people we thank.

We’d love to hear your feedback and thoughts. Either contact us via an issue on https://github.com/Debian/debiman/issues/, or send an email to the debian-doc mailing list (see https://lists.debian.org/debian-doc/).

18 January, 2017 05:20PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

RetroPie, NES Classic and Bluetooth peripherals

I wanted to write a more in-depth post about RetroPie the Retro Gaming Appliance OS for Raspberry Pis, either technically or more positively, but unfortunately I don't have much positive to write.

What I hoped for was a nice appliance that I could use to play old games from the comfort of my sofa. Unfortunately, nine times out of ten, I had a malfunctioning Linux machine and the time I'd set aside for jumping on goombas was being spent trying to figure out why bluetooth wasn't working. I have enough opportunities for that already, both at work and at home.

I feel a little bad complaining about an open source, volunteer project: in its defence I can say that it is iterating fast, and the two versions I tried within a relatively short time span were markedly different. So hopefully a lot of my woes will eventually be fixed. I've also read that a lot of other people get on with it just fine.

Instead, I decided the Nintendo Classic NES Mini was the plug-and-play appliance for me. Alas, it became the "must have" Christmas toy for 2016 and impossible to obtain for the recommended retail price. I did succeed in finding one in stock at Toys R Us online at one point, only to have the checkout process break and my order not go through. Checking Stock Informer afterwards, that particular window of opportunity was only 5 minutes wide. So no NES classic for me!

My adventures in RetroPie weren't entirely fruitless, thankfully: I discovered two really nice pieces of hardware.

ThinkPad Keyboard


The first is Lenovo's ThinkPad Compact Bluetooth Keyboard with TrackPoint, a very compact but pleasant to use Bluetooth keyboard including a trackpoint. I miss the trackpoint from my days as a Thinkpad laptop user. Having a keyboard and mouse combo in such a small package is excellent. My only two complaints would be the price (I was lucky to get one cheaper on eBay) and the fact it's bluetooth only: there's a micro-USB port for charging, but it would be nice if it could be used as a USB keyboard too. There's a separate, cheaper USB model.

8bitdo SFC30


The second neat device is a clone of the SNES gamepad by Hong Kong company 8bitdo called the SFC30. This looks and feels very much like the classic Nintendo SNES controller, albeit slightly thicker from front to back. It can be used in a whole range of different modes, including attached USB; Bluetooth pretending to be a keyboard; Bluetooth pretending to be a controller; and a bunch of other special modes designed to work with iOS or Android devices in various configurations. The manufacturer seems to be actively working on firmware updates to further enhance the controller. The firmware is presently closed source, but it would not be impossible to write an open source firmware for it (some people have figured out the basis of the official firmware).

I like the SFC30 enough that I spent some time trying to get it working for various versions of Doom. There are just enough buttons to control a 2.5D game like Doom, whereas something like Quake or a more modern shooter would not work so well. I added support for several 8bitdo controllers directly into Chocolate Doom (available from 2.3.0 onwards) and into SDL2, a popular library for game development, which I think is used by Steam, so Steam games may all gain SFC30 support in the future too.

18 January, 2017 05:14PM

hackergotchi for Bálint Réczey

Bálint Réczey

My debian-devel pledge

I pledge that before sending each email to the debian-devel mailing list I move forward at least one actionable bug in my packages.

18 January, 2017 11:33AM by Réczey Bálint

Mike Hommey

Announcing git-cinnabar 0.4.0

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.3.2?

  • Various bug fixes.
  • Updated git to 2.11.0 for cinnabar-helper.
  • Now supports bundle2 for both fetch/clone and push (https://www.mercurial-scm.org/wiki/BundleFormat2).
  • Now supports git credential for HTTP authentication.
  • Now supports git push --dry-run.
  • Added a new git cinnabar fetch command to fetch a specific revision that is not necessarily a head.
  • Added a new git cinnabar download command to download a helper on platforms where one is available.
  • Removed upgrade path from repositories used with version < 0.3.0.
  • Experimental (and partial) support for using git-cinnabar without having mercurial installed.
  • Use a mercurial subprocess to access local mercurial repositories.
  • Cinnabar-helper now handles fast-import, with workarounds for performance issues on macOS.
  • Fixed some corner cases involving empty files. This prevented cloning Mozilla’s stylo incubator repository.
  • Fixed some correctness issues in file parenting when pushing changesets pulled from one mercurial repository to another.
  • Various improvements to the rules to build the helper.
  • Experimental (and slow) support for pushing merges, with caveats. See issue #20 for details about the current status.
  • Fail graft earlier when no commit was found to graft.
  • Allow graft to work with git versions < 1.9.
  • Allow git cinnabar bundle to do the same grafting as git push.

18 January, 2017 09:21AM by glandium

hackergotchi for Norbert Preining

Norbert Preining

Debian/TeX Live January 2017

As the freeze of the next release is closing in, I have updated a bunch of packages around TeX: All of the TeX Live packages (binaries and arch independent ones) and tex-common. I might see whether I get some updates of ConTeXt out, too.

The changes in the binaries are mostly cosmetic: one removal of a file with non-free (or unclear) licensing, and several upstream patches were cherry-picked (dvips, tltexjp contact email, upmendex, dvipdfmx). I played around with including LuaTeX v1.0, but that breaks horribly with the current packages in TeX Live, so I refrained. The infrastructure package tex-common got a bugfix for updates from previous releases, and for the other packages there is the usual bunch of updates and new packages. Enjoy!

New packages

arimo, arphic-ttf, babel-japanese, conv-xkv, css-colors, dtxdescribe, fgruler, footmisx, halloweenmath, keyfloat, luahyphenrules, math-into-latex-4, mendex-doc, missaali, mpostinl, padauk, platexcheat, pstring, pst-shell, ptex-fontmaps, scsnowman, stanli, tinos, undergradmath, yaletter.

Updated packages

acmart, animate, apxproof, arabluatex, arsclassica, babel-french, babel-russian, baskervillef, beamer, beebe, biber, biber.x86_64-linux, biblatex, biblatex-apa, biblatex-chem, biblatex-dw, biblatex-gb7714-2015, biblatex-ieee, biblatex-philosophy, biblatex-sbl, bidi, calxxxx-yyyy, chemgreek, churchslavonic, cochineal, comicneue, cquthesis, csquotes, ctanify, ctex, cweb, dataref, denisbdoc, diagbox, dozenal, dtk, dvipdfmx, dvipng, elocalloc, epstopdf, erewhon, etoolbox, exam-n, fbb, fei, fithesis, forest, glossaries, glossaries-extra, glossaries-french, gost, gzt, historische-zeitschrift, inconsolata, japanese-otf, japanese-otf-uptex, jsclasses, latex-bin, latex-make, latexmk, lt3graph, luatexja, markdown, mathspec, mcf2graph, media9, mendex-doc, metafont, mhchem, mweights, nameauth, noto, nwejm, old-arrows, omegaware, onlyamsmath, optidef, pdfpages, pdftools, perception, phonrule, platex-tools, polynom, preview, prooftrees, pst-geo, pstricks, pst-solides3d, ptex, ptex2pdf, ptex-fonts, qcircuit, quran, raleway, reledmac, resphilosophica, sanskrit, scalerel, scanpages, showexpl, siunitx, skdoc, skmath, skrapport, smartdiagram, sourcesanspro, sparklines, tabstackengine, tetex, tex, tex4ht, texlive-scripts, tikzsymbols, tocdata, uantwerpendocs, updmap-map, uplatex, uptex, uptex-fonts, withargs, wtref, xcharter, xcntperchap, xecjk, xellipsis, xepersian, xint, xlop, yathesis.

18 January, 2017 06:37AM by Norbert Preining

Hideki Yamane

It's all about design

From Arturo's blog
When I asked why not Debian, the answer was that it was very difficult to install and manage.
It's all about design, IMHO.
Installer, website, wiki... It should be "simple", not verbose, not cheap.

18 January, 2017 02:45AM by Hideki Yamane (noreply@blogger.com)

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.8: Windows support for proto3

Issue ticket #20 demonstrated that we had not yet set up Windows for version 3 of Google Protocol Buffers ("Protobuf") -- while the other platforms support it. So I made the change, and there is release 0.4.8.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding and serialization library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

The NEWS file summarises the release as follows:

Changes in RProtoBuf version 0.4.8 (2017-01-17)

  • Windows builds now use the proto3 library as well (PR #21 fixing #20)

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 January, 2017 01:21AM

January 17, 2017

Arturo Borrero González

Debian is a puzzle: difficult


Debian is very difficult, a puzzle. This surprising statement was what I got last week when talking with a group of new IT students (and their teachers).

I would like to write down here what I was able to obtain from that conversation.

From time to time, as part of my job at CICA, we open the doors of our datacenter to IT students from all around Andalusia (our region) who want to learn what we do here and how we do it. All our infrastructure and servers are primarily built using FLOSS software (we have some exceptions, like backbone routers and switches), and the most important servers run Debian.

As part of the talk, when I am in such a meeting with a visiting group, I usually ask about which technologies they use and learn in their studies. The other day, they told me they use mostly Ubuntu and a bit of Fedora.

When I asked why not Debian, the answer was that it was very difficult to install and manage. I tried to obtain some facts about this but failed; it seems to be a case of bad fame, a reputation problem that has spread among the teachers and therefore among the students. I didn't detect any brand bias or the like. It just seems to be a lack of knowledge, and a bad Debian reputation.

Using my DD powers and responsibilities, I kindly asked for feedback on how to improve our installer, or whatever else they might find difficult, but a week later I have received no email so far.

Then, what I obtain is nothing new:

  • we probably need more new-users feedback
  • we have work to do in the marketing/branding area
  • we have very strong competitors out there
  • we should keep doing our best

I myself recently had to use the Ubuntu installer in a laptop, and it didn’t seem that different to the Debian one: same steps and choices, like in every other OS installation.

Please, spread the word: Debian is not difficult. Certainly not perfect, but I don’t think that installing and using Debian is such a puzzle.

17 January, 2017 05:10PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Linux Tablet-Mode Usability

In my ongoing quest to get Tablet-Mode working on my Hybrid machine, here's how I've been living with it so far. My intent is to continue using Free Software for both use cases. My wishful thought is to use the same software under both use cases. 

  • Browser: On the browser front, things are pretty decent. Chromium has good support for Touchscreen input. Most of the Touchscreen use cases work well with Chromium. On the Firefox side, after a huge delay, finally, Firefox seems to be catching up. Hopefully, with Firefox 51/52, we'll have a much more usable Touchscreen browser.
  • Desktop Shell: One of the reasons for migrating to GNOME was its touch support. From what I've explored so far, GNOME is the only desktop shell that has touch support natively done. The feature isn't complete yet, but it is fairly usable.
    • Given that GNOME has native touchscreen support, it makes sense to use the GNOME equivalents of tools for common use cases. Most of these tools inherit the touchscreen capabilities from the underlying GNOME libraries.
    • File Manager: Nautilus has decent touch support as a file manager. The only annoying bit is the lack of a right-click equivalent - or, in touch input terms, a long-press.
    • Movie Player: There's a decent movie player based on GNOME libs: GNOME MPV. In my limited use so far, its interface seems to have good touch support. Other contenders are:
      • SMPlayer is based on Qt libs, so the initial expectation would be that Qt-based apps have better touch support. But I'm yet to see any serious Qt application with touch input support. Back to SMPlayer: the dev is pragmatic enough to recognize tablet-mode users and has provided a so-called "Tablet Mode" view for SMPlayer (the tooltip did not get captured in the screenshot).
      • MPV doesn't come with a UI but offers basic control through its OSD. In my limited usage, the OSD implementation does seem capable of taking touch input.
  • Books / Documents: GNOME Documents/Books is very basic in what it has to offer, to the point that it is not of much use. But since it is based on the same GNOME libraries, it enjoys native touch input support. Calibre, on the other hand, is feature rich, but it is based on (Py)Qt. Touch input is said to work on Windows; for Linux, there's no support yet. The good thing about Calibre is that it has its own UI, which is pretty decent in a Tablet-Mode touch workflow.
  • Photo Management: With compact digital devices commonly available, digital content (Both Photos and Videos) is on the rise. The most obvious names that come to mind are Digikam and Shotwell.
    • Shotwell saw its reincarnation in the recent past. From what I recollect, it does have touch support, but it lacks quite a bit in terms of features compared to Digikam.
    • Digikam is an impressive tool for digital content management. While Digikam is a KDE project, thankfully it does a great job of keeping its KDE dependencies to a bare minimum. But given that Digikam builds on KDE/Qt libs, I haven't had much success in getting a good touch input solution for Tablet Mode. To make it barely usable in Tablet Mode, one can choose a theme preference with bigger toolbars, labels and scrollbars, which makes touch input workable as a workaround. As you can see, I've configured the Digikam UI with text alongside icons for easy touch input.
  • Email: The most common use case. With Gmail and friends, many believe standalone email clients are no longer needed. But there are always users like us who prefer email offline, encrypted email, and their own email domains. Much of this is still doable with free services like Gmail, but still.
    • Thunderbird shows its age at times. And given the state of Firefox in getting touch support (and a GTK3 port), I see nothing happening with TB.
    • KMail was something I discontinued while still on KDE. The debacle that KDEPIM was is something I'd always avoid in the future: a complete waste of time and resources in building, testing, reporting and follow-ups.
    • Geary is another email client that recently saw its reincarnation. I recently explored Geary. It enjoys benefits similar to the rest of the applications using GNOME libraries. There was one touch input bug I found, but otherwise Geary's feature set was limited in comparison to Evolution.
    • Migration to Evolution, when moving to GNOME, was not easy. GNOME's philosophy is to keep things simple and limited; in doing so, it restricts flexibility that users may find obvious. This design philosophy is easily visible across all applications of the GNOME family, and Evolution is no different. Hence, coming from TB to E involved a small unlearning + new-learning curve. And since Evolution uses the same GNOME libraries, it enjoys similar benefits: touch input support in Evolution is fairly good. The missing bit is the new toolbar and menu structure that many have noticed in the newer GNOME applications (Photos, Documents, Nautilus etc.). If only Evolution (and the GNOME family) offered customization beyond the developer/project's view, there wouldn't be any wishful thoughts.
      • Above is a screenshot of 2 windows of Evolution. In its current form too, Evolution is a gem at times. My RSS feeds are stored in a VFolder in Evolution, so that I can read them when offline. RSS feeds are something I read in Tablet Mode. On the right is an Evolution window with larger fonts, while on the left, Evolution retains its default font size. This current behavior helps me get Tablet-Mode touch working to an extent. In my wishful thoughts, I wish Evolution provided the flexibility to change toolbar icon sizes; that would make it much easier to touch the delete button when in Tablet Mode. A simple "Tablet Mode" button, like what SMPlayer has done, would keep users sticky with Evolution.

My wishful thought is that people write (free) software thinking more about usability across toolkits and desktop environments. Otherwise, the year of the Linux desktop/laptop/tablet, in my opinion, is yet to come. And please don't rip tools apart when porting them to newer versions of the toolkits. When you rip a tool apart, you also rip out all the QA, bug reporting and testing that was done over the years.

Here's an example of a tool (Goldendict) that is so well written: written in Qt, running under GNOME, and serving over the Chromium interface.


In this whole exercise of getting a hybrid setup working, I also came to realize that there does not yet seem to be a standardized interface to determine the current operating mode of a running hybrid machine. From what we've explored so far, every product has its own way of doing it. Most hybrids come pre-installed and supported with Windows only, so their mode detection logic seems to be proprietary too. In case anyone is aware of a standard interface, please drop a note in the comments.




17 January, 2017 02:02PM by Ritesh Raj Sarraf

Elizabeth Ferdman

6 Week Progress Update for PGP Clean Room

One of the PGP Clean Room’s aims is to provide users with the option to easily initialize one or more smartcards with personal info and pins, and subsequently transfer keys to the smartcard(s). The advantage of using smartcards is that users don’t have to expose their keys to their laptop for daily certification, signing, encryption or authentication purposes.

I started building a basic whiptail TUI that asks users if they will be using a smartcard:

smartcard-init.sh on Github

If yes, whiptail provides the user with the opportunity to initialize the smartcard with their name, preferred language and login, and change their admin PIN, user PIN, and reset code.

I outlined the commands and interactions necessary to edit personal info on the smartcard using gpg --card-edit and sending the keys to the card with gpg --edit-key <FPR> in smartcard-workflow. There's no batch mode for smartcard operations and there's no "quick" command for it just yet (as in --quick-addkey). One option would be to try this out with command-fd/command-file. Currently, Python bindings for GPGME are under development, so that is another possibility.

We can use this workflow to support two smartcards: one for the primary key and one for the subkeys, a setup that would also support subkey rotation.

17 January, 2017 12:00AM

January 16, 2017

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, December 2016

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 175 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not increase but a new silver sponsor is in the process of joining. We are only missing another silver sponsor (or two to four bronze sponsors) to reach our objective of funding the equivalent of a full time position.

The security tracker currently lists 31 packages with a known CVE, and the dla-needed.txt file lists 27. The situation has improved a little compared to last month.

Thanks to our sponsors

New sponsors are in bold.


16 January, 2017 02:39PM by Raphaël Hertzog

hackergotchi for Maria Glukhova

Maria Glukhova

APK, images and other stuff.

2 more weeks of my awesome Outreachy journey have passed, so it is time to make an update on my progress.

I continued my work on improving diffoscope by fixing bugs and completing wishlist items. These include:

Improving APK support

I worked on #850501 and #850502 to improve the way diffoscope handles APK files. Thanks to Emanuel Bronshtein for providing clear description on how to reproduce these bugs and ideas on how to fix them.

And special thanks to Chris Lamb for insisting on providing tests for these changes! That part actually proved to be a little more tricky, and I managed to mess up these tests (extra thanks to Chris for cleaning up the mess I created). I hope that also means I learned something from my mistakes.

Also, I was pleased to see F-droid Verification Server as a sign of F-droid progress on reproducible builds effort - I hope these changes to diffoscope will help them!

Adding support for image metadata

That came from #849395 - a request was made to compare image metadata along with image content. Diffoscope has support for three types of images: JPEG, MS Windows Icon (*.ico) and PNG. Among these, PNG already had good image metadata support thanks to the sng tool, so I worked on .jpeg and .ico file support. I initially tried to use exiftool for extracting metadata, but then I discovered it does not handle .ico files, so I decided to use a bigger force - ImageMagick's identify - for this task. I was glad to see it had that handy -format option I could use to select only the necessary fields (I found -verbose, well, too verbose for the task) and present them in a defined form, negating the need to filter its output.

What was particularly interesting and important for me in terms of learning: while working on this feature, I discovered that, at the moment, diffoscope could not handle .ico files at all - the img2txt tool, which was used for retrieving image content, did not support that type of image. But instead of recognizing this as a bug and resolving it, I started to think of a possible workaround, allowing for retrieving image metadata even after retrieving image content failed. Definitely not very good thinking. Thanks to Mattia Rizzolo for actually recognizing this as a bug and filing it, and Chris Lamb for fixing it!

Other work

Order-like differences, part 2

In the previous post, I mentioned Lunar's suggestion to use hashing for finding order-like differences in a wide variety of input data. I implemented that idea, but after discussion with my mentor, we decided it is probably not worth it - this change would alter quite a lot of things in the core modules of diffoscope, and the gain would not be really significant.

Still, implementing that was an important experience for me, as I had to hack on the deepest and, arguably, most difficult modules of diffoscope, and gained some insight into how they work.
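Purely as an illustration of the hashing idea mentioned above (and not the diffoscope code, which was not merged), comparing the multisets of per-item hashes distinguishes a pure reordering from a real content change:

```python
from collections import Counter

def order_only_difference(a_items, b_items):
    """Return True when two sequences contain exactly the same items,
    only in a different order, detected by hashing each item.
    A sketch of the idea discussed above, not diffoscope's code."""
    if list(a_items) == list(b_items):
        return False  # identical sequences: no difference at all
    # Equal multisets of hashes with unequal sequences means the
    # difference is (almost certainly) only in the ordering.
    return Counter(hash(i) for i in a_items) == Counter(hash(i) for i in b_items)
```

With hashes as the comparison key, the same check works over arbitrary input items (lines, file members, metadata records) without holding the items themselves.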

Comparing with several tools (work in progress)

Although my initial motivation for this idea was flawed (the workaround I mentioned earlier for .ico files), it still might be useful to have a mechanism that allows running several commands for finding differences, and then gives the output of those that succeed, failing if and only if they all have failed.

One possible case where it might happen is when we use commands coming from different tools, and one of them is not installed. It would be nice if we still used the other one and not the uninformative binary diff (the default fallback for when something goes wrong with a more “clever” comparison). I am still in the process of polishing this change, though, and still in doubt whether it is needed at all.
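A minimal sketch of that behaviour (a hypothetical helper, not the work-in-progress diffoscope code): run each external command over the same input, keep the output of those that succeed, and fail only when every one of them has failed, e.g. because the tool is not installed:

```python
import subprocess

def outputs_from_available_tools(commands, data):
    """Run each command over the same input and collect stdout from
    the ones that succeed; raise only if *all* of them failed.
    A sketch of the mechanism described above."""
    outputs = []
    for cmd in commands:
        try:
            proc = subprocess.run(cmd, input=data,
                                  capture_output=True, check=True)
            outputs.append(proc.stdout)
        except (OSError, subprocess.CalledProcessError):
            continue  # tool missing, or it failed: try the next one
    if not outputs:
        raise RuntimeError("all comparison commands failed")
    return outputs
```

For example, `outputs_from_available_tools([["cat"], ["no-such-tool"]], b"hello")` would return just cat's output, since the missing tool is silently skipped.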

Side note - Outreachy and my university progress

In my Outreachy application, I promised that if I was selected into this round, I would do everything I could to free the required time period from my university commitments. I did that by moving most of my courses to the first half of the academic year. Now, the main thing left for me to do is my Master's thesis.

I consulted my scientific advisors from both universities that I am formally attending (SFEDU and LUT - I am in a double degree program), and as a result, they agreed to change my Master's thesis topic to match my Outreachy work.

Now, that should sound like excellent news - merging these activities together actually means I can allocate much more time to my work on reproducible builds, even beyond the actual internship period. It was intended to remove a burden from my shoulders.

Still, I feel a bit uneasy. The drawback of this decision lies in the fact that I have no idea how to write a scientific report based on purely practical work. I know other students from my universities have done such things before, but choosing my own topic means my scientific advisors can't help me much - this is just outside their area of expertise.

Well, wish me luck - I’m up to the challenge!

16 January, 2017 12:00AM

January 15, 2017

Sam Hartman

Musical Visualization of Network Traffic

I've been working on a fun holiday project in my spare time lately. It all started innocently enough. The office construction was nearing its end, and it was time for my workspace to be set up. Our deployment wizard and I were discussing. Normally we stick two high-end monitors on a desk. I'm blind; that seemed silly. He wanted to do something similarly nice for me, so he replaced one of the monitors with excellent speakers. They are a joy to listen to, but I felt like I should actually do something with them. So, I wanted to play around with some sort of audio project.
I decided to take a crack at an audio representation of network traffic. The Solaris version of ping used to have an audio option, which would produce sound for successful pings. In the past I've used audio cues to monitor events like service health and build status.
It seemed like you could produce audio to give an overall feel for what was happening on the network. I was imagining a quick listen would be able to answer questions like:

  1. How busy is the network?

  2. How many sources are active?

  3. Is the traffic a lot of streams or just a few?

  4. Are there any interesting events such as packet loss or congestion collapse going on?

  5. What's the mix of services involved?

I divided the project into three segments, which I will write about in future entries:

  • What parts of the network to model

  • How to present the audio information

  • Tools and implementation

I'm fairly happy with what I have. It doesn't represent all the items above. As an example, it doesn't directly track packet loss or retransmissions, nor does it directly distinguish one service from another. Still, just because of the traffic flow, rsync sounds different from http. It models enough of what I'm looking for that I find it to be a useful tool. And I learned a lot about music and Linux audio. I also got to practice designing discrete-time control functions in ways that brought back the halls of MIT.
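As a purely hypothetical illustration of the kind of mapping involved (the actual design and tools are deferred to the future entries mentioned above), one simple choice would be to map packet rate onto pitch on a logarithmic scale, so that a busy network literally sounds higher:

```python
import math

def rate_to_pitch(pkts_per_sec, base_hz=110.0, octaves=5.0, max_rate=10000.0):
    """Map a packet rate onto a frequency, logarithmically, so each
    multiplicative increase in traffic raises the pitch by a fixed
    interval. A hypothetical mapping, not the author's actual design."""
    rate = min(max(pkts_per_sec, 1.0), max_rate)   # clamp to a sane range
    fraction = math.log(rate) / math.log(max_rate)  # 0.0 .. 1.0
    return base_hz * 2.0 ** (octaves * fraction)    # spread over N octaves
```

A logarithmic mapping matters here because both network rates and perceived pitch are logarithmic quantities; a linear mapping would compress all the interesting low-traffic detail into a sliver of the audible range.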

15 January, 2017 11:19PM

Dirk Eddelbuettel

Rcpp 0.12.9: Next round

Yesterday afternoon, the ninth update in the 0.12.* series of Rcpp made it to the CRAN network for GNU R. Windows binaries have by now been generated, and the package was updated in Debian too. This 0.12.9 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, and the 0.12.8 release in November --- making it the tenth release at the steady bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 906 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by sixty-three packages over the two months since the last release -- or about a package a day!

Some of the changes in this release are smaller and detail-oriented. We did squash one annoying bug (stemming from the improved exception handling) in Rcpp::stop() that hit a few people. Nathan Russell added a sample() function (similar to the optional one in RcppArmadillo); this required a minor cleanup for a small number of other packages which had both namespaces 'opened'. Date and Datetime objects now have format() methods and << output support. We now have coverage reports via covr as well. Last but not least James "coatless" Balamuta was once more tireless on documentation and API consistency --- see below for more details.

Changes in Rcpp version 0.12.9 (2017-01-14)

  • Changes in Rcpp API:

    • The exception stack message is now correctly demangled on all compiler versions (Jim Hester in #598)

    • Date and Datetime object and vector now have format methods and operator<< support (#599).

    • The size operator in Matrix is explicitly referenced, avoiding a g++-6 issue (#607 fixing #605).

    • The underlying date calculation code was updated (#621, #623).

    • Addressed improper diagonal fill for non-symmetric matrices (James Balamuta in #622 addressing #619)

  • Changes in Rcpp Sugar:

    • Added new Sugar function sample() (Nathan Russell in #610 and #616).

    • Added new Sugar function Arg() (James Balamuta in #626 addressing #625).

  • Changes in Rcpp unit tests

    • Added Environment::find unit tests and an Environment::get(Symbol) test (James Balamuta in #595 addressing issue #594).

    • Added diagonal matrix fill tests (James Balamuta in #622 addressing #619)

  • Changes in Rcpp Documentation:

    • Exposed pointers macros were included in the Rcpp Extending vignette (MathurinD; James Balamuta in #592 addressing #418).

    • The file Rcpp.bib moved to the directory bib which is guaranteed to be present (#631).

  • Changes in Rcpp build system

    • Travis CI now also calls covr for coverage analysis (Jim Hester in PR #591)

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 January, 2017 10:26PM

Mehdi Dogguy

Debian from 10,000 feet

Many of you are big fans of S.W.O.T analysis, I am sure of that! :-) Technical competence is our strongest suit, but we have reached a size and sphere of influence which requires an increase in organisation.

We all love our project and want to make sure Debian still shines in the next decades (and centuries!). One way to secure that goal is to identify elements/events/things which could put that goal at risk. To this end, we've organized a short S.W.O.T analysis session at DebConf16. Minutes of the meeting can be found here. I believe it is an interesting read and is useful for Debian old-timers as well as newcomers. It helps to convey a better understanding of the project's status. For each item, we've tried to identify an action.

Here are a few things we've worked on:
  • Identify new potential contributors by attending and speaking at conferences where Free and Open Source software is still not very well-known, or where we have too few contributors.

    Each Debian developer is encouraged to identify events where we can promote FOSS and Debian. As DPL, I'd be happy to cover expenses to attend such events.
  • Our average age is also growing over the years. It is true that we could attract more new contributors than we already do.

    We can organize short internships. We should not wait for students to come to us. We can get in touch with universities and engineering schools and work together on a list of topics. It is easy and will give us the opportunity to reach out to more students.

    It is true that we have tried in the past to do that. We may organize a sprint with interested people and share our experience on trying to do internships on Debian-related subjects. If you have successfully done that in the past and managed to attract new contributors that way, please share your experience with us!

    If you see other ways to attract new contributors, please get in touch so that we can discuss!
  • Not easy to get started in the project.

    It could be argued that all the information is available, but rather than being easily findable from one starting point, it is scattered over several places (documentation on our website, wiki, metadata on bug reports, etc…).

    Fedora and Mozilla both worked on this subject and did build a nice web application to make this easier and nicer. The result of this is asknot-ng.

    A whatcanidofor.debian.org would be wonderful! Any takers? We can help by providing a virtual machine to build this. Being a DD is not mandatory. Everyone is welcome!
  • Cloud images for Debian.

    This is a very important point since cloud providers are now major consumers of distributions. We have to ensure that Debian is correctly integrated in the cloud, without making compromises on our values and philosophy.

    I believe this item has been worked on during the last Debian Cloud sprint. I am looking forward to seeing the positive effects of this sprint in the long term. I believe it does help us to build a stronger relationship with cloud providers and gives us a nice opportunity to work with them on a shared set of goals!
During the next DebConf, we can review the progress that has been made on each item and discuss new ones. In addition to this session acting as a health check, I see it as a way for the DPL to discuss, openly and publicly, the important changes that should be implemented in the project, and to imagine together a better future.

In the meantime, everyone should feel free to pick one item from the list and work on it. :-)

15 January, 2017 05:12PM by Mehdi (noreply@blogger.com)

January 14, 2017

Mike Gabriel

UIF bug: Caused by flawed IPv6 DNS resolving in Perl's NetAddr::IP

TL;DR: If you use NetAddr::IP->new6() for resolving DNS names to IPv6 addresses, the addresses returned by NetAddr::IP are not what you might expect. See below for details.

Issue #2 in UIF

Over the last couple of days, I tried to figure out the cause of a weird issue observed in UIF (Universal Internet Firewall [1], a nice Perl tool for setting up ip(6)tables based Firewalls).

Already a long time ago, I stumbled over a weird DNS resolving issue of DNS names to IPv6 addresses in UIF that I reported as issue #2 [2] against upstream UIF back then.

I happen to be co-author of UIF. So, I felt very ashamed all the time for not fixing the issue any sooner.

As many of us DDs try to get our packages into shape before the next Debian release these days, I find myself doing the same. I started investigating the underlying cause of issue #2 in UIF a couple of days ago.

Issue #119858 on CPAN

Today, I figured out that the Perl code in UIF is not causing the observed phenomenon. The same behaviour is reproducible with a minimal and pure NetAddr::IP based Perl script (reported as Debian bug #851388 [2]). Thanks to Gregor Herrmann for forwarding the Debian bug upstream (#119858 [3]).

Here is the example script that shows the flawed behaviour:


use NetAddr::IP;

my $hostname = "google-public-dns-a.google.com";

my $ip6 = NetAddr::IP->new6($hostname);
my $ip4 = NetAddr::IP->new($hostname);

print "$ip6 <- WTF???\n";
print "$ip4\n";


... gives...

[mike@minobo ~]$ ./netaddr-ip_resolv-ipv6.pl
0:0:0:0:0:0:808:808/128 <- WTF???
8.8.8.8/32

In words...

So what happens in NetAddr::IP is that with the new6() "constructor" you initialize a new IPv6 address. If the address is a DNS name, NetAddr::IP internally resolves it into an IPv4 address and converts this IPv4 address into some IPv6'ish format. This bogus IPv6 address is not the one matching the given DNS name.
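The bogus address can be reproduced without Perl at all: packing the IPv4 address of google-public-dns-a.google.com (8.8.8.8) into the low 32 bits of an all-zero IPv6 address yields exactly the 0:0:0:0:0:0:808:808 seen above. A small Python check, purely illustrative (the helper name is made up):

```python
import ipaddress

def v4_embedded_as_v6(v4_string):
    """Place an IPv4 address in the low 32 bits of an otherwise
    all-zero IPv6 address -- the 'IPv6'ish format' described above."""
    v4 = int(ipaddress.IPv4Address(v4_string))  # 8.8.8.8 -> 0x08080808
    return ipaddress.IPv6Address(v4)

print(v4_embedded_as_v6("8.8.8.8"))  # compressed form of 0:0:0:0:0:0:808:808
```

Each byte of 8.8.8.8 is 0x08, so the last two 16-bit groups come out as 808:808, matching the script output.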

Impacted Software in Debian

Various Debian packages use NetAddr::IP and may be affected by this flaw, here is an incomplete list (use apt-rdepends -r libnetaddr-ip-perl for the complete list):

  • spamassassin
  • postgrey
  • postfix-policyd-spf-perl
  • mtpolicyd
  • xen-tools
  • fwsnort
  • freeipa-server
  • 389-ds
  • uif

Any of the above packages could be affected if NetAddr::IP->new6(<dnsname>) is being used. I haven't checked any of the code bases, but possibly the corresponding maintainers may want to do that.



14 January, 2017 10:16PM by sunweaver

Russ Allbery

Review: Enchanters' End Game

Review: Enchanters' End Game, by David Eddings

Series: The Belgariad #5
Publisher: Del Rey
Copyright: December 1984
Printing: February 1990
ISBN: 0-345-33871-5
Format: Mass market
Pages: 372

And, finally, the conclusion towards which everything has been heading, and the events for which Castle of Wizardry was the preparation. (This is therefore obviously not the place to start with this series.) Does it live up to all the foreshadowing and provide a satisfactory conclusion? I'd say mostly. The theology is a bit thin, but Eddings does a solid job of bringing all the plot threads together and giving each of the large cast a moment to shine.

Enchanters' End Game (I have always been weirdly annoyed by that clunky apostrophe) starts with more of Garion and Belgarath, and, similar to the end of Castle of Wizardry, this feels like them rolling on the random encounter table. There is a fairly important bit with Nadraks at the start, but the remaining detour to the north is a mostly unrelated bit of world-building. Before this re-read, I didn't remember how extensive the Nadrak parts of this story were; in retrospect, I realize a lot of what I was remembering is in the Mallorean instead. I'll therefore save my commentary on Nadrak gender roles for an eventual Mallorean re-read, since there's quite a lot to dig through and much of it is based on information not available here.

After this section, though, the story leaves Garion, Belgarath, and Silk for nearly the entire book, returning to them only for the climax. Most of this book is about Ce'Nedra, the queens and kings of the west, and what they're doing while Garion and his small party are carrying the Ring into Mordor— er, you know what I mean.

And this long section is surprisingly good. We first get to see the various queens of the west doing extremely well managing the kingdoms while the kings are away (see my previous note about how Eddings does examine his stereotypes), albeit partly by mercilessly exploiting the sexism of their societies. The story then picks up with Ce'Nedra and company, including all of the rest of Garion's band, being their snarky and varied selves. There are some fairly satisfying set pieces, some battle tactics, some magical tactics, and a good bit of snarking and interplay between characters who feel like old friends by this point (mostly because of Eddings's simple, broad-strokes characterization).

And Ce'Nedra is surprisingly good here. I would say that she's grown up after the events of the last book, but sadly she reverts to being awful in the aftermath. But for the main section of the book, partly because she's busy with other things, she's a reasonable character who experiences some actual consequences and some real remorse from one bad decision she makes. She's even admirable in how she handles events leading up to the climax of the book.

Eddings does a good job showing every character in their best light, putting quite a lot of suspense (and some dramatic rescues) into this final volume, and providing a final battle that's moderately interesting. I'm not sure I entirely bought the theological ramifications of the conclusion (the bits with Polgara do not support thinking about too deeply), but the voice in Garion's head continues to be one of the better characters of the series. And Errand is a delight.

After the climax, the aftermath sadly returns to Eddings's weird war between the sexes presentation of all gender relationships in this series, and it left me with a bit of a bad taste in my mouth. (There is absolutely no way that some of these relationships would survive in reality.) Eddings portrays nearly every woman as a manipulative schemer, sometimes for good and sometimes for evil, and there is just so much gender stereotyping throughout this book for both women and men. You can tell he's trying with the queens, but women are still only allowed to be successful at politics and war within a very specific frame. Even Polgara gets a bit of the gender stereotyping, although she remains mostly an exception (and one aspect of the ending is much better than it could have been).

Ah well. One does not (or at least probably should not) read this series without being aware that it has some flaws. But it has a strange charm as well, mostly from its irreverence. The dry wise-cracking of these characters rings more true to me than the epic seriousness of a lot of fantasy. This is how people behave under stress, and this is how quirky people who know each other extremely well interact. It also keeps one turning the pages quite effectively. I stayed up for several late nights finishing it, and was never tempted to put it down and stop reading.

This is not great literature, but it's still fun. It wouldn't sustain regular re-reading for me, but a re-read after twenty years or so was pretty much exactly the experience I was hoping for: an unchallenging, optimistic story with amusing characters and a guaranteed happy ending. There's a place for that.

Followed, in a series sense, by the Mallorean, the first book of which is The Guardians of the West. But this is a strictly optional continuation; the Belgariad comes to a definite end here.

Rating: 7 out of 10

14 January, 2017 08:18PM

Sven Hoexter

moto g falcon reactivation and exodus mod

I started to reactivate my old moto g falcon during the last days of CyanogenMod in December of 2016. The first step was a recovery update to TWRP 3.0.2-2 so I was able to flash CM13/14 builds. While the CM14 nightly builds did not boot at all, the CM13 builds did, but up to the last build wifi connections to the internet did not work. I could actually register with my wifi (an Archer C7 running OpenWRT) but all apps claimed the internet connection check failed and I was offline. So bummer, without wifi a smartphone is not much fun.

I was pretty sure that wifi worked when I last used that phone about 1.5 years ago with CM11/12, so I started to dive into the forums of xda-developers to look for alternatives. There I found out about Exodus. I have a bit of trouble trusting stuff from xda-developer forums, but what the hell, the phone is empty anyway so there was nothing to lose, and I flashed the latest falcon build.

To flash it I had to wipe the whole phone, format all partitions via TWRP and then sideload the zip image file via adb (adb from the Debian/stretch adb package works like a charm, thank you guys!). Booted and bam, wifi works again! Now Exodus is a really stripped down mod; to do anything useful with it I had to activate the developer options and allow USB debugging. Afterwards I could install the F-Droid and Opera APKs via "adb install foo.apk".

Lineage OS

As I could derive from another thread on xda-developers Lineage OS has the falcon still on the shortlist for 14.x nightly builds. Maybe that will be an alternative again in the future. For now Exodus is a bit behind the curve (based on Android 6.0.1 from September 2016) but at least it's functional.

14 January, 2017 01:43PM

Jonathan McDowell

Cloning a USB LED device

A month or so ago I got involved in a discussion on IRC about notification methods for a headless NAS. One of the options considered was some sort of USB attached LED. DealExtreme had a cheap “Webmail notifier”, which was already supported by mainline kernels as a “Riso Kagaku” device but it had been sold out for some time.

This seemed like a fun problem to solve with a tinyAVR and V-USB. I had my USB relay board so I figured I could use that to at least get some code to the point that the kernel detected it as the right device, and the relay output could be configured as one of the colours to ensure it was being driven in roughly the right manner. The lack of a full lsusb dump (at least when I started out) made things a bit harder, plus the fact that the Riso uses an output report unlike the relay code, which uses a control message. However I had the kernel source for the driver and with a little bit of experimentation had something which would cause the driver to be loaded and the appropriate files in /sys/class/leds/ to be created. The relay was then successfully activated when the red LED was supposed to be on.

hid-led 0003:1294:1320.0001: hidraw0: USB HID v1.01 Device [MAIL  MAIL ] on usb-0000:00:14.0-6.2/input0
hid-led 0003:1294:1320.0001: Riso Kagaku Webmail Notifier initialized

I subsequently ordered some Digispark clones and modified the code to reflect the pins there (my relay board used pins 1+2 for USB, the Digispark uses pins 3+4). I then soldered a tricolour LED to the board, plugged it in and had a clone of the Riso Kagaku device for about £1.50 in parts (no doubt much cheaper in bulk). Very chuffed.

In case it’s useful to someone, the code is released under GPLv3+ and is available at https://the.earth.li/gitweb/?p=riso-kagaku-clone.git;a=summary or on GitHub at https://github.com/u1f35c/riso-kagaku-clone. I’m seeing occasional issues on an older Dell machine that only does USB2 with enumeration, but it generally is fine once it gets over that.

(FWIW, Jon, who started the original discussion, ended up with a BlinkStick Nano, which is a neater device with 2 LEDs but still based on a Tiny85.)

14 January, 2017 11:53AM

Jamie McClelland

What's Up with WhatsApp?

Despite my jaded feelings about corporate Internet services in general, I was surprised to learn that WhatsApp's end-to-end encryption was a lie. In short, it is possible to send an encrypted message to a user that is intercepted and effectively decrypted without the sender's knowledge.

However, I was even more surprised to read Open Whisper Systems' critique of the original story, claiming that it is not a backdoor because the WhatsApp sender's client is always notified when a message is decrypted.

The Open Whisper Systems post acknowledges that the WhatsApp sender can choose to disable these notifications, but claims that is not such a big deal because the WhatsApp server has no way to know which clients have this feature enabled and which do not, so intercepting a message is risky because it could result in the sender realizing it.

However, there is a fairly important piece of information missing, namely: as far as I can tell, the setting to notify users about key changes is disabled by default.

So, using the default installation, your end-to-end encrypted message could be intercepted and decrypted without you or the party you are communicating with knowing it. How is this not a back door? And yes, if the interceptor can't tell whether or not the sender has these notifications turned on, the interceptor runs the risk of someone knowing they have intercepted the message. Great. That's better than nothing. Except that there is strong evidence that many powerful governments on this planet routinely risk exposure in their pursuit of compromising our ability to communicate securely. And... not to mention non-governmental (or governmental) adversaries for whom exposure is not a big deal.

Furthermore, a critical reason for end-to-end encryption is so that your provider does not have the technical capacity to intercept your communications. That's simply not true of WhatsApp. It is true of Signal and OMEMO, which require the active participation of the sender to compromise the communication.

Why in the world would you distribute a client that not only has the ability to suppress such warnings, but has it enabled by default?

Some may argue that users regularly dismiss notifications like "fingerprint has changed" and that this problem is the Achilles heel of secure communications. I agree. But... there is still a monumental difference between a user absent-mindedly dismissing an important security warning and never seeing the warning in the first place.

This flaw in WhatsApp is a critical reminder that secure communications doesn't just depend on a good protocol or technology, but on trust in the people who design and maintain our systems.

14 January, 2017 02:03AM

January 13, 2017

Elena 'valhalla' Grandi

Modern XMPP Server

Modern XMPP Server

I've published a new HOWTO on my website 'http://www.trueelena.org/computers/howto/modern_xmpp_server.html':

http://www.enricozini.org/blog/2017/debian/modern-and-secure-instant-messaging/ already wrote about the Why (and the What, Who and When), so I'll just quote his conclusion and move on to the How.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.


I've decided to install https://prosody.im/, mostly because it was recommended by the RTC QuickStart Guide http://rtcquickstart.org/; I've heard that similar results can be reached with https://www.ejabberd.im/ and other servers.

I'm also targeting https://www.debian.org/ stable (+ backports); as I write, this is jessie; if there are significant differences I will update this article when I upgrade my server to stretch. Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites

You will need to enable the https://backports.debian.org/ repository and then install the packages prosody and prosody-modules.

You also need to setup some TLS certificates (I used Let's Encrypt https://letsencrypt.org/); and make them readable by the prosody user; you can see Chapter 12 of the RTC QuickStart Guide http://rtcquickstart.org/guide/multi/xmpp-server-prosody.html for more details.

On your firewall, you'll need to open the following TCP ports:

  • 5222 (client2server)

  • 5269 (server2server)

  • 5280 (default http port for prosody)

  • 5281 (default https port for prosody)

The latter two are needed to enable some services provided via http(s), including rich media transfers.

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser alice@example.org

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim https://en.wikipedia.org/wiki/Messaging_spam).

prosody configuration

You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:

c2s_require_encryption = true
s2s_secure_auth = true

and then, sadly, add to the whitelist any server that you want to talk to and doesn't support the above:

s2s_insecure_domains = { "gmail.com" }


For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/chat.example.org.cfg.lua with contents like the following:

VirtualHost "chat.example.org"
enabled = true
ssl = {
key = "/etc/ssl/private/example.org-key.pem";
certificate = "/etc/ssl/public/example.org.pem";
}

For the domains where you also want to enable MUCs, add the following lines:

Component "conference.chat.example.org" "muc"
restrict_room_creation = "local"

The "local" value configures prosody so that only local users are allowed to create new rooms (but then everybody can join them, if the room administrator allows it): this may help reduce unwanted usage of your server by random people.

You can also add the following line to enable rich media transfers via http uploads (XEP-0363):

Component "upload.chat.example.org" "http_upload"

The defaults are pretty sane, but see https://modules.prosody.im/mod_http_upload.html for details on what knobs you can configure for this module.

Don't forget to enable the virtualhost by linking the file inside /etc/prosody/conf.d/.

additional modules

Most of the other interesting XEPs are enabled by loading additional modules inside /etc/prosody/prosody.cfg.lua (under modules_enabled); to enable mod_something just add a line like:

"something";

Most of these come from the prosody-modules package (and thus from https://modules.prosody.im/ ) and some may require changing when prosody 0.10 will be available; when this is the case it is mentioned below.

  • mod_carbons (XEP-0280)
    To keep conversations synchronized while using multiple devices at the same time.

    This will be included by default in prosody 0.10.

  • mod_privacy + mod_blocking (XEP-0191)
    To allow user-controlled blocking of users, including as an anti-spim measure.

    In prosody 0.10 these two modules will be replaced by mod_blocklist.

  • mod_smacks (XEP-0198)
    Allow clients to resume a disconnected session before a customizable timeout and prevent message loss.

  • mod_mam (XEP-0313)
    Archive messages on the server for a limited period of time (default 1 week) and allow clients to retrieve them; this is required to synchronize message history between multiple clients.

    With prosody 0.9 only an in-memory storage backend is available, which may make this module problematic on servers with many users. prosody 0.10 will fix this by adding support for an SQL-backed storage with archiving capabilities.

  • mod_throttle_presence + mod_filter_chatstates (XEP-0352)
    Filter out presence updates and chat states when the client announces (via Client State Indication) that the user isn't looking. This is useful to reduce power and bandwidth usage for "useless" traffic.

@Gruppo Linux Como @LIFO

13 January, 2017 12:59PM by Elena ``of Valhalla''

January 12, 2017

hackergotchi for Ben Hutchings

Ben Hutchings

Debian 8 kernel security update

There are a fair number of outstanding security issues in the Linux kernel for Debian 8 "jessie", but none of them were considered serious enough to issue a security update and DSA. Instead, most of them are being fixed through the point release (8.7) which will be released this weekend. Don't forget that you need to reboot to complete a kernel upgrade.

This update to linux (version 3.16.39-1) also adds the perf security mitigation feature from Grsecurity. You can disable unprivileged use of perf entirely by setting sysctl kernel.perf_event_paranoid=3. (This is the default for Debian "stretch".)
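As a minimal sketch, the setting can be applied and persisted like this (the drop-in file name under /etc/sysctl.d is our own choice; any *.conf name works):

```shell
# Apply immediately (needs root); 3 disables all unprivileged use of perf:
sysctl kernel.perf_event_paranoid=3

# Persist across reboots via a sysctl.d drop-in, then reload all settings:
echo 'kernel.perf_event_paranoid = 3' > /etc/sysctl.d/99-perf-paranoid.conf
sysctl --system
```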

12 January, 2017 10:41PM

Debian LTS work, December 2016

I was assigned 13.5 hours of work by Freexian's Debian LTS initiative and carried over 2 from November. I worked only 10 hours, so I carry over 5.5 hours.

As for the last few months, I spent all of this time working on the linux (kernel) package. I backported several security fixes and did some testing of the more invasive changes.

I also added the option to mitigate security issues in the performance events (perf) subsystem by disabling use by unprivileged users. This feature comes from Grsecurity and has been included in Debian unstable and Android kernels for a while. However, for Debian 7 LTS it has to be explicitly enabled by setting sysctl kernel.perf_event_paranoid=3.

I uploaded these changes as linux 3.2.84-1 and then (on 1st January) issued DLA 722-1.

12 January, 2017 10:30PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Laptop Mode Tools 1.71

I am pleased to announce the 1.71 release of Laptop Mode Tools. This release includes some new modules, some bug fixes, and there are some efficiency improvements too. Many thanks to our users; most changes in this release are contributions from our users.

A filtered list of changes is mentioned below. For the full log, please refer to the git repository.

Source tarball, Fedora/SUSE RPM Packages available at:

Debian packages will be available soon in Unstable.

Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki
Mailing List: https://groups.google.com/d/forum/laptop-mode-tools


1.71 - Thu Jan 12 13:30:50 IST 2017
    * Fix incorrect import of os.putenv
    * Merge pull request #74 from Coucouf/fix-os-putenv
    * Fix documentation on where we read battery capacity from
    * cpuhotplug: allow disabling specific cpus
    * Merge pull request #78 from aartamonau/cpuhotplug
    * runtime-pm: refactor listed_by_id()
    * wireless-power: Use iw and fallback to iwconfig if it not available
    * Prefer available AC supply information over battery state to determine ON_AC
    * On startup, we want to force the full execution of LMT.
    * Device hotplugs need a forced execution for LMT to apply the proper settings
    * runtime-pm: Refactor list_by_type()
    * kbd-backlight: New module to control keyboard backlight brightness
    * Include Transmit power saving in wireless cards
    * Don't run in a subshell
    * Try harder to check battery charge
    * New module: vgaswitcheroo
    * Revive bluetooth module. Use rfkill primarily. Also don't unload (incomplete list of) kernel modules


What is Laptop Mode Tools

Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking.
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power.




12 January, 2017 08:54AM by Ritesh Raj Sarraf

January 11, 2017

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

3G-SDI signal support

I had to figure out what kinds of signal you can run over 3G-SDI today, and it's pretty confusing, so I thought I'd share it.

For reference, 3G-SDI is the same as 3G HD-SDI, an extension of HD-SDI, which is in turn an extension of the venerable SDI standard (well, duh). They're all used for running uncompressed audio/video data over regular BNC coaxial cable, possibly hundreds of meters, and are in wide use in professional and semiprofessional setups.

So here's the rundown on 3G-SDI capabilities:

  • 1080p60 supports 10-bit 4:2:2 Y'CbCr. Period.
  • 720p60/1080p30/1080i60 supports a much wider range of formats: 10-bit 4:4:4:4 RGBA (alpha optional), 10-bit 4:4:4:4 Y'CbCrA (alpha optional), 12-bit 4:4:4 RGB, 12-bit 4:4:4 Y'CbCr or finally 12-bit 4:2:2 Y'CbCr (seems rather redundant).
  • There's also a format exclusively for 1080p24 (actually 2048x1080) that supports 12-bit X'Y'Z'. Digital cinema, hello. Apart from that, it supports pretty much what 1080p30 does. There's also a 2048x1080p30 (no interlaced version) mode for 12-bit 4:2:2:4 Y'CbCrA, but it seems rather obscure.

And then there's dual-link 3G-SDI, which uses two cables instead of one—and there's also Blackmagic's proprietary “6G-SDI”, which supports basically everything dual-link 3G-SDI does. But in 2015, seemingly there was also a real 6G-SDI and 12G-SDI, and it's unclear to me whether it's in any way compatible with Blackmagic's offering. It's all confusing. But at least, these are the differences from single-link to dual-link 3G-SDI:

  • 1080p60 supports essentially everything that 720p60 supports on single-link: 10-bit 4:4:4:4 RGBA (alpha optional), 10-bit 4:4:4:4 Y'CbCrA (alpha optional), 12-bit 4:4:4 RGB, 12-bit 4:4:4 Y'CbCr and the redundant 12-bit 4:2:2 Y'CbCr.
  • 2048x1080 4:4:4 X'Y'Z' now also supports 1080p25 and 1080p30.

4K? I don't know. 120fps? I believe that's also a proprietary extension of some sort.

And of course, having a device support 3G-SDI doesn't mean at all it's required to support all of this; in particular, I believe Blackmagic's systems don't support alpha at all except on their single “12G-SDI” card, and I'd also not be surprised if RGB support is rather limited in practice.

11 January, 2017 07:03PM

Sven Hoexter

Failing with F5: using experimental mv feature on a pool causes tmm to segfault

Just a short PSA for those around working with F5 devices:

TMOS 11.6 introduced an experimental "mv" command in tmsh. In the last days we tried it for the first time on TMOS 12.1.1. It worked fine for a VirtualServer, but a mv for a pool caused a segfault in tmm. We're currently working with the F5 support to sort it out; they think it's a known issue. The recommendation for now is to not use mv on pools. Just do it the old way: create a new pool, assign the new pool to the relevant VS and delete the old pool.

A possible bug ID at F5 is ID562808. Since I cannot find it in the TMOS 12.2 release notes I expect that this issue also applies to TMOS 12.2, but I did not verify that.

11 January, 2017 05:36PM

Reproducible builds folks

Reproducible Builds: week 89 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday January 1 and Saturday January 7 2017:

GSoC and Outreachy updates

Toolchain development

  • #849999 was filed: "dpkg-dev should not set SOURCE_DATE_EPOCH to the empty string"

Packages reviewed and fixed, and bugs filed

Chris Lamb:


Reviews of unreproducible packages

13 package reviews have been added, 4 have been updated and 6 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added/updated:

Upstreaming of reproducibility fixes



Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:

  • Chris Lamb (4)

diffoscope development

diffoscope 67 was uploaded to unstable by Chris Lamb. It included contributions from:

[ Chris Lamb ]

* Optimisations:
  - Avoid multiple iterations over archive by unpacking once for an ~8X
    runtime optimisation.
  - Avoid unnecessary splitting and interpolating for a ~20X optimisation
    when writing --text output.
  - Avoid expensive diff regex parsing until we need it, speeding up diff
    parsing by 2X.
  - Alias expensive Config() in diff parsing lookup for a 10% optimisation.

* Progress bar:
  - Show filenames, ELF sections, etc. in progress bar.
  - Emit JSON on the status file descriptor output instead of a custom

* Logging:
  - Use more-Pythonic logging functions and output based on __name__, etc.
  - Use Debian-style "I:", "D:" log level format modifier.
  - Only print milliseconds in output, not microseconds.
  - Print version in debug output so that saved debug outputs can standalone
    as bug reports.

* Profiling:
  - Also report the total number of method calls, not just the total time.
  - Report on the total wall clock taken to execute diffoscope, including

* Tidying:
  - Rename "NonExisting" -> "Missing".
  - Entirely rework diffoscope.comparators module, splitting as many separate
    concerns into a different utility package, tidying imports, etc.
  - Split diffoscope.difference into diffoscope.diff, etc.
  - Update file references in debian/copyright post module reorganisation.
  - Many other cleanups, etc.

* Misc:
  - Clarify comment regarding why we call python3(1) directly. Thanks to Jérémy
    Bobbio <lunar@debian.org>.
  - Raise a clearer error if trying to use --html-dir on a file.
  - Fix --output-empty when files are identical and no outputs specified.

[ Reiner Herrmann ]
* Extend .apk recognition regex to also match zip archives (Closes: #849638)

[ Mattia Rizzolo ]
* Follow the rename of the Debian package "python-jsbeautifier" to

[ siamezzze ]
* Fixed no newline being classified as order-like difference.

reprotest development

reprotest 0.5 was uploaded to unstable by Chris Lamb. It included contributions from:

[ Ximin Luo ]

* Stop advertising variations that we're not actually varying.
  That is: domain_host, shell, user_group.
* Fix auto-presets in the case of a file in the current directory.
* Allow disabling build-path variations. (Closes: #833284)
* Add a faketime variation, with NO_FAKE_STAT=1 to avoid messing with
  various buildsystems. This is on by default; if it causes your builds
  to mess up please do file a bug report.
* Add a --store-dir option to save artifacts.

Other contributions (not yet uploaded):

reproducible-builds.org website development


  • Debian arm64 architecture was fully tested in all three suites in just 15 days. Thanks again to Codethink.co.uk for their support!
  • Log diffoscope profiling info. (lamby)
  • Run pg_dump with -O --column-inserts to make it easier to import our main database dump into a non-PostgreSQL database. (mapreri)
  • Debian armhf network: CPU frequency scaling was enabled for three Firefly boards, enabling the CPUs to run at full speed. (vagrant)
  • Arch Linux and Fedora tests have been disabled (h01ger)
  • Improve mail notifications about daily problems. (h01ger)
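For reference, a pg_dump invocation with the flags mentioned above might look like this (the database and output file names are hypothetical):

```shell
# -O (--no-owner) omits ownership commands; --column-inserts dumps rows as
# INSERT statements with explicit column names instead of COPY blocks,
# which non-PostgreSQL databases can usually import directly.
pg_dump -O --column-inserts reproducible > reproducible.sql
```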


This week's edition was written by Chris Lamb, Holger Levsen and Vagrant Cascadian, reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

11 January, 2017 03:04PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

R / Finance 2017 Call for Papers

Last week, Josh sent the call for papers to the R-SIG-Finance list, making everyone aware that we will have our ninth annual R/Finance conference in Chicago in May. Please see the call for papers (at the link below, or at the website) and consider submitting a paper.

We are once again very excited about our conference, thrilled about upcoming keynotes and hope that many R / Finance users will not only join us in Chicago in May 2017 -- but also submit an exciting proposal.

We also overhauled the website, so please see R/Finance. It should render well and fast on devices of all sizes: phones, tablets, desktops with browsers in different resolutions. The program and registration details still correspond to last year's conference and will be updated in due course.

So read on below, and see you in Chicago in May!

Call for Papers

R/Finance 2017: Applied Finance with R
May 19 and 20, 2017
University of Illinois at Chicago, IL, USA

The ninth annual R/Finance conference for applied finance using R will be held on May 19 and 20, 2017 in Chicago, IL, USA at the University of Illinois at Chicago. The conference will cover topics including portfolio management, time series analysis, advanced risk tools, high-performance computing, market microstructure, and econometrics. All will be discussed within the context of using R as a primary tool for financial risk management, portfolio construction, and trading.

Over the past eight years, R/Finance has included attendees from around the world. It has featured presentations from prominent academics and practitioners, and we anticipate another exciting line-up for 2017.

We invite you to submit complete papers in pdf format for consideration. We will also consider one-page abstracts (in txt or pdf format) although more complete papers are preferred. We welcome submissions for both full talks and abbreviated "lightning talks." Both academic and practitioner proposals related to R are encouraged.

All slides will be made publicly available at conference time. Presenters are strongly encouraged to provide working R code to accompany the slides. Data sets should also be made public for the purposes of reproducibility (though we realize this may be limited due to contracts with data vendors). Preference may be given to presenters who have released R packages.

Financial assistance for travel and accommodation may be available to presenters, however requests must be made at the time of submission. Assistance will be granted at the discretion of the conference committee.

Please submit proposals online at http://go.uic.edu/rfinsubmit.

Submissions will be reviewed and accepted on a rolling basis with a final deadline of February 28, 2017. Submitters will be notified via email by March 31, 2017 of acceptance, presentation length, and financial assistance (if requested).

Additional details will be announced via the conference website as they become available. Information on previous years' presenters and their presentations are also at the conference website. We will make a separate announcement when registration opens.

For the program committee:

Gib Bassett, Peter Carl, Dirk Eddelbuettel, Brian Peterson,
Dale Rosenthal, Jeffrey Ryan, Joshua Ulrich

11 January, 2017 11:44AM

Enrico Zini

Modern and secure instant messaging

Conversations is a really nice, actively developed, up to date XMPP client for Android that has the nice feature of telling you what XEPs are supported by the server one is using:

Initial server features

Some days ago, Valhalla and I played the game of trying to see what happens when one turns them all on: I would send her screenshots from my Conversations, and she would poke at her Prosody to try and turn things on:

After some work

Valhalla eventually managed to get all features activated, purely using packages from Jessie+Backports:

All features activated

The result was a chat system in which I could see the same conversation history on my phone and on my laptop (with gajim, https://gajim.org/), and have it synced even after a device has been offline.

We could send each other rich media like photos, and could do OMEMO encryption (same as Signal) in group chats.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software which can be audited; it is federated, and I can self-host my own server on my own VPS if I want to, with packages supported in Debian.

Valhalla has documented the whole procedure.

If you make a client for a protocol with lots of extensions, do like Conversations and implement a status page with the features you'd like to have on the server, and little green indicators showing which are available: it is quite a good motivator for getting them all supported.

11 January, 2017 11:43AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

nanotime 0.1.0: Now on Windows

Last month, we released nanotime, a package to work with nanosecond timestamps. See the initial release announcement for some background material and a few first examples.

nanotime relies on the RcppCCTZ package for high(er)-resolution time parsing and formatting: R itself stops a little short of a microsecond. And it uses the bit64 package for the actual arithmetic: time at this granularity is commonly represented as (integer) increments at nanosecond resolution relative to an offset, for which the standard epoch of January 1, 1970 is used. int64 types are a perfect match here, and bit64 gives us an integer64. Naysayers will point out some technical limitations with R's S3 classes, but it works pretty much as needed here.

The one thing we did not have was Windows support. RcppCCTZ and the CCTZ library it uses need real C++11 support, and the g++-4.9 compiler used on Windows falls a little short, lacking inter alia a suitable std::get_time() implementation. Enter Dan Dillon, who ported this from LLVM's libc++, which led to Sunday's RcppCCTZ 0.2.0 release.

And now we have all our ducks in a row: everything works on Windows too. The next paragraph summarizes the changes for both this release as well as the initial one last month:

Changes in version 0.1.0 (2017-01-10)

  • Added Windows support thanks to expanded RcppCCTZ (closes #6)

  • Added "mocked up" demo with nanosecond delay networking analysis

  • Added 'fmt' and 'tz' options to output functions, expanded format.nanotime (closing #2 and #3)

  • Added data.frame support

  • Expanded tests

Changes in version 0.0.1 (2016-12-15)

  • Initial CRAN upload.

  • Package is functional and provides examples.

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 January, 2017 12:49AM

January 10, 2017

hackergotchi for Bálint Réczey

Bálint Réczey

Debian Developer Game of the Year

I have just finished level one, fixing all RC bugs in packages under my name, even in team-maintained ones. 🙂

Next level is no unclassified bug reports, which is gonna be harder since I have just adopted shadow with 70+ open bugs. :-\

Luckily I can still go on bonus tracks, which is fixing (RC) bugs in others’ packages, but one should not spend all the time on those tracks before finishing level one!

PS: Last time I tried playing a conventional game I ended up fixing it in a few minutes instead.

10 January, 2017 10:03PM by Réczey Bálint

Vincent Fourmond

Version 2.1 of QSoas is out

I have just released QSoas version 2.1. It brings in a new solve command to solve arbitrary non-linear equations of one unknown. I took advantage of this command in the figure to solve the equation for . It also provides a new way to reparametrize fits using the reparametrize-fit command, a new series of fits to model the behaviour of an adsorbed 1- or 2-electron catalyst on an electrode (these fits are discussed in great detail in our recent review (DOI: 10.1016/j.coelec.2016.11.002)), improvements in various commands, the possibility to now compile using Ruby 2.3 and the most recent version of the GSL library, and sketches for an emacs major mode, which you can activate (for QSoas script files, ending in .cmds) using the following snippet in $HOME/.emacs:

(autoload 'qsoas-mode "$HOME/Prog/QSoas/misc/qsoas-mode.el" nil t)
(add-to-list 'auto-mode-alist '("\\.cmds$" . qsoas-mode))

Of course, you'll have to adapt the path $HOME/Prog/QSoas/misc/qsoas-mode.el to the actual location of qsoas-mode.el.

As before, you can download the source code from our website, and purchase the pre-built binaries following the links from that page too. Enjoy !

10 January, 2017 07:47AM by Vincent Fourmond (noreply@blogger.com)

January 09, 2017

hackergotchi for Sean Whitton

Sean Whitton


There have been two long threads on the debian-devel mailing list about the representation of the changes to upstream source code made by Debian maintainers. Here are a few notes for my own reference.

I spent a lot of time defending the workflow I described in dgit-maint-merge(7) (which was inspired by this blog post). However, I came to be convinced that there is a case for a manually curated series of patches for certain classes of package. It will depend on how upstream uses git (rebasing or merging) and on whether the Debian delta from upstream is significant and/or long-standing. I still think that we should be using dgit-maint-merge(7) for leaf or near-leaf packages, because it saves so much volunteer time that can be better spent on other things.

When upstream does use a merging workflow, one advantage of the dgit-maint-merge(7) workflow is that Debian’s packaging is just another branch of development.

Now consider packages where we do want a manually curated patch series. It is very hard to represent such a series in git. The only natural way to do it is to continually rebase the patch series against an upstream branch, but public branches that get rebased are not a good idea. The solution that many people have adopted is to represent their patch series as a folder full of .diff files, and then use gbp pq to convert this into a rebasing branch. This branch is not shared. It is edited, rebased, and then converted back to the folder of .diff files, the changes to which are then committed to git.
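As a sketch, assuming the gbp defaults (patches stored under debian/patches with a series file, and a local branch named upstream), the round trip described above looks like:

```shell
gbp pq import        # build the temporary rebasing patch-queue branch from debian/patches
git rebase upstream  # edit and rebase the queue; this branch is never shared
gbp pq export        # flatten the queue back into debian/patches and its series file
git add debian/patches           # commit the regenerated patch files
git commit -m 'Refresh patches'
```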

One of the advantages of dgit is that there now exists an official, non-rebasing git history of uploads to the archive. It would be nice if we could represent curated patch series as branches in the dgit repos, rather than as folders full of .diff files. But as I just described, this is very hard. However, Ian Jackson has the beginnings of a workflow that just might fit the bill.

09 January, 2017 08:14PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

The Great Indian Digital Tamasha

Indian Railways

This is an extension to last month’s article, where I shared the changes that had transpired in the previous 2-3 months. Now I am in a position to share the kind of issues a user can go through when looking for support from IRCTC to help him/her go cashless. If you are a new user of IRCTC services you wouldn’t go through this trouble.

For those who might have TL;DR issues it’s about how hard it can become to get digital credentials fixed in IRCTC (Indian Railway Catering and Tourism Corporation) –

a. 2 months back the Indian Prime Minister gave a call incentivizing people to use digital means for any commercial activities. One of the big organizations which took part is IRCTC, which handles the responsibility of e-ticketing millions of rail tickets for common people. In India, a massive percentage of people move by train as it’s cheaper than going by air.

A typical fare from say Pune – Delhi (capital of India) by second class sleeper would be INR 645/- for a distance of roughly 1600 odd kms and these are monopoly rates, there are no private trains and I’m not suggesting anything of that sort, just making sure that people know.

An economy class ticket by Air for the same distance would be anywhere between INR 2500-3500/- for a 2 hour flight between different airlines. Last I checked there are around 8 mainstream airlines including flag-carrier Air India.

About 30% of the population live on less than a dollar and a half a day which would come around INR 100/-.

There was a comment some six months back about getting more people out of poverty. But as there are lots of manipulations in the numbers for who and what counts as above poor and below poor in India, and a lot of it has to do with politics, it’s not something which would be easily fixable.

There are lots to be said in that arena but this article is not an appropriate blog-post for that.

All in all, it’s only 3-5% of the population at the most who can travel by air if the situation demands, and around 1-2% who might be frequent, business or leisure travellers.

Now while I can thankfully afford an air ticket if the situation so demands, my mother gets motion sickness, so together we can only travel by train.

b. With the above background: I had registered with IRCTC a few years ago with another number (dual-SIM) I had purchased, thinking that I would be using it long-term (seems to be my first big mistake, hindsight 50:50). This was somewhere in 2006/2007.

c. A few months later I found that the other service provider wasn’t giving good service or was not up to the mark. I was using IDEA (the main mobile operator) throughout those times.

d. As I didn’t need the service that much, I didn’t think to inform them at that point in time that I wanted to change to another service provider (possibly the biggest mistake, hindsight 50:50).

e. In July 2016 itself, IRCTC cut service fees.

f. This was shared as a NEW news item/policy decision at November-end 2016.

g. While I have done all that has been asked by irctc-care, I still haven’t got the issues resolved 😦 IRCTC’s e-mail id – care@irctc.co.in

Now in detail –

This is my first e-mail sent to IRCTC in June 2016 –

Dear Customer care,

I had applied and got username and password sometime back . The
number I had used to register with IRCTC was xxxxxxxxxx (BSNL mobile number not used anymore) . My mobile was lost and along with that the number was also lost. I had filed a complaint with the police and stopped that number as well. Now I have an another mobile number but have forgotten both the password and the security answer that I had given when I had registered . I do have all the conversations I had both with the ticketadmn@irctc.co.in as well as care@irctc.co.in if needed to prove my identity.

The new number I want to tie it with is xxxxxxxxxx (IDEA number in-use for last 10 years)

I see two options :-

a. Tie the other number with my e-mail address

b. Take out the e-mail address from the database so that I can fill in
as a new applicant.

Looking forward to hear from you.

There was lot of back and forth with various individuals on IRCTC and after a lot of back and forth, this is the final e-mail I got from them somewhere in August 2016, he writes –

Dear Customer,

We request you to send mobile bill of your mobile number if it is post paid or if it is prepaid then contact to your service provider and they will give you valid proof of your mobile number or they will give you in written on company head letter so that we may update your mobile number to update so that you may reset your password through mobile OTP.
and Kindly inform you that you can update your profile by yourself also.

1.login on IRCTC website
2.after login successfully move courser on “my profile” tab.
3.then click on “update profile”
4.re-enter your password then you can update your profile
5.click on user-profile then email id.
6. click on update.

Still you face any problem related to update profile please revert to us with the screen shots of error message which you will get at the time of update profile .

Thanks & Regards

Parivesh Patel
Executive, Customer Care

IRCTC’s response seemed responsible and valid, and I thought it would be a cake-walk as private providers are supposed to be much more efficient than public ones. The experience proved how wrong I was to trust them with doing the right thing –

1. First I tried the twitter handle to see how IDEA uses their twitter handle.

2. The idea customer care twitter handle was mild in its response.

3. After some time I realized that the only way out of this quagmire would perhaps be to go to a brick-and-mortar shop and get it resolved face-to-face. I went twice or thrice but each time something or the other would happen.

On the fourth and final time, I was able to get to the big ‘Official’ shop only to be told they can’t do anything about this and I would have to go to the appellate body to get a reply.

The e-mail address which they shared (as I found out later) was wrong. I sent a somewhat longish e-mail sharing all the details and got bounce-backs. The correct e-mail address for the IDEA Maharashtra appellate body is – appellette.mh@idea.aditybirla.com

I searched online and after a bit of hit and miss finally got the relevant address. Then, finally, on 30th December 2016, I wrote a short email to the service provider as follows –

Dear Sir,
I have been using prepaid mobile connection –

number – xxxxxxx

taken from IDEA for last 10 odd years.

I want to register myself with IRCTC for online railway booking using
my IDEA mobile number.

Earlier, I was having a BSNL connection which I discontinued 4 years back,

For re-registering myself with IRCTC, I have to fulfill their latest
requirements as shown in the email below .

It is requested that I please be issued a letter confirming my
credentials with your esteemed firm.

I contacted your local office at corner of Law College Road and
Bhandarkar Road, Pune (reference number – Q1 – 84786060793) who
refused to provide me any letter and have advised me to contact on the
above e-mail address, hence this request is being forwarded to you.

Please do the needful at your earliest.

Few days later I got this short e-mail from them –

Dear Customer,

Greetings for the day!

This is with reference to your email regarding services.

Please accept our apologies for the inconvenience caused to you and delay in response.

We regret to inform you that we are unable to provide demographic details from our end as provision for same is not available with us.

Should you need any further assistance, please call our Customer Service help line number 9822012345 or email us at customercare@idea.adityabirla.com by mentioning ten digit Idea mobile number in subject line.

Thanks & Regards,

Javed Khan

Customer Service Team

IDEA Cellular Limited- Maharashtra & Goa Circle.

Now I was almost at my wit’s end. A few days before, I had re-affirmed my e-mail address with IDEA. I went to the IDEA care site and registered with my credentials. The https connection to the page is weak, but let’s not dwell on that atm.

I logged into the site, went through all the drop-down menus and came across the My Account > Raise a request link, which I clicked on. This led to a page where I could raise requests for various things. One of the options given there was Bill Delivery. As I was a prepaid rather than a postpaid user, I didn’t know if that would work, but I still clicked on it. It said it would take 4 days. I absently filed it away, somewhat sure that nothing would happen, given my previous experience with IDEA. But this time the IDEA support staff came through and shared a toll-free SMS number and message format that I could use to generate call details for the last 6 months.

The toll-free number from IDEA is 12345 and the message format is EBILL MON (short form for the month, so for January it would be jan, and so on and so forth).

After gathering all the required credentials, I sent my last mail to IRCTC about a week or ten days back –

Dear Mr. Parivesh Patel,

I was out of town and couldn’t do the needful, so sorry for the delay.
Now that I’m back in town, I have been able to put together my prepaid
bills for the last 6 months, which should make it easy to establish my
identity.
As I had shared before, I don’t remember my old password, and the old
mobile number (a BSNL number) is no longer accessible, so I can’t go
through that route.

Please let me know the next steps in correcting the existing IRCTC
account (which I haven’t operated ever) so I can start using it to
book my tickets.

Look forward to hearing from you.

I haven’t heard anything from them since, apart from the auto-generated token number that arrives each time you send a reply. This time it was #4763548.

The whole sequence of events raises a lot of troubling questions –

a. Couldn’t IRCTC have done a better job of articulating what they needed from me, instead of the run-around I was given?

b. Shouldn’t there be a time limit on accounts from which no transactions have been made? I hadn’t done a single transaction since registering. When cell service providers, including BSNL, retire a number after a year of disuse, why does such an account stay active for so long?

c. As that account didn’t require an OTP at registration, I don’t know whether it’s being used for illegal activities or something similar.

Update – This doesn’t seem to be unique at all. Just sampling some of the tweets directed at @IRCTC_LTD – https://twitter.com/praveen4al/status/775614978258718721 https://twitter.com/vis_nov25/status/786062572390932480 https://twitter.com/ShubhamDevadiya/status/794241443950948352 https://twitter.com/rajeshhindustan/status/798028633759584256 https://twitter.com/ameetsangita/status/810081624343908352 https://twitter.com/grkisback/status/813733835213078528 https://twitter.com/gbalaji_/status/804230235625394177 https://twitter.com/chandhu_nr/status/800675627384721409 – all of which goes to show how common the situation really is.

Filed under: Miscellaneous Tagged: #customer-service, #demonetization, #IDEA-aditya birla, #IRCTC, #web-services, rant

09 January, 2017 01:38PM by shirishag75

Petter Reinholdtsen

Where did that package go? — geolocated IP traceroute

Did you ever wonder where web traffic really flows on its way to the web servers, and who owns the network equipment it is flowing through? It is possible to get a glimpse of this using traceroute, but it is hard to find all the details. Many years ago, I wrote a system to map the Norwegian Internet (trying to figure out whether our plans for a network game service would get low enough latency, and who we needed to talk to about setting up game servers close to the users). Back then I used traceroute output from many locations (I asked my friends to run a script and send me their traceroute output) to create the graph and the map. The output from traceroute typically looks like this:

traceroute to www.stortinget.no (, 30 hops max, 60 byte packets
 1  uio-gw10.uio.no (  0.447 ms  0.486 ms  0.621 ms
 2  uio-gw8.uio.no (  0.467 ms  0.578 ms  0.675 ms
 3  oslo-gw1.uninett.no (  0.385 ms  0.373 ms  0.358 ms
 4  te3-1-2.br1.fn3.as2116.net (  1.174 ms  1.172 ms  1.153 ms
 5  he16-1-1.cr1.san110.as2116.net (  2.627 ms he16-1-1.cr2.oslosda310.as2116.net (  3.172 ms he16-1-1.cr1.san110.as2116.net (  2.857 ms
 6  ae1.ar8.oslosda310.as2116.net (  0.662 ms  0.637 ms ae0.ar8.oslosda310.as2116.net (  0.622 ms
 7 (  0.931 ms  0.917 ms  0.955 ms
 8  * * *
 9  * * *

This shows the DNS names and IP addresses of (at least some of) the network equipment involved in getting the data traffic from me to the www.stortinget.no server, and how long it took in milliseconds for a packet to reach the equipment and return to me. Three packets are sent, and sometimes they do not follow the same path. This is shown for hop 5, where three different IP addresses replied to the traceroute request.

There are many ways to measure trace routes. Other good traceroute implementations I use are traceroute (using ICMP packets), mtr (which can do ICMP, UDP and TCP) and scapy (a Python library with ICMP, UDP and TCP traceroute and a lot of other capabilities). All of them are easily available in Debian.

This time around, I wanted to know the geographic location of the different route points, to visualize how visiting a web page spreads information about the visit to a lot of servers around the globe. The background is that a web site today will often ask the browser to fetch from many servers the parts (for example HTML, JSON, fonts, JavaScript, CSS, video) required to display the content. This will leak information about the visit to those controlling these servers and to anyone able to peek at the data traffic passing by (like your ISP, the ISP's backbone provider, FRA, GCHQ, NSA and others).

Let's pick an example, the Norwegian parliament web site www.stortinget.no. It is read daily by all members of parliament and their staff, as well as political journalists, activists and many other citizens of Norway. A visit to the www.stortinget.no web site will ask your browser to contact 8 other servers: ajax.googleapis.com, insights.hotjar.com, script.hotjar.com, static.hotjar.com, stats.g.doubleclick.net, www.google-analytics.com, www.googletagmanager.com and www.netigate.se. I extracted this by asking PhantomJS to visit the Stortinget web page and tell me all the URLs PhantomJS downloaded to render the page (in HAR format, using their netsniff example; I am very grateful to Gorm for showing me how to do this). My goal is to visualize network traces to all IP addresses behind these DNS names, to show where visitors' personal information is spread when visiting the page.
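The URL-extraction step can be sketched like this. A tiny inline file stands in for the real HAR capture from PhantomJS's netsniff example (the file name and the one-request-per-line layout are assumptions for the sketch; real HAR output would normally go through a proper JSON tool):

```shell
# Stand-in for the HAR file that "phantomjs netsniff.js URL" would emit.
cat > sample.har <<'EOF'
{"log":{"entries":[
  {"request":{"url":"https://www.stortinget.no/"}},
  {"request":{"url":"https://www.google-analytics.com/analytics.js"}}
]}}
EOF
# List every URL the page asked the browser to fetch.
sed -n 's/.*"url":"\([^"]*\)".*/\1/p' sample.har
```

With the real capture, the resulting host names are what you would then feed to the traceroute wrapper.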

map of combined traces for URLs used by www.stortinget.no using GeoIP

When I had a look around for options, I could not find any good free software tools to do this, and decided I needed my own traceroute wrapper outputting KML based on locations looked up using GeoIP. KML is easy to work with and easy to generate, and understood by several of the GIS tools I have available. I got good help from my NUUG colleague Anders Einar with this, and the result can be seen in my kmltraceroute git repository. Unfortunately, the quality of the free GeoIP databases I could find (and the for-pay databases my friends had access to) is not up to the task. The IP addresses of central Internet infrastructure would typically be placed near the controlling company's main office, and not where the router is really located, as you can see from the KML file I created using the GeoLite City dataset from MaxMind.

scapy traceroute graph for URLs used by www.stortinget.no

I also had a look at the visual traceroute graph created by the scapy project, showing IP network ownership (aka AS owner) for the IP addresses in question. The graph displays a lot of useful information about the traceroute in SVG format, and gives a good indication of who controls the network equipment involved, but it does not include geolocation. The graph makes it possible to see that the information is made available at least to UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon, Telia, Level 3 Communications and NetDNA.

example geotraceroute view for www.stortinget.no

In the process, I came across the web service GeoTraceroute by Salim Gasmi. Its methodology of combining guesses based on DNS names and various location databases, and finally using latency times to rule out candidate locations, seemed to do a very good job of guessing the correct geolocation. But it could only do one trace at a time, did not have a sensor in Norway, and did not make the geolocations easily available for postprocessing. So I contacted the developer and asked if he would be willing to share the code (he declined until he has had time to clean it up), but he was interested in providing the geolocations in a machine-readable format, and willing to set up a sensor in Norway. So since yesterday, it is possible to run traces from Norway in this service, thanks to a sensor node set up by the NUUG association, and to get the trace in KML format for further processing.

map of combined traces for URLs used by www.stortinget.no using geotraceroute

Here we can see that a lot of traffic passes through Sweden on its way to Denmark, Germany, Holland and Ireland. Plenty of places where the Snowden revelations confirmed that the traffic is read by various actors without your best interests as their top priority.

Combining KML files is trivial using a text editor, so I could loop over all the hosts behind the URLs imported by www.stortinget.no, ask GeoTraceroute for the KML file of each, and create a combined KML file with all the traces (unfortunately, only one of the IP addresses behind each DNS name is traced this time; to get them all, one would have to request traces from GeoTraceroute using IP numbers instead of DNS names). That might be the next step in this project.
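The combining step can also be scripted. A minimal sketch, with two tiny stand-in trace files created first, and assuming each Placemark element sits on a single line (both assumptions of this sketch, not properties of GeoTraceroute's output):

```shell
# Two stand-in per-host KML traces.
printf '<kml><Document><Placemark><name>trace A</name></Placemark></Document></kml>\n' > trace-a.kml
printf '<kml><Document><Placemark><name>trace B</name></Placemark></Document></kml>\n' > trace-b.kml

# Pull the Placemark elements out of each trace and wrap them
# in a single combined Document.
{
  echo '<kml><Document>'
  for f in trace-*.kml; do
    sed -n 's/.*\(<Placemark>.*<\/Placemark>\).*/\1/p' "$f"
  done
  echo '</Document></kml>'
} > combined.kml
```

A real merge would want proper XML handling, but for simple single-line Placemarks this kind of loop does the job.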

Armed with these tools, I find it a lot easier to figure out where the IP traffic moves and who controls the boxes involved in moving it. And every time the link crosses, for example, the Swedish border, we can be sure Swedish Signal Intelligence (FRA) is listening, as GCHQ does in Britain and the NSA does in the USA and on cables around the globe. (Hm, what should we tell them? :) Keep that in mind if you ever send anything unencrypted over the Internet.

PS: KML files are drawn using the KML viewer from Ivan Rublev, as it was less cluttered than the local Linux application Marble. There are heaps of other options too.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

09 January, 2017 11:20AM

hackergotchi for Guido Günther

Guido Günther

Debian Fun in December 2016

Debian LTS

November marked the 20th month I contributed to Debian LTS under the Freexian umbrella. I had 8 hours allocated, which I spent on:

  • some rather quiet frontdesk days
  • updating icedove to 45.5.1 resulting in DLA-752-1 fixing 7 CVEs
  • looking whether Wheezy is affected by xsa-202, xsa-203, xsa-204 and handling the communication with credativ for these (update not yet released)
  • Assessing cURL/libcURL CVE-2016-9586
  • Assessing whether Wheezy's QEMU is affected by security issues in 9pfs "proxy" and "handle" code
  • Releasing DLA-776-1 for samba fixing CVE-2016-2125

Other Debian stuff

Some other Free Software activities

09 January, 2017 08:24AM

hackergotchi for Riku Voipio

Riku Voipio

20 years of being a debian maintainer

fte (0.44-1) unstable; urgency=low

* initial Release.

-- Riku Voipio Wed, 25 Dec 1996 20:41:34 +0200
Welp, I seem to have spent the holidays of 1996 doing my first Debian package. The process of getting a package into Debian was quite straightforward then: "I have packaged fte, here is my pgp, can I has an account to upload stuff to Debian?" I think the bureaucracy took until the second week of January before I could actually upload the created package.

uid Riku Voipio
sig 89A7BF01 1996-12-15 Riku Voipio
sig 4CBA92D1 1997-02-24 Lars Wirzenius
A few months after joining, someone figured out that for pgp signatures to be useful, keys need to be cross-signed. Hence young me taking a long bus trip from countryside Finland to the capital Helsinki to meet the only other DD in Finland in a cafe. It would still take another two years until I met more Debian people, and it could be proven that I'm not just an alter ego of Lars ;) Much later, an alternative process of phone-calling prospective DDs would be added.

09 January, 2017 08:01AM by Riku Voipio (noreply@blogger.com)

January 08, 2017

Bits from Debian

New Debian Developers and Maintainers (November and December 2016)

The following contributors got their Debian Developer accounts in the last two months:

  • Karen M Sandler (karen)
  • Sebastien Badia (sbadia)
  • Christos Trochalakis (ctrochalakis)
  • Adrian Bunk (bunk)
  • Michael Lustfield (mtecknology)
  • James Clarke (jrtc27)
  • Sean Whitton (spwhitton)
  • Jerome Georges Benoit (calculus)
  • Daniel Lange (dlange)
  • Christoph Biedl (cbiedl)
  • Gustavo Panizzo (gefa)
  • Gert Wollny (gewo)
  • Benjamin Barenblat (bbaren)
  • Giovani Augusto Ferreira (giovani)
  • Mechtilde Stehmann (mechtilde)
  • Christopher Stuart Hoskin (mans0954)

The following contributors were added as Debian Maintainers in the last two months:

  • Dmitry Bogatov
  • Dominik George
  • Gordon Ball
  • Sruthi Chandran
  • Michael Shuler
  • Filip Pytloun
  • Mario Anthony Limonciello
  • Julien Puydt
  • Nicholas D Steeves
  • Raoul Snyman


08 January, 2017 11:30PM by Jean-Pierre Giraud

hackergotchi for Steve Kemp

Steve Kemp

Patching scp and other updates.

I use openssh every day, be it the ssh command for connecting to remote hosts, or the scp command for uploading/downloading files.

Once a day, or more, I forget that scp uses the non-obvious -P flag for specifying the port, not the -p flag that ssh uses.

Enough is enough. I shall not file a bug report against the Debian openssh-client package, because no doubt compatibility with both upstream and other distributions is important. But damnit, I've had enough.

apt-get source openssh-client shows the appropriate code:

    fflag = tflag = 0;
    while ((ch = getopt(argc, argv, "dfl:prtvBCc:i:P:q12346S:o:F:")) != -1)
          switch (ch) {
            case 'P':
                    addargs(&remote_remote_args, "-p");
                    addargs(&remote_remote_args, "%s", optarg);
                    addargs(&args, "-p");
                    addargs(&args, "%s", optarg);
                    break;
            case 'p':
                    pflag = 1;
                    break;

Swapping those two flags around, and updating the format string appropriately, was sufficient to do the necessary.
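The swap might look something like this (a sketch against the snippet above, not the actual patch; note that the getopt option string is where the "format string" update happens, moving the colon from P to p so that -p now takes the port argument):

```diff
-    while ((ch = getopt(argc, argv, "dfl:prtvBCc:i:P:q12346S:o:F:")) != -1)
+    while ((ch = getopt(argc, argv, "dfl:p:rtvBCc:i:Pq12346S:o:F:")) != -1)
           switch (ch) {
-            case 'P':
+            case 'p':    /* now specifies the port, like ssh */
                     addargs(&remote_remote_args, "-p");
                     addargs(&remote_remote_args, "%s", optarg);
                     addargs(&args, "-p");
                     addargs(&args, "%s", optarg);
                     break;
-            case 'p':
+            case 'P':    /* now preserves times and modes */
                     pflag = 1;
                     break;
```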

In other news, I've done some hardware development, using both Arduino boards and the WeMos D1-mini. I'm still at the stage where I'm flashing lights and doing similarly trivial things.

I have more complex projects planned for the future, but these are on hold until the appropriate parts are delivered:

  • MP3 playback.
  • Bluetooth-speakers.
  • Washing machine alarm.
  • LCD clock, with time set by NTP, and relay control.

Even with a few LEDs though I've had fun, for example writing a trivial binary display.

08 January, 2017 02:45PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

SpeedHQ decoder

I reverse-engineered a video codec. (And then the CTO of the company making it became really enthusiastic, and offered help. Life is strange sometimes.)

I'd talk about this and some related stuff at FOSDEM, but there's a scheduling conflict, so I will be in Ås that weekend, not Brussels.

08 January, 2017 12:06PM

Jonas Meurer

debian lts report 2016.12

Debian LTS report for December 2016

December 2016 was my fourth month as a Debian LTS team member. I was allocated 12 hours. Unfortunately, I turned out to have way less time for Debian and LTS work than expected, so I spent only 5.25 of those hours on the following tasks:

  • DLA 732-1: backported CSRF protection to monit 1:5.4-2+deb7u1
  • DLA 732-2: fix a regression introduced in last monit security update
  • DLA 732-3: fix another regression introduced in monit security update
  • Nagios3: port 3.4.1-3+deb7u2 and 3.4.1-3+deb7u3 updates to wheezy-backports
  • DLA-760-1: fix two reflected XSS vulnerabilities in spip

08 January, 2017 10:18AM

hackergotchi for Keith Packard

Keith Packard


Finding a Libc for tiny embedded ARM systems

You'd think this problem would have been solved a long time ago. All I wanted was a C library to use in small embedded systems -- those with a few kB of flash and even fewer kB of RAM.

Small system requirements

A small embedded system has a different balance of needs:

  • Stack space is limited. Each thread needs a separate stack, and it's pretty hard to move them around. I'd like to be able to reliably run with less than 512 bytes of stack.

  • Dynamic memory allocation should be optional. I don't like using malloc on a small device because failure is likely and usually hard to recover from. Just make the linker tell me if the program is going to fit or not.

  • Stdio doesn't have to be awesomely fast. Most of our devices communicate over full-speed USB, which maxes out at about 1MB/sec. A stdio setup designed to write to the page cache at memory speeds is over-designed, and likely involves lots of buffering and fancy code.

  • Everything else should be fast. A small CPU may run at only 20-100MHz, so it's reasonable to ask for optimized code. They also have very fast RAM, so cycle counts through the library matter.

Available small C libraries

I've looked at:

  • μClibc. This targets embedded Linux systems, and also appears dead at this time.

  • musl libc. A more lively project; still, definitely targets systems with a real Linux kernel.

  • dietlibc. Hasn't seen any activity for the last three years, and it isn't really targeting tiny machines.

  • newlib. This seems like the 'normal' embedded C library, but it expects a fairly complete "kernel" API and the stdio bits use malloc.

  • avr-libc. This has lots of Atmel assembly language, but is otherwise ideal for tiny systems.

  • pdclib. This one focuses on small source size and portability.

Current AltOS C library

We've been using pdclib for a couple of years. It was easy to get running, but it really doesn't match what we need. In particular, it uses a lot of stack space in the stdio implementation as there's an additional layer of abstraction that isn't necessary. In addition, pdclib doesn't include a math library, so I've had to 'borrow' code from other places where necessary. I've wanted to switch for a while, but there didn't seem to be a great alternative.

What's wrong with newlib?

The "obvious" embedded C library is newlib. Designed for embedded systems with a nice way to avoid needing a 'real' kernel underneath, newlib has a lot going for it. Most of the functions have a good balance between speed and size, and many of them even offer two implementations depending on what trade-off you need. Plus, the build system 'just works' on multi-lib targets like the family of cortex-m parts.

The big problem with newlib is the stdio code. It absolutely requires dynamic memory allocation and the amount of code necessary for 'printf' is larger than the flash space on many of our devices. I was able to get a cortex-m3 application compiled in 41kB of code, and that used a smattering of string/memory functions and printf.

How about avr libc?

The Atmel world has it pretty good -- avr-libc is small and highly optimized for Atmel's 8-bit AVR processors. I've used this library with success in a number of projects, although nothing we've ever sold through Altus Metrum.

In particular, the stdio implementation is quite nice -- a 'FILE' is effectively a struct containing pointers to putc/getc functions. The library does no buffering at all. And it's tiny -- the printf code lacks a lot of the fancy new stuff, which saves a pile of space.

However, many of the places where performance is critical are written in assembly language, making it pretty darn hard to port to another processor.

Mixing code together for fun and profit!

Today, I decided to try an experiment to see what would happen if I used the avr-libc stdio bits within the newlib environment. There were only three functions written in assembly language; two of them were just stubs, while the third was a simple ultoa function with a weird interface. With those coded up in C, I managed to get them wedged into newlib.

Figuring out the newlib build system was the only real challenge; it's pretty awful having generated files in the repository and a mix of autoconf 2.64 and 2.68 version dependencies.

The result is pretty usable, though; my STM32L Discovery board demo application needs only 14kB of flash, while with the original newlib stdio bits it needed 42kB, and that was still missing all of the 'syscalls', like read, write and sbrk.

Here's gitweb pointing at the top of the tiny-stdio tree:


And, of course you can check out the whole thing

git clone git://keithp.com/git/newlib

'master' remains a plain upstream tree, although I do have a fix on that branch. The new code is all on the tiny-stdio branch.

I'll post a note on the newlib mailing list once I've managed to subscribe and see if there is interest in making this option available in the upstream newlib releases. If so, I'll see what might make sense for the Debian libnewlib-arm-none-eabi packages.

08 January, 2017 07:32AM

January 07, 2017

hackergotchi for Lars Wirzenius

Lars Wirzenius

Hacker Noir, chapter 1: Negotiation

I participated in Nanowrimo in November, but I failed to actually finish the required 50,000 words during the month. Oh well. I plan on finishing the book eventually, anyway.

Furthermore, as an open source exhibitionist I thought I'd publish a chapter each month. This will put a bit of pressure on me to keep writing, and hopefully I'll get some nice feedback too.

The working title is "Hacker Noir". I've put the first chapter up on http://noir.liw.fi/.

07 January, 2017 08:53PM

hackergotchi for Simon Richter

Simon Richter

Crossgrading Debian in 2017

So, once again I had a box that had been installed with the kind-of-wrong Debian architecture, in this case, powerpc (32 bit, bigendian), while I wanted ppc64 (64 bit, bigendian). So, crossgrade time.

If you want to follow this, be aware that I use sysvinit. I doubt this can be done this way with systemd installed, because systemd has a lot more dependencies for PID 1, and there is also a dbus daemon involved that cannot be upgraded without a reboot.

To make this a bit more complicated, ppc64 is an unofficial port, so it is even less synchronized across architectures than sid normally is (I would have used jessie, but there is no jessie for ppc64).

Step 1: Be Prepared

To work around the archive synchronisation issues, I installed pbuilder and created 32 and 64 bit base.tgz archives:

pbuilder --create --basetgz /var/cache/pbuilder/powerpc.tgz
pbuilder --create --basetgz /var/cache/pbuilder/ppc64.tgz \
    --architecture ppc64 \
    --mirror http://ftp.ports.debian.org/debian-ports \
    --debootstrapopts --keyring=/usr/share/keyrings/debian-ports-archive-keyring.gpg \
    --debootstrapopts --include=debian-ports-archive-keyring

Step 2: Gradually Heat the Water so the Frog Doesn't Notice

Then, I added the sources to sources.list, and added the architecture to dpkg:

deb [arch=powerpc] http://ftp.debian.org/debian sid main
deb [arch=ppc64] http://ftp.ports.debian.org/debian-ports sid main
deb-src http://ftp.debian.org/debian sid main

dpkg --add-architecture ppc64
apt update

Step 3: Time to Go Wild

apt install dpkg:ppc64

Obviously, that didn't work; in my case, because libattr1 and libacl1 weren't in sync, there was no valid way to install powerpc and ppc64 versions in parallel. So I used pbuilder to compile the current version from sid for the architecture that wasn't up to date (IIRC, one for powerpc, and one for ppc64).

Manually installed the libraries, then tried again:

apt install dpkg:ppc64

Woo, it actually wants to do that. Now, that only half works, because apt calls dpkg twice, once to remove the old version, and once to install the new one. Your options at this point are

apt-get download dpkg:ppc64
dpkg -i dpkg_*_ppc64.deb

or if you didn't think far enough ahead, cursing followed by

cd /tmp
ar x /var/cache/apt/archives/dpkg_*_ppc64.deb
cd /
tar -xJf /tmp/data.tar.xz
dpkg -i /var/cache/apt/archives/dpkg_*_ppc64.deb

Step 4: Automate That

Now, I'd like to get this a bit more convenient, so I had to repeat the same dance with apt and aptitude and their dependencies. Thanks to pbuilder, this wasn't too bad.

With the aptitude resolver, it was then simple to upgrade a test package

aptitude install coreutils:ppc64 coreutils:powerpc-

The resolver did its thing, and asked whether I really wanted to remove an Essential package. I did, and it replaced the package just fine.

So I asked dpkg for a list of all powerpc packages installed (since it's a ppc64 dpkg, it reports powerpc as foreign), massaged that into shape with grep and sed, and gave the result to aptitude as a command line.
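That massaging step can be sketched as a pipeline. A printf of sample output stands in here for the live `dpkg-query -W -f '${Package}:${Architecture}\n'` listing so the transformation itself is visible, and an echo keeps the sketch from actually invoking aptitude (drop both to run it for real; the exact massaging used may have differed):

```shell
# Sample lines standing in for dpkg-query's package:architecture listing.
printf '%s\n' 'coreutils:powerpc' 'libc6:powerpc' 'dpkg:ppc64' |
  grep ':powerpc$' |                        # keep only the foreign-arch packages
  sed 's/:powerpc$//; s/.*/&:ppc64 &:powerpc-/' |  # "pkg" -> "pkg:ppc64 pkg:powerpc-"
  xargs echo aptitude install               # assemble the aptitude command line
```

Each `pkg:ppc64 pkg:powerpc-` pair uses the same install/remove syntax as the coreutils test above.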

Some time later, aptitude finished, and I had a shiny 64 bit system. The whole crossgrade happened through an ssh session that remained open the entire time, and without a reboot. After closing the ssh session, the last 32 bit binary was deleted, as it was no longer in use.

There were a few minor hiccups during the process where dpkg refused to overwrite "shared" files with different versions, but these could be solved easily by manually installing the offending package with

dpkg --force-overwrite -i ...

and then resuming what aptitude was doing, using

aptitude install

So, in summary, this still works fairly well.

07 January, 2017 08:45PM