February 10, 2016

Guido Günther

Debian Fun in January 2016

Debian LTS

January was the ninth month I contributed to Debian LTS under the Freexian umbrella. In total I spent 13 hours working on:

  • LTS Frontdesk duties like the triaging of 34 CVEs. That was about twice as many CVEs as during December's frontdesk work.

  • I looked into what needs to be done DLA-wise when we move from Squeeze to Wheezy. For that I added a script to find discrepancies between Squeeze LTS and Wheezy.

  • I uploaded giflib to squeeze-lts (DLA-389-1), wheezy (#812363) and jessie (#812362) proposed updates.

  • I forward ported the fix for CVE-2015-5291 for polarssl to Wheezy, adding autopkgtests along the way (#812420). (The forward port to Jessie happened in February.)

  • I forward ported a patch for freetype fixing CVE-2014-9674 to wheezy (DSA-3461-1) - Jessie is not affected.

  • Finally I added some basic autopkgtests to icu (#813338).

There was no progress on using the same nss in all suites. This will continue in February, as will the Squeeze LTS to Wheezy forward porting.

Other Debian stuff

10 February, 2016 07:10AM

February 09, 2016

Mark Brown

Maintaining your email

One of the difficulties of being a kernel maintainer for a busy subsystem is that you will often end up getting a lot of mail that requires reading and handling, which in turn requires sending a lot of mail out in reply. Some of that requires thought and careful consideration, but a lot of it is quite routine and (perhaps surprisingly) there is often more challenge in doing a good job of handling these routine messages.

For a long time I used to hand-write every reply I sent, but the problem with doing that is that sending the same message a lot of times tends to result in the messages getting more and more brief as they become routine and practised. Your words become more optimised, and if you’ve stopped thinking about the message before you’ve finished typing it then there’s a desire to finish the typing and get on to the next thing. This, I think, is where a lot of the reputation kernel maintainers have for being terse and unhelpful comes from: messages that are well practised for someone sending them all the time aren’t always going to be obvious or helpful for someone who’s not so intimately familiar with what’s going on. The good part is that everyone gets a personalised response, and it’s easy to insert a comment about the specific situation when you’re already replying, but it’s not clear that the tradeoff is a good one.

What I’ve started doing instead for most things is keeping a set of pre-written paragraphs for common cases that I can just insert into a mail and edit as needed. Hopefully it’s working well for people: it means the replies are that bit more verbose than they might otherwise be (mainly adding an explanation of why a given thing is being asked for) but can easily be adapted as needed. The one exception is the “Applied, thanks” mails I used to send when I apply a patch (literally just saying that). Those are now automatically generated by the script I use to sync my local git repository with kernel.org, and are very much more verbose:

From: Mark Brown <broonie@kernel.org>
To: ${CCS}
Cc: ${LIST}
Subject: ${SUBJECT}
In-Reply-To: ${MSGID}

The patch

   ${TITLE}

has been applied to the ${REPO} tree at

   ${URL} ${BRANCH}

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.

(unfortunately this seems to be something that’s worth pointing out)

If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark

(the script does try to CC relevant lists). As well as giving people more information, this also means that the mails only get sent out when things actually get published to my public repositories, which avoids some confusion that used to happen with people getting my replies before I’d pushed, especially when I’d been working with poor connectivity, as often happens when travelling. On the down side it’s very much an obvious form letter, which some people don’t like and which can make people glaze over.
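The script itself isn’t shown here, but the mechanics are simple: once the variables are exported, something like envsubst can instantiate the template and hand it to sendmail. A minimal sketch (all values below are invented for illustration; this is not the actual kernel.org script):

# fill in the template and send it; -t makes sendmail pick up the
# recipients from the To: and Cc: headers of the template itself
export CCS="patch-author@example.com" \
       LIST="linux-kernel@vger.kernel.org" \
       SUBJECT="Re: [PATCH] spi: example: fix a thing" \
       MSGID="<20160209.example@example.com>" \
       TITLE="spi: example: fix a thing" \
       REPO="spi" \
       URL="https://git.kernel.org/..." \
       BRANCH="for-next"
envsubst < applied-template.txt | /usr/sbin/sendmail -t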

My hope with this is to make things easier on average for patch submitters and easier for me. Feedback on the scripted e-mails appears to be good thus far, and the goal with the pasted-in content is that it should be less obvious that it’s happening, so I’d expect less feedback there.

09 February, 2016 07:47PM by broonie

Jose M. Calhariz

A Selection of Talks from FOSDEM 2016

It's that time of the year when I go to FOSDEM (Free and Open Source Software Developers' European Meeting). The keynotes and the main tracks are very good, with good presentations and content.

It's very difficult to choose which talks to see, which talks to watch later on video and which talks to let go. What I leave here is my selection of talks. This selection is representative of my tastes, not of the quality of the presentations. I will give links for material that is available now, and I will do periodic updates when new material (video or slides) becomes available.

09 February, 2016 07:23PM by Jose M. Calhariz

Mike Gabriel

Systemd based network setup on Debian Edu jessie workstations

This article describes how to use systemd-networkd on Debian Edu 8.x (aka jessie) notebooks.

What do we have to deal with?

At the schools we support we have several notebooks running Debian Edu 8.x (aka jessie) in the field.

For school notebooks (classroom sets) we install the Debian Edu Workstation Profile. Those machines are mostly used over wireless network.

We know that Debian Edu also offers a Roaming Workstation Profile at installation time, but with that profile chosen, user logins create local user accounts and local home directories on the notebooks (package: libpam-mklocaluser). For our customers, we do not want that. People using the school notebooks shall always work on their NFS home directories. School notebooks shall not be usable outside of the school network.

Our woes...

The default setup on Debian Edu jessie workstations regarding networking is this:

  • systemd runs as PID 1
  • ifupdown manages static network interfaces (eth0, etc.)
  • NetworkManager manages wireless network interfaces
  • for our customers we configured NetworkManager with a system-wide WiFi (WPA2-PSK) profile

We have observed various problems with that setup:

  • By default, network interface eth0 is managed by ifupdown (via /etc/network/interfaces):
    auto eth0
    iface eth0 inet dhcp
    

    Woe no. 1: In combination with systemd, this results in a 120-second delay at system startup. (A systemd-networkd equivalent is sketched below.)
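For reference, the systemd-networkd replacement for that ifupdown stanza is a small .network unit; a minimal sketch (the file name is arbitrary):

# /etc/systemd/network/eth0.network
[Match]
Name=eth0

[Network]
DHCP=yes

# then remove the ifupdown stanza and let networkd take over:
#   systemctl enable systemd-networkd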

read more

09 February, 2016 06:35PM by sunweaver

Sven Hoexter

examine gpg key properties

Note to myself so I don't have to search for it the next time I have to answer security audit questions.

If you're lucky and you're running Debian you can install pgpdump and use

gpg --export-options export-minimal --export $KEYID | pgpdump

to retrieve human-friendly output. If you're unlucky you have to use

gpg --export-options export-minimal --export $KEYID | gpg --list-packets

and match the CIPHER_ALGO_* and DIGEST_ALGO_* numbers with those in include/cipher.h.
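The numbers are the standard OpenPGP algorithm IDs from RFC 4880 (which is what include/cipher.h mirrors); the most common ones are:

# CIPHER_ALGO: 1 = IDEA, 2 = 3DES, 3 = CAST5, 4 = Blowfish,
#              7 = AES-128, 8 = AES-192, 9 = AES-256, 10 = Twofish
# DIGEST_ALGO: 1 = MD5, 2 = SHA-1, 8 = SHA-256, 9 = SHA-384, 10 = SHA-512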

Found the information in this thread.

Update: anarcat suggested to take a look at the tools contained in hopenpgp-tools.

09 February, 2016 04:03PM

Joachim Breitner

GHC performance is rather stable

Johannes Bechberger, while working on his Bachelor’s thesis supervised by my colleague Andreas Zwinkau, has developed a performance benchmark runner and results visualizer called “temci”, and used GHC as a guinea pig. You can read his elaborate analysis on his blog.

This is particularly interesting given recent discussions about GHC itself becoming slower and slower, as for example observed by Johannes Waldmann and Anthony Cowley.

Johannes Bechberger’s take-away is that, at least for the programs at hand (which were taken from The Computer Language Benchmarks Game), there are hardly any changes worth mentioning, as most of the observed effects are less than a standard deviation and hence insignificant. He tries hard to distill some useful conclusions from the data; the ones he finds are:

  • Compile time does not vary significantly.
  • The compiler flag -O2 indeed results in faster code than -O.
  • With -O (but not -O2), GHC 8.0.1 is better than GHC 7.0.1. Maybe some optimizations were promoted to -O?

If you are interested, please head over to Johannes’s post and look at the gory details of the analysis and give him feedback on that. Also, maybe his tool temci is something you want to try out?

Personally, I find it dissatisfying to learn so little from so much work, but as he writes: “It’s so easy to lie with statistics.”, and I might add “lie to yourself”, e.g. by ignoring good advice about standard deviations and significance. I’m sure my tool gipeda (which powers perf.haskell.org) is guilty of that sin.

Maybe a different selection of test programs would yield more insight; the benchmark game’s programs are too small and hand-optimized, the nofib programs are plain old and the fibon collection has bitrotted. I would love to see a curated collection of real-world programs, bundled with all dependencies and frozen to allow meaningful comparisons, but updated to a new, clearly marked revision on a maybe bi-yearly basis – maybe Haskell-SPEC-2016 if that were not a trademark infringement.

09 February, 2016 03:17PM by Joachim Breitner (mail@joachim-breitner.de)

Alessio Treglia

The poetic code

Just as the simple reading of a musical score is enough for an experienced musician to recognize the most velvety harmonic variations of an orchestral piece, so the apparent coldness of a fragment of program code can stimulate emotions of ecstatic contemplation in the developer.

Don’t be misled by the simplicity of the laconic definition of code as a sequence of instructions given to a computer to solve a specific problem. Generally, a problem has multiple solutions: the simplest and fastest to implement, the most economical in terms of machine cycles or memory, the elegant solution, and the makeshift one.

However, there is always a “poetic” solution, the one that has a particular and unusual beauty and that is always generated by the inexhaustible forge of the human intuition….

[Read More…]

09 February, 2016 11:31AM by Fabio Marzocca

Ingo Juergensmann

Letsencrypt - when your blog entries don't show up on Planet Debian

Recently there has been much talk on Planet Debian about LetsEncrypt certs. This is great, because using HTTPS everywhere improves security and gives the NSA some more work to decrypt the traffic.

However, when you enable your blog with a LetsEncrypt cert, you might run into the same problem as I did: your new articles won't show up on Planet Debian after changing your feed URI to HTTPS. The reason seems to be quite simple: planet-venus, the software behind Planet Debian, seems to have problems with SNI-enabled websites.

When following the steps outlined in the Debian Wiki, you can check this by yourself: 

INFO:planet.runner:Fetching https://blog.windfluechter.net/taxonomy/term/2/feed via 5
ERROR:planet.runner:HttpLib2Error: Server presented certificate that does not match host blog.windfluechter.net: {'subjectAltName': (('DNS', 'abi94oesede.de'), ('DNS', 'www.abi94oesede.de')), 'notBefore': u'Jan 26 18:05:00 2016 GMT', 'caIssuers': (u'http://cert.int-x1.letsencrypt.org/',), 'OCSP': (u'http://ocsp.int-x1.letsencrypt.org/',), 'serialNumber': u'01839A051BF9D2873C0A3BAA9FD0227C54D1', 'notAfter': 'Apr 25 18:05:00 2016 GMT', 'version': 3L, 'subject': ((('commonName', u'abi94oesede.de'),),), 'issuer': ((('countryName', u'US'),), (('organizationName', u"Let's Encrypt"),), (('commonName', u"Let's Encrypt Authority X1"),))} via 5

I've filed bug #813313 for this. So, this might explain why your blog post doesn't appear on Planet Debian. Currently, 18 sites seem to be affected by this cert mismatch.
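The mismatch is easy to reproduce from the command line; a sketch that prints the certificate subject the server presents with and without SNI:

# with SNI, as modern browsers send it: the correct certificate
echo | openssl s_client -connect blog.windfluechter.net:443 -servername blog.windfluechter.net 2>/dev/null | openssl x509 -noout -subject
# without SNI, as planet-venus' HTTP library apparently connects: the default certificate (abi94oesede.de)
echo | openssl s_client -connect blog.windfluechter.net:443 2>/dev/null | openssl x509 -noout -subject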


09 February, 2016 11:27AM by ij

Michal Čihař

Weekly phpMyAdmin contributions 2016-W05

Last week was really focused on code cleanups. The biggest change was the removal of the PmaAbsoluteUri configuration directive, which has caused quite some pain in the past and is not really needed these days (when browsers support relative paths in the Location HTTP header).

This led to cleanups in other parts as well - support for the dead Mozilla Prism is gone, we now use HTTPS for OpenStreetMap tiles (the map layer now works on HTTPS as well), and the ForceSSL configuration directive was removed, as that is something which really needs to be handled at the web server level. To improve test coverage, several tests no longer require runkit, as the header() call is wrapped within the Response class and can be overridden for testing without using runkit.

The list of handled issues is not that impressive this week:

Filed under: English, phpMyAdmin

09 February, 2016 11:00AM by Michal Čihař (michal@cihar.com)

Mike Gabriel

Résumé of our Edu Workshop in Kiel (26th - 29th January)

In the last week of January, the project IT-Zukunft Schule (Logo EDV-Systeme GmbH and DAS-NETZWERKTEAM) had visitors from Norway: Klaus Ade Johnstad and Linnea Skogtvedt from LinuxAvdelingen [1] came by to exchange insights, knowledge, technology and stories regarding IT services at schools in Norway and Northern Germany.

This was our schedule...

Tuesday

  • 3pm – Arrival of Klaus Ade and Linnea, meet up at LOGO with coffee and cake
  • 4pm – Planning the workshop, coming up with an agenda for the next two days (Klaus Ade, Andreas, Mike)
  • 5pm – Preparing OPSI demo sites (Mike, Linnea)
  • 8pm – Grünkohl and Rotkohl and ... at Traum GmbH, Kiel (Klaus Ade, Linnea, Andreas, Mike)

Wednesday

  • 8.30am – more work on the OPSI demo site (Mike, Linnea)
  • 10am – pfSense (esp. captive portal functionality), backup solutions (Klaus Ade, all)
  • 11am – ITZkS overlay packages, basic principles of Debian packaging (Mike, special guests: Torsten, Lucian, Benni)
  • 12pm-2pm – lunch break
  • 2pm – OPSI demonstration, discussion, foundation of the OpsiPackages project [2] (Mike)
  • 4pm – Puppet (Linnea)
  • 7pm – dinner time (eating in, Thai fast food :-) )
  • 8pm – Sepiida hacking (Mike, Linnea), customer care (Andreas, Klaus Ade)
  • 10:30pm – zZzZzZ time...

Thursday

read more

09 February, 2016 09:50AM by sunweaver

Orestis Ioannou

Using debsources API to determine the license of foo.bar

Following up on Matthieu's hack - A one-liner to catch'em all! - and the recent features of Debsources, I got the idea to modify the one-liner a bit in order to retrieve the license of foo.bar.

The script will calculate the SHA256 hash of the file and then query the Debsources API in order to retrieve the license of that particular file.

Save the following in a file called license-of and add it to your $PATH:

#!/bin/bash

# Find the binary package that ships the given file, map it to its source
# package, then query the Debsources copyright API with the file's SHA256.
function license-of {
    readlink -f "$1" | xargs dpkg-query --search | awk -F ": " '{print $1}' | xargs apt-cache showsrc | grep-dctrl -s 'Package' -n '' | awk -v sha="$(sha256sum "$1" | awk '{ print $1 }')" -F " " '{print "https://sources.debian.net/copyright/api/sha256/?checksum="sha"&packagename="$1""}' | xargs curl -sS
}

CMD="$1"
license-of "${CMD}"

Then you can try something like:

    license-of /usr/lib/python2.7/dist-packages/pip/exceptions.py

Notes:

  • if the checksum is not found in the DB (compiled file, modified file, not part of any package) this will fail
  • if the debian/copyright file of the specific package is not machine readable then you are out of luck!
  • if there is more than one version of the package you will get all the available information. If you want to get just testing then add "&suite=testing" after the &packagename="$1" in the debsources link (see the example below).
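For example, pinning the query to testing for the pip file used above looks like this when calling the API directly (a sketch; python-pip is the source package the pipeline above would find):

curl -sS "https://sources.debian.net/copyright/api/sha256/?checksum=$(sha256sum /usr/lib/python2.7/dist-packages/pip/exceptions.py | awk '{print $1}')&packagename=python-pip&suite=testing"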

09 February, 2016 08:45AM by Orestis

February 08, 2016

Ingo Juergensmann

rpcbind listening on all interfaces

Currently I'm testing GlusterFS as a replicating network filesystem. GlusterFS depends on the rpcbind package. No problem with that, but I usually want the services that run on my machines to listen only on those addresses/interfaces that are needed to fulfill the task. This is especially important because rpcbind can be abused by remote attackers for RPC amplification attacks (DDoS). So, the rpcbind man page states:

-h : Specify specific IP addresses to bind to for UDP requests. This option may be specified multiple times and is typically necessary when running on a multi-homed host. If no -h option is specified, rpcbind will bind to INADDR_ANY, which could lead to problems on a multi-homed host due to rpcbind returning a UDP packet from a different IP address than it was sent to. Note that when specifying IP addresses with -h, rpcbind will automatically add 127.0.0.1 and if IPv6 is enabled, ::1 to the list.

Ok, although there is neither an /etc/default/rpcbind.conf nor an /etc/rpcbind.conf nor a sample-rpcbind.conf under /usr/share/doc/rpcbind, a quick web search revealed a sample config file. I'm using this one:

# /etc/init.d/rpcbind
OPTIONS=""

# Cause rpcbind to do a "warm start" utilizing a state file (default)
# OPTIONS="-w "

# Uncomment the following line to restrict rpcbind to localhost only for UDP requests
OPTIONS="${OPTIONS} -h 192.168.1.254"
#127.0.0.1 -h ::1"

# Uncomment the following line to enable libwrap TCP-Wrapper connection logging
OPTIONS="${OPTIONS} -l "

As you can see, I want to bind to 192.168.1.254. After an /etc/init.d/rpcbind restart, verifying with netstat that everything works as desired shows...

tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 2084266 30777/rpcbind
tcp6 0 0 :::111 :::* LISTEN 0 2084272 30777/rpcbind
udp 0 0 0.0.0.0:848 0.0.0.0:* 0 2084265 30777/rpcbind
udp 0 0 192.168.1.254:111 0.0.0.0:* 0 2084264 30777/rpcbind
udp 0 0 127.0.0.1:111 0.0.0.0:* 0 2084260 30777/rpcbind
udp6 0 0 :::848 :::* 0 2084271 30777/rpcbind
udp6 0 0 ::1:111 :::* 0 2084267 30777/rpcbind

Whoooops! Although I've specified that rpcbind should only listen on 192.168.1.254 (and localhost, as described by the man page), rpcbind is still listening on all addresses. A quick check whether the process is using the correct options:

root     30777  0.0  0.0  37228  2360 ?        Ss   16:11   0:00 /sbin/rpcbind -h 192.168.1.254 -l

Hmmm, yes, -h 192.168.1.254 is specified. Ok, something is going wrong here...

According to an entry in Ubuntu's Launchpad I'm not the only one who has experienced this problem. That Launchpad entry mentions that upstream seems to have a fix in version 0.2.3, but I experienced the same behaviour in stable as well as in unstable, where the package version is 0.2.3-0.2. Apparently the problem still exists in Debian unstable.

I'm somewhat undecided whether to file a normal bug against rpcbind or whether I should label it as a security bug, because it opens a service to the public that can be abused for amplification attacks, even though you might have configured rpcbind to listen just on internal addresses.
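In the meantime, a stopgap is to firewall the portmapper port on the interfaces it should not be reachable on; a sketch, assuming the internal network is 192.168.1.0/24:

# drop portmapper traffic that does not come from the internal network
iptables -A INPUT -p tcp --dport 111 ! -s 192.168.1.0/24 -j DROP
iptables -A INPUT -p udp --dport 111 ! -s 192.168.1.0/24 -j DROP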


08 February, 2016 10:20PM by ij

Niels Thykier

Performance tuning of lintian, take 3

About 7 months ago, I wrote about how we had improved Lintian’s performance. In 2.5.41, we are doing another memory reduction, where we primarily reduce the memory consumption of data about ELF binaries.  Like previously, the memory reductions follow the “less is more” pattern.

My initial test subject was linux-image-4.4.0-trunk-rt-686-pae_4.4-1~exp1_i386.deb. It had a somewhat unique property that the ELF data made up a little over half the cache.

  • We do away with a lot of unnecessary default values [f4c57bb, 470875f]
    • That removed about 3MB (out of 10.56MB) of that ELF data cache
  • Discard section information we do not use [3fd98d9]
    • This reduced the ELF data cache to 2MB (down from the 7MB).
  • Then we stop caching the output of file(1) twice [7c2bee4]
    • While a fairly modest reduction (only 0.80MB out of 16MB total), it also affects packages without ELF binaries.

At this point, we had reduced the total memory usage from 18.35MB to 8.92MB (the ELF data going from 10.56MB to 1.98MB)[1].  By then I figured that I was happy with the improvement and discarded my test subject.

While impressive, the test subject was unsurprisingly a special case.  The improvement in “regular” packages[2] (with ELF binaries) was closer to 8% in total.  Not being satisfied with that, I pulled one more trick.

  • Keep only “UND” and “.text” symbols [2b21621]
    • This brought coreutils (just the lone deb) another 10% memory reduction in total.

In the grand total, coreutils 8.24-1 amd64 went from 4.09MB to 3.48MB.  The ELF data cache went from 3.38MB to 2.84MB.  Similarly, libreoffice/4.2.5-1 (including its ~170 binaries) has also seen an 8.5% reduction in total cache size[3] and is now down to 260.48MB (from 284.83MB).

 

[1] If you are wondering why I wrote in 3fd98d9 that “The total “cache” memory usage is approaching 1/3 of the original for that package”, then you are not alone.  I am not sure myself any more, but it seems obviously wrong.

[2] FTR: The sample size of “regular packages” is 2 in this case, one of them being coreutils…

[3] Admittedly, since “take 2” and not since 2.5.40.2 like the rest.


Filed under: Debian, Lintian

08 February, 2016 10:19PM by Niels Thykier

Joachim Breitner

Protecting static content with mod_rewrite

For fourteen years I have been photographing digitally and putting the pictures on my webpage. Back then, online privacy was not a big deal, but things have changed, and I had to at least mildly protect the innocent. In particular, I wanted to prevent search engines from accessing some of my pictures.

As I did not want my friends and family to have to create an account and remember a password, I set up an OpenID-based scheme five years ago. This way, they could use any of their OpenID-enabled accounts, e.g. their Google Mail account, to log in, without disclosing any data to me. As my photo album consists of just static files, I created two copies on the server: the real one with everything, and a bunch of symbolic links representing the publicly visible parts. I then used mod_auth_openid to prevent access to the protected files unless the user logged in. I never got around to actually limiting who could log in, so strangers were still able to see all photos, but at least search engine spiders were locked out.

But, very unfortunately, OpenID never really caught on, Google even stopped being a provider, and other promising decentralized authentication schemes like Mozilla Persona are also being phased out. So I needed an alternative.

A very simple scheme would be a single password that my friends and family can get from me and that unlocks the pictures. I could have done that using HTTP Auth, but that is not very user-friendly, and the login does not persist (at least not without the help of the browser). Instead, I wanted something that involves a simple HTTP form. But I also wanted to avoid server-side programming, for performance and security reasons. I love serving static files whenever it is feasible.

Then I found that mod_rewrite, Apache’s all-around-tool for URL rewriting and request mangling, supports reading and writing cookies! So I came up with a scheme that implements the whole login logic in the Apache server configuration. I’d like to describe this setup here, in case someone finds it inspiring.

I created a login.html with a simple HTML form:

<form method="GET" action="/bilder/login.html">
 <div style="text-align:center">
  <input name="password" placeholder="Password" />
  <button type="submit">Sign-In</button>
 </div>
</form>

It sends the user to the same page again, putting the password into the query string, hence the method="GET" – mod_rewrite unfortunately cannot read the parameters of a POST request.

The Apache configuration is as follows:

RewriteMap public "dbm:/var/www/joachim-breitner.de/bilder/publicfiles.dbm"
<Directory /var/www/joachim-breitner.de/bilder>
 RewriteEngine On

 # This is a GET request, trying to set a password.
 RewriteCond %{QUERY_STRING} password=correcthorsebatterystaple
 RewriteRule ^login.html /bilder/loggedin.html [L,R,QSD,CO=bilderhp:correcthorsebatterystaple:www.joachim-breitner.de:2000000:/bilder]

 # This is a GET request, trying to set a wrong password.
 RewriteCond %{QUERY_STRING} password=
 RewriteRule ^login.html /bilder/notloggedin.html [L,R,QSD]

 # No point in logging in if there is already the right password
 RewriteCond %{HTTP:Cookie} bilderhp=correcthorsebatterystaple
 RewriteRule ^login.html /bilder/loggedin.html [L,R]

 # If protected file is requested, check for cookie.
 # If no cookie present, redirect pictures to replacement picture
 RewriteCond %{HTTP:Cookie} !bilderhp=correcthorsebatterystaple
 RewriteCond ${public:$0|private} private
 RewriteRule ^.*\.(png|jpg)$ /bilder/pleaselogin.png [L]

 RewriteCond %{HTTP:Cookie} !bilderhp=correcthorsebatterystaple
 RewriteCond ${public:$0|private} private
 RewriteRule ^.+$ /bilder/login.html [L,R]
</Directory>

The publicfiles.dbm file is generated from a text file with lines like

login.html.en 1
login.html.de 1
pleaselogin.png 1
thumbs/20030920165701_thumb.jpg 1
thumbs/20080813225123_thumb.jpg 1
...

using

/usr/sbin/httxt2dbm -i publicfiles.txt -o publicfiles.dbm

and whitelists all files that are visible without login. Make sure it contains the login page, otherwise you’ll get a redirect loop.
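Since the whitelist mirrors the tree of public symlinks, it can also be regenerated rather than maintained by hand; a sketch, assuming the public symlink tree lives in a sibling directory (the directory name here is made up):

cd /var/www/joachim-breitner.de/bilder
find -L ../bilder-public -type f -printf '%P 1\n' > publicfiles.txt
/usr/sbin/httxt2dbm -i publicfiles.txt -o publicfiles.dbm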

The other directives in the above configuration fulfill these tasks:

  • If the password (correcthorsebatterystaple) is in the query string, the server redirects the user to a logged-in-page that tells him that the login was successful and instructs him to reload the photo album. It also sets a cookie that will last very long -- after all, I want this to be convenient for my visitors. The query string parsing is not very strict (e.g. a password of correcthorsebatterystaplexkcdrules would also work), but that’s ok.
  • The next request detects an attempt to set a password. It must be wrong (otherwise the first rule would have matched), so we redirect the user to a variant of the login page that tells him so.
  • If the user tries to access the login page with a valid cookie, just log him in.
  • The next two rules implement the actual protection. If there is no valid cookie and the accessed file is not whitelisted, then access is forbidden. For requests to images, we do an internal redirect to a placeholder image, while for everything else we redirect the user to the login page.
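The whole flow can be verified from the command line; a quick sketch (the picture name is made up, and the password is of course the placeholder one from above):

# without the cookie, the internal redirect serves pleaselogin.png instead
curl -sI https://www.joachim-breitner.de/bilder/20160101120000.jpg
# with the cookie set by the login form, the real picture is served
curl -sI -H 'Cookie: bilderhp=correcthorsebatterystaple' https://www.joachim-breitner.de/bilder/20160101120000.jpg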

And that’s it! No resource-hogging web frameworks, no security-dubious scripting languages, and a dead-simple way to authenticate.

Oh, and if you believe you know me well enough to be allowed to see all photos: The real password is not correcthorsebatterystaple; just ask me what it is.

08 February, 2016 04:39PM by Joachim Breitner (mail@joachim-breitner.de)

Lunar

Reproducible builds: week 41 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

After remarks from Guillem Jover, Lunar updated his patch adding generation of .buildinfo files in dpkg.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: dracut, ent, gdcm, guilt, lazarus, magit, matita, resource-agents, rurple-ng, shadow, shorewall-doc, udiskie.

The following packages became reproducible after getting fixed:

  • disque/1.0~rc1-5 by Chris Lamb, noticed by Reiner Herrmann.
  • dlm/4.0.4-2 by Ferenc Wágner.
  • drbd-utils/8.9.6-1 by Apollon Oikonomopoulos.
  • java-common/0.54 by Emmanuel Bourg.
  • libjibx1.2-java/1.2.6-1 by Emmanuel Bourg.
  • libzstd/0.4.7-1 by Kevin Murray.
  • python-releases/1.0.0-1 by Jan Dittberner.
  • redis/2:3.0.7-2 by Chris Lamb, noticed by Reiner Herrmann.
  • tetex-brev/4.22.github.20140417-3 by Petter Reinholdtsen.

Some uploads fixed some reproducibility issues, but not all of them:

  • anarchism/14.0-4 by Holger Levsen.
  • hhvm/3.11.1+dfsg-1 by Faidon Liambotis.
  • netty/1:4.0.34-1 by Emmanuel Bourg.

Patches submitted which have not made their way to the archive yet:

  • #813309 on lapack by Reiner Herrmann: removes the test log and sorts the files packed into the static library locale-independently.
  • #813345 on elastix by akira: suggest to use the $datetime placeholder in Doxygen footer.
  • #813892 on dietlibc by Reiner Herrmann: remove gzip headers, sort md5sums file, and sort object files linked in static libraries.
  • #813912 on git by Reiner Herrmann: remove timestamps from documentation generated with asciidoc, remove gzip headers, and sort md5sums and tclIndex files.

reproducible.debian.net

For the first time, we've reached more than 20,000 packages with reproducible builds for sid on amd64 with our current test framework.

Vagrant Cascadian has set up another test system for armhf, enabling four more builder jobs to be added to Jenkins. (h01ger)

Package reviews

233 reviews have been removed, 111 added and 86 updated in the previous week.

36 new FTBFS bugs were reported by Chris Lamb and Alastair McKinstry.

New issue: timestamps_in_manpages_generated_by_yat2m. The description for the blacklisted_on_jenkins issue has been improved. Some packages are also now tagged with blacklisted_on_jenkins_armhf_only.

Misc.

Steven Chamberlain gave an update on the status of FreeBSD and variants after the BSD devroom at FOSDEM’16. He also discussed how jails can be used for easier and faster reproducibility tests.

The video for h01ger's talk in the main track of FOSDEM’16 about the reproducible ecosystem is now available.

08 February, 2016 03:43PM

Orestis Ioannou

Debian - your patches and machine readable copyright files are available on Debsources

TL;DR All Debian license and patches are belong to us. Discover them here and here.

In case you hadn't already stumbled upon sources.debian.net in the past, Debsources is a simple web application that allows publishing an unpacked Debian source mirror on the Web. On the live instance you can browse the contents of Debian source packages with syntax highlighting, search files matching a SHA-256 hash or a ctag, query its API, highlight lines, and view accurate statistics and graphs. It was initially developed at IRILL by Stefano Zacchiroli and Matthieu Caneill.

During GSOC 2015 I helped introduce two new features.

License Tracker

Since Debsources has all the debian/copyright files, and many of them have adopted the DEP-5 suggestion (machine-readable copyright files), it was interesting to exploit them for end users. You may find the following features interesting:

  • an API that allows users to find the license of file "foo" or the licenses for a bunch of packages, using filenames or SHA-256 hashes

  • a better looking interface for debian/copyright files

Have a look at the documentation to discover more!

Patch tracker

The old patch tracker unfortunately died a while ago. Since Debsources stores all the patches, it was natural for it to exploit them and present them over the web. You can navigate through packages by prefix or by searching for them here. Among the use cases:

  • a summary which contains all the patches of a package together with their diffs and summaries/subjects
  • links to view and download (quilt-3.0) patches.

Read more about the API!

Coming ...

  • In the future this information will be added to the DB. This will allow:

    • the license tracker to provide interesting statistics and graphs about licensing trends (what do Pythonistas usually choose as a license, how many GPL-3 files are in Jessie, etc.). Those are going to be quite accurate since they will take into account each file in a given package and not just the "general" license of the package.

    • the patch tracker to produce a list of packages that contain patches - this will enable providing links from PTS to the patch tracker.

  • Not far on the horizon there is also initial work on exporting debian/copyright files into SPDX documents. You can have a look at a beta / testing version on debsources-dev. (Example)

I hope you find these new features useful. Don't hesitate to report any bugs or suggestions you come across.

08 February, 2016 09:33AM by Orestis

Russ Allbery

Converted personal web sites to TLS

I've been in favor of using TLS and encryption for as much as possible for a while, but I never wanted to pay money to the certificate cartel. I'd been using certificates from CAcert, but they're not recognized by most browsers, so it felt rude to redirect everything to TLS with one of their certificates.

Finally, the EFF and others put together Let's Encrypt with free, browser-recognized certificates and even a really solid automatic renewal system. That's perfect, and it also eliminated my last excuse not to do the work, so now all of my personal web sites use TLS and HTTPS by default and redirect to the encrypted version of the web site. And better yet, all the certificates should just renew themselves automatically, meaning one less thing I have to keep track of and deal with periodically.

Many thanks to Wouter Verhelst for his short summary of how to get the Let's Encrypt client to work properly from the command line without doing all the other stuff it wants to do in order to make things easier for less sophisticated users. Also useful was the SSL Labs server test to make sure I got the modern TLS configuration right. (All my sites should now be an A. I decided not to cut off support for Internet Explorer older than version 11 yet.)
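For anyone after the same setup, the command-line webroot flow boils down to something like this (a sketch; the domain and path are placeholders):

letsencrypt certonly --webroot -w /srv/www/example.eyrie.org -d example.eyrie.org

Re-running that periodically with --keep-until-expiring added is what makes the automatic renewal tick.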

I imported copies of the Debian packages needed for installation of the Let's Encrypt package on Debian jessie that weren't already in Debian backports into my personal Debian repository for my own convenience, but they're also there for anyone else.

Oh, that reminds me: this also affects the archives.eyrie.org APT repository (the one linked above), so if any of you were using that, you'll now need to install apt-transport-https and might want to change the URL to use HTTPS.

08 February, 2016 04:44AM

February 07, 2016

Mike Hommey

SSH through jump hosts, revisited

Close to 7 years ago, I wrote about SSH through jump hosts. Twice. While the method used back then still works, OpenSSH has grown a new option in version 5.3 that allows it to be simplified a bit, by not using nc.

So here is an updated rule, version 2016:

Host *+*
ProxyCommand ssh -W $(echo %h | sed 's/^.*+//;s/^\([^:]*$\)/\1:22/') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')

The syntax you can use to connect through jump hosts hasn’t changed compared to previous blog posts:

  • With one jump host:
    $ ssh login1%host1:port1+host2:port2 -l login2
  • With two jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+host3:port3 -l login3
  • With three jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4
  • etc.

Logins and ports can be omitted.
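As a worked example with hypothetical hosts, ssh login1%host1:2222+host2 makes the rule above expand the ProxyCommand to:

ssh -W host2:22 host1 -p 2222 -l login1

i.e. the part after the last + becomes the -W destination (with :22 filled in when no port is given), and the rest is rewritten into the jump host with its -l and -p flags.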

Update: Add missing port to -W flag when one is not given.

07 February, 2016 11:26PM by glandium

Iain R. Learmonth

After FOSDEM 2016

FOSDEM was fun. It was great to see all these open source projects coming together in one place and it was really good to talk to people that were just as enthusiastic about the FOSS activities they do as I am about mine.

Thanks go to Saúl Corretgé, who looked after the real-time communications dev room and made sure everything ran smoothly. I was very pleased to find that I had to stand for a couple of talks, as the room was full of people eager to learn more about the world of RTC.

I was again pleased on the Sunday when I had such a great audience for my talk in the distributions dev room. Everyone was very welcoming and after the talk I had some corridor discussions with a few people that were really interesting and have given me a few new things to explore in the near future.

A few highlights from FOSDEM:

  • ReactOS: Since I last looked at this project it has really matured and is getting to be rather stable. It may be possible to start seriously considering replacing Windows XP/Vista machines with ReactOS where the applications being run just cannot be used with later versions of Windows.
  • Haiku: I used BeOS a long, long time ago on my video/music PC. I can't say that I was using it over a Linux or BSD distribution for any particular reason, but it worked well. I saw a talk that discussed how Haiku was keeping up-to-date with drivers, and there was also a talk, which I didn't see, about the new Haiku package management system. I think I may check out Haiku again in the near future, even if only for the sake of nostalgia.
  • Kolab: Continuing with the theme of things that have matured since I last looked at them, I visited the Kolab stand at FOSDEM and I was impressed with how far it has come. In fact, I was so impressed that I'm looking at using it for my primary email and calendaring in the near future.
  • picoTCP: When I did my Honours project at University, I was playing with Contiki. This looks a lot easier to get started with, even if it's perhaps missing parts of the stack that Contiki implements well. If I ever find time for doing some IoT hacking, this will be on the list of things to try out first.

These are just some of the highlights, and I know I'm missing out a lot here. One of the main things that FOSDEM has done for me is open my eyes to how wide and diverse our community is, and it has served as a reminder that there is tons of cool stuff out there if you take a moment to look around.

Also, thanks to my trip to FOSDEM, I now have four new t-shirts to add into the rotation: FOSDEM 2016, Debian, XMPP and twiki.org.

07 February, 2016 10:55PM by Iain R. Learmonth

Joey Hess

letsencrypt support in propellor

I've integrated letsencrypt into propellor today.

I'm using the reference letsencrypt client. While I've seen complaints that it has a lot of dependencies and is too complicated, it seemed to only need to pull in a few packages, use only a few megabytes of disk space, and it has fewer options than ls does. So it seems fine. (Although it would be nice to have some alternatives packaged in Debian.)

I ended up implementing this:

letsEncrypt :: AgreeTOS -> Domain -> WebRoot -> Property NoInfo

This property just makes the certificate available; it does not configure the web server to use it. This avoids relying on the letsencrypt client's apache config munging, which is probably useful for many people, but not for those of us using configuration management systems. And so it avoids most of the complicated magic that the letsencrypt client has a reputation for.

Instead, any property that wants to use the certificate can just use letsEncrypt to get it and set up the server when it makes a change to the certificate:

letsEncrypt (LetsEncrypt.AgreeTOS (Just "me@my.domain")) "example.com" "/var/www"
    `onChange` setupthewebserver

(Took me a while to notice I could use onChange like that, and so divorce the cert generation/renewal from the server setup. onChange is awesome! This blog post has been updated accordingly.)

In practice, the http site has to be brought up first, and then letsencrypt run, and then the cert installed and the https site brought up using it. That dance is automated by this property:

Apache.httpsVirtualHost "example.com" "/var/www"
    (LetsEncrypt.AgreeTOS (Just "me@my.domain"))

That's about as simple a configuration as I can imagine for such a website!


The two parts of letsencrypt that are complicated are not really the fault of the client. Those are renewal and rate limiting.

I'm currently rate limited for the next week because I asked letsencrypt for several certificates for a domain, as I was learning how to use it and integrating it into propellor. So I've not quite managed to fully test everything. That's annoying. I also worry that rate limiting could hit at an inopportune time once I'm relying on letsencrypt. It's especially problematic that it only allows 5 certs for subdomains of a given domain per week. What if I use a lot of subdomains?

Renewal is complicated mostly because there's no good way to test it. You set up your cron job, or whatever, and wait three months, and hopefully it worked. Just as likely, you got something wrong, and your website breaks. Maybe letsencrypt could offer certificates that will only last an hour, or a day, for use when testing renewal.

Also, what if something goes wrong with renewal? Perhaps letsencrypt.org is not available when your certificate needs to be renewed.

What I've done in propellor to handle renewal is, it runs letsencrypt every time, with the --keep-until-expiring option. If this fails, propellor will report a failure. As long as propellor is run periodically by a cron job, this should result in multiple failure reports being sent (for 30 days I think) before a cert expires without getting renewed. But, I have not been able to test this.
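For the curious, the underlying invocation presumably looks something like this (a sketch assembled from the options mentioned above, not propellor's exact code):

letsencrypt certonly --agree-tos --email me@my.domain --webroot -w /var/www -d example.com --keep-until-expiring

A non-zero exit from that command is what propellor surfaces as a failed property on each cron run.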

07 February, 2016 10:10PM

Iustin Pop

mt-st project new homepage

A short public notice: the mt-st project has a new homepage at https://github.com/iustin/mt-st. Feel free to forward your distribution-specific patches for upstream integration!

Context: a while back I bought a tape unit to help me with backups. Yay, tape! All good, except that I later found out that the Debian package was orphaned, so I took over the maintenance.

All good once more, but there were a number of patches in the Debian package that were not Debian-specific, but rather valid for upstream. And there was no actual upstream project homepage, as this was quite an old project with no (visible) recent activity; the canonical place for the project source code was an FTP site (ibiblio.org). I spoke with Kai Mäkisara, the original author, and he agreed to let me take over the maintenance of the project (and that's what I intend to do: maintenance mostly, merging of patches, etc., but no significant work). So now there's a GitHub project for it.

There was no VCS history for the project, so I did my best to partially recreate it: I took the Debian releases from snapshots.debian.org and used the .orig.tar.gz files as bulk imports; versions 0.7, 0.8, 0.9b and 1.1 have separate commits in the tree.

I also took the Debian and Fedora patches and applied them, and with a few other cleanups, I've just published the 1.2 release. I'll update the Debian packaging soon as well.

So, if you somehow read this and are the maintainer of mt-st in another distribution, feel free to send patches my way for integration; I know this might be late, as some distributions have dropped it (e.g. Arch Linux).

07 February, 2016 08:34PM

Ben Armstrong

Bluff Trail icy dawn: Winter 2016

Before the rest of the family was up, I took a brief excursion to explore the first kilometre of the Bluff Trail and check out conditions. I turned back at the ridge, satisfied I had seen enough to give an idea of what it’s like out there, and then walked the four kilometres back home on the BLT Trail.

I saw three joggers and their three dogs just before I exited the Bluff Trail on the way back, and later, two young men with day packs approaching on the BLT. The parking lot had gained two more cars, for a total of three, as I headed home. For anyone exercising appropriate caution and judgement, the first loop is beautiful and rewarding, and I’m not alone in feeling the draw of its delights this crisp morning.

Click the first photo below to start the slideshow.

[Slideshow captions: At the parking lot, some ice, but passable with caution · Trail head: a few mm of sleet · Many footprints since last snowfall · Thin ice encrusts the bog · The boardwalk offers some loose traction · Mental note: buy crampons · More thin bog ice · Bubbles captured in the bog ice · Shelves hang above receding water · First challenging boulder ascent · Rewarding view at the crest · Time to turn back here · Flowing runnels alongside BLT Trail · Home soon to fix breakfast · If it looks like a tripod, it is · Not a very adjustable tripod, however · Pretty, encrusted pool · The sun peeks out briefly · Light creeps down the rock face · Shimmering icy droplets and feathery moss · Capped with a light dusting of sleet]

07 February, 2016 02:45PM by Ben Armstrong

Steve Kemp

Redesigning my clustered website

I'm slowly planning the redesign of the cluster which powers the Debian Administration website.

Currently the design is simple, and looks like this:

In brief there is a load-balancer that handles SSL-termination and then proxies to one of four Apache servers. These talk back and forth to a MySQL database. Nothing too shocking, or unusual.

(In truth there are two database servers, and rather than a single installation of HAProxy it runs upon each of the webservers - One is the master which is handled via ucarp. Logically though traffic routes through HAProxy to a number of Apache instances. I can lose half of the servers and things still keep running.)

When I set up the site it all ran on one host; it was simpler, but less highly available. It also struggled to cope with the load.

Half the reason for writing/hosting the site in the first place was to document learning experiences though, so when it came time to make it scale I figured why not learn something and do it neatly? Having it run on cheap and reliable virtual hosts was a good excuse to bump the server count, and the design has been stable for the past few years.

Recently though I've begun planning how it will be deployed in the future and I have a new design:

Rather than having the Apache instances talk to the database I'll indirect through an API-server. The API server will handle requests like these:

  • POST /users/login
    • POST a username/password and return 200 if valid. If bogus details return 403. If the user doesn't exist return 404.
  • GET /users/Steve
    • Return a JSON hash of user-information.
    • Return 404 on invalid user.
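For instance, the login endpoint could be exercised like this (a sketch; the host and credentials are invented):

curl -s -o /dev/null -w '%{http_code}\n' -X POST -d 'username=Steve&password=s3cret' https://api.example.org/users/login
# prints 200 for valid credentials, 403 for bogus ones, 404 for unknown users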

I expect to have four API handler endpoints: /articles, /comments, /users & /weblogs. Again we'll use a floating IP and a HAProxy instance to route to multiple API-servers. Each of which will use local caching to cache articles, etc.

This should turn the middle layer, running on Apache, into something simpler, and increase throughput. I suspect, but haven't confirmed, that making a single HTTP request to fetch a (formatted) article body will be cheaper than making N database queries.

Anyway, that's what I'm slowly pondering and working on at the moment. I wrote a proof-of-concept API-server-based CMS two years ago, and my recollection of that time is that it was fast to develop and easy to scale.

07 February, 2016 10:28AM

February 06, 2016

Dimitri John Ledkov

Blogging about Let's encrypt over HTTP

So the Let's Encrypt thing has started. And it can do challenges over HTTP (serving text files) and over DNS (serving TXT records).

My "infrastructure" is fairly modest. I've seen too many of my email accounts getting swamped with spam, and or companies going bust. So I got my own domain name surgut.co.uk. However, I don't have money or time to run my own services. So I've signed up for the Google Apps account for my domain to do email, blogging, etc.

Later I got the libnih.la domain to host API docs for the mentioned library. In the world of .io startups, I thought it was an incredibly funny domain name.

But I also have a VPS to host static files on an ad-hoc basis, run a VPN, and run an IRC bouncer. My IRC bouncer is ZNC and I used a self-signed certificate there, thus I had to "ignore" SSL errors in all of my IRC clients... which kind of defeats the purpose somewhat.

I run my VPS on i386 (to save on memory usage) and on Ubuntu 14.04 LTS managed with Landscape. And my little services are just configured by hand there (not using juju).

My first attempt at getting on the Let's Encrypt bandwagon was to use the official client, by fetching debs from xenial and installing them on the LTS. But the package/script there is huge, has support for things I don't need, and wants dependencies I don't have on 14.04 LTS.

However, I found a minimalist implementation, letsencrypt.sh, written in shell with openssl and curl. It was trivial to get the dependencies for and to configure: I specified a domains text file, and that was it. Well, I also added symlinks in my NGINX config to serve the challenges directory, and a hook to deploy the certificate to znc and restart it. I've added a cronjob to renew the certs too. Thinking about it, it's not complete, as I'm not sure whether NGINX will pick up the certificate change and/or need to be reloaded. I shall test that once my cert expires.
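The moving parts are small; a sketch of the setup just described (paths are assumptions, not my exact config):

# domains.txt: one certificate per line, extra names become SANs
echo "x4d.surgut.co.uk" > domains.txt
./letsencrypt.sh --cron    # issues now, renews once close to expiry
# and in NGINX, point /.well-known/acme-challenge/ at the challenge
# directory letsencrypt.sh writes to (I used symlinks for this)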

Tweaking the NGINX config was easy. And I thought, let's see how good it is. I pointed https://www.ssllabs.com/ssltest/ at my https://x4d.surgut.co.uk/ and I got a "C" rating: no forward secrecy, vulnerable to downgrade attacks, BEAST, POODLE and stuff like that. I went googling for all types of NGINX configs and eventually found a website with "best known practices", https://cipherli.st/. However, even that only got me to a "B" rating, as it still has Diffie-Hellman things that ssltest caps at a "B" rating. So I disabled those too. I've ended up with this gibberish:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:AES256+EECDH";
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_cache shared:SSL:10m;
#ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx => 1.3.7
#resolver $DNS-IP-1 $DNS-IP-2 valid=300s;
#resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

I call it gibberish because, IMHO, I shouldn't need to specify any of the above... Anyway, I got my A+ rating.

However, security is only as strong as the weakest link. I'm still serving things over HTTP; maybe I should disable that. And I'm yet to check how "good" the TLS is on my znc, or whether I need to further harden my sshd configuration.

This has filled a big gap in my infrastructure. However a few things remain served over HTTP only.

http://blog.surgut.co.uk is hosted by Alphabet's / Google's Blogger service, which I would want to be served over HTTPS.

http://libnih.la is hosted by the GitHub Inc service, which I would want to be served over HTTPS.

I do not want to manage those services, or deal with load, spammers, DDoS attacks, etc. But I am happy to sign CSRs with Let's Encrypt and deploy certs over to those companies, or to allow them to self-obtain certificates from Let's Encrypt on my behalf. I use gandi.net as my domain name provider, which offers an RPC API to manage domains and their zone files, so e.g. I can also generate an API token for those companies to respond to a dns-01 challenge from Let's Encrypt.

One step at a time I guess.

The postings on this site are my own and don't necessarily represent any past/present/future employers' positions, strategies, or opinions.

06 February, 2016 11:30PM by Dimitri John Ledkov (noreply@blogger.com)

Andrew Shadura

Community time at Collabora

I haven’t yet blogged about this (as normally I don’t blog often), but I joined Collabora in June last year. Since then, I have had the opportunity to work with OpenEmbedded again, write a kernel patch, learn lots of things about systemd (in particular, how to stop worrying about it taking over the world and so on), and do lots of other things.

As one would expect when working for a free software consultancy, our customers do understand the value of the community and of contributing back to it, and so does the customer of the project I’m working on. In fact, our customer insists we keep the number of locally applied patches to, for example, the Linux kernel to a minimum, submitting as much as possible upstream.

However, apart from the upstreaming work which may be done for the customer, Collabora encourages us, the engineers, to spend up to two hours weekly on upstreaming on top of what customers need, and up to five days yearly as paid Community days. These community days may be spent working on code or volunteering at free software events or even speaking at conferences.

Even though on this project I have already been paid for contributing to the free software project which I maintained in my free time previously (ifupdown), paid community time is a great opportunity to contribute to the projects I’m interested in, and if the projects I’m interested in coincide with the projects I’m working with, I effectively can spend even more time on them.

A bit unfortunately for me, I haven’t spent enough time last year to plan my community days, so I used most of them in the last weeks of the calendar year, and I used them (and some of my upstreaming hours) on something that benefitted both free software community and Collabora. I’m talking about SparkleShare, a cross-platform Git-based file synchronisation solution written in C#. SparkleShare provides an easy to use interface for Git, or, actually, it makes it possible to not use any Git interface at all, as it monitors the working directory using inotify and commits stuff right after it changes. It automatically handles conflicts even for binary files, even though I have to admit its handling could still be improved.

At Collabora, we use SparkleShare to store all sorts of internal documents, and it’s also used by users not familiar with command-line interfaces. Unfortunately, the version we recently had in Debian had a couple of very annoying bugs, making it a great pain to use: it would not notice edits in local files, or not notice new commits being pushed to the server, and that led to individual users’ edits sometimes being lost. Not cool, especially when the document has to be sent to the customer in a couple of minutes.

The new versions, 1.4 (and the recently released 1.5), were reported to be much better and to fix some crashes, but they also use GTK+ 3 and some libraries not yet packaged for Debian. Thanh Tung Nguyen packaged these (and a newer SparkleShare) for Ubuntu and published them in his PPA, but they required some work to be fit for Debian.

I had never touched Mono packages before in my life, so I had to learn a lot. Some time was spent talking to upstream about fixing their copyright statements (they had none in the code, and only one author was mentioned in configure.ac, and nowhere else in the source); a bit more time went into adjusting and updating the patches for the current source code version. Then, of course, waiting for the packages to go through NEW, fixing parallel build issues, waiting for the buildds to build all dependencies for at least one architecture… But then, finally, on the 19th of January I had the updated SparkleShare in Debian.

As you may have already guessed, this blog post has been sponsored by Collabora, the first of my employers to encourage (nay, require) me to work on free software in my paid time :)

06 February, 2016 05:22PM

Neil Williams

lava.debian.net

With thanks to Iain Learmonth for the hardware, there is now a Debian instance of LAVA available for use and the Debian wiki page has been updated.

LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing. Extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

LAVA has a long history of supporting continuous integration of the Linux kernel on ARM devices (ARMv7 & ARMv8). So if you are testing a Linux kernel image on armhf or arm64 devices, you will find a lot of similar tests already running on the other LAVA instances. The Debian LAVA instance seeks to widen that testing in a couple of ways:

  • A wider range of tests, including use of Debian artifacts as well as mainline Linux builds.
  • A wider range of devices, by allowing developers to offer devices for testing from their own desks.
  • Letting developers share local test results easily with the community without losing the benefits of having the board on their own desk.

This instance relies on the latest changes in lava-server and lava-dispatcher. The 2016.2 release has now deprecated the old, complex dispatcher, and a whole new pipeline design is available. The Debian LAVA instance is running 2015.12 at the moment; I'll upgrade to 2016.2 once the packages migrate into testing in a few days and I can do a backport to jessie.

What can LAVA do for Debian?

ARMMP kernel testing

Unreleased builds, experimental initramfs testing – this is the core of what LAVA is already doing behind the scenes of sites like http://kernelci.org/.

U-Boot ARM testing

This is what fully automated LAVA labs have not been able to deliver in the past, at least without a usable SD Mux.

What’s next

LOTS. This post actually got published early (distracted by the rugby) – I’ll update things more in a later post. Contact me if you want to get involved, I’ll provide more information on how to use the instance and how to contribute to the testing in due course.

06 February, 2016 02:33PM by Neil Williams

Hideki Yamane

playing to update package (failed)


I thought I'd build the gnome-todo package from the 3.19 branch.

When I tried to do that, it turned out to need gtk+3.0 (>= 3.19.5), which Debian doesn't have yet (of course, it's a development branch). Then I tried to build gtk+3, which needs wayland 1.9.90, not yet in Debian either. So I updated my local package to wayland 1.9.91, found a tiny bug and sent a patch, and built it (the package diff was sent to the maintainer - and merged); an easy task.

Building again, gtk+3.0 also needs "wayland-protocols", which has not been packaged in Debian yet. Okay... (20 minutes of work...) done! Making a wayland-protocols package (not ITPed yet, since it's unclear who should maintain it - under the same umbrella as wayland?) was not difficult.

Then I built the newest gtk+3.0 source (3.19.8) in a cowbuilder chroot with those packages (cowbuilder --login --save-after-exec --inputfile foo.deb --inputfile bar.deb)... and it failed in the testsuite ;) I don't have enough knowledge to investigate it.

Going back to older gtk+3.0 sources: 3.19.1 builds fine (diff), but 3.19.2 failed to build, and 3.19.3 to 3.19.8 failed in the testsuite.


Time is up, "You lose!"... that's one of those typical days.

06 February, 2016 12:40PM by Hideki Yamane (noreply@blogger.com)

hackergotchi for Julien Danjou

Julien Danjou

FOSDEM 2016, recap

Last weekend I was in Brussels, Belgium, for FOSDEM, one of the greatest open source developer conferences. I was not sure I would go this year (I already skipped it in 2015), but it turned out I was asked to give a talk in the shared Lua & GNU Guile devroom.

Since I am a long-time Lua user and developer, and have followed GNU Guile for several years, the organizer asked me to give a talk that would be a link between the two languages.

I've entitled my talk "How awesome ended up with Lua and not Guile" and gave it to a room full of interested users of the awesome window manager 🙂.

We continued with a panel discussion entitled "The future of small languages: Experience of Lua and Guile", composed of Andy Wingo, Christopher Webber, Ludovic Courtès, Etiene Dalcol, Hisham Muhammad and myself. It was a pretty interesting discussion, where both communities shared their views on the state of their languages.

It was a bit awkward to talk about Lua & Guile when most of my knowledge was years old, but it turns out many things haven't changed. I hope I was able to provide interesting insight to both communities. Finally, it was a pretty interesting FOSDEM for me, and it had been a long time since I last gave a talk there, so I really enjoyed it. See you next year!

06 February, 2016 10:54AM by Julien Danjou

February 05, 2016

hackergotchi for Daniel Pocock

Daniel Pocock

Giving up democracy to get it back

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Yanis Varoufakis on motorbike

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put the citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged - by law and by contract - to service the needs of their shareholders and advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchair clicking to "Like" whales or trees are having hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether it is raising funds for worthwhile causes, scrutinizing the work of our public institutions or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world (but please feel free to alert your friends with a tweet).

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it the day before launching his DiEM25 project in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way around: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

Still not convinced? Read about Amazon secretly removing George Orwell's 1984 and Animal Farm from Kindles while people were reading them, Apple filtering the availability of apps with a pro-Life bias and Facebook using algorithms to identify homosexual users.

05 February, 2016 10:07PM by Daniel.Pocock

hackergotchi for Bernd Zeimetz

Bernd Zeimetz

bzed-letsencrypt puppet module

With the announcement of Let's Encrypt dns-01 challenge support we finally had a way to retrieve certificates for those hosts where http challenges won't work. It also allows us to centralize the signing procedure, avoiding the installation and maintenance of letsencrypt clients on all hosts.

For an implementation I had the following requirements in my mind:

  • Handling of key/csr generation and certificate signing by puppet.
  • Private keys don’t leave the host they were generated on. If they need to (for HA setups and similar cases), handling needs to be done outside of the letsencrypt puppet module.
  • Deployment and cleanup of tokens in our DNS infrastructure should be easy to implement and maintain.

After reading through the source code of various letsencrypt client implementations I decided to use letsencrypt.sh, mainly because its dependencies are available pretty much everywhere and adding the necessary hook is as simple as writing some lines of code in your favourite (scripting) language. My second favourite was lego, but I wanted to avoid shipping binaries with puppet, so golang was not an option.
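To give an idea of how little code is needed: letsencrypt.sh invokes its hook with the challenge stage and the token data as positional arguments, so a dns-01 hook for a BIND-style dynamic zone might be sketched roughly like this (the nameserver and key path are illustrative assumptions, not part of the module):

#!/bin/sh
# Hypothetical letsencrypt.sh hook sketch: publish and remove dns-01
# tokens via nsupdate. Server name and key path are examples only.
STAGE="$1" DOMAIN="$2" TOKEN_VALUE="$4"

case "$STAGE" in
    deploy_challenge)
        printf 'server ns.example.org\nupdate add _acme-challenge.%s. 120 IN TXT "%s"\nsend\n' \
            "$DOMAIN" "$TOKEN_VALUE" | nsupdate -k /etc/bind/acme.key
        ;;
    clean_challenge)
        printf 'server ns.example.org\nupdate delete _acme-challenge.%s. TXT\nsend\n' \
            "$DOMAIN" | nsupdate -k /etc/bind/acme.key
        ;;
esac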

It took me some days to find enough spare time to write the necessary puppet code, but finally I managed to release a working module today. It is still not perfect, but the basic tasks are implemented and the whole key/csr/signing chain works pretty well.

And if your hook can handle it, http-01 challenges are possible, too!

Please give the module a try and send patches if you would like to help to improve it!

05 February, 2016 07:55PM

Jose M. Calhariz

Preview of amanda 3.3.8-1

While I sort out sponsoring (my sponsor is very busy), here is a preview of the new packages, so anyone can install and test them on jessie.

The source of the packages is in collab-maint. The deb files for jessie are here:

amanda-common_3.3.8-1_cal0_i386.deb

amanda-server_3.3.8-1_cal0_i386.deb

amanda-client_3.3.8-1_cal0_i386.deb
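Installing locally downloaded debs for a test is the usual dpkg/apt dance; a sketch, assuming all three files sit in the current directory:

# dpkg -i amanda-common_3.3.8-1_cal0_i386.deb \
        amanda-client_3.3.8-1_cal0_i386.deb \
        amanda-server_3.3.8-1_cal0_i386.deb
# apt-get -f install   # pull in any missing dependencies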

Here comes the changelog:

amanda (1:3.3.8-1~cal0) unstable; urgency=low

  * New Upstream version
    * Changes for 3.3.8
      * s3 devices
          New NEARLINE S3-STORAGE-CLASS for Google storage.
          New AWS4 STORAGE-API
      * amcryptsimple
          Works with newer gpg2.
      * amgtar
          Default SPARSE value is NO if tar < 1.28.
          Because a bug in tar with some filesystem.
      * amstar
          support include in backup mode.
      * ampgsql
          Add FULL-WAL property.
      * Many bugs fix.
    * Changes for 3.3.7p1
      * Fix build in 3.3.7
    * Changes for 3.3.7
      * amvault
          new --no-interactivity argument.
          new --src-labelstr argument.
      * amdump
          compute crc32 of the streams and write them to the debug files.
      * chg-robot
          Add a BROKEN-DRIVE-LOADED-SLOT property.
      * Many bugs fix.
  * Refreshed patches.
  * Dropped patches that were applied by the upstream: fix-misc-typos,
    automake-add-missing, fix-amcheck-M.patch,
    fix-device-src_rait-device.c, fix-amreport-perl_Amanda_Report_human.pm
  * Change the email of the maintainer.
  * "wrap-and-sort -at" all control files.
  * swig is a new build depend.
  * Bump standard version to 3.9.6, no changes needed.
  * Replace deprecated dependency perl5 by perl, (Closes: #808209), thank
    you Gregor Herrmann for the NMU.

 -- Jose M Calhariz <jose@calhariz.com>  Tue, 02 Feb 2016 19:56:12 +0000

05 February, 2016 07:49PM by Jose M. Calhariz

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, January 2016

In January I carried over 10 hours from December and was assigned another 15 hours of work by Freexian's Debian LTS initiative. I worked a total of 15 hours. I had a few days on 'front desk' at the start of the month, as my week in that role spanned the new year.

I fixed a regression in the kernel that was introduced to all stable suites in December. I uploaded this along with some minor security fixes, and issued DLA 378-1.

I finished backporting and testing fixes to sudo for CVE-2015-5602. I uploaded an update and issued DLA 382-1, which was followed by DSA 3440-1 for wheezy and jessie.

I finished backporting and testing fixes to Claws Mail for CVE-2015-8614 and CVE-2015-8708. I uploaded an update and issued DLA 383-1. This was followed by DSA 3452-1 for wheezy and jessie, although the issues are less serious there.

I also spent a little time on InspIRCd, though this isn't a package that Freexian's customers care about and it seems to have been broken in squeeze for several years due to a latent bug in the build system. I had already backported the security fix by the time I discovered this, so I went ahead with an update fixing that regression as well, and issued DLA 384-1.

Finally, I diagnosed the regression in the update to isc-dhcp in DLA 385-1.

05 February, 2016 07:32PM

Enrico Zini

debtags-cleanup

debtags.debian.org cleaned up

Since the Debtags consolidation announcement there are some more news:

No more anonymous submissions

  • I have disabled anonymous tagging. Anyone is still able to tag via Debian Single Sign-On. SSO-enabling the site was as simple as this.
  • Tags no longer need review to be sent to ftp-master. I have removed all the distinction in the code between reviewed and unreviewed tags, and all the code for the tag review interface.
  • The site now has an audit log for each user, that any person logged in via SSO can access via the "history" link in the top right of the tag editor page.

Official recognition as Debian Contributors

  • Tag contributions are sent to contributors.debian.org. There is no historical data for them because all submissions until now have been anonymous, but from now on if you tag packages you are finally recognised as a Debian Contributor!

Mailing lists closed

  • I closed the debtags-devel and debtags-commits mailing lists; the archives are still online.
  • I have updated the workflow for suggesting new tags in the FAQ to "submit a bug to debtags and Cc debian-devel"

We can just use debian-devel instead of debtags-devel.

Autotagging of trivial packages

  • I have introduced the concept of "trivial" packages, currently defined as any package in the libs, oldlibs and debug sections. They are tagged automatically by the site maintenance and are excluded from the site todo lists and tag editor. We do not need to bother with trivial packages anymore, all 13239 of them.

Miscellaneous other changes

  • I have moved the debtags vocabulary from subversion to git
  • I have renamed the tag used to mark packages not yet reviewed by humans from special::not-yet-tagged to special::unreviewed
  • At the end of every nightly maintenance, some statistics are saved into a database table. I have collected 10 years of historical data by crunching big tarballs of site backups, and fed them to the historical stats table.
  • The workflow for getting tags from the site to ftp-master is now far, far simpler. It is almost simple enough that I should manage to explain it without needing to dig through code to see what it is actually doing.

05 February, 2016 06:18PM

hackergotchi for Michal Čihař

Michal Čihař

Bug squashing in Gammu

I've not really spent much time on Gammu in past months and it was about time to do some basic housekeeping.

It's not that there was much new development; rather, I wanted to go through the issue tracker, properly tag issues, close questions without a response and resolve the ones that are simple to fix. This led to a few code and documentation improvements.

Overall, the list of closed issues is quite huge.

Do you want more development to happen on Gammu? You can support it by money.

Filed under: English Gammu python-gammu Wammu | 0 comments

05 February, 2016 11:00AM by Michal Čihař (michal@cihar.com)

February 04, 2016

Vincent Fourmond

Making oprofile work again with recent kernels

I've been using oprofile to profile programs for a while now, especially QSoas, because it doesn't require specific compilation options and doesn't make your program run much more slowly (unlike valgrind, which can also be used to some extent for profiling). It's a pity the Debian package was dropped long ago, but the Ubuntu packages work out of the box on Debian. But today, while trying to see what takes so long in some fits I'm running, here's what I got:
~ operf QSoas
Unexpected error running operf: Permission denied

Looking further using strace, I could see that what was not working was the first call to perf_event_open.
It took me quite a long time to understand why it stopped working and how to get it working again, so here it is for those of you who googled the error and couldn't find any answer (including me, who will probably have forgotten the answer in a couple of months). The reason behind the change is that, for security reasons, non-privileged users no longer have the necessary privileges since Debian kernel 4.1.3-1; here's the relevant bit from the changelog:

  * security: Apply and enable GRKERNSEC_PERF_HARDEN feature from Grsecurity,
    disabling use of perf_event_open() by unprivileged users by default
    (sysctl: kernel.perf_event_paranoid)

The solution is simple, just run as root:
~ sysctl kernel.perf_event_paranoid=1

(the default value seems to be 3, for now).
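To make the change survive a reboot, the same setting can be dropped into a sysctl configuration snippet (the file name below is an arbitrary example):

# /etc/sysctl.d/local-perf.conf
# allow unprivileged users to call perf_event_open() again
kernel.perf_event_paranoid = 1

Hope it helps!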

04 February, 2016 08:54PM by Vincent Fourmond (noreply@blogger.com)

Petter Reinholdtsen

Using appstream in Debian to locate packages with firmware and mime type support

The appstream system is taking shape in Debian, and one provided feature is a very convenient way to tell you which package to install to make a given firmware file available when the kernel is looking for it. This can be done using apt-file too, but that is for someone else to blog about. :)

Here is a small recipe to find the package with a given firmware file; in this example I am looking for ctfw-3.2.3.0.bin, randomly picked from the set of firmware announced using appstream in Debian unstable. In general you would be looking for the firmware requested by the kernel during kernel module loading. To find the package providing the example file, do like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides firmware:runtime ctfw-3.2.3.0.bin | \
  awk '/Package:/ {print $2}'
firmware-qlogic
%
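(For comparison, the apt-file approach mentioned above would look something along these lines; the output is abridged and quoted from memory:)

% apt install apt-file
% apt-file update
% apt-file search ctfw-3.2.3.0.bin
firmware-qlogic: /lib/firmware/ctfw-3.2.3.0.bin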

See the appstream wiki page to learn how to embed the package metadata in a way appstream can use.

This same approach can be used to find any package supporting a given MIME type. This is very useful when you get a file you do not know how to handle. First find the MIME type using file --mime-type, and next look up the package providing support for it. Let's say you got an SVG file. Its MIME type is image/svg+xml, and you can find all packages handling this type like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides mimetype image/svg+xml | \
  awk '/Package:/ {print $2}'
bkchem
phototonic
inkscape
shutter
tetzle
geeqie
xia
pinta
gthumb
karbon
comix
mirage
viewnior
postr
ristretto
kolourpaint4
eog
eom
gimagereader
midori
%

I believe the MIME types are fetched from the desktop file for packages providing appstream metadata.

04 February, 2016 03:40PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Lenovo Yoga 2 13 running Debian with GNOME Converged Interface

I've wanted to blog about this for a while. So, though I'm terrible at creating video reviews, I'm still going to do it, rather than procrastinate every day.

 

In this video, the emphasis is on using Free Software tools (GNOME in particular), with which you should soon be able to serve the needs of a desktop/laptop as well as a tablet.

The video also touches a bit on Touchpad Gestures.

 


04 February, 2016 03:33PM by Ritesh Raj Sarraf

hackergotchi for Martin-Éric Racine

Martin-Éric Racine

xf86-video-geode 2.11.18

Yesterday, I pushed out version 2.11.18 of the Geode X.Org driver. This is the driver used by the OLPC XO-1 and by a plethora of low-power desktops, micro notebooks and thin clients. This release mostly includes maintenance fixes of all sorts. Of noticeable interest is a fix for the long-standing issue that switching between X and a VT would result in a blank screen (this should probably be cherry-picked for distributions running earlier releases of this driver). Many thanks to Connor Behan for the fix!


Unfortunately, this driver still doesn't work with GNOME. On my testing host, launching GDM produces a blank screen. 'ps' and other tools show that GDM is running but there's no screen content; the screen remains pitch black. This issue doesn't happen with other display managers e.g. LightDM. Bug reports have been filed, additional information was provided, but the issue still hasn't been resolved.


Additionally, the X server flat out crashes on Geode hosts running Linux kernel 4.2 or newer: 'xkbcomp' repeatedly fails to launch and X exits with a fatal error. Bug reports have been filed, but not reacted to. Interestingly enough, however, X launches fine if my testing host is booted with earlier kernels, which might hint at the actual cause of this particular bug:


Since kernel 4.2 entered Debian, the base level i386 kernel on Debian is now compiled for i686 (without PAE). Until now, the base level was i586. This essentially makes it pointless to build the Geode driver with GX2 support. It also means that older GX1 hardware won't be able to run Debian either, starting with the next stable release.

04 February, 2016 02:27PM by Martin-Éric (noreply@blogger.com)

hackergotchi for Daniel Pocock

Daniel Pocock

Australians stuck abroad and alleged sex crimes

Two Australians have achieved prominence (or notoriety, depending on your perspective) for the difficulty in questioning them about their knowledge of alleged sex crimes.

One is Julian Assange, holed up in the embassy of Ecuador in London. He is back in the news again today thanks to a UN panel finding that the UK is effectively detaining him, unlawfully, in the Ecuadorian embassy. The effort made to discredit and pursue Assange and other disruptive technologists, such as Aaron Swartz, has an eerie resemblance to the way the Inquisition hunted witches in the middle ages and beyond.

The other Australian stuck abroad is Cardinal George Pell, the most senior figure in the Catholic Church in Australia. The Royal Commission into child sex abuse by priests has heard serious allegations claiming the Cardinal knew about and covered up abuse. This would appear far more sinister than anything Mr Assange is accused of. Like Mr Assange, the Cardinal has been unable to travel to attend questioning in person. News reports suggest he is ill and can't leave Rome, although he is being accommodated in significantly more comfort than Mr Assange.

If you had to choose, which would you prefer to leave your child alone with?

04 February, 2016 10:30AM by Daniel.Pocock

Russell Coker

Unikernels

At LCA I attended a talk about Unikernels. Here are the reasons why I think that they are a bad idea:

Single Address Space

According to the Unikernel Wikipedia page [1] a significant criteria for a Unikernel system is that it has a single address space. This gives performance benefits as there is no need to change CPU memory mappings when making system calls. But the disadvantage is that any code in the application/kernel can access any other code directly.

In a typical modern OS (Linux, BSD, Windows, etc) every application has a separate address space and there are separate memory regions for code and data. While an application can request the ability to modify its own executable code in some situations (if the OS is configured to allow that) it won’t happen by default. In MS-DOS and in a Unikernel system all code has read/write/execute access to all memory. MS-DOS was the least reliable OS that I ever used. It was unreliable because it performed tasks that were more complex than CP/M but had no memory protection so any bug in any code was likely to cause a system crash. The crash could be delayed by some time (EG corrupting data structures that are only rarely accessed) which would make it very difficult to fix. It would be possible to have a Unikernel system with non-modifiable executable areas and non-executable data areas and it is conceivable that a virtual machine system like Xen could enforce that. But that still wouldn’t solve the problem of all code being able to write to all data.

On a Linux system when an application writes to the wrong address there is a reasonable probability that it will not have write access and you will immediately get a SEGV which is logged and informs the sysadmin of the address of the crash.

When Linux applications have bugs that are difficult to diagnose (EG buffer overruns that happen in production and can’t be reproduced in a test environment) there are a variety of ways of debugging them. Tools such as Valgrind can analyse memory access and tell the developers which code had a bug and what the bug does. It’s theoretically possible to link something like Valgrind into a Unikernel, but the lack of multiple processes would make it difficult to manage.

Debugging

A full Unix environment has a rich array of debugging tools, strace, ltrace, gdb, valgrind and more. If there are performance problems then tools like sysstat, sar, iostat, top, iotop, and more. I don’t know which of those tools I might need to debug problems at some future time.

I don’t think that any Internet facing service can be expected to be reliable enough that it will never need any sort of debugging.

Service Complexity

It’s very rare for a server to have only a single process performing the essential tasks. It’s not uncommon to have a web server running CGI-BIN scripts or calling shell scripts from PHP code as part of the essential service. Also many Unix daemons are not written to run as a single process, at least threading is required and many daemons require multiple processes.

It’s also very common for the design of a daemon to rely on a cron job to clean up temporary files etc. It is possible to build the functionality of cron into a Unikernel, but that means more potential bugs and more time spent not actually developing the core application.

One could argue that there are design benefits to writing simple servers that don’t require multiple programs. But most programmers aren’t used to doing that and in many cases it would result in a less efficient result.

One can also argue that a Finite State Machine design is the best way to deal with many problems that are usually solved by multi-threading or multiple processes. But most programmers are better at writing threaded code so forcing programmers to use a FSM design doesn’t seem like a good idea for security.

Management

The typical server programs rely on cron jobs to rotate log files and monitoring software to inspect the state of the system for the purposes of graphing performance and flagging potential problems.

It would be possible to compile the functionality of something like the Nagios NRPE into a Unikernel if you want to have your monitoring code running in the kernel. I’ve seen something very similar implemented in the past, the CA Unicenter monitoring system on Solaris used to have a kernel module for monitoring (I don’t know why). My experience was that Unicenter caused many kernel panics and more downtime than all other problems combined. It would not be difficult to write better code than the typical CA employee, but writing code that is good enough to have a monitoring system running in the kernel on a single-threaded system is asking a lot.

One of the claimed benefits of a Unikernel is not running an ssh server, on the grounds that allowing ssh access is risky. The recent ssh security issue was an attack against the ssh client if it connected to a hostile server. If you had a ssh server only accepting connections from management workstations (a reasonably common configuration for running servers) and only allowed the ssh clients to connect to servers related to work (an uncommon configuration that’s not difficult to implement) then there wouldn’t be any problems in this regard.

I think that I’m a good programmer, but I don’t think that I can write server code that’s likely to be more secure than sshd.

On Designing It Yourself

One thing that everyone who has any experience in security has witnessed is that people who design their own encryption inevitably do it badly. The people who are experts in cryptology don’t design their own custom algorithm because they know that encryption algorithms need significant review before they can be trusted. The people who know how to do it well know that they can’t do it well on their own. The people who know little just go ahead and do it.

I think that the same thing applies to operating systems. I’ve contributed a few patches to the Linux kernel and spent a lot of time working on SE Linux (including maintaining out of tree kernel patches) and know how hard it is to do it properly. Even though I’m a good programmer I know better than to think I could just build my own kernel and expect it to be secure.

I think that the Unikernel people haven’t learned this.

04 February, 2016 09:48AM by etbe

Iustin Pop

X cursor theme

There's not much to talk about X cursor themes, except when they change behind your back :)

A while back, after a firefox upgrade, it—and only it—showed a different cursor theme: basically double the size, and (IMHO) uglier. I searched for a while, but couldn't figure out what makes firefox special, except that it is a GTK application.

After another round of dist-upgrades, everything except xterms was now showing the big cursors. This annoyed me to no end—as I don't use a high-DPI display, the new cursors are just too damn big. Only to find out two things:

  • thankfully, under Debian, the x-cursor-theme is an alternatives entry, so it can be easily configured (see the command below)
  • sadly, the adwaita-icon-theme package (whose description says "default icon theme of GNOME") installs itself as a very high priority alternatives entry (90), which means it takes over my default X cursor
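Switching back is then a single root command to pick the preferred entry interactively:

# update-alternatives --config x-cursor-theme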

Sigh, Gnome.

04 February, 2016 09:46AM

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Welcome Back Poster

My office door is on the second floor in front of the main staircase in my building. I work with my door open so that my colleagues and my students know when I’m in. The only time I consider deviating from this policy is the first week of the quarter, when I’m faced with a stream of students, usually lost on their way to class, whom, embarrassingly, I am usually unable to help.

I made this poster so that these conversations can, in a way, continue even when I am not in the office.

early_quarter_doors_sign

 

04 February, 2016 06:25AM by Benjamin Mako Hill

February 03, 2016

hackergotchi for Michal Čihař

Michal Čihař

Gammu 1.37.0

Today, Gammu 1.37.0 has been released. As usual it collects bug fixes. This time there is another important change as well - improved error reporting from SMSD.

This means that when SMSD fails to connect to the database, you should get a bit more detailed error than "Unknown error".

Full list of changes:

  • Improved compatibility with ZTE MF190.
  • Improved compatibility with Huawei E1750.
  • Improved compatibility with Huawei E1752.
  • Increased detail of reported errors from SMSD.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.

Filed under: English Gammu | 0 comments

03 February, 2016 05:00PM by Michal Čihař (michal@cihar.com)

Sven Hoexter

Moby

Maybe my favourite song by Moby - "That's when I reach for my revolver" - is one of the more unusual ones, slightly more rooted in his punk years, and a cover version. A great artist anyway.

03 February, 2016 03:46PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Comparing Docker images

I haven't written much yet about what I've been up to at work. Right now, I'm making changes to the sources of a set of Docker images. The changes I'm making should not result in any changes to the actual images: it's just a re-organisation of the way in which they are built.

I've been using the btrfs storage driver for Docker which makes comparing image filesystems very easy from the host machine, as all the image filesystems are subvolumes. I use a bash script like the following to make sure I haven't broken anything:

#!/bin/bash
oldid="$1"; newid="$2";
id_in_canonical_form() {
    echo "$1" | grep -qE '^[a-f0-9]{64}$'
}
canonicalize_id() {
    docker inspect --format '{{ .Id }}' "$1"
}
# accept either full 64-hex image IDs or image names/tags
id_in_canonical_form "$oldid" || oldid="$(canonicalize_id "$oldid")"
id_in_canonical_form "$newid" || newid="$(canonicalize_id "$newid")"
cd "/var/lib/docker/btrfs/subvolumes"
# list permissions, ownership, size and path for every file in a subvolume
sumpath() {
    cd "$1" && find . -printf "%M %4U %4G %16s %h/%f\n" | sort
}
# compare file contents, then the metadata manifests
diff -ruN "$oldid" "$newid"
diff -u <(sumpath "$oldid") <(sumpath "$newid")
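Invoked with either image names or raw IDs (and assuming the script is saved as compare-images; the image names here are made up), a run looks like:

$ sudo ./compare-images myimage:before myimage:after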

Using -printf means I can ignore changes to file timestamps, which are something I am not interested in.

If it is available in your environment, Lars Wirzenius' tool Summain generates manifests that include a file checksum and could be very useful for this use-case.

03 February, 2016 02:34PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

OMFG, ls

alias ls='ls --color=auto -N'

Unfortunately it doesn't actually revert to the previous behaviour, but it's close enough.
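If memory serves, recent coreutils also honour the QUOTING_STYLE environment variable, so the following sketch should achieve much the same without touching the alias:

export QUOTING_STYLE=literal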

03 February, 2016 01:54PM

hackergotchi for Thomas Goirand

Thomas Goirand

Moby

Just a quick reply to Rhonda about Moby. You can’t introduce him without mentioning Go, the title that made him famous very early in the age of electronic music (November 1990, according to Wikipedia). Many have attempted to remix this song (Moby himself included), but nothing is as good as the original version.

03 February, 2016 06:00AM by Goirand Thomas

hackergotchi for Michal Čihař

Michal Čihař

New projects on Hosted Weblate

I had a pile of hosting requests queued up since mid-January, and my recent talk at FOSDEM had some impact on requests for hosting translations as well, so it was about time to process them.

New kids on the block are:

Unfortunately I had to reject some projects as well, mostly due to lack of file format support. This is still the same topic - when translating a project, please stick with some standard format, preferably whatever is usual on your platform.

If you like this service, you can support it on Bountysource salt or Gratipay. There is also an option for hosting translations of commercial products.

Filed under: English Weblate | 0 comments

03 February, 2016 05:00AM by Michal Čihař (michal@cihar.com)

February 02, 2016

hackergotchi for Rhonda D'Vine

Rhonda D'Vine

Moby

Today is one of those moods. And sometimes one needs certain artists/music to foster it. Music is powerful. There are certain bands I know I have to stay away from when feeling down, so as not to get too deep into it. Knowing that already helps a lot. The following artist is not completely in that area, but he has powerful songs and powerful messages nevertheless; and there was a situation today where one of his songs came to my mind. That's the reason why I present Moby to you today. These are the songs:

  • Why Does My Heart Feel So Bad?: The song for certain moods. And lovely at that, not dragging me too much down. Hope you like the song too. :)
  • Extreme Ways: The ending tune from the movie The Bourne Ultimatum, and I fell immediately in love with the song. I used it for a while as morning alarm, a good start into the day.
  • Disco Lies: If you consider the video disturbing you might be shutting your eyes from what animals are facing on a daily basis.

Hope you like the selection; and like always: enjoy!

/music | permanent link | Comments: 2 | Flattr this

02 February, 2016 11:08PM by Rhonda

hackergotchi for Sune Vuorela

Sune Vuorela

Compilers and error messages

So. I typo’ed up some template code the other day. And once again I learned the importance of using several C++ compilers.

Here is a very reduced version of my code:

#include <utility>
template <typename T> auto foo(const T& t) -> decltype(x.first)
{
return t.first;
}
int main()
{
foo(std::make_pair(1,2));
return 0;
}

And let’s start with the compiler I was testing with first.

MSVC (2013 and 2015)

main.cpp(8): error C2672: ‘foo’: no matching overloaded function found
main.cpp(8): error C2893: Failed to specialize function template ‘unknown-type foo(const T &)’

It is not completely clear from that error message what’s going on, so let’s try some other compilers:

GCC (4.9-5.3)

2 : error: ‘x’ was not declared in this scope
template <typename T> auto foo(const T& t) -> decltype(x.first)

That’s pretty clear. More compilers:

Clang (3.3-3.7)

2 : error: use of undeclared identifier ‘x’
template <typename T> auto foo(const T& t) -> decltype(x.first)

ICC (13)

example.cpp(2): error: identifier “x” is undefined
template <typename T> auto foo(const T& t) -> decltype(x.first)

(Yes. I mistyped the variable name used for decltype. Replacing the x with t makes it build).
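For completeness, here is the corrected version, with the decltype argument renamed to match the function parameter:

#include <utility>
// fixed: the trailing return type now refers to the parameter t, not x
template <typename T> auto foo(const T& t) -> decltype(t.first)
{
return t.first;
}
int main()
{
foo(std::make_pair(1,2));
return 0;
}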

Thanks to http://gcc.godbolt.org/ and http://webcompiler.cloudapp.net/ for testing with various compilers.

02 February, 2016 07:44PM by Sune Vuorela

hackergotchi for Steve Kemp

Steve Kemp

Best practice - Don't serve writeable PHP files

I deal with compromises of PHP-based websites often enough that I wish to improve hardening.

One obvious way to improve things is to not serve PHP files which are writeable by the webserver-user. This would ensure that things like wp-content/uploads didn't get served as PHP if a compromise wrote valid PHP there.
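For the uploads-directory case specifically, a common partial workaround is to switch the PHP engine off for directories that must stay writeable; a mod_php sketch, with an example path:

<Directory /var/www/example.com/wp-content/uploads>
    # never execute PHP from a user-writeable upload directory
    php_admin_flag engine off
</Directory>

That only covers directories you already know to be writeable, though, not arbitrary writeable files.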

In the past using php5-suhosin would have allowed this via the suhosin.executor.include.allow_writable_files flag.

Since suhosin is no longer supported under Debian Jessie I wonder if there is a simple way to achieve this?

I've written a toy-module which allows me to call stat on every request, and return a 403 on access to writeable files/directories. But it seems like I shouldn't need to write my own code for this functionality.

Any pointers welcome; happy to post my code if that is useful but suspect not - it just shouldn't exist.

02 February, 2016 07:10PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Like peanut butter and jelly: x13binary and seasonal

This post was written by Dirk Eddelbuettel and Christoph Sax and will be posted on both authors' respective blogs.

The seasonal package by Christoph Sax brings a very featureful and expressive interface for working with seasonal data to the R environment. It uses the standard tool of the trade: X-13ARIMA-SEATS. This powerful program is provided by the statisticians of the US Census Bureau based on their earlier work (named X-11 and X-12-ARIMA) as well as the TRAMO/SEATS program by the Bank of Spain. X-13ARIMA-SEATS is probably the best known tool for de-seasonalization of timeseries, and used by statistical offices around the world.

Sadly, it also has a steep learning curve. One interacts with a basic command-line tool which users have to download, install and properly reference (by environment variables or related means). Each model specification has to be prepared in a special 'spec' file that uses its own, cumbersome syntax.

As seasonal provides all the required functionality to use X-13ARIMA-SEATS from R --- see the very nice seasonal demo site --- it still required the user to manually deal with the X-13ARIMA-SEATS installation.

So we decided to do something about this. A pair of GitHub repositories provide both the underlying binary in a per-operating-system form (see x13prebuilt) as well as a ready-to-use R package (see x13binary) which uses the former to provide binaries for R. And the latter is now on CRAN as package x13binary, ready to be used on Windows, OS-X or Linux. And the seasonal package (in version 1.2.0 -- now on CRAN -- or later) automatically makes use of it. Installing seasonal and x13binary in R is now as easy as:

install.packages("seasonal")

which opens the door for effortless deployment of powerful deseasonalization. By default, the principal function of the package employs a number of automated techniques that work well in most circumstances. For example, the following code produces a seasonal adjustment of the latest data of US retail sales (by the Census Bureau) downloaded from Quandl:

library(seasonal) 
library(Quandl)   ## not needed for seasonal but has some niceties for Quandl data

rs <- Quandl(code="USCENSUS/BI_MARTS_44000_SM", type="ts")/1e3

m1 <- seas(rs)

plot(m1, main = "Retail Trade: U.S. Total Sales", ylab = "USD (in Billions)")

This tests for log-transformation, performs an automated ARIMA model search, applies outlier detection, tests and adjusts for trading day and easter effects, and invokes the SEATS method to perform seasonal adjustment. And this is what the adjusted series looks like:

Of course, you can access all available options of X-13ARIMA-SEATS as well. Here is an example where we adjust the latest data for Chinese exports (as tallied by the US FED), taking into account the different effects of Chinese New Year before, during and after the holiday:

xp <- Quandl(code="FRED/VALEXPCNM052N", type="ts")/1e9

m2 <- seas(window(xp, start = 2000),
          xreg = cbind(genhol(cny, start = -7, end = -1, center = "calendar"), 
                       genhol(cny, start = 0, end = 7, center = "calendar"), 
                       genhol(cny, start = 8, end = 21, center = "calendar")
          ),
          regression.aictest = c("td", "user"),
          regression.usertype = "holiday")

plot(m2, main = "Goods, Value of Exports for China", ylab = "USD (in Billions)")

which generates the following chart demonstrating a recent flattening in export activity measured in USD.

We hope this simple examples illustrates both how powerful a tool X-13ARIMA-SEATS is, but also just how easy it is to use X-13ARIMA-SEATS from R now that we provide the x13binary package automating its installation.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 February, 2016 12:58PM

hackergotchi for Norbert Preining

Norbert Preining

Gaming: The Talos Principle – Road to Gehenna

After finishing the Talos Principle I immediately started to play the extension Road to Gehenna, but was derailed near completion by the incredible Portal Stories: Mel. Now that I have finally managed to escape from the test chambers, my attention has returned to the Road to Gehenna. As with the pair Portal 2 and Portal Stories: Mel, the challenges go up considerably from the original Talos Principle to the Road to Gehenna. Checking the hours of game play, it took me about 24 hours to get through all the riddles in Road to Gehenna, though I have to admit there were some riddles where I needed to cheat.

road-to-gehenna.jpg

The Road to Gehenna does not bring many new game play elements, but loads of new riddles. And best of all, it is playable on Linux! As with the original game, the graphics are really well done, while still being playable on my Vaio Pro laptop with its integrated Intel graphics card – a plus that is rare in the world of computer games, where everyone is expected to have a high-end nVidia or Radeon card. Ok, there is not much action going on where quick graphic computations are necessary; still, the impression the game makes is great.
gehenna1

The riddles contain the well-known elements (connectors, boxes, jammers, etc), but the settings are often spectacular: sometimes very small and narrow, just a few moves if done in the right order, sometimes like wide open fields with lots of space to explore. Transportation between the various islands suspended in the air is by vents, giving you a lot of nice flight time!
gehenna2

If one searches a lot, or uses a bit of cheating, one can find good old friends from the Portal series, buried in the sand in one of the worlds. This is not the only easter egg hidden in the game; there are actually a lot, some of which I have not seen but only read about afterwards. Guess I need to replay the whole game.
gehenna3

Coming back to the riddles, I really believe the makers have been ingenious in using the few items at hand to create challenging and surprising riddles. As so often, many of the riddles look completely impossible at first glance, and often even after staring at them for tens and tens of minutes. Until (and if) one has the a-ha effect and understands the trick. This often still needs a lot of handwork and trial-and-error rounds, but all in all the game is well balanced. What is a bit of a pain – similar to the original game – is collecting the stars to reach the hidden world and free the admin. There the developers overdid it in my opinion, with some rather absurd and complicated stars.
gehenna4

The end of the game, the ascension of the messengers, is rather unspectacular: a short discussion on who remains, then a big closing scene with the messengers being beamed up a la Starship Enterprise, and a closing black screen. But well, the fun was in the riddles.
gehenna5

All in all, an extension that is well worth the investment if one enjoyed the original Talos and is looking for rather challenging riddles. Now that I have finished all the Portal and Talos titles, I am thinking hard about what comes next … looking into Braid …

Enjoy!

02 February, 2016 11:30AM by Norbert Preining

hackergotchi for Michal Čihař

Michal Čihař

Weekly phpMyAdmin contributions 2016-W04

As I've already mentioned in a separate blog post, we mostly had fun with security issues in the past weeks, but besides that some other work has been done as well.

I've still focused on code cleanups and identified several pieces of code which are no longer needed (given our required PHP version). Another issue related to the security updates was to set up testing of the 4.0 branch using PHP 5.2, as this is what we messed up in the security release (which is quite bad, as this is the only branch supporting PHP 5.2).

In addition to this, I've updated phpMyAdmin packages in both Debian and Ubuntu PPA.

All handled issues:

Filed under: Debian English phpMyAdmin | 0 comments

02 February, 2016 11:00AM by Michal Čihař (michal@cihar.com)

Russell Coker

Compatibility and a Linux Community Server

Compatibility/interoperability is a good thing. It’s generally good for systems on the Internet to be capable of communicating with as many systems as possible. Unfortunately it’s not always possible as new features sometimes break compatibility with older systems. Sometimes you have systems that are simply broken, for example all the systems with firewalls that block ICMP so that connections hang when the packet size gets too big. Sometimes to take advantage of new features you have to potentially trigger issues with broken systems.

I recently added support for IPv6 to the Linux Users of Victoria server. I think that adding IPv6 support is a good thing due to the lack of IPv4 addresses even though there are hardly any systems that are unable to access IPv4. One of the benefits of this for club members is that it’s a platform they can use for testing IPv6 connectivity with a friendly sysadmin to help them diagnose problems. I recently notified a member by email that the callback that their mail server used as an anti-spam measure didn’t work with IPv6 and was causing mail to be incorrectly rejected. It’s obviously a benefit for that user to have the problem with a small local server than with something like Gmail.

In spite of the fact that at least one user had problems and others potentially had problems I think it’s clear that adding IPv6 support was the correct thing to do.

SSL Issues

Ben wrote a good post about SSL security [1] which links to a test suite for SSL servers [2]. I tested the LUV web site and got A-.

This blog post describes how to set up PFS (Perfect Forward Secrecy) [3]; after following its advice I got a score of B!

From the comments on this blog post about RC4 etc [4] it seems that the only way to have PFS and not be vulnerable to other issues is to require TLS 1.2.

So the issue is what systems can’t use TLS 1.2.
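For concreteness, on Apache with mod_ssl such a policy would look roughly like the following sketch (cipher list abbreviated; tune to your own requirements):

SSLProtocol         -all +TLSv1.2
SSLHonorCipherOrder on
SSLCipherSuite      ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384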

TLS 1.2 Support in Browsers

This Wikipedia page has information on SSL support in various web browsers [5]. If we require TLS 1.2 we break support of the following browsers:

The default Android browser before Android 5.0. Admittedly that browser always sucked badly and probably has lots of other security issues and there are alternate browsers. One problem is that many people who install better browsers on Android devices (such as Chrome) will still have their OS configured to use the default browser for URLs opened by other programs (EG email and IM).

Chrome versions before 30 didn’t support it. But version 30 was released in 2013 and Google does a good job of forcing upgrades. A Debian/Wheezy system I run is now displaying warnings from the google-chrome package saying that Wheezy is too old and won’t be supported for long!

Firefox before version 27 didn’t support it (the Wikipedia page is unclear about versions 27-31). 27 was released in 2014. Debian/Wheezy has version 38, Debian/Squeeze has Iceweasel 3.5.16 which doesn’t support it. I think it is reasonable to assume that anyone who’s still using Squeeze is using it for a server, given its age and the fact that LTS is based on packages related to being a server.

IE version 11 supports it and runs on Windows 7+ (all supported versions of Windows). IE 10 doesn’t support it and runs on Windows 7 and Windows 8. Are the free upgrades from Windows 7 to Windows 10 going to solve this problem? Do we want to support Windows 7 systems that haven’t been upgraded to the latest IE? Do we want to support versions of Windows that MS doesn’t support?

Windows mobile doesn’t have enough users to care about.

Opera supports it from version 17. This is noteworthy because Opera used to be good for devices running older versions of Android that aren’t supported by Chrome.

Safari supported it from iOS version 5, I think that’s a solved problem given the way Apple makes it easy for users to upgrade and strongly encourages them to do so.

Log Analysis

For many servers the correct thing to do before even discussing the issue is to look at the logs and see how many people use the various browsers. One problem with that approach on a Linux community site is that the people who visit the site most often will be more likely to use recent Linux browsers but older Windows systems will be more common among people visiting the site for the first time. Another issue is that there isn’t an easy way of determining who is a serious user, unlike for example a shopping site where one could search for log entries about sales.

I did a quick search of the Apache logs and found many entries about browsers that purport to be IE6 and other versions of IE before 11. But most of those log entries were from other countries, while some people from other countries visit the club web site it’s not very common. Most access from outside Australia would be from bots, and the bots probably fake their user agent.

Should We Do It?

Is breaking support for Debian/Squeeze, the built in Android browser on Android <5.0, and Windows 7 and 8 systems that haven’t upgraded IE as a web browsing platform a reasonable trade-off for implementing the best SSL security features?

For the LUV server as a stand-alone issue the answer would be no as the only really secret data there is accessed via ssh. For a general web infrastructure issue it seems that the answer might be yes.

I think that it benefits the community to allow members to test against server configurations that will become more popular in the future. After implementing changes in the server I can advise club members (and general community members) about how to configure their servers for similar results.

Does this outweigh the problems caused by some potential users of ancient systems?

I’m blogging about this because I think that the issues of configuration of community servers have a greater scope than my local LUG. I welcome comments about these issues, as well as about the SSL compatibility issues.

02 February, 2016 05:44AM by etbe

February 01, 2016

hackergotchi for Lunar

Lunar

Reproducible builds: week 40 in Stretch cycle

What happened in the reproducible builds effort between January 24th and January 30th:

Media coverage

Holger Levsen was interviewed by the FOSDEM team to introduce his talk on Sunday 31st.

Toolchain fixes

Jonas Smedegaard uploaded d-shlibs/0.63 which makes the order of dependencies generated by d-devlibdeps stable across locales. Original patch by Reiner Herrmann.
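This is the classic locale trap: sort order differs between build environments unless the locale is pinned, as a quick shell illustration shows (assuming the en_US.UTF-8 locale is generated):

$ printf 'a\nB\n' | LC_ALL=en_US.UTF-8 sort
a
B
$ printf 'a\nB\n' | LC_ALL=C sort
B
a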

Packages fixed

The following 53 packages have become reproducible due to changes in their build dependencies: appstream-glib, aptitude, arbtt, btrfs-tools, cinnamon-settings-daemon, cppcheck, debian-security-support, easytag, gitit, gnash, gnome-control-center, gnome-keyring, gnome-shell, gnome-software, graphite2, gtk+2.0, gupnp, gvfs, gyp, hgview, htmlcxx, i3status, imms, irker, jmapviewer, katarakt, kmod, lastpass-cli, libaccounts-glib, libam7xxx, libldm, libopenobex, libsecret, linthesia, mate-session-manager, mpris-remote, network-manager, paprefs, php-opencloud, pisa, pyacidobasic, python-pymzml, python-pyscss, qtquick1-opensource-src, rdkit, ruby-rails-html-sanitizer, shellex, slony1-2, spacezero, spamprobe, sugar-toolkit-gtk3, tachyon, tgt.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

  • gnubg/1.05.000-4 by Russ Allbery.
  • grcompiler/4.2-6 by Hideki Yamane.
  • sdlgfx/2.0.25-5 fix by Felix Geyer, uploaded by Gianfranco Costamagna.

Patches submitted which have not made their way to the archive yet:

  • #812876 on glib2.0 by Lunar: ensure that functions are sorted using the C locale when giotypefuncs.c is generated.

diffoscope development

diffoscope 48 was released on January 26th. It fixes several issues introduced by the retrieval of extra symbols from Debian debug packages. It also restores compatibility with older versions of binutils which do not support readelf --decompress.

strip-nondeterminism development

strip-nondeterminism 0.015-1 was uploaded on January 27th. It fixes handling of signed JAR files which are now going to be ignored to keep the signatures intact.

Package reviews

54 reviews have been removed, 36 added and 17 updated in the previous week.

30 new FTBFS bugs have been submitted by Chris Lamb, Michael Tautschnig, Mattia Rizzolo, Tobias Frost.

Misc.

Alexander Couzens and Bryan Newbold have been busy fixing more issues in OpenWrt.

Version 1.6.3 of FreeBSD's package manager pkg(8) now supports SOURCE_DATE_EPOCH.

Ross Karchner did a lightning talk about reproducible builds at his work place and shared the slides.

01 February, 2016 11:39PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

My Free Software Activities in January 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I did not ask for any paid hours this month and won’t be requesting paid hours for the next 5 months as I have a big project to handle with a deadline in June. That said I still did a few LTS related tasks:

  • I uploaded a new version of debian-security-support (2016.01.07) to officialize that virtualbox-ose is no longer supported in Squeeze and that redmine was not really supportable ever since we dropped support for rails.
  • Made a summary of the discussion about what to support in wheezy and started a new round of discussions with some open questions. I invited contributors to try to pick up one topic, study it and bring the discussion to some conclusion.
  • I wrote a blog post to recruit new paid contributors. Brian May, Markus Koschany and Damyan Ivanov candidated and will do their first paid hours over February.

Distro Tracker

Due to many nights spent playing Splatoon (I’m at level 33, rank B+, anyone else playing it?), I did not do much work on Distro Tracker.

After having received the bug report #809211, I investigated the reasons why SQLite was no longer working satisfactorily in Django 1.9 and I opened the upstream ticket 26063 and I had a long discussion with two upstream developers to find out the best fix. The next point release (1.9.2) will fix that annoying regression.

I also merged a couple of contributions: two patches from Christophe Siraut (one adding descriptions to keywords, cf. #754413, and one making it more obvious that the chevrons in action items can be clicked to show more data), and a patch from Balasankar C fixing a bad URL in an action item (#810226).

I fixed a small bug in the “unsubscribe” command of the mail bot: it was not properly recognizing source packages.
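
For illustration, a hedged sketch of what such a fix looks like (the names below are illustrative, not Distro Tracker's real API): resolve whatever name the user sent to its source package before touching the subscription.

    # Stand-in for the binary-to-source mapping the tracker keeps in
    # its database; resolve_source accepts either kind of name.
    BINARY_TO_SOURCE = {"python3-django": "python-django"}

    def resolve_source(name: str) -> str:
        return BINARY_TO_SOURCE.get(name, name)

    def handle_unsubscribe(package: str, email: str, subscriptions: dict) -> None:
        source = resolve_source(package)
        subscriptions.setdefault(source, set()).discard(email)

    subs = {"python-django": {"user@example.com"}}
    handle_unsubscribe("python3-django", "user@example.com", subs)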

I updated the task that notifies about new upstream versions to use the data generated by UDD (instead of the data generated by Christoph Berg's mole-based implementation, which was suffering from a few bugs).
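
For the curious, UDD exposes a public PostgreSQL mirror at udd-mirror.debian.net (user and password are both "udd-mirror"); the table and column names in the sketch below are assumptions from memory, so check them against the UDD schema before reusing this:

    import psycopg2  # third-party PostgreSQL driver

    conn = psycopg2.connect(
        host="udd-mirror.debian.net", dbname="udd",
        user="udd-mirror", password="udd-mirror",
    )
    with conn.cursor() as cur:
        # "upstream", its columns, and the status value are assumed
        # names for illustration; verify against the real schema.
        cur.execute(
            "SELECT source, upstream_version FROM upstream"
            " WHERE status = 'newer package available' LIMIT 5"
        )
        for source, version in cur.fetchall():
            print(source, version)
    conn.close()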

Debian Packaging

Testing experimental sbuild. While following the work of Johannes Schauer on sbuild, I installed the version from experimental to support his work and give him some feedback. In the process I uncovered #810248.

Python sponsorship. I reviewed and uploaded many packages for Daniel Stender, who keeps doing great work maintaining prospector and all its recursive dependencies: pylint-common, python-requirements-detector, sphinx-argparse, pylint-django, prospector. He also prepared an upload of python-bcrypt, which I requested last month for Django.

Django packaging. I uploaded Django 1.8.8 to jessie-backports. My stable update for Django 1.7.11 was not handled before the release of Debian 8.3, even though it was filed more than 1.5 months earlier.

Misc stuff. My stable update for debian-handbook was accepted fairly shortly after my last monthly report (thank you Adam!), so I uploaded the package once it was acked by a release manager. I also sponsored a backports upload of zim prepared by Joerg Desch.

Kali related work

Kernel work. The switch to Linux 4.3 in Kali resulted in a few bug reports that I investigated with the help of #debian-kernel, and I reported my findings back so that the Debian kernel could also benefit from the fixes I uploaded to Kali: first we included a patch for a regression in the vmwgfx video driver used by VMware virtual machines (which broke the gdm login screen), then we fixed the input-modules udeb to restore support for some Logitech keyboards in debian-installer (see #796096).

Misc work. I made a non-maintainer upload of python-maxminddb to fix #805689, as the package had been removed from stretch and we needed it in Kali. I also had to NMU libmaxminddb since it was no longer available on armel, and we actually support armel in Kali. During that NMU, it occurred to me that dh-exec could offer an "optional install" feature, that is, installing a file when it exists without failing when it doesn't. I filed this as #811064 and it stirred up quite a debate.
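
In code terms, the proposed semantics boil down to the following sketch (this only illustrates the requested behavior, not dh-exec's implementation):

    import shutil
    from pathlib import Path

    def optional_install(src: str, dest_dir: str) -> None:
        # Unlike plain dh_install, a missing source file is not an
        # error: copy it when present, succeed silently otherwise.
        path = Path(src)
        if path.exists():
            Path(dest_dir).mkdir(parents=True, exist_ok=True)
            shutil.copy2(str(path), dest_dir)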

Thanks

See you next month for a new summary of my activities.

01 February, 2016 07:31PM by Raphaël Hertzog

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

State of Email Clients on Linux Based Platforms

I've been trying to catch up on my reading list (thanks to rss2email, I can now keep items on the list longer instead of just marking everything as read). One item from the end of last year worth spending time on was Thunderbird.

Thunderbird has been the email client of choice for many users. The main reason for its popularity has been, in my opinion, that it is cross-platform, because that allows users an easy migration path across platforms. It also brings persistence, in terms of features and workflows, to the end users. That is probably an important reason why many distributions (Ubuntu) and service providers promote it as the default email client. A Windows/Mac user migrating to Ubuntu will have a much better experience if they see familiar tools, and their data and workflows remain intact.

Mozilla must have executed its plan pretty well to have been able to get it this far, because other attempts elsewhere (KDE4 on Windows) weren't so easy. Part of the reason may be that any time a disruptive update is rolled out (KDE4, GNOME3), a lot of frustrated users are created. It is not that people don't want change; it's just that no one likes to see things break. But unfortunately, in the Free Software / Open Source world, that is taken lightly.

That's one reason why it has taken Mozilla so long to implement Maildir in Thunderbird, when others (Evolution) have had it for ages.

So, recently, Mozilla announced its plans to drop Thunderbird development. It is not something new: anyone using Thunderbird knows how long it has been in maintenance/ESR mode.

What was interesting were the comments on LWN. People talked a lot about desktop-environment-native email clients (KMail, Sylpheed), TUI clients, and, these days, browser-based clients. Surprisingly, not much was said about Evolution.

My recent move to GNOME has made me look into letting go of old tools and workflows and embracing newer ones, GNOME itself being one of them. Changing workflows for email was difficult and frustrating, but knowing that Thunderbird doesn't have a bright future, it was important to look for alternatives. Having waited so long for Maildir support and a GTK3 port of Thunderbird was reason enough.

On GNOME, Evolution may give an initial impression of being in maintenance mode, especially given that most GNOME apps are now moving to the new, more touch-friendly UI, and also because there were other efforts to build another email client for GNOME (by Yorba, I think).

But even in its current form, Evolution is a pretty impressive email client and personal information management tool. It is already ported to GTK3, which implies it is capable of responding to touch events. It could surely get a revised touch UI, like what is happening with other GNOME apps, but I'm happy that this has been deferred for now. Revising Evolution won't be an easy task, and knowing that GNOME too is understaffed, breaking a perfectly working tool wouldn't be a good idea.

My intent with this blog post is to give credit to my favorite GNOME application, i.e. Evolution. So next time you are looking for an email client alternative, give Evolution a try.

Today, it already supports:

  • Touch UI
  • Maildir
  • Microsoft Exchange
  • GTK3
  • Addressbook, Notes, Tasks, Calendar - Most Standards Based and Google Services compatible
  • RSS Feed Manager
  • And many more that I may not have been using

The only missing piece is being cross-platform. But given the trend and the available resources, I think that path is not worth pursuing.

Keep It Simple. Support one platform and support it well.

01 February, 2016 06:10PM by Ritesh Raj Sarraf

Mike Gabriel

My FLOSS activities in January 2016

In January 2016 I was finally able to work on various FLOSS topics again (after two months of heavily focused local customer work):

  • Upload of MATE 1.12 to Debian unstable
  • Debian LTS packaging and front desk work
  • Other Debian activities
  • Edu Workshop in Kiel
  • Yet another OPSI Packaging Project

Upload of MATE 1.12 to Debian testing/unstable

At the beginning of the new year, I finalized the bundle upload of MATE 1.12 to Debian unstable [1]. All uploaded packages are available in Debian testing (stretch) and Ubuntu xenial by now. MATE 1.12 will also be the version shipped in Ubuntu MATE 16.04 LTS.

Additionally, I finally uploaded caja-dropbox to Debian unstable (non-free); thanks to Vangelis Mouhtsis and Martin Wimpress for doing the first preparation steps. The package has already left Debian's NEW queue, but unfortunately has been removed from Debian testing (stretch) again due to build failures in one of its dependencies.

Debian LTS work

In January 2016 I did my first round of Debian LTS front desk work [2]. Before actually starting my front desk duty, I worked my way through the documentation and found the output of the lts-cve-triage.py script difficult to understand. So, I proposed various improvements to the output of that script (all committed by now).

During the second week of January, I then triaged the following packages regarding known/open CVE issues:

  • isc-dhcp (CVE-2015-8605)
  • gosa (CVE-2015-8771, CVE-2014-9760)
  • roundcube (CVE-2015-8770)

01 February, 2016 03:55PM by sunweaver

Stefano Zacchiroli

Guest lecture "Overthrowing the Tyranny of Software" by John Sullivan

As part of my master class on Free and Open Source Software (FOSS) at University Paris Diderot, I invite guest lecturers to present to my students the points of view of various actors of the FOSS ecosystem --- companies, non-profits, activists, lawyers, etc.

Tomorrow, Tuesday 2 February 2016, the students will have the pleasure of having John Sullivan, Executive Director of the Free Software Foundation, as guest lecturer, talking about "Overthrowing the tyranny of software: Why (and how) free societies respect computer user freedom".

The lecture is open to everyone interested, but registration is recommended. Logistics and registration information, as well as the lecture abstract, are given below.


John Sullivan's Lecture at University Paris Diderot - Overthrowing the tyranny of software: Why (and how) free societies respect computer user freedom

John Sullivan, Executive Director of the Free Software Foundation, will give a lecture titled "Overthrowing the tyranny of software: Why (and how) free societies respect computer user freedom" at University Paris Diderot next Tuesday, 2 February 2016, at 12:30 in the Amphi 3B, Halle aux Farines building, Paris 75013. Map at: http://www.openstreetmap.org/way/62378611#map=19/48.82928/2.38183

The lecture will be in English and open to everyone, but registration is recommended at https://framadate.org/iPqfjNTz2535F8u4 or via email to zack@pps.univ-paris-diderot.fr.

Abstract:

Anyone who has used a computer for long has at least sometimes felt like a helpless subject under the tyrant of software, screaming (uselessly) in frustration at the screen to try and get the desired results. But with driverless cars, appliances which eavesdrop on conversations in our homes, mobile devices that transmit our location when we are out and about, and computers with unexpected hidden "features", our inability to control the software supposedly in our possession has become a much more serious problem than the superficial blue-screen-of-death irritations of the past.

Software which is free "as in freedom" allows anyone who has it to inspect the code and even modify it -- or ask someone trained in the dark arts of computer programming to do it for them -- so that undesirable behaviors can be removed or defused. This characteristic, applied to all software, should be a major part of the foundation of free societies moving forward. To get there, we'll need individual developers, nonprofit organizations, governments, and companies all working together -- with the first two groups leading the way.


01 February, 2016 07:56AM