November 22, 2014

Jonathan Wiltshire

Getting things into Jessie (#7)

Keep in touch

We don’t really have a lot of spare capacity to check up on things, so if we ask for more information or send you away to do an upload, please stay in touch about it.

Do remove a moreinfo tag if you reply to a question and are now waiting for us again.

Do ping the bug if you get a green light about an upload, and have done it. (And remove moreinfo if it was set.)

Don’t be afraid of making sure we’re aware of progress.


Getting things into Jessie (#7) is a post from: jwiltshire.org.uk

22 November, 2014 10:18AM by Jon

Craig Small

WordPress 4.0.1 for Debian

WordPress recently released an update containing multiple security patches for their (then) current version 4.0. This release is 4.0.1 and includes important security fixes. The Debian packages have just been uploaded; if you are running the Debian-packaged wordpress, you should update to 4.0.1+dfsg-1 or later.

I am going to look at these patches and see if they can and need to be backported to wordpress 3.6.1. Unfortunately, I believe they will need to be. I’m also asking for it to be unblocked into Jessie, as it is a security fix.

There were, at the time of writing, no CVE numbers.

22 November, 2014 08:49AM by Craig Small

Petter Reinholdtsen

How to stay with sysvinit in Debian Jessie

By now, it is well known that Debian Jessie will not be using sysvinit as its boot system by default. But how can one keep using sysvinit in Jessie? It is fairly easy, and here are a few recipes, courtesy of Erich Schubert and Simon McVittie.

If you are already using Wheezy and want to upgrade to Jessie while keeping sysvinit as your boot system, create a file /etc/apt/preferences.d/use-sysvinit with this content before you upgrade:

Package: systemd-sysv
Pin: release o=Debian
Pin-Priority: -1

This tells apt and aptitude not to consider installing systemd-sysv as part of any installation or upgrade solution when resolving dependencies, and thus to avoid systemd as the default boot system. The end result should be that the upgraded system keeps using sysvinit.
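To check that the pin took effect, you can inspect the candidate priority before upgrading; with the file above in place, apt-cache policy should report the negative pin priority for systemd-sysv:

$ apt-cache policy systemd-sysv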

If you are installing Jessie for the first time, there is no way to get sysvinit installed by default (the debootstrap used by debian-installer has no option for this), but one can tell the installer to switch to sysvinit before the first boot, either by using a kernel argument to the installer or by adding a line to the preseed file used. First, the kernel command-line argument:

preseed/late_command="in-target apt-get install -y sysvinit-core"

Next, the line to use in a preseed file:

d-i preseed/late_command string in-target apt-get install -y sysvinit-core

One can of course also do this after the first boot by installing the sysvinit-core package.
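That is, once the new system is booted, run as root:

apt-get install sysvinit-core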

I recommend only using sysvinit if you really need it, as the sysvinit boot sequence in Debian has several hardware-specific bugs on Linux, caused by the fact that it is unpredictable when hardware devices show up during boot. On the other hand, the new default boot system still has a few rough edges that I hope will be fixed before Jessie is released.

22 November, 2014 12:00AM

November 21, 2014

Joey Hess

propelling containers

Propellor has supported docker containers for a "long" time, and it works great. This week I've worked on adding more container support.

docker containers (revisited)

The syntax for docker containers has changed slightly. Here's how it looks now:

example :: Host
example = host "example.com"
    & Docker.docked webserverContainer

webserverContainer :: Docker.Container
webserverContainer = Docker.container "webserver" "joeyh/debian-stable"
    & os (System (Debian (Stable "wheezy")) "amd64")
    & Docker.publish "80:80"
    & Apt.serviceInstalledRunning "apache2"
    & alias "www.example.com"

That makes example.com have a web server in a docker container, as you'd expect, and when propellor is used to deploy the DNS server it'll automatically make www.example.com point to the host (or hosts!) where this container is docked.

I use docker a lot, but I have drunk little of the Docker KoolAid. I’m not keen on using random blobs created by random third parties using either unreproducible methods, or the weirdly underpowered dockerfiles. (As for vast complicated collections of containers that each run one program and talk to one another etc ... I’ll wait and see.)

That's why propellor runs inside the docker container and deploys whatever configuration I tell it to, in a way that's both replicatable later and lets me use the full power of Haskell.

Which turns out to be useful when moving on from docker containers to something else...

systemd-nspawn containers

Propellor now supports containers using systemd-nspawn. It looks a lot like the docker example.

example :: Host
example = host "example.com"
    & Systemd.persistentJournal
    & Systemd.nspawned webserverContainer

webserverContainer :: Systemd.Container
webserverContainer = Systemd.container "webserver" chroot
    & Apt.serviceInstalledRunning "apache2"
    & alias "www.example.com"
  where
    chroot = Chroot.debootstrapped (System (Debian Unstable) "amd64") Debootstrap.MinBase

Notice how I specified the Debian Unstable chroot that forms the basis of this container. Propellor sets up the container by running debootstrap, boots it up using systemd-nspawn, and then runs inside the container to provision it.

Unlike docker containers, systemd-nspawn containers use systemd as their init, and it all integrates rather beautifully. You can see the container listed in systemctl status, including the services running inside it, use journalctl to examine its logs, etc.

But no, systemd is the devil, and docker is too trendy...

chroots

Propellor now also supports deploying good old chroots. It looks a lot like the other containers. Rather than repeat myself a third time, and because we don't really run webservers inside chroots much, here's a slightly different example.

example :: Host
example = host "mylaptop"
    & Chroot.provisioned (buildDepChroot "git-annex")

buildDepChroot :: Apt.Package -> Chroot.Chroot
buildDepChroot pkg = Chroot.debootstrapped system Debootstrap.buildd dir
    & Apt.buildDep pkg
  where
    dir = "/srv/chroot/builddep/" ++ pkg
    system = System (Debian Unstable) "amd64"

Again this uses debootstrap to build the chroot, and then it runs propellor inside the chroot to provision it (btw without bothering to install propellor there, thanks to the magic of bind mounts and completely linux distribution-independent packaging).

In fact, the systemd-nspawn container code reuses the chroot code, and so turns out to be really rather simple. 132 lines for the chroot support, and 167 lines for the systemd support (which goes somewhat beyond the nspawn containers shown above).

Which leads to the hardest part of all this...

debootstrap

Making a propellor property for debootstrap should be easy. And it was, for Debian systems. However, I have crazy plans that involve running propellor on non-Debian systems, to debootstrap something, and installing debootstrap on an arbitrary linux system is ... too hard.

In the end, I needed 253 lines of code to do it, which is barely one order of magnitude less code than the size of debootstrap itself. I won’t go into the ugly details, but this could be made a lot easier if debootstrap catered more to being used outside of Debian.

closing

Docker and systemd-nspawn have different strengths and weaknesses, and there are sure to be more container systems to come. I'm pleased that Propellor can add support for a new container system in a few hundred lines of code, and that it abstracts away all the unimportant differences between these systems.

PS

Seems likely that systemd-nspawn containers can be nested to any depth. So, here's a new kind of fork bomb!

infinitelyNestedContainer :: Systemd.Container
infinitelyNestedContainer = Systemd.container "evil-systemd"
    (Chroot.debootstrapped (System (Debian Unstable) "amd64") Debootstrap.MinBase)
    & Systemd.nspawned infinitelyNestedContainer

Strongly typed purely functional container deployment can only protect us against a certain subset of all badly thought out systems. ;)

21 November, 2014 09:33PM

Niels Thykier

Release Team unblock queue flushed

At the start of this week, I wrote that we had 58 unblock requests open (of which 25 were tagged moreinfo).  Thanks to an extra effort from the Release Team, we are now down to 25 open unblocks – of which 18 are tagged moreinfo.

We have now resolved 442 unblock requests (out of a total of 467).  The arrival rate has also declined to an average of ~18 new unblock requests a day (over 26 days), and our closing rate has increased to ~17.

With all of this awesomeness, some of us are now more than ready to have a well-deserved weekend to recharge our batteries.  Meanwhile, feel free to keep the RC bug fixes flowing into unstable.


21 November, 2014 08:46PM by Niels Thykier

Richard Hartmann

Release Critical Bug report for Week 47

There's a BSP this weekend. If you're interested in remote participation, please join #debian-muc on irc.oftc.net.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1213 (Including 210 bugs affecting key packages)
    • Affecting Jessie: 342 (key packages: 152) That's the number we need to get down to zero before the release. They can be split into two big categories:
      • Affecting Jessie and unstable: 260 (key packages: 119) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 37 bugs are tagged 'patch'. (key packages: 20) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 12 bugs are marked as done, but still affect unstable. (key packages: 3) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 211 bugs are neither tagged patch, nor marked done. (key packages: 96) Help make a first step towards resolution!
      • Affecting Jessie only: 82 (key packages: 33) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 65 bugs are in packages that are unblocked by the release team. (key packages: 26)
        • 17 bugs are in packages that are not unblocked. (key packages: 7)

How do we compare to the Squeeze release cycle?

Week  Squeeze        Wheezy         Jessie
43    284 (213+71)   468 (332+136)  319 (240+79)
44    261 (201+60)   408 (265+143)  274 (224+50)
45    261 (205+56)   425 (291+134)  295 (229+66)
46    271 (200+71)   401 (258+143)  427 (313+114)
47    283 (209+74)   366 (221+145)  342 (260+82)
48    256 (177+79)   378 (230+148)
49    256 (180+76)   360 (216+155)
50    204 (148+56)   339 (195+144)
51    178 (124+54)   323 (190+133)
52    115 (78+37)    289 (190+99)
1     93 (60+33)     287 (171+116)
2     82 (46+36)     271 (162+109)
3     25 (15+10)     249 (165+84)
4     14 (8+6)       244 (176+68)
5     2 (0+2)        224 (132+92)
6     release!       212 (129+83)
7     release+1      194 (128+66)
8     release+2      206 (144+62)
9     release+3      174 (105+69)
10    release+4      120 (72+48)
11    release+5      115 (74+41)
12    release+6      93 (47+46)
13    release+7      50 (24+26)
14    release+8      51 (32+19)
15    release+9      39 (32+7)
16    release+10     20 (12+8)
17    release+11     24 (19+5)
18    release+12     2 (2+0)

Graphical overview of bug stats thanks to azhag:

21 November, 2014 08:31PM by Richard 'RichiH' Hartmann

Jonathan Wiltshire

On kFreeBSD and FOSDEM

Boy I love rumours. Recently I’ve heard two, which I ought to put to rest now everybody’s calmed down from recent events.

kFreeBSD isn’t an official Jessie architecture because <insert systemd-related scare story>

Not true.

Our sprint at ARM (who kindly hosted and caffeinated us for four days) was timed to coincide with the Cambridge Mini-DebConf 2014. The intention was that this would save on travel costs for those members of the Release Team who wanted to attend the conference, and give us a jolly good excuse to actually meet up. Winners all round.

It also had an interesting side-effect. The room we used was across the hall from the lecture theatre being used as hack space and, later, the conference venue, which meant everybody attending during those two days could see us locked away there (and yes, we were in there all day for two days solid, except for lunch times and coffee missions). More than one conference attendee remarked to me in person that it was interesting for them to see us working (although of course they couldn’t hear what we were discussing), and that they hadn’t appreciated before how much time and effort goes into our meetings.

Most of our first morning was taken up with the last pieces of architecture qualification, and that was largely the yes/no decision we had to make about kFreeBSD. And you know what? I don’t recall us talking about systemd in that context at all. Don’t forget kFreeBSD already had a waiver for a reduced scope in Jessie because of the difficulty in porting systemd to it.

It’s sadly impossible to capture the long and detailed discussion we had into a couple of lines of status information in a bits mail. If bits mails were much longer, people would be put off reading them, and we really really want you to take note of what’s in there. The little space we do have needs to be factual and to the point, and not include all the background that led us to a decision.

So no, the lack of an official Jessie release of kFreeBSD has very little, if anything, to do with systemd.

Jessie will be released during (or even before) FOSDEM

Not necessarily true.

Debian releases are made when they’re ready. That sets us apart from lots of other distributions, and is a large factor in our reputation for stability. We may have a target date in mind for a freeze, because that helps both us and the rest of the project plan accordingly. But we do not have a release date in mind, and will not do so until we get much closer to being ready. (Have you squashed an RC bug today?)

I think this rumour originated from the office of the DPL, but it’s certainly become more concrete than I think Lucas intended.

However, it is true that we’ve gone into this freeze with a seriously low bug count, because of lots of other factors. So it may indeed be that we end up in good enough shape to think about releasing near (or even at) FOSDEM. But rest assured, Debian 8 “Jessie” will be released when it’s ready, and even we don’t know when that will be yet.

(Of course, if we do release before then, you could consider throwing us a party. Plenty of the Release Team, FTP masters and CD team will be at FOSDEM, release or none.)


On kFreeBSD and FOSDEM is a post from: jwiltshire.org.uk

21 November, 2014 07:16PM by Jon

Gunnar Wolf

Status of the Debian OpenPGP keyring — November update

Almost two months ago I posted our keyring status graphs, showing the progress of the transition to >=2048-bit keys for the different active Debian keyrings. So, here are the new figures.

First, the Non-uploading keyring: We were already 100% transitioned. You will only notice a numerical increase: That little bump at the right is our dear friend Tássia finally joining as a Debian Developer. Welcome! \o/

As for the Maintainers keyring: We can see a sharp increase in 4096-bit keys. Four 1024-bit DM keys were migrated to 4096R, but we also had eight new DMs coming in. To them, also, welcome \o/.

Sadly, we had to remove a 1024-bit key, as Peter Miller passed away. So, in a 234-key universe, 12 new 4096R keys is a large bump!

Finally, our current-greatest worry — If for nothing else, for the size of the beast: The active Debian Developers keyring. We currently have 983 keys in this keyring, so it takes considerably more effort to change it.

But we have managed to push it noticeably.

This last upload saw a great deal of movement. We received only one new DD (but hey — welcome nonetheless! \o/ ). 13 DD keys were retired; as one of the maintainers of the keyring, of course this makes me sad — but then again, in most cases it's rather an acknowledgement of fact: Those keys' holders often state they had long not been really involved in the project, and the decision to retire was in fact timely. But the greatest bulk of movement was the key replacements: A massive 62 1024D keys were replaced with stronger ones. And, yes, the graph changed quite abruptly:

We still have a bit over one month to go for our cutoff line, where we will retire all 1024D keys. It is important to say we will not retire the affected accounts, mark them as MIA, nor anything like that. If you are a DD and only have a 1024D key, you will still be a DD, but you will be technically unable to do work directly. You can still upload your packages or send announcements to regulated mailing lists via sponsor requests (although you will be unable to vote).

Speaking of votes: We have often said that we believe the bulk of the short keys belong to people not really active in the project anymore. Not all of them, sure, but a big proportion. We just had a big, controversial GR vote with one of the highest voter turnouts in Debian's history. I checked the GR's tally sheet, and the results are interesting: Please excuse my ugly bash, but I'm posting this so you can play with similar runs on different votes and points in time using the public keyring Git repository:

$ git checkout 2014.10.10
$ for KEY in $( for i in $( grep '^V:' tally.txt |
                            awk '{print "<" $3 ">"}' )
                do
                    grep $i keyids | cut -f 1 -d ' '
                done )
  do
      if [ -f debian-keyring-gpg/$KEY -o -f debian-nonupload-gpg/$KEY ]
      then
          gpg --keyring /dev/null --keyring debian-keyring-gpg/$KEY \
              --keyring debian-nonupload-gpg/$KEY --with-colons \
              --list-key $KEY 2>/dev/null \
              | head -2 | tail -1 | cut -f 3 -d :
      fi
  done | sort | uniq -c
     95 1024
     13 2048
      1 3072
    371 4096
      2 8192

So, as of mid-October: 387 out of the 482 votes (80.3%) were cast by developers with >=2048-bit keys, and 95 (19.7%) were cast by short keys.

If we were to run the same vote with the new active keyring, 417 votes would have been cast with >=2048-bit keys (87.2%), and 61 with short keys (12.8%). We would have four fewer votes, as their holders retired:

     61 1024
     14 2048
      2 3072
    399 4096
      2 8192

So, let’s hear it for November/December. How much can we push down that pesky yellow line?

Disclaimer: Any inaccuracy due to bugs in my code is completely my fault!

21 November, 2014 06:29PM by gwolf

Daniel Pocock

PostBooks 4.7 packages available, xTupleCon 2014 award

I recently updated the PostBooks packages in Debian and Ubuntu to version 4.7. This is the version that was released in Ubuntu 14.10 (Utopic Unicorn) and is part of the upcoming Debian 8 (jessie) release.

Better prospects for Fedora and RHEL/CentOS/EPEL packages

As well as getting the packages ready, I've been in contact with xTuple helping them generalize their build system to make packaging easier. This has eliminated the need to patch the makefiles during the build. As well as making it easier to support the Debian/Ubuntu packages, this should make it far easier for somebody to create a spec file for RPM packaging too.

Debian wins a prize

While visiting xTupleCon 2014 in Norfolk, I was delighted to receive the Community Member of the Year award which I happily accepted not just for my own efforts but for the Debian Project as a whole.

Steve Hackbarth, Director of Product Development at xTuple, myself and the impressive Community Member of the Year trophy

This is a great example of the productive relationships that exist between Debian, upstream developers and the wider free software community and it is great to be part of a team that can synthesize the work from so many other developers into ready-to-run solutions on a 100% free software platform.

Receiving this award really made me think about all the effort that has gone into making it possible to apt-get install postbooks, and all the people who have collectively done far more work than myself to make this possible.

Here is a screenshot of the xTuple web / JSCommunicator integration. It was one of the highlights of xTupleCon, and gives a preview of the wide range of commercial opportunities that WebRTC is creating for software vendors to displace traditional telecommunications providers.

xTupleCon also gave me a great opportunity to see new features (like the xTuple / Drupal web shop integration) and hear about the success of consultants and their clients deploying xTuple/PostBooks in various scenarios. The product is extremely strong in meeting the needs of manufacturing and distribution and has gained a lot of traction in these industries in the US. Many of these features are equally applicable in other markets with a strong manufacturing industry such as Germany or the UK. However, it is also flexible enough to simply disable many of the specialized features and use it as a general purpose accounting solution for consulting and services businesses. This makes it a good option for many IT freelancers and support providers looking for a way to keep their business accounts in a genuinely open source solution with a strong SQL backend and a native Linux desktop interface.

21 November, 2014 02:12PM by Daniel.Pocock

Julien Danjou

Distributed group management and locking in Python with tooz

With OpenStack embracing the Tooz library more and more over the past year, I think it's a good time to write a bit about it.

A bit of history

A little more than a year ago, my colleague Yassine Lamgarchal, others at eNovance and I investigated how to solve a problem often encountered inside OpenStack: synchronization of multiple distributed workers. And while many people in our ecosystem continue to drive development by adding new bells and whistles, we made a point of solving new problems with a generic solution able to address the technical debt at the same time.

Yassine wrote down the first ideas of what should be the group membership service needed for OpenStack, identifying several projects that could make use of it. I presented this concept during an Oslo session at the OpenStack Summit in Hong Kong. It turned out that the idea was well-received, and the week following the summit we started the tooz project on StackForge.

Goals

Tooz is a Python library that provides a coordination API. Its primary goal is to handle groups and membership of these groups in distributed systems.

Tooz also provides another useful feature which is distributed locking. This allows distributed nodes to acquire and release locks in order to synchronize themselves (for example to access a shared resource).

The architecture

If you are familiar with distributed systems, you might be thinking that there are a lot of solutions already available to solve these issues: ZooKeeper, the Raft consensus algorithm or even Redis for example.

You'll be thrilled to learn that Tooz is not the result of the NIH syndrome, but is an abstraction layer on top of all these solutions. It uses drivers to provide the real functionalities behind, and does not try to do anything fancy.

Not all drivers have the same amount of functionality or robustness, but depending on your environment, any available driver might suffice. Like most of OpenStack, we let the deployers/operators/developers choose whichever backend they want to use, informing them of the potential trade-offs they will make.

So far, Tooz provides drivers based on:

  • ZooKeeper
  • memcached
  • redis
  • IPC

All drivers are distributed across processes. Some can be distributed across the network (ZooKeeper, memcached, redis…) and some are only available on the same host (IPC).

Also note that the Tooz API is completely asynchronous, allowing it to be more efficient, and potentially included in an event loop.

Features

Group membership

Tooz provides an API to manage group membership. The basic operations provided are: the creation of a group, the ability to join it, leave it and list its members. It's also possible to be notified as soon as a member joins or leaves a group.

Leader election

Each group can have a leader elected. Each member can decide if it wants to run for the election. If the leader disappears, another one is elected from the list of current candidates. It's possible to be notified of the election result and to retrieve the leader of a group at any moment.

Distributed locking

When trying to synchronize several workers in a distributed environment, you may need a way to lock access to some resources. That's what a distributed lock can help you with.
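To make this concrete, here is roughly what group membership and locking look like from Python. This is only a sketch based on the tooz API of the time; the memcached URL and the group, member and lock names are invented for the example:

from tooz import coordination

# Pick a driver via a URL; zookeeper://, memcached://, etc. all work.
coordinator = coordination.get_coordinator(
    'memcached://localhost:11211', b'worker-1')
coordinator.start()

# Group membership: create the group if needed, join it, list members.
# The API is asynchronous; .get() waits for the result.
try:
    coordinator.create_group(b'workers').get()
except coordination.GroupAlreadyExist:
    pass
coordinator.join_group(b'workers').get()
print(coordinator.get_members(b'workers').get())

# Distributed locking: only one member at a time holds the lock.
with coordinator.get_lock(b'my-resource'):
    pass  # access the shared resource here

coordinator.leave_group(b'workers').get()
coordinator.stop()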

Adoption in OpenStack

Ceilometer is the first project in OpenStack to use Tooz. It has replaced part of the old alarm distribution system, where RPC was used to detect active alarm evaluator workers. The group membership feature of Tooz was leveraged by Ceilometer to coordinate between alarm evaluator workers.

Another new feature, part of the Juno release of Ceilometer, is the distribution of the central agent's polling tasks among multiple workers. There's again a group membership problem in knowing which nodes are online and available to receive polling tasks, so Tooz is also being used here.

The Oslo team has accepted the adoption of Tooz during this release cycle. That means that it will be maintained by more developers, and will be part of the OpenStack release process.

This opens the door to push Tooz further in OpenStack. Our next candidate would be to write a service group driver for Nova.

The complete documentation for Tooz is available online and has examples for the various features described here, go read it if you're curious and adventurous!

21 November, 2014 12:10PM by Julien Danjou

Jonathan Wiltshire

Getting things into Jessie (#6)

If it’s not in an unblock bug, we probably aren’t reading it

We really, really prefer unblock bugs to anything else right now (at least, for things relating to Jessie). Mails on the list get lost, and IRC is dubious. This includes for pre-approvals.

It’s perfectly fine to open an unblock bug and then find it’s not needed, or the question is really about something else. We’d rather that than your mail get lost between the floorboards. Bugs are easy to track, have metadata so we can keep the status up to date in a standard way, and are publicised in all the right places. They make a great to-do list.

By all means twiddle with the subject line, for example appending “(pre-approval)” so it’s clearer – though watch out for twiddling too much, or you’ll confuse udd.

(to continue my theme: asking you to file a bug instead costs you one round-trip; don’t forget we’re doing it at scale)



Getting things into Jessie (#6) is a post from: jwiltshire.org.uk

21 November, 2014 10:30AM by Jon

November 20, 2014

Steve McIntyre

UEFI Debian CDs for Jessie...

So, my work for Wheezy gave us working amd64 UEFI installer images. Yay! Except: there were a few bugs that remained, and also places where we could deal better with some of the more crappy UEFI implementations out there. But things have improved since then, and we should be better for Jessie in quite a few ways.

First of all, Colin and the other Grub developers have continued working hard and quite a lot of the old bugs in this area look to be fixed. I'm hoping we're not going to see so many "UEFI boot gives me a blank black screen" type of problems now.

For those poor unfortunates with Windows 7 on their machines, using BIOS boot despite having UEFI support in their hardware, I've fixed a long-standing bug (#763127) that could leave people with broken systems, unable to dual boot.

We've fixed a silly potential permissions bug in how the EFI System Partition is mounted: (#770033).

Next up, I'm hoping to add a workaround for some of the broken UEFI implementations, by adding support in our Grub packages (and in d-i) for forcing the installation of a copy of grub-efi in the removable media path. See #746662 for more of the details. It's horrid to be doing this, but it's just about the best thing we can do to support people with broken firmware.

Finally, I've been getting lots of requests for adding i386 (32-bit x86) UEFI support in our official images. Back in the Wheezy development cycle, I had test images that worked on i386, but decided not to push that support into the release. There were worries about potentially critical bugs that could be tickled on some hardware, plus there were only very few known i386 UEFI platforms at the time; the risk of damage outweighed the small proportion of users, IMHO. However, I'm now revisiting that decision. The potentially broken machines are now 2 years older, and so less likely to be in use. Also, Intel have released some horrid platform concoction around the Bay Trail CPU: a 64-bit CPU (that really wants a 64-bit kernel), but running a 32-bit UEFI firmware with no BIOS Compatibility Mode. Recent kernels are able to cope with this mess, but at the moment there is no sensible way to install Debian on such a machine. I'm hoping to fix that next (#768461). It's going to be awkward again, needing changes in several places too.

You can help! Same as 2 years ago, I'll need help testing some of these images. Particularly for the 32-bit UEFI support, I currently have no relevant hardware myself. That's not going to make it easy... :-/

I'll start pushing unofficial Jessie EFI test images shortly.

20 November, 2014 09:59PM

Tiago Bortoletto Vaz

Things to celebrate

Turning 35 today, I then got the great news that the person with whom I share my dreams has just become a Debian member! Isn't it beautiful? Thanks Tássia, thanks Debian! I should also thank the friends who made an ideal ambience for tonight's fun.

20 November, 2014 08:32PM by Tiago Bortoletto Vaz

Neil McGovern

Barbie the Debian Developer

Some people may have seen recently that the Barbie series has a rather sexist book out about Barbie the Computer Engineer. Fortunately, there’s a way to improve this by making your own version.

Thus, I made a short version about Barbie the Debian Developer and init system packager.

(For those who don’t know me, this is satirical. Any resemblance to people is purely coincidental.)

Edit: added text in alt tags. Also, hai reddit!

  • One day, Debian Developer Barbie decided to package and upload a new init system to Debian, called 'systemd'. I hope everyone else will find it useful, she thought.
  • Oh no says Skipper! You'll never take my init system away from me! It's horrendous and Not The Unix Way! Oh dear said Barbie, What have I let myself in to?
  • Skipper was most upset, and decided that this would not do. It's off to the technical committee with this. They'll surely see sense.
  • Oh no! What's this? The internet decided that the Technical Committee needed to also know everyone's individual views! Bad Internet!
  • There was much discussion and consideration. Opinions were reviewed, rows were had, and months passed. Eventually, a decision was agreed upon.
  • Barbie was successful! The will of the Technical Committee was that systemd would be the default! But wait...
  • Skipper still wasn't happy. We need to make sure this never affects me! I'm going to call for a General Resolution!
  • And so, Ms Devotee was called in to look at the various options. She said that the arguments must stop, and we should all accept the result of the general resolution.
  • The numbers turned and the vote was out. We should simply be most excellent to each other said Ms Devotee. I'm not going to tell you what you should or should not do.
  • Over the next year, the project was able to heal itself and eventually Barbie and Skipper decided to make amends. Now let's work at making Debian better!

20 November, 2014 08:00PM by Neil McGovern

Gunnar Wolf

UNAM. Viva México, viva en paz.

We have had terrible months in Mexico; I don't know how much has appeared about our country in the international media. The last incidents started on the last days of September, when 43 students at a school for rural teachers were forcefully disappeared (in our Latin American countries, this means they were taken by force and no authority can yet prove whether they are alive or dead; forceful disappearance is one of the saddest and most recognized traits of the brutal military dictatorships South America had in the 1970s) in the Iguala region (Guerrero state, South of the country) and three were killed on site. An Army regiment was stationed few blocks from there and refused to help.

And yes, we live in a country where (incredibly) this news by itself would not seem so unheard of... But in this case, there is ample evidence they were taken by the local police forces, not by a gang of (assumed) wrongdoers. And they were handed over to a very violent gang afterwards. Several weeks later, with far from a thorough investigation, we were told they were killed, burnt and thrown to a river.

The Iguala city mayor ran away, and was later captured, but it's not clear why he was captured at two different places. The Guerrero state governor resigned and a new governor was appointed. But this was not the result of a single person behaving far from what their voters would expect — it's a symptom of a broken society where policemen will kill when so ordered, where military personnel will look away when confronted with the obvious, and where the drug dealers have captured vast regions of the country in which they are stronger than the formal powers.

And then, instead of dealing with the issue personally as everybody would expect, the president goes on a commercial mission to China. Oh, to fix some issues with a building company that, coincidentally or not, was selling a super-luxury house to his wife. A house that she, several days later, decided to sell because it was tarnishing her family's honor and image.

And while the president is in China, the person who dealt with the social pressure and told us about the probable (but not proven!) horrible crime where the "bad guys" for some strange and yet unknown reason (even with tens of them captured already) decided to kill and burn and dissolve and disappear 43 future rural teachers presents his version, and finishes his speech saying that "I'm already tired of this topic".

Of course, our University is known for its solidarity with social causes; students in our different schools are the first activists in many protests, and we have had a very tense time as the protests are at home here at the university. This last weekend, supposed policemen entered our main campus with a stupid, unbelievable argument (they were looking for a phone reported as stolen three days earlier), got into an argument with some students, and ended up firing shots at the students; one of them was wounded in the leg.

And the university is now almost under siege: There are policemen surrounding us. We are working as usual, and will most likely finish the semester with normality, but the intimidation (in a country where seeing a policeman is practically never a good sign) is strong.

And... Oh, I could go on a lot. Things feel really desperate and out of place.

Today I will join probably tens or hundreds of thousands of Mexicans sick of this simulation, sick of this violence, in a demonstration downtown. What will this achieve? Very little, if anything at all. But we cannot just sit here watching how things go from bad to worse. I do not accept to live in a state of exception.

So, this picture is just right: A bit over a month ago, two dear friends from Guadalajara city came, and we had a nice walk in the University. Our national university is not only huge, it's also beautiful and loaded with sights. And being so close to home, it's our favorite place to go with friends to show around. This is a fragment of the beautiful mural in the Central Library. And, yes, the University stands for "Viva México". And the university stands for "Peace". And we need it all. Desperately.

20 November, 2014 06:38PM by gwolf

Steve Kemp

An experiment in (re)building Debian

I've rebuilt many Debian packages over the years, largely to fix bugs which affected me, or to add features which didn't make the cut in various releases. For example I made a package of fabric available for Wheezy, since it wasn't in the release. (Happily in that case a wheezy-backport became available. Similar cases involved repackaging gtk-gnutella when the protocol changed and the official package in the lenny release no longer worked.)

I generally release a lot of my own software as Debian packages, although I'll admit I've started switching to publishing Perl-based projects on CPAN instead - from which they can be debianized via dh-make-perl.

One thing I've not done for many years is a mass-rebuild of Debian packages. I did that once upon a time when I was trying to push for the stack-smashing-protection inclusion all the way back in 2006.

Having had a few interesting emails this past week I decided to do the job for real. I picked a random server of mine, rsync.io, which stores backups, and decided to rebuild it using "my own" packages.

The host has about 300 packages installed upon it:

root@rsync ~ # dpkg --list | grep ^ii | wc -l
294

I got the source to every package, patched the changelog to bump the version, and rebuilt every package from source. That took about three hours.
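The post doesn't show the mechanics, but the loop presumably looked something like this — a rough sketch using devscripts' dch to append the suffix, with package selection and error handling omitted:

for pkg in $(dpkg-query -W -f '${Package}\n'); do
    apt-get source "$pkg"                   # fetch and unpack the source
    (
        cd "$pkg"-*/ &&
        dch --local skx "Local rebuild" &&  # 1.6-1 becomes 1.6-1skx1
        dpkg-buildpackage -us -uc           # build, unsigned
    )
done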

Every package has a "skx1" suffix now, and all the build-dependencies were also determined by magic and rebuilt:

root@rsync ~ # dpkg --list | grep ^ii | awk '{ print $2 " " $3}'| head -n 4
acpi 1.6-1skx1
acpi-support-base 0.140-5+deb7u3skx1
acpid 1:2.0.16-1+deb7u1skx1
adduser 3.113+nmu3skx1

The process was pretty quick once I started getting more and more of the packages built. The only shortcut was not explicitly updating the dependencies to rely upon my updates. For example bash has a Debian control file that contains:

Depends: base-files (>= 2.1.12), debianutils (>= 2.15)

That should have been updated to say:

Depends: base-files (>= 2.1.12skx1), debianutils (>= 2.15skx1)

However I didn't do that, because I suspect if I did want to do this decently, and I wanted to share the source-trees, and the generated packages, the way to go would not be messing about with Debian versions instead I'd create a new Debian release "alpha-apple", "beta-bananna", "crunchy-carrot", "dying-dragonfruit", "easy-elderberry", or similar.

In conclusion: Importing Debian packages into git, much like Ubuntu did with bzr, is a fun project, and it doesn't take much to mass-rebuild if you're not making huge changes. Whether it is worth doing is an entirely different question of course.

20 November, 2014 01:28PM

Daniel Pocock

Is Amnesty giving spy victims a false sense of security?

Amnesty International is getting a lot of attention with the launch of a new tool to detect government and corporate spying on your computer.

I thought I would try it myself. I went to a computer running Microsoft Windows, an operating system that does not publish its source code for public scrutiny. I used the Chrome browser; users often express concern about Chrome sending data back to the vendor about the web sites they visit.

Without even installing the app, I would expect the Amnesty web site to recognise that I was accessing the site from a combination of proprietary software. Instead, I found a different type of warning.

Beware of Amnesty?

Instead, the only warning I received was from Amnesty's own cookies:

Even before I install the app to find out if the government is monitoring me, Amnesty is keen to monitor my behaviour themselves.

While cookies are used widely, their presence on a site like Amnesty's only further desensitizes Internet users to the downside risks of tracking technologies. By using cookies, Amnesty is effectively saying a little bit of tracking is justified for the greater good. Doesn't that sound eerily like the justification we often hear from governments too?

Is Amnesty part of the solution or part of the problem?

Amnesty is a well known and widely respected name when human rights are mentioned.

However, their advice that you can install an app onto a Windows computer or iPhone to detect spyware is like telling people that putting a seatbelt on a motorbike will eliminate the risk of death. It would be much more credible for Amnesty to tell people to start by avoiding cloud services altogether, browse the web with Tor and only use operating systems and software that come with fully published source code under a free license. Only when 100% of the software on your device is genuinely free and open source can independent experts exercise the freedom to study the code and detect and remove backdoors, spyware and security bugs.

It reminds me of the advice Kim Kardashian gave after the Fappening, telling people they can continue trusting companies like Facebook and Apple with their private data just as long as they check the privacy settings (reality check: privacy settings in cloud services are about as effective as a band-aid on a broken leg).

Write to Amnesty

Amnesty became famous for their letter writing campaigns.

Maybe now is the time for people to write to Amnesty themselves, thank them for their efforts and encourage them to take more comprehensive action.

Feel free to cut and paste some of the following potential ideas into an email to Amnesty:


I understand you may not be able to respond to every email personally but I would like to ask you to make a statement about these matters on your public web site or blog.

I understand it is Amnesty's core objective to end grave abuses of human rights. Electronic surveillance, due to its scale and pervasiveness, has become a grave abuse in itself, and in a disturbing number of jurisdictions it is an enabler for other types of grave violations of human rights.

I'm concerned that your new app Detekt gives people a false sense of security and that your campaign needs to be more comprehensive to truly help people and humanity in the long term.

If Amnesty is serious about solving the problems of electronic surveillance by government, corporations and other bad actors, please consider some of the following:

  • Instead of displaying a cookie warning on Amnesty.org, display a warning to users who access the site from a computer running closed-source software and give them a link to download an open source web browser like Firefox.
  • Redirect all visitors to your web site to use the HTTPS encrypted version of the site.
  • Use spyware-free open source software such as the Linux operating system and LibreOffice for all of Amnesty's own operations, make a public statement about your use of free open source software, and mention this in the closing paragraph of all press releases relating to surveillance topics.
  • Encourage Amnesty donors, members and supporters to choose similar software, especially when engaging in any political activities.
  • Make a public statement that Amnesty will not use cloud services such as SalesForce or Facebook to store, manage or interact with data relating to members, donors or other supporters.
  • Encourage the public to move away from centralized cloud services such as those provided by their smartphone or social networks and use de-centralized or federated services such as XMPP chat.

Given the immense threat posed by electronic surveillance, I'd also like to call on Amnesty to allocate at least 10% of annual revenue towards software projects releasing free and open source software that offers the public an alternative to the centralized cloud.


While publicity for electronic privacy is great, I hope Amnesty can go a step further and help people use trustworthy software from the ground up.

20 November, 2014 12:48PM by Daniel.Pocock

Jonathan Wiltshire

Getting things into Jessie (#5)

Don’t assume another package’s unblock is a precedent for yours

Sometimes we’ll use our judgement when granting an unblock to a less-than-straightforward package. Lots of factors go into that, including the regression risk, desirability, impact on other packages (of both acceptance and refusal) and trust.

However, a judgement call on one package doesn’t automatically mean that the same decision will be made for another. Every single unblock request we get is called on its own merits.

Do by all means ask about your package in light of another. There may be cross-over that makes your change desirable as well.

Don’t take it personally if the judgement call ends up being not what you expected.


Getting things into Jessie (#5) is a post from: jwiltshire.org.uk

20 November, 2014 10:30AM by Jon

Stefano Zacchiroli

Thoughts on the Init System Coupling GR

on perceived hysteria and silent sanity

As you probably already know by now, the results of the Debian init system coupling general resolution (GR) look like this:

Init system coupling GR: results (an arrow from A to B means that voters preferred A to B by that margin)

Some random thoughts about them:

  • The turnout has been the highest since 2010 DPL elections and the 2nd highest among all GRs (!= DPL elections) ever. The highest among all GRs dates back to 2004 and was about dropping non-free. In absolute terms this vote scores even better: it is the GR with the highest number of voters ever.

    Clearly there was a lot of interest within the project about this vote. The results appear to be as representative of the views of project members as we have been able to get in the second half of Debian history.

  • There is a total ordering of options (which is not always the case with our voting system). Starting with the winning option, each option in the results beats every subsequent option. The winning option ("General resolution is not required") beats the runner-up ("Support for other init systems is recommended", i.e., "you SHOULD NOT require a specific init") by a large margin: 100 votes, ~20.7% of the voters. The winning option wins over the further options by increasingly large margins: 173 votes (~35.8%) against "Packages may require specific init systems if maintainers decide" (the MAY option); 176 (~36.4%) against "Packages may not require a specific init system" (the MUST NOT option); 263 (~54.5%) against "Further discussion" (the "let's keep on flaming" option).

    While judging from Debian mailing lists and news sites you might have gotten the impression that the project was evenly split on init system matters, at least w.r.t. the matter on the ballot that doesn't seem to be the case.

  • The winning option is not as crazy as its label might imply (voting to declare that the vote was not required? WTH?). What the winning option actually says is more articulated than that; quoting from the ballot (highlight mine):

    Regarding the subject of this ballot, the Project affirms that the procedures for decision making and conflict resolution are working adequately and thus a General Resolution is not required.

    With this GR the Debian Project affirms that the procedures we have used to decide the default init system for Jessie and to arbitrate the ensuing conflicts are just fine. People might flame and troll debian-devel as much as they want (actually, I'm pretty sure we would all like them to stop, but that matter wasn't on the ballot so you'll have to take my word for it). People might write blog posts and make headlines corroborating the impression that Debian is still being torn apart by ongoing init system battles. But this vote says instead that the large majority of project members thinks our decision making and conflict-arbitration procedures, which most prominently include the Debian Technical Committee, have served us "adequately" well over the past troubled months.

    That of course doesn't mean that everyone in Debian is happy about every single recent decision, otherwise we wouldn't have had this GR in the first place. But it does mean that we consider our procedures good enough to (a) avoid getting in their way with a project-wide vote, and (b) keep on trusting them for the foreseeable future.

  • [ It is not the main focus of this post, but if you care specifically about the implications of this GR on systemd adoption in Debian, I recommend reading this excellent GR commentary by Russ Allbery. ]

My take home message is that we are experiencing a huge gap between the public perception of the state of Debian (both from within and from without the project) and the actual beliefs of the silent majority of people that make Debian with their work, day after day.

In part this is old news. The most "senior" members of the project will remember that the topic of "vocal minorities vs silent majority" was a recurrent one in Debian 10+ years ago, when flames were periodically ravaging the project. Since then Debian has grown a lot though, and we are now part of a much larger and varied ecosystem. We are now at a scale at which there are plenty of FOSS "mass-media" covering daily what happens in Debian, inducing feedback loops with our own perception of ourselves which we do not fully grok yet. This is a new factor in the perception gap. This situation is not intrinsically bad, nor is there blame to assign here: after all influential bloggers, news sites, etc., just do their job. And their attention also testifies to the huge interest that there is around Debian and our choices.

But we still need to adapt and learn to take perceived hysteria with a pinch (or two) of salt. It might just be time for our decennial check-up. Time to remind ourselves that our ways of doing things might in fact still be much more sane than sometimes we tend to believe.

We went on 10+ years ago, after monumental flames. It looks like we are now ready to move on again, putting The Era of the Great systemd Hysteria™ behind us.

20 November, 2014 08:59AM

Matthew Palmer

Multi-level prefix delegation is not a myth! I've seen it!

Unless you’ve been living under a firewalled rock, you know that IPv6 is coming. There’s also a good chance that you’ve heard that IPv6 doesn’t have NAT. Or, if you pay close attention to the minutiae of IPv6 development, you’ve heard that IPv6 does have NAT, but you don’t have to (and shouldn’t) use it.

So let’s say we’ll skip NAT for IPv6. Fair enough. However, let’s say you have this use case:

  1. A bunch of containers that need Internet access…

  2. That are running in a VM…

  3. On your laptop…

  4. Behind your home router!

For IPv4, you’d just layer on the NAT, right? While SIP and IPsec might have kittens trying to work through three layers of NAT, for most things it’ll Just Work.

In the Grand Future of IPv6, without NAT, how the hell do you make that happen? The answer is “Prefix Delegation”, which allows routers to “delegate” management of a chunk of address space to downstream routers, and allow those downstream routers to, in turn, delegate pieces of that chunk to downstream routers.

In the case of our not-so-hypothetical containers-in-VM-on-laptop-at-home scenario, it would look like this:

  1. My “border router” (a DNS-323 running Debian) asks my ISP for a delegated prefix, using DHCPv6. The ISP delegates a /56 [1]. One /64 out of that is allocated to the network directly attached to the internal interface, and the rest goes into “the pool”, as /60 blocks (so I’ve got 15 of them to delegate, if required).

  2. My laptop gets an address on the LAN between itself and the DNS-323 via stateless auto-addressing (“SLAAC”). It also uses DHCPv6 to request one of the /60 blocks from the DNS-323. The laptop puts one /64 from that block as the address space for the “virtual LAN” (actually a Linux bridge) that connects the laptop to all my VMs, and puts the other 15 /64 blocks into a pool for delegation.

  3. The VM that will be running the set of containers under test gets an address on the “all VMs virtual LAN” via SLAAC, and then requests a delegated /64 to use for the “all containers virtual LAN” (another bridge, this one running on the VM itself) that the containers will each connect to themselves.

Now, almost all of this Just Works. The current releases of ISC DHCP support prefix delegation just fine, and a bit of shell script plumbing between the client and server seals the deal – the client needs to rewrite the server’s config file to tell it the netblock from which it can delegate.

Except for one teensy, tiny problem – routing. When the DHCP server delegates a netblock to a particular machine, the routing table needs to get updated so that packets going to that netblock actually get sent to the machine the netblock was delegated to. Without that, traffic destined for the containers (or the VM) won’t actually make it to its destination, and a one-way Internet connection isn’t a whole lot of use.
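To make that concrete: suppose the DNS-323 delegates 2001:db8:0:10::/60 to the laptop (a made-up prefix, purely for illustration). The border router then needs a route like the following, with the laptop's link-local address on their shared LAN as the next hop:

ip -6 route add 2001:db8:0:10::/60 via fe80::1234:56ff:fe78:9abc dev eth1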

I cannot understand why this problem hasn’t been tripped over before. It’s absolutely fundamental to the correct operation of the delegation system. Some people advocate running a dynamic routing protocol, but that’s a sledgehammer to crack a nut if ever I saw one.

Actually, I know this problem has been tripped over before, by OpenWrt. Their solution, however, was to use a PHP script to scan logfiles and add routes. Suffice it to say, that wasn’t an option I was keen on exploring.

Instead, I decided to patch ISC DHCP so that the server can run an external script to add the necessary routes, and perhaps modify firewall rules – and also to reverse the process when the delegation is released (or expired). If anyone else wants to play around with it, I’ve put it up on Github. I don’t make any promises that it’s the right way to do it, necessarily, but it works, and the script I’ve added in contrib/prefix-delegation-routing.rb shows how it can be used to good effect. By the way, if anyone knows how pull requests work over at ISC, drop me a line. From the look of their website, they don’t appear to accept (or at least encourage) external contributions.

So, that’s one small patch for DHCP, one giant leap for my home network.

  [1] The standard recommendation is for ISPs to delegate each end-user customer a /48 (giving 65,536 /64 networks); my ISP is being a little conservative in “only” giving me 256 /64s. It works fine for my purposes, but if you’re an ISP getting set for deploying IPv6, make life easy on your customers and give them a /48.

20 November, 2014 05:00AM by Matt Palmer (mpalmer@hezmatt.org)

Russ Allbery

Interpreting the Debian init system GR

I originally posted this in a thread on debian-private, but on further reflection it seems appropriate for a broader audience. So I'm posting it here, as well as on debian-project.

There is quite a lot of discussion in various places about what the recent GR result means. Some are concluding that systemd won in some way that implies Debian is not going to support other init systems, or at least that support for other init systems is in immediate danger. A lot of that analysis concludes that the pro-systemd "side" in Debian won some sort of conclusive victory.

I have a different perspective.

I think we just had a GR in which the Debian developer community said that we, as a community, would like to work through all of the issues around init systems together, as a community, rather than having any one side of the argument win unambiguously and impose its views on those who disagree.

There were options on the ballot that clearly required loose coupling and that clearly required tight coupling. The top two options did neither of those things. The second-highest option said, effectively, that we should feel free to exercise our technical judgement for our own packages, but should do so with an eye to enabling people to make different choices, and should merge their changes and contributions where possible. The highest option said that we don't even want to say that, and would instead prefer to work this whole thing out through discussion, respect, consensus, and mutual support, without giving *anyone* a clear mandate or project-wide blessing for their approach.

In other words, the way I choose to look at this GR is that the project as a whole just voted to take away the sticks that we were using to beat each other with.

In a way, we just chose the *hardest* option. We didn't make a simplifying technical decision that provides clear guidance to everyone. Instead, we made a complicating social decision that says that, sorry, there's no short cut to avoid having to talk to each other, respect each other's views, and try to reach workable collaborative compromises. Even though it's really hard, even though everyone is raw and upset, that's what the project as a whole is asking us to do.

Are we up to the challenge?

20 November, 2014 04:42AM

November 19, 2014

Thomas Goirand

Rotten tomatoes

There are many ways to interpret the last GR. The way I see it is how Joey hoped Debian was: the outcome of the poll shows that we don’t want to make technical decisions by voting. At the beginning of this GR, I was supportive of it, and thought it was a good thing to enforce the rule that we care for non-systemd setups. But I have slowly changed my mind. I still think it was a good idea to see what the community thought after such a long debate, and I now think this final outcome is awesome and couldn’t have been better. Science (and computer science) has never been about voting, otherwise the earth would be flat, without drifting continents.

So my hope is that the Debian project as a whole will allow itself to make mistakes, to try things iteratively, to err, and to go back on any technical decision if it doesn’t make sense anymore. When being asked something, it’s ok to reply: “I don’t know”, and it should be ok for the Debian project to have this alternative as one of the possible answers. I’m convinced that refusing to make a drastic choice at this point in time was exactly what we needed to do. And my hope is that Joey comes back after he realizes that we’ve all understood and embraced his position that science cannot be governed by polls.

For Stretch, I’m sure there are going to be a lot of new alternatives. Maybe uselessd, eudev and others. Maybe I’ll have a bit of time to work on OpenRC Debian integration myself (hum… I’m dreaming here…). Maybe something else. Let’s just wait. We have more than 300 bugs to fix before Jessie can be released. Let’s happily work on that together, and forget about the init systems for a while…

P.S.: Just to be on the safe side: the rotten tomatoes image was not about criticizing the persons who started the poll, whom I respect a lot, especially Ian, who I am convinced is trying to do his best for Debian (hug).

19 November, 2014 11:57PM by admin

Jonathan Dowland

Moving to Red Hat

I'm changing jobs!

From February 2015, I will be joining Red Hat as a Senior Software Engineer. I'll be based in Newcastle and working with the Middleware team. I'm going to be working with virtualisation, containers and Docker in particular. I know a few of the folks in the Newcastle office already, thanks to their relationship with the School of Computing Science, and I'm very excited to work with them, as well as the wider company. It's also going to be great to be contributing to the free software community as part of my day job.

This October marked my tenth year working for Newcastle University. I've had a great time, learned a huge amount, and made some great friends. It's going to be sad to leave, especially the School of Computing Science where I've spent the last four years, but it's the right time to move on. It's an area that I've been personally interested in for a long time and I'm very excited to be trying something new.

19 November, 2014 09:56PM

Miriam Ruiz

Awesome Bullying Lesson

A teacher in New York was teaching her class about bullying and gave them the following exercise to perform. She had the children take a piece of paper and told them to crumple it up, stamp on it and really mess it up, but not to rip it. Then she had them unfold the paper, smooth it out and look at how scarred and dirty it was. She then told them to tell it they’re sorry. Now even though they said they were sorry and tried to fix the paper, she pointed out all the scars they left behind, and that those scars will never go away no matter how hard they tried to fix it. That is what happens when a child bullies another child: they may say they’re sorry, but the scars are there forever. The looks on the faces of the children in the classroom told her the message hit home.

( Source: http://www.buzzfeed.com/mjs538/awesome-bullying-lesson-from-a-new-york-teacher )

19 November, 2014 09:36PM by Miry

Erich Schubert

What the GR outcome means for the users

The GR outcome is: no GR necessary
This is good news.
Because it says: Debian will remain Debian, as it was the last 20 years.
For 20 years, we have tried hard to build the "universal operating system", and to give users a choice. We've often had alternative software in the archive. Debian has come up with various tools to manage alternatives over time, and for example allows you to switch the system-wide Java.
You can still run Debian with sysvinit. There are plenty of Debian Developers who will fight for this to remain possible in the future.
The outcome of this resolution says:
  • Using a GR to force others is the wrong approach of getting compatibility.
  • We've offered choice before, and we trust our fellow developers to continue to work towards choice.
  • Write patches, not useless GRs. We're coders, not bureaucrats.
  • We believe we can do this, without making it a formal MUST requirement. Or even a SHOULD requirement. Just do it.
The sysvinit proponents may perceive this decision as having "lost". But they just don't realize they won, too. Because the GR may easily have backfired on them. The GR was not just "every package must support sysvinit"; it was also "every sysvinit package must support systemd". Here is an example: eudev, a non-systemd fork of udev. It is not yet in Debian, but I'm fairly confident that someone will make a package of it after the release, for the next Debian. Given the text of the GR, this package might have been inappropriate for Debian, unless it also supports systemd. But systemd has its own udev - there is no reason to force eudev to work with systemd, is there?
Debian is about choice. This includes the choice to support different init systems as appropriate. Not accepting a proper patch that adds support for a different init would be perceived as a major bug, I'm assured.
A GR doesn't ensure choice. It is only a hammer to annoy others. But it doesn't write the necessary code to actually ensure compatibility.
If GNOME at some point decides that systemd as pid 1 is a must, the GR would have left us only three options: A) fork the previous version, B) remove GNOME altogether, C) remove all other init systems (so that GNOME is compliant). Does this add choice? No.
Now, we can preserve choice: if GNOME decides to go systemd-pid1-only, we can both include a forked GNOME, and the new GNOME (depending on systemd, which is allowed without the GR). Or any other solution that someone codes and packages...
Don't fear that systemd will magically become a must. Trust that the Debian Developers will continue what they have been doing the last 20 years. Trust that there are enough Debian Developers that don't run systemd. Because they do exist, and they'll file bugs where appropriate. Bugs and patches, that are the appropriate tools, not GRs (or trolling).

19 November, 2014 07:58PM

EvolvisForge blog

Valid UTF-8 but invalid XML

Another PSA: something surprising about XML.

As you might all know, XML must be valid UTF-8 (or UTF-16 (or another encoding supported by the parser, but one which yields valid Unicode codepoints when read and converted)). Some characters, such as the ampersand ‘&’, must be escaped (“&#38;” or “&#x26;”, although “&amp;” may also work, depending on the domain) or put into a CDATA section (“<![CDATA[&]]>”).

A bit surprisingly, a literal backspace character (ASCII 08h, Unicode U+0008) is not allowed in the text. I filed a bug report against libxml2, asking it to please encode these characters.

A bit more research followed. Surprisingly, there are characters that are not valid in XML “documents” in any way, not even as entities or in CDATA sections. (xmlstarlet, by the way, errors out somewhat nicely for an unescaped literal or entity-escaped backspace, but behaves absolutely hilariously for a literal backspace in a CDATA section.) Basically, XML contains a whitelist for the following Unicode codepoints:

  • U+0009
  • U+000A
  • U+000D
  • U+0020‥U+D7FF
  • U+E000‥U+FFFD
  • U-00010000‥U-0010FFFF

Additionally, a certain number of codepoints are discouraged: U+007F‥U+0084 (IMHO wise), U+0086‥U+009F (also wise, but why allow U+0085?), U+FDD0‥U+FDEF (a bit surprising, but consistent with disallowing the backspace character), and the last two codepoints of every plane (U+FFFE and U+FFFF were already disallowed, but U-0001FFFE, U-0001FFFF, …, U-0010FFFF weren’t; this is extremely wise).

The suggestion seems to be to just strip these characters silently from the XML “document”.
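
Based on the whitelist above, a filter that does this stripping is a one-liner. Here is a sketch of mine (assuming a perl whose stdin/stdout are put into UTF-8 mode via -CSD):

xml_strip() {
    # remove every codepoint outside the XML 1.0 character whitelist
    perl -CSD -pe 's/[^\x09\x0A\x0D\x20-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}]//g'
}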

I’m a bit miffed about this, as I don’t even use XML directly (I’m extending a PHP “webapplication” that is a SOAP client and talks to a Java™ SOAP-WS) and would expect this to preserve my strings, but, oh my. I’ve forwarded the suggestion to just strip them silently to the libxml2 maintainers in the aforementioned bug report, for now, and may even hack that myself (on customer-paid time). More robust than hacking the PHP thingy to strip them first, anyway – I’ve got no control over the XML after all.

Sharing this so that more people know that not all UTF-8 is valid in XML. Maybe it saves someone else some time. (Now wondering whether to address this in my xhtml_escape shell function. Probably should. Meh.)

19 November, 2014 02:18PM by Thorsten Glaser

Thorsten Glaser

Debian init system freedom of choice GR worst possible outcome

Apparently (the actual results have not yet been published by the Secretary), the GR is over, and the worst possible option has won. This is an absolutely ambiguous result, while at the same time sending a clear signal that Debian is not to be trusted wrt. investing anything into it, right now.

Why is this? Simply: “GR not required” means that “whatever people do is probably right”. Besides this, we have one statement from the CTTE (“systemd is default init system for jessie. Period.”) and nothing else. This means that runit, or upstart, or file-rc, or uselessd, can be the default init system for zurg^H^H^H^Hstretch, or even the only one. It also means that the vast majority of Debian Developers are sheeple, neither clearly voting to preserve freedom of choice between init systems for its users, nor clearly voting to unambiguously support systemd and progress over compatibility and choice, nor clearly stating that systemd is important but supporting other init systems is still recommended. (I’ll not go into detail on how the proposer of the apparently winning choice recommends others to ignore ftpmaster constraints and licences, and even suggests to run a GR to soften up the DFSG interpretation.) I’d have voted this as “no, absolutely not” if it was possible to do so more strongly.

Judging from the statistics, the only thing I voted above NOTA/FD is the one least accepted by DDs, although the only other proposal I considered is the first-rated of them: support for other init systems is recommended but not required. What made me vote it below NOTA/FD was: “The Debian Project makes no statement at this time on sysvinit support beyond the jessie release.” This sentence made even this proposal unbearable, unacceptable, for people wanting to invest (time, money, etc.) into Debian.

Update: Formal result announced. So 358 out of 483 voting DDs decided to be sheeple (if I understand the eMail correctly). We had 1006 DDs with voting rights, which is a bit of a shame as well. That’s only 48.01%. I wonder what’s worse.

This opens up a very hard problem: I’m absolutely stunned by this and wondering what to do now. While there is no real alternative to Debian at $dayjob I can always create customised packages in my own APT repository, and – while it was great when those were eventually (3.1.17-1) accepted into Debian, even replacing the previous packages completely – it is simpler and quicker to not do so. While $dayjob benefits from having packages I work on inside Debian itself, even though I cannot always test all scenarios Debian users would need, some work reduction due to… reactions… already led to Debian losing out on Mediawiki for jessie and some additional suffering. With my own package repository, I can – modulo installing/debootstrap – serve my needs for $dayjob much quicker, easily, etc. and only miss out on absolutely delightful user feedback. But then, others could always package software I’m upstream of for Debian. Or, if I do not leave the project, continue doing so via QA uploads.

I’m also disappointed because I have invested quite some effort into trying to make Debian better (my idea to join as DD was “if I’ve got to use it, it better be damn good!”), into packaging software and convincing people at work that developing software as Debian packages, instead of thinking of packaging later (or not at all), was good. I’ve converted our versions of FusionForge and d-push to Debian packages, and it works pretty damn well. Sometimes it needs backports of my own, but that’s the corporate world, and no problem to an experienced DD. (I just feel bad we ($orkplace) lost some people, an FTP master among them, before this really gained traction.)

I’d convert to OpenBSD because, despite MirBSD’s history with them, they’re the only technically sound alternative, but apparently tedu (whom I respect technically, and who used to offer good advice to even me when asked, and who I think wouldn’t choose systemd himself) still (allying with the systemd “side” (I’m not against people being able to choose systemd, for the record, I just don’t want to be forced into it myself!)) has some sort of grudge against me. Plus, it’d be hard to get customers to follow. So, no alternative right now. But I’m used to managing my own forks of software; I’m doomed to basically hack and fix anything I use (I recently got someone who owns a licence to an old-enough Visual Studio version to transfer that to me, so I can hack on the Windows Mobile 6 version of Cachebox, to fix bugs in one of the geocaching applications I use. Now I “just” need to learn C# and the .NET Compact Framework. So I’m also used to some amount of pain.)

I’m still unresolved wrt. the attitude I should show the Debian project now. I had decided to just continue to live on, and work on the things I need done, but that was before this GR non-result. I absolutely cannot recommend that anyone “invest” in Debian (without sounding like a hypocrite), but I cannot recommend anything else either. I cannot justify leaving but don’t know if I want to stay. I think I should sleep on it.

One thing I promised, and thus will do, is to organise a meeting of the Debian/m68k people soonish. But then, major and important and powerful forces inside Debian still insist that Debian-Ports are not part of it… [Update: yes, DSA is moving it closer, thanks for that by the way, but that doesn’t mean anything to certain maintainers or the Release Team, although, the latter is actually understandable and probably sensible.] Yet, all forks of Debian now suffer from the systemd adoption in it instead of having a freedom-of-choice upstream. I’ve said, and I still feel, that systemd adoption should have been done in a Debian downstream / (pure?) blend, and maybe (parts of) GNOME removed from Debian itself for it. (Adding cgroups support to the m68k kernel to support systemd was done. I advised against it, on the grounds of memory and code size. But no downstream can remove it now.)

19 November, 2014 12:44PM by MirOS Developer tg (tg@mirbsd.org)

Rhonda D'Vine

The Pogues

Actually I was working already on a different music blog entry, but I want to get this one out. I was invited to join the Organic Dancefloor last Thursday. And it was a really great experience. A lot of nice people enjoying a dance evening of sort-of improvisational traditional folk dancing with influences from different parts of Europe. Three bands playing throughout the evening. I definitely plan to go there again. :)

Which brings me to the band I want to present to you now. They also play sort-of traditional songs, or at least with traditional instruments, and are quite danceable, too. This is about The Pogues. And these are the songs that I do enjoy listening to every now and then:

  • Medley: Don't meddle with the Medley. Rather dance to it.
  • Fairytale of New York: Well, we're almost in the season for it. :)
  • Streams of Whiskey: Also quite the style of song that they are known for and party with at concerts.

Like always, enjoy!

/music | permanent link | Comments: 2 | Flattr this

19 November, 2014 11:10AM by Rhonda

Jonathan Wiltshire

Getting things into Jessie (#4)

Make sure bug metadata is accurate

We use the metadata on the bugs you claim to have closed, as well as reading the bug report itself. You can help us out with severities, tags (e.g. blocks), and version information.

Don’t fall into the trap of believing that an unblock is a green light into Jessie. Britney still follows her validity rules, so if an RC bug appears to affect the unblocked version, it won’t migrate. Versions matter, not only the bug state (closed or open).


Getting things into Jessie (#4) is a post from: jwiltshire.org.uk | Flattr

19 November, 2014 10:12AM by Jon

Bastian Venthur

General Resolution is not required

The result for the General Resolution about the init system coupling is out and the result is, not quite surprisingly, “General Resolution is not required”.

When skimming over -devel or -private from time to time, one easily gets the impression that we are all a bunch of zealots, all too eager for fighting. People argue in the worst possible ways. People make bold statements about the future of Debian if solution X is preferred over Y. People call each other names. People leave the project.

At some point you realize we’re not all a bunch of zealots; it is usually only the same small subset of people involved in those discussions. It’s reassuring that we still seem to have a silent majority in Debian that, without much fuss, just does what it can to make Debian better. In this sense: A General Resolution is not required.

19 November, 2014 08:21AM by Bastian

Dirk Eddelbuettel

R / Finance 2015 Call for Papers

Earlier today, Josh sent the text below to the R-SIG-Finance list, and I updated the R/Finance website, including its Call for Papers page, accordingly.

We are once again very excited about our conference, thrilled about the four confirmed keynotes, and hope that many R / Finance users will not only join us in Chicago in May 2015, but also submit an exciting proposal.

So read on below, and see you in Chicago in May!

Call for Papers:

R/Finance 2015: Applied Finance with R
May 29 and 30, 2015
University of Illinois at Chicago, IL, USA

The seventh annual R/Finance conference for applied finance using R will be held on May 29 and 30, 2015 in Chicago, IL, USA at the University of Illinois at Chicago. The conference will cover topics including portfolio management, time series analysis, advanced risk tools, high-performance computing, market microstructure, and econometrics. All will be discussed within the context of using R as a primary tool for financial risk management, portfolio construction, and trading.

Over the past six years, R/Finance has included attendees from around the world. It has featured presentations from prominent academics and practitioners, and we anticipate another exciting line-up for 2015. This year will include invited keynote presentations by Emanuel Derman, Louis Marascio, Alexander McNeil, and Rishi Narang.

We invite you to submit complete papers in pdf format for consideration. We will also consider one-page abstracts (in txt or pdf format) although more complete papers are preferred. We welcome submissions for both full talks and abbreviated "lightning talks." Both academic and practitioner proposals related to R are encouraged.

All slides will be made publicly available at conference time. Presenters are strongly encouraged to provide working R code to accompany the slides. Data sets should also be made public for the purposes of reproducibility (though we realize this may be limited due to contracts with data vendors). Preference may be given to presenters who have released R packages.

The conference will award two (or more) $1000 prizes for best papers. A submission must be a full paper to be eligible for a best paper award. Extended abstracts, even if a full paper is provided by conference time, are not eligible for a best paper award. Financial assistance for travel and accommodation may be available to presenters, however requests must be made at the time of submission. Assistance will be granted at the discretion of the conference committee.

Please make your submission online at this link. The submission deadline is January 31, 2015. Submitters will be notified via email by February 28, 2015 of acceptance, presentation length, and financial assistance (if requested).

Additional details will be announced via the R/Finance conference website as they become available. Information on previous years' presenters and their presentations are also at the conference website.

For the program committee:

Gib Bassett, Peter Carl, Dirk Eddelbuettel, Brian Peterson, Dale Rosenthal,
Jeffrey Ryan, Joshua Ulrich

19 November, 2014 12:59AM

Simon McVittie

still aiming to be the universal operating system

Debian's latest round of angry mailing list threads have been about some combination of init systems, future direction and project governance. The details aren't particularly important here, and pretty much everything worthwhile in favour of or against each position has already been said several times, but I think this bit is important enough that it bears repeating: the reason I voted "we didn't need this General Resolution" ahead of the other options is that I hope we can continue to use our normal technical and decision-making processes to make Debian 8 the best possible OS distribution for everyone. That includes people who like systemd, people who dislike systemd, people who don't care either way and just want the OS to work, and everyone in between those extremes.

I think that works best when we do things, and least well when a lot of time and energy get diverted into talking about doing things. I've been trying to do my small part of the former by fixing some release-critical bugs so we can release Debian 8. Please join in, and remember to write good unblock requests so our hard-working release team can get through them in a finite time. I realise not everyone will agree with my idea of which bugs, which features and which combinations of packages are highest-priority; that's fine, there are plenty of bugs to go round!

Regarding init systems specifically, Debian 'jessie' currently works with at least systemd-sysv or sysvinit-core as pid 1 (probably also Upstart, but I haven't tried that) and I'm confident that Debian developers won't let either of those regress before it's released as Debian 8.

I expect the freeze for Debian 'stretch' (presumably Debian 9) to be a couple of years away, so it seems premature to say anything about what will or won't be supported there; that depends on what upstream developers do, and what Debian developers do, between now and then. What I can predict is that the components that get useful bug reports, active maintenance, thorough testing, careful review, and similar help from contributors will work better than the things that don't; so if you like a component and want it to be supported in Debian, you can help by, well, supporting it.


PS. If you want the Debian 8 installer to leave you running sysvinit as pid 1 after the first reboot, here's a suitable incantation to add to the kernel command-line in the installer's bootloader. This one certainly worked when KiBi asked for testing a few days ago:

preseed/late_command="in-target apt-get install -y sysvinit-core"

I think that corresponds to this line in a preseeding file, if you use those:

d-i preseed/late_command string in-target apt-get install -y sysvinit-core

A similar apt-get command, without the in-target prefix, should work on an installed system that already has systemd-sysv. Depending on other installed software, you might need to add systemd-shim to the command line too, but when I tried it, apt-get was able to work that out for itself.
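
Spelled out, that presumably boils down to running the following on the installed system (appending systemd-shim if apt-get cannot work it out by itself, as noted):

apt-get install sysvinit-core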

If you use aptitude instead of apt-get, double-check what it will do before saying "yes" to this particular switchover: its heuristic for resolving conflicts seems to be rather more trigger-happy about removing packages than the one in apt-get.

19 November, 2014 12:00AM

November 18, 2014

Laura Arjona

Translating (reviewing) Debian package descriptions

Some days I feel super lazy but I still would like to go on contributing translations to Debian.
Then, I leave the web translations a bit, and switch to translating or reviewing Debian package descriptions.

It’s something that anybody can do without any knowledge of translation tools, since it is a very simple web interface, as you will see.

First you need to create a login account, then, login into the system.

And then, go to the page for your native language (in my case, Spanish, “es”). You will see some introductory text, and the list of pending translations:
ddtss_es1
At the end of the page, there is the list of translations pending to review:
ddtss_es2

We should begin with this, so the work that other people have already done quickly reaches its destination. And it’s the easiest part, as you will see. Let’s pick one of them (libvformat1-dev):

review1
You see the short description in the original English, and the current translation (if there were changes from a former version, they are coloured too).

I didn’t know what the package libvformat1-dev does, but here’s a nice opportunity to learn about it a bit :)

The short description looks ok for me. Let’s go on to the long description:

review2

It also looks correct for me. So I leave the text box as is, and go on until the bottom of the page:
review3
and click “Accept as is”. That’s all!!

The system brings you back to the page with pending translations and reviews. Let’s pick another one: totem
review4
I found a typo and corrected some other words, so I updated the text in the translation box, left a message to the other translators in the comment box, and clicked “Accept with changes”.

And… iterate.

When 3 translators agree on a translation, it becomes official, and it’s propagated to apt-cache, aptitude, synaptic, etc., and the website (packages.debian.org). This is the most difficult part (getting 3 reviews for each package description): many language teams are small, and their workforce is spread across many fronts: translations for the website, news and announcements, debconf templates (the messages that are shown to the user when a package is installed), the Debian installer, the documentation, the package descriptions… So your help (even if you only review some translations from time to time) will be appreciated, for sure.


Filed under: Tools Tagged: Contributing to libre software, Debian, English, translations

18 November, 2014 11:22PM by larjona

Christian Perrier

Bug #770000

Martin Pitt reported Debian bug #770000 on Tuesday November 18th, against the release.debian.org pseudo-package.

Bug #760000 was reported as of August 30th: so there have been 10,000 bugs reported in 3 months minus 12 days, i.e. roughly 125 new bugs per day on average. The bug rate increased quite significantly during the last weeks. We can suspect this is related to the release and the freeze (which triggers many unblock requests).

I find it interesting that this bug is directly related to the release, directly related to systemd, and originated from one of the systemd package maintainers, if I'm right.

So, I'll take this opportunity to publicly thank all people who have brought the systemd packages to what they are now, whether or not they're still maintaining the package. We've all witnessed that Debian is facing a strong social issue nowadays and I'm very deeply sad about this. I hope we'll be able to go through this without losing too many brilliant contributors, as has happened recently.

Please prove me right and do The Right Thing, so that I can continue this silly "round bug number" contest and still believe that, some day, bug #1000000 will really happen and I'll still be there to witness it.

Ah, and by the way, systemd bloody works on my system. I can't even remember when I switched to it. It Just Worked.

18 November, 2014 06:13PM

Michal Čihař

Mercurial support in Weblate

Weblate started as a translation system tightly bound to the Git version control system. This was by no means a design decision; it was simply the version control system I used. But it has shown not to be sufficient, and other systems were requested as well. And Mercurial is the first of them to be supported.

Weblate 2.0 already had separated VCS layer and adding another system to that is quite easy if you know the VCS you're adding. Unfortunately this wasn't the case for me with Mercurial as I've never used it for anything more serious than cloning a repository, committing fixes and pushing it back. Weblate needs a bit more than that, especially in regard to remote branches. But nevertheless I've figured out all operations and the implementation is ready in our Git.

In case somebody is interested in adding support for another version control, patches are always welcome!

Filed under: English phpMyAdmin SUSE Weblate | 0 comments | Flattr this!

18 November, 2014 05:00PM by Michal Čihař (michal@cihar.com)

Dirk Eddelbuettel

RcppAnnoy 0.0.3

Hours after the initial blog post announcing the first release of the new package RcppAnnoy, Qiang Kou sent us a very nice pull request adding mmap support in Windows.

So a new release with Windows support is now on CRAN, and Windows binaries should be available by this evening as usual.

To recap, RcppAnnoy wraps the small, fast, and lightweight C++ template header library Annoy written by Erik Bernhardsson for use at Spotify. RcppAnnoy uses Rcpp Modules to offer the exact same functionality as the Python module wrapped around Annoy.

Courtesy of CRANberries, there is also a diffstat report for this release. More detailed information is on the RcppAnnoy page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 November, 2014 11:48AM

Josselin Mouette

Introspection (not the GObject one)

Disclaimer: I’m not used to writing personal stuff on Debian channels. However, there is nothing new here for those who know me from other public channels.


Yesterday, I received the weirdest email from well-known troll MikeeUSA. He thought I shared his views of a horrible world full of bloodthirsty feminists using systemd in their quest for domination over poor white male heterosexuals. The most nauseating paragraph was probably the one where he showed signs of the mentality of a pedocriminal.

At first, I shrugged it off and sent him an email explaining I didn’t want anything to do with his stinky white male supremacist theories, assorted with a bit of taunting. But after discovering all that stuff was actually sent to public mailing lists, I took the time for a second look and started a bit of introspection.

MikeeUSA thought I was a white male supremacist because of the so-called SmellyWerewolf incident, 6 years ago.
Oh boy, people change in six years. Upon re-reading that, I had trouble admitting I was the one to write it. Memory is selective, and with time, you tend not to remember some gruesome details, especially the ones that conflict most with your moral values.

I can assure every reader that the only people I intended to mock then were those who mistook Debian mailing lists for advertising channels; but I understand now that my message must have caused pain to a lot more people than that. So, it may come late, but let me take this opportunity to offer my sincerest apologies to anyone I may have hurt at that time.


It may seem strange for someone with deeply-rooted values of equality to have written that. To have considered that it was okay to stereotype people. And I think I found this okay because to me, those people were given equal rights, and were therefore equal. But the fight for equality is not over when everyone is given the same rights. Not until they are given the same opportunities to exercise those rights. Which does not happen when they live in a society that likes to fit them into little archetypal peg holes, never giving them the chance to question where those stereotypes come from.

For me, that chance came from an unusual direction: the fight against prostitution. This goes way back for me. Ever since I was a teenager, I have been ticked off by the idea of nonconsensual sex that somehow evades criminal responsibility because of money compensation. I never understood why it wasn’t considered as rape. Yet it sounded weird that a male heterosexual would hold such opinions; after all, male heterosexuals should go to prostitutes as a kind of social ritual, right?

It was only three years ago that an organization of men against prostitution was founded in France. Not only did I find out that I was not alone with my progressive ideas, I was given the opportunity to exchange with many men and women who had studied prostitution: its effects on victims, its relationship to rape culture and more generally to the place men and women hold in society. Because eventually, it all boils down to little peg holes in which we expect people to fit: the virile man or the faggot, the whore or the mother. For me, it was liberating. I could finally get rid of the discomfort of being a white male heterosexual that didn’t enter the little peg holes that were made for me.

And now, after Sweden 15 years ago, a new group of countries are finally adopting laws to criminalize the act of paying for sex. Including France. That’s too bad for MikeeUSA, but this country is no longer the eldorado for white male supremacists. And I’m proud that our lobbying made a contribution, however small, to that change.

18 November, 2014 10:00AM

Erich Schubert

Generate iptables rules via pyroman

Vincent Bernat blogged on using Netfilter rulesets, pointing out that inserting the rules one-by-one using iptables calls may leave your firewall temporarily incomplete, possibly half-working, and that this approach can be slow.
He's right with that, but there are tools that do this properly. ;-)
Some years ago, for a multi-homed firewall, I wrote a tool called Pyroman. Using rules specified either in Python or XML syntax, it generates a firewall ruleset for you.
But it also addresses the points Vincent raised:
  • It uses iptables-restore to load the firewall more efficiently than by calling iptables a hundred times
  • It will back up the previous firewall, and roll back on errors (or lack of confirmation, if you are remote and use --safe)
It also has a nice feature for use in staging: it can generate firewall rule sets offline, to allow you to review them before use, or to transfer them to a different host. Not all functionality is supported though (e.g. the Firewall.hostname constant usable in python conditionals will still be the name of the host you generate the rules on - you may want to add a --hostname parameter to pyroman)
pyroman --print-verbose will generate a script readable by iptables-restore except for one problem: it contains both the rules for IPv4 and for IPv6, separated by #### IPv6 rules. It will also annotate the origin of the rule, for example:
# /etc/pyroman/02_icmpv6.py:82
-A rfc4890f -p icmpv6 --icmpv6-type 255 -j DROP
indicates that this particular line was produced due to line 82 in file /etc/pyroman/02_icmpv6.py. This makes debugging easier. In particular it allows pyroman to produce a meaningful error message if the rules are rejected by the kernel: it will tell you which line caused the rule that was rejected.
For the next version, I will probably add --output-ipv4 and --output-ipv6 options to make this more convenient to use. So far, pyroman is meant to be used on the firewall itself.
Note: if you have configured a firewall that you are happy with, you can always use iptables-save to dump the current firewall. But it will not preserve comments, obviously.
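
Speaking of iptables-save: the backup-and-rollback behaviour pyroman implements can be sketched by hand for the simple, non-interactive case. The following is my own sketch, not pyroman code, and /etc/firewall/rules.v4 is a stand-in for wherever your generated ruleset lives (pyroman's --safe additionally waits for your confirmation before discarding the backup):

#!/bin/sh
# Apply a new ruleset atomically; restore the old one if it is rejected.
OLD="$(mktemp)"
iptables-save > "${OLD}"
if ! iptables-restore < /etc/firewall/rules.v4; then
    echo "new ruleset rejected, restoring previous firewall" >&2
    iptables-restore < "${OLD}"
fi
rm -f "${OLD}"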

18 November, 2014 08:46AM

Jaldhar Vyas

And The Papers Want To Know Whose Shirts You Wear

Bayeux Tapestry: Guy in sexist shirt sees a comet

Today I was walking past the Courant Institute at NYU when I saw a man wearing a t-shirt with a picture of a cow diagramming all the various cuts of beef.

Now I've lost all interest in science. Thanks a lot, jerks.

18 November, 2014 06:52AM

Antoine Beaupré

bup vs attic silly benchmark

after seeing attic introduced in a discussion about bup, i figured i could give it a try. it answered two of my biggest concerns with bup:

  • backup removal
  • encryption

and it seemed to come magically out of nowhere and basically do everything i need, with an inline manual on top of it.

disclaimer

Note: this is not a real benchmark! i would probably need to port bup and attic to liw's seivot software to report on this properly (and that would be amazing and really interesting, but it's late now). even worse, this was done on a production server with other stuff going on, so take the results with a grain of salt.

procedure and results

Here's what I did. I set up backups of my ridiculously huge ~/src directory on the external hard drive where I usually make my backups. I ran a clean backup with attic, then redid it, then I ran a similar backup with bup, then redid it. Here are the results:

anarcat@marcos:~$ sudo apt-get install attic # this installed 0.13 on debian jessie amd64
[...]
anarcat@marcos:~$ attic init /mnt/attic-test:
Initializing repository at "/media/anarcat/calyx/attic-test"
Encryption NOT enabled.
Use the "--encryption=passphrase|keyfile" to enable encryption.
anarcat@marcos:~$ time attic create --stats /mnt/attic-test::src ~/src/
Initializing cache...
------------------------------------------------------------------------------
Archive name: src
Archive fingerprint: 7bdcea8a101dc233d7c122e3f69e67e5b03dbb62596d0b70f5b0759d446d9ed0
Start time: Tue Nov 18 00:42:52 2014
End time: Tue Nov 18 00:54:00 2014
Duration: 11 minutes 8.26 seconds
Number of files: 283910

                       Original size      Compressed size    Deduplicated size
This archive:                6.74 GB              4.27 GB              2.99 GB
All archives:                6.74 GB              4.27 GB              2.99 GB
------------------------------------------------------------------------------
311.60user 68.28system 11:08.49elapsed 56%CPU (0avgtext+0avgdata 122824maxresident)k
15279400inputs+6788816outputs (0major+3258848minor)pagefaults 0swaps
anarcat@marcos:~$ time attic create --stats /mnt/attic-test::src-2014-11-18 ~/src/
------------------------------------------------------------------------------
Archive name: src-2014-11-18
Archive fingerprint: be840f1a49b1deb76aea1cb667d812511943cfb7fee67f0dddc57368bd61c4bf
Start time: Tue Nov 18 00:05:57 2014
End time: Tue Nov 18 00:06:35 2014
Duration: 38.15 seconds
Number of files: 283910

                       Original size      Compressed size    Deduplicated size
This archive:                6.74 GB              4.27 GB            116.63 kB
All archives:               13.47 GB              8.54 GB              3.00 GB
------------------------------------------------------------------------------
30.60user 4.66system 0:38.38elapsed 91%CPU (0avgtext+0avgdata 104688maxresident)k
18264inputs+258696outputs (0major+36892minor)pagefaults 0swaps
anarcat@marcos:~$ sudo apt-get install bup # this installed bup 0.25
anarcat@marcos:~$ free && sync && echo 3 | sudo tee /proc/sys/vm/drop_caches && free # flush caches
anarcat@marcos:~$ export BUP_DIR=/mnt/bup-test
anarcat@marcos:~$ bup init
Dépôt Git vide initialisé dans /mnt/bup-test/
anarcat@marcos:~$ time bup index ~/src
Indexing: 345249, done.
56.57user 14.37system 1:45.29elapsed 67%CPU (0avgtext+0avgdata 85236maxresident)k
699920inputs+104624outputs (4major+25970minor)pagefaults 0swaps
anarcat@marcos:~$ time bup save -n src ~/src
Reading index: 345249, done.
bloom: creating from 1 file (200000 objects).
bloom: adding 1 file (200000 objects).
bloom: creating from 3 files (600000 objects).
Saving: 100.00% (6749592/6749592k, 345249/345249 files), done.
bloom: adding 1 file (126005 objects).
383.08user 61.37system 10:52.68elapsed 68%CPU (0avgtext+0avgdata 194256maxresident)k
14638104inputs+5944384outputs (50major+299868minor)pagefaults 0swaps
anarcat@marcos:attic$ time bup index ~/src
Indexing: 345249, done.
56.13user 13.08system 1:38.65elapsed 70%CPU (0avgtext+0avgdata 133848maxresident)k
806144inputs+104824outputs (137major+38463minor)pagefaults 0swaps
anarcat@marcos:attic$ time bup save -n src2 ~/src
Reading index: 1, done.
Saving: 100.00% (0/0k, 1/1 files), done.
bloom: adding 1 file (1 object).
0.22user 0.05system 0:00.66elapsed 42%CPU (0avgtext+0avgdata 17088maxresident)k
10088inputs+88outputs (39major+15194minor)pagefaults 0swaps

Disk usage is comparable:

anarcat@marcos:attic$ du -sc /mnt/*attic*
2943532K        /mnt/attic-test
2969544K        /mnt/bup-test

People are encouraged to try and reproduce those results, which should be fairly trivial.
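
for convenience, here is a condensed version of the above commands (same paths as my setup; run the cache-dropping line shown earlier between runs for fairer numbers):

#!/bin/sh
# condensed repetition of the benchmark commands above
export BUP_DIR=/mnt/bup-test
attic init /mnt/attic-test
time attic create --stats /mnt/attic-test::src ~/src/
time attic create --stats /mnt/attic-test::src-2 ~/src/
bup init
time bup index ~/src
time bup save -n src ~/src
time bup index ~/src
time bup save -n src2 ~/src
du -sc /mnt/attic-test /mnt/bup-test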

Observations

Here are interesting things I noted while working with both tools:

  • attic is Python3: i could compile it, with dependencies, by doing apt-get build-dep attic and running setup.py - i could also install it with pip if i needed to (but i didn't)
  • bup is Python 2, and has a scary makefile
  • both have an init command that basically does almost nothing and takes little enough time that i'm ignoring it in the benchmarks
  • attic backups are a single command, bup requires me to know that i first want to index and then save, which is a little confusing
  • bup has nice progress information, especially during save (because when it loaded the index, it knew how much was remaining) - just because of that, bup "feels" faster
  • bup, however, lets me know about its deep internals (like now i know it uses a bloom filter) which is probably barely understandable by most people
  • on the contrary, attic gives me useful information about the size of my backups, including the size of the current increment
  • it is not possible to get that information from bup, even after the fact - you need to du before and after the backup
  • attic modifies the files access times when backing up, while bup is more careful (there's a pull request to fix this in attic, which is how i found out about this)
  • both backup systems seem to produce roughly the same data size from the same input

Summary

attic and bup are about equally fast. bup took 30 seconds less than attic to save the files, but that's not counting the 1m45s it took indexing them, so on the total run time, bup was actually slower. attic is also (almost) two times faster on the second run. but this could be within the margin of error of this very quick experiment, so my provisional verdict for now would be that they are about as fast.

bup may be more robust (for example it doesn't modify the atimes), but this has not been extensively tested and is based more on my familiarity with the "conservatism" of the bup team than on actual tests.

considering all the features promised by attic, it makes for a really serious contender to the already amazing bup.

Next steps

To properly do this, we would need to:

  • include other software (thinking of Zbackup, Burp, ddar, obnam, rdiff-backup and duplicity)
  • bench attic with the noatime patch
  • bench dev attic vs dev bup
  • bench data removal
  • bench encryption
  • test data recovery
  • run multiple backup runs, on different datasets, on a cleaner environment
  • ideally, extend seivot to do all of that

Note that the Burp author already did an impressive comparative benchmark of a bunch of those tools for the burp2 design paper, but it unfortunately doesn't include attic or clear ways to reproduce the results.

18 November, 2014 05:39AM by anarcat

November 17, 2014

Vincent Sanders

NetSurf Developer workshop IV

Michael Drake, John-Mark Bell, Daniel Silverstone, Rob Kendrick and Vincent Sanders at the Codethink Manchester office
Over the weekend the NetSurf developers met to make a concentrated effort on improving the browser. This time we were kindly hosted by Codethink in their Manchester office in a pleasant environment with plenty of refreshments.

Five developers managed to attend in person from around the UK: Michael Drake, John-Mark Bell, Daniel Silverstone, Rob Kendrick and Vincent Sanders. We also had Chris Young providing some bug fixes remotely.

We started the weekend by discussing all the thorny core issues that had been put on the agenda and ensuring the outcomes were properly noted. We also held the society AGM which was minuted by Daniel.

The emphasis of this weekend was very much on planning and doing the disruptive changes we had been putting off until we were all together.

John-Mark and I managed to change the core build system used by all the libraries to use standard triplets to identify systems and the gnu autoconf style of naming for parameters (i.e. HOST, BUILD and CC being used correctly).

This was accompanied by improvements and configuration changes to the CI system to accommodate the new usage.

Several issues from the bug tracker were addressed and we put ourselves in a stronger position to address numerous other usability problems in the future.

We managed to pack a great deal into the 20 hours of work on Saturday and Sunday although because we were concentrating much more on planning and infrastructure rather than a release the metrics of commits and files changed were lower than at previous events.

17 November, 2014 08:54PM by Vincent Sanders (noreply@blogger.com)

Niels Thykier

The first 12 days and 408 unblock requests into the Jessie freeze

The release team is receiving an extreme number of unblock requests right now.  For the past 22 days[1], we have received no less than 408 unblock/ageing requests.  That is an average of ~18.5/day.  In the same period, the release team has closed 350 unblock requests, averaging 15.9/day.

This number does not account for the unblocks we add without a request, when we happen to spot something while looking at the list of RC bugs[2]. Nor does it account for unblock requests currently tagged “moreinfo”, of which there are currently 25.

All in all, it has been 3 intensive weeks for the release team.  I am truly proud of my fellow team members for keeping up with this for so long!  Thanks also to the non-RT members, who help us by triaging and reviewing the unblock requests!  It is much appreciated. :)

 

Random bonus info:

  • d (our diffing tool) finally got colordiff support during the Release Sprint last week.  Prior to that, we got black’n’white diffs!
    • ssh coccia.debian.org -t /srv/release.debian.org/tools/scripts/d <srcpkg>
    • Though coccia.debian.org does not have colordiff installed right now.  I have filed a request to have it installed.
  • The release team have about 132 (active) unblock hints deployed right now in our hint files.

 

[1] We started receiving some in the 10 days before the freeze as people realised that their uploads would need an unblock to make it into Jessie.

[2] Related topics: “what is adsb?” (the answer being: Our top hinter for Wheezy)

 


17 November, 2014 08:17PM by Niels Thykier

Daniel Leidert

Rsync files between two machines over SSH and limit read access

From time to time I need to get contents from a remote machine to my local workstation. The data is sometimes big and I don't want to start all over again if something fails. Further, the transmission should be secure and the connection should be limited to syncing only this path and its sub-directories. So I've set up a way to do this using rsync and ssh, and I'm going to describe that setup here.

Consider you have already created an SSH key, say ~/.ssh/key_rsa together with ~/.ssh/key_rsa.pub, and that on the remote machine there is an SSH server running that allows login by public key, and rsync is available. Let's further assume the following:

  • the remote machine is rsync.domain.tld
  • the path on the remote machine that holds the data is /path/mydata
  • the user on the remote machine being able to read /path/mydata and to login via SSH is remote_user
  • the path on the local machine to put the data is /path/mydest
  • the user on the local machine being able to write /path/mydest is local_user
  • the user on the local machine has the private key ~local_user/.ssh/key_rsa and the public key ~local_user/.ssh/key_rsa.pub

Now the public key ~local_user/.ssh/key_rsa.pub is added to the remote users ~remote_user/.ssh/authorized_keys file. The file will then probably look like this (there is just one very long line with the key, here cut out by [..]):

ssh-rsa [..]= user@domain.tld

Now I would like to limit a user logging in with this key to only rsyncing the special directory /path/mydata. I therefore precede the key with a command prefix, as explained in the manual page sshd(8). The file then looks like this:

command="/usr/bin/rsync --server --sender -vlogDtprze . /path/mydata",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [..]= user@domain.tld

I then can rsync the remote directory to a local location over SSH by running:

rsync -avz -P --delete -e ssh remote_user@rsync.domain.tld:/path/mydata/ /path/mydest

That's it.

17 November, 2014 04:32PM by Daniel Leidert (noreply@blogger.com)

Thorsten Alteholz

Manage own CA with Debian

Self signed SSL certificates are nice, but only provide encryption of retrieved data. Nobody knows who is really sending the data.

If one buys an SSL certificate for a website, the browser doesn’t complain as much as with a self signed certificate. But can you really trust the other side? Almost every commercial CA has some kind of “fast validation” or “domain validation, issued in minutes”, which is done by email or phone. So, if required, anybody might become you within minutes. Even after putting money on the table, your users cannot be sure whether this server really belongs to the right guy.

Well, why waste time and money? Just create your own Root CA and tell users that they need to add something in order to avoid some error messages. In Debian we basically have five packages that claim to be able to manage some kind of CA.

easy-rsa is mainly needed to manage certificates used by openVPN. Within this use case it works like a charm, but I don’t want to manage a more complex CA with it.

gnomint is dead upstream and only uses SHA1 as signature algorithm. This will cause lots of problems, as Microsoft and Google want to deprecate SHA1 in their products by 2017. Besides, this package is already orphaned, so maybe it can disappear now.

tinyCA uses more signature algorithms; unfortunately SHA1 seems to be the “best” it can do. There are some patches to support up to SHA512, but they don’t work for all parts of the software yet. For example Sub-CAs still use SHA1 despite choosing something different in the GUI. So nice, but not (yet) usable in Jessie.

FreeIPA seems to be great, but didn’t make it into Jessie in time. Unfortunately the Release Team has reasons to not unblock it. So nice, but not usable in Jessie.

xca is based on QT4. As announced in the 15th DPN of 2014 the deprecated QT4 will be removed from Debian Stretch (= Jessie+1). Apart from this, the software meets all my requirements.
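
For completeness: if a GUI is not a hard requirement, creating the bare Root CA can also be done with plain openssl. A minimal sketch (4096-bit key, SHA256 signature, ten years of validity; the subject is of course a placeholder):

# create the CA key and a self-signed SHA256 root certificate
openssl genrsa -out ca.key 4096
openssl req -new -x509 -sha256 -days 3650 -key ca.key -out ca.crt \
    -subj "/O=Example/CN=Example Root CA"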

17 November, 2014 01:57PM by alteholz

Daniel Leidert

Removal of debian.wgdd.de and {cvs,svn,vcs}.wgdd.de

If you've recently tried to browse to or apt-get from either cvs.wgdd.de, svn.wgdd.de, vcs.wgdd.de, debian.wgdd.de or ubuntu.wgdd.de, you've probably seen (and still are seeing) an error (410, Gone) coming up, and I'd like to give a short explanation why.

{cvs,svn,vcs}.wgdd.de

I've left my server provider, shut down the above services, and now only keep a small number of services running. The domains {cvs,svn,vcs}.wgdd.de were used to provide (a) a subversion (SVN) server (via HTTPS and dav_svn) for some public and private work and (b) a CVS web-client for some old projects kept in CVS.

Among the latter was e.g. old code to generate manual pages for the proprietary fglrx graphics driver, stuff that had lain there untouched for many years. So I guess it was about time to finally remove it :)

The subversion web-client gave public access to some packaging work I do for the Debian GNU/Linux distribution, e.g. for the cvsweb and gtypist packages and some non-official packaging work. For the official packages I plan to move the files into the collab-maint web space and adjust the packages' control files accordingly. Everything else will be hosted non-publicly in the future. I still intend to move stuff that turns out to be useful for more people to public places like github and Co. Update 17.11.2014: cvsweb, gurlchecker and gtypist have been moved to collab-maint.

debian.wgdd.de

I used this site to describe my usage of Debian GNU/Linux on the hardware I own ... laptop, servers etc. I wrote a few HOWTOs and provided a link collection with useful links. You can still find all of this using the archive.org service. I also had a repository up and working, especially to provide bluefish packages for users of Debian stable and Ubuntu. Half a year ago I dropped the Ubuntu build environments and packages and moved the Debian stable backports to official places. This effectively emptied the repository and left only the wgdd-archive-keyring package in place. So, there is no real need for a public repository anymore and the linklist probably got outdated too. All in all, I decided to stop this service (maybe I'll forward the site to here later :)).

If you see an error regarding the debian.wgdd.de URL when running apt-get or aptitude, then there is a reference to this site in /etc/apt/sources.list or /etc/apt/sources.list.d/*, which can be safely removed. Further, you should get rid of the wgdd-archive-keyring package:

apt-get autoremove --purge wgdd-archive-keyring

... or the repository key:

apt-key del E394D996

What else

In case you need any content from the mentioned services, just let me know.

17 November, 2014 12:50PM by Daniel Leidert (noreply@blogger.com)

Chris Lamb

Calculating the number of pedal turns on a bike ride

If you have a cadence sensor on your bike such as the Garmin GSC-10, you can approximate the number of pedal turns you made on the bike ride using the following script (requires GPSBabel):

#!/bin/sh

STYLE="$(mktemp)"

cat >${STYLE} <<EOF
FIELD_DELIMITER COMMA
RECORD_DELIMITER NEWLINE
OFIELD CADENCE,"","%d"
EOF

exec gpsbabel -i garmin_fit -f "${1}" -o xcsv,style=${STYLE} -F- |
    awk '{x += $1} END {print int(x / 60)}'

... then call with:

$ sh cadence.sh ~/path/to/2014-11-16-14-46-05.fit
24344

Unfortunately the Garmin .fit format doesn't store the actual number of pedal turns, only the average for each particular second. However, the approximation should be reasonably accurate as long as one keeps a fairly steady cadence.

As a bonus, using a small amount of shell plumbing you can then sum an entire year's worth of riding like so:

$ for X in ~/path/to/2014-*.fit; do sh cadence.sh ${X}; done | awk '{x += $1} END { print x }'
749943

17 November, 2014 09:34AM

Paul Tagliamonte

BOS -> DC

Hello, World

Been a while since my last blog post - things have been a bit hectic lately, and I’ve not really had the time.

Now that things have settled down a bit — I’m in DC! I’ve moved down south to join the rest of my colleagues at Sunlight to head up our State & Local team.

Leaving behind the brilliant Free Software community in Boston won’t be easy, but I’m hoping to find a similar community here in DC.

17 November, 2014 02:06AM

November 16, 2014

Daniel Leidert

Getting the audio over HDMI to work for the HP N54L microserver running Debian Wheezy and a Sapphire Radeon HD 6450

Conclusion: Sound over HDMI works with the Sapphire Radeon HD 6450 card in my HP N54L microserver. It requires a recent kernel and firmware from Wheezy 7.7 backports and the X.org server. There is no sound without X.org, even if audio has been enabled for the radeon kernel module.

Last year I couldn't get audio over HDMI to work after I installed a Sapphire Radeon HD 6450 1 GB (11190-02-20g) card into my N54L microserver. The cable that connects the HDMI interfaces between the card and the TV monitor supports HDMI 1.3, so audio should have been possible even then. However, I didn't get any audio output from XBMC when playing video or music files. Nothing happened with stock Wheezy 7.1 and X.org/XBMC installed. So I removed the latter two, used the server as a stock server without X/desktop, and delayed my plans for an HTPC.

Now I have tried again, after finding some new hints that made me curious enough for a second try :) Imagine my joy when (finally) speaker-test produced noise on the TV! So here is my configuration and a step-by-step guide to

  • enable Sound over HDMI for the Radeon HD 6450
  • install a graphical environment
  • install XBMC
  • automatically start XBMC on boot

The latter two will be covered by a second post. Also note that there is a lot of information out there on how to achieve the above tasks, so this is only about my configuration. Some packages below are marked as optional. A few are necessary only for the N54L microserver (firmware), and for a few I'm not sure whether they are necessary at all.

Step 1 - Prepare the system

At this point I don't have any desktop or any other graphical environment (X.org) installed. First I purged pulseaudio and related packages completely, to use only ALSA:

# apt-get autoremove --purge pulseaudio pulseaudio-utils pulseaudio-module-x11 gstreamer0.10-pulseaudio
# apt-get install alsa-base alsa-utils alsa-oss

Next I installed a recent linux kernel and recent firmware from Wheezy backports:

# apt-get install -t wheezy-backports linux-image-amd64 firmware-linux-free firmware-linux firmware-linux-nonfree firmware-atheros firmware-bnx2 firmware-bnx2x

This put linux-image-3.16-0.bpo.3-amd64 and recent firmware onto my system. I've chosen to upgrade linux-image-amd64 instead of picking a specific (recent) kernel package from Wheezy backports, so the system keeps up with recent kernels from there.

Then I enabled the audio output of the radeon kernel module. Essentially there are at least three ways to do this; I chose to set the audio parameter in /etc/modprobe.d/radeon.conf. The hw_i2c parameter is disabled: I read that it might cause trouble with the audio output, although I never experienced that personally:

options radeon audio=1 hw_i2c=0

JFTR: This is how I boot the N54L by default:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=force pcie_aspm=force nmi_watchdog=0"
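The module parameters could equally be appended there, which is one of the other ways mentioned above. A sketch of that alternative (not what I use):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=force pcie_aspm=force nmi_watchdog=0 radeon.audio=1 radeon.hw_i2c=0"

followed by running update-grub and rebooting.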

After rebooting I see this for the Radeon card in question:


# lsmod | egrep snd\|radeon\|drm | awk '{print $1}' | sort
[..]
drm
drm_kms_helper
i2c_algo_bit
i2c_core
radeon
snd
snd_hda_codec
snd_hda_codec_hdmi
snd_hda_controller
snd_hda_intel
snd_hwdep
snd_pcm
snd_seq
snd_seq_device
snd_timer
soundcore
ttm
[..]
# lspci -k
[..]
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Caicos [Radeon HD 6450/7450/8450 / R5 230 OEM]
Subsystem: PC Partner Limited / Sapphire Technology Device e204
Kernel driver in use: radeon
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Caicos HDMI Audio [Radeon HD 6400 Series]
Subsystem: PC Partner Limited / Sapphire Technology Radeon HD 6450 1GB DDR3
Kernel driver in use: snd_hda_intel
[..]
# cat /sys/module/radeon/parameters/audio
1
# cat /sys/module/radeon/parameters/hw_i2c
0
# aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: HDMI [HDA ATI HDMI], device 3: HDMI 0 [HDMI 0]
Subdevices: 0/1
Subdevice #0: subdevice #0
# aplay -L
null
Discard all samples (playback) or generate zero samples (capture)
pulse
PulseAudio Sound Server
hdmi:CARD=HDMI,DEV=0
HDA ATI HDMI, HDMI 0
HDMI Audio Output

At this point, without the X.org server installed, I still have no audio output on the connected monitor. Running alsamixer, I only see the S/PDIF bar for the HDA ATI HDMI device, showing a value of 00. I can mute and un-mute this device but not change the value. No need to worry, sound comes with step two.

Step 2 - Install a graphical environment (X.org server)

Next is to install a graphical environment, basically the X.org server. In Debian this is done by the desktop task. Unfortunately tasksel makes use of APT::Install-Recommends="true" and would install a complete desktop environment and some more recommended packages. At the moment I don't want this, only X. So I installed only the task-desktop package with its dependencies:

# apt-get install task-desktop xfonts-cyrillic

Next is to install a display manager. I've chosen lightdm:

# apt-get install lightdm accountsservice

Done. Now (re-)start the X server. Simply ...

# service lightdm restart

... should do. And now there is sound, probably due to the X.org Radeon driver. The following command created noise on the two monitor speakers :)

# speaker-test -c2 -D hdmi:0 -t pink

Finally there is sound over HDMI!
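As a side note: if other ALSA applications should use the HDMI output without selecting the device explicitly, the HDMI PCM can be made the ALSA default. A minimal sketch for /etc/asound.conf, based on the device name shown by aplay -L above (this is optional; XBMC can select the device itself):

pcm.!default {
    type plug
    slave.pcm "hdmi:CARD=HDMI,DEV=0"
}
ctl.!default {
    type hw
    card HDMI
}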

Step 3 - Install XBMC

To be continued ...

16 November, 2014 11:57PM by Daniel Leidert (noreply@blogger.com)

Install automatically starting XBMC to N54L microserver under Debian Wheezy 7.7

This is a followup to my previous post about getting sound output from the Sapphire Radeon HD 6450 card in my HP N54L microserver via HDMI. This post will describe how to install XBMC from Wheezy backports and how to start it automatically. Again, there are various ways and I'll only describe mine. Further, this is what I did so far: enable the audio output for the Radeon card and install X.org together with lightdm.

Step 3 - Install XBMC

This is a pretty easy task. I've chosen to install XBMC 13.2 from the Wheezy backports repository.

# apt-get install -t wheezy-backports xbmc

Step 4 - Automatically start XBMC

There are various ways; some involve starting it as a service using init scripts for sysvinit, upstart or systemd. You'll easily find them. I've chosen to create a user, automatically log him into X and start XBMC. The user is called xbmc.

# adduser --home /home/xbmc --add_extra_groups xbmc

I chose a password, but I wonder if using --disabled-password would work too. Next I adjusted /etc/lightdm/lightdm.conf. Below are only the differences from the stock version of this file. I haven't touched other lines.

[SeatDefaults]
greeter-session=lightdm-gtk-greeter
user-session=XBMC
autologin-guest=false
autologin-user=xbmc
autologin-user-timeout=0

The file /usr/share/xsessions/XBMC.desktop is the stock one, no changes made; see the sketch below.
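For reference, such a session file is a standard desktop entry; roughly like this (the exact content, including the Exec line, is an assumption here, so check the file actually shipped by the xbmc package):

[Desktop Entry]
Name=XBMC
Comment=This session will start XBMC media center
Exec=xbmc-standalone
TryExec=xbmc-standalone
Type=Application

After restarting lightdm: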

# service lightdm restart

XBMC is started automatically. If anything goes wrong or doesn't work, I suggest checking /var/log/auth.log, /home/xbmc/.xsession-errors and /var/log/lightdm/*.log. In a few cases it seems necessary to log in the user xbmc manually once, although that wasn't necessary here.

JFTR: When I checked /var/log/auth.log I saw a few errors and installed gnome-keyring too:

apt-get install --install-recommends gnome-keyring

Step 5 - Useful packages

There are some packages which might be useful when running XBMC, e.g.

Conclusion

I'm now running XBMC on top of Debian Wheezy on the N54L microserver without a bloated desktop environment. The system automatically starts the XBMC session on start/reboot. Video and sound are working fine, though it was necessary to install recent firmware and a recent kernel from Wheezy backports to get it done.

Thanks to the whole OSS community for asking, for answering, for blogging, for using and for continuing to develop! I currently enjoy the results :)

16 November, 2014 11:16PM by Daniel Leidert (noreply@blogger.com)

hackergotchi for Tollef Fog Heen

Tollef Fog Heen

Resigning as a Debian systemd maintainer

Apparently, people care when you, as a privileged person (white, male, long-time Debian Developer), throw in the towel because the amount of crap thrown your way just becomes too much. I guess that's good, both because it gives me a soap box for a short while, and because if enough people talk about how poisonous the well that is Debian has become, we can fix it.

This morning, I resigned as a member of the systemd maintainer team. I then proceeded to leave the relevant IRC channels and announced this on twitter. The responses I've gotten have almost all been heartwarming. People have generally been offering hugs, saying thanks for the work put into systemd in Debian, and so on. I've greatly appreciated those (and I had been getting them before I resigned too, so this isn't just a response to that). I feel bad about leaving the rest of the team; they're a great bunch: competent, caring, funny, wonderful people. On the other hand, at some point I had to draw a line and say "no further".

Debian and its various maintainer teams are a bunch of tribes (with possibly Debian itself being a supertribe). Unlike many other situations, you can be part of multiple tribes. I'm still a member of the DSA tribe for instance. Leaving pkg-systemd means leaving one of my tribes. That hurts. It hurts even more because it feels like a forced exit rather than a case of losing interest or being distracted by other shiny things for so long that you no longer really feel part of the tribe. That happened with me with debian-installer. It was my baby for a while (with a then quite small team), then a bunch of real life things interfered and other people picked it up and ran with it and made it greater and more fantastic than before. I kinda lost touch, and while it's still dear to me, I no longer identify as part of the debian-boot tribe.

Now, how did I, standing stout and tall, get forced out of my tribe? I've been a DD for almost 14 years; I should be able to weather any storm, shouldn't I? It turns out that no, the mountain does get worn down by the rain. It's not a single hurtful comment here and there. There's a constant drumbeat about this all being some sort of conspiracy, there are occasional flare-ups where people wish those involved in systemd would be run over by a bus, and there are plain accusations of incompetence.

Our code of conduct says, "assume good faith". If you ever find yourself not doing that, step back, breathe. See if there's a reasonable explanation for why somebody is saying something or behaving in a way that doesn't make sense to you. It might be as simple as your native tongue being English and theirs being something else.

If you do genuinely disagree with somebody (something which is entirely fine), try not to escalate, even if the stakes are high. Examples from the last year include talking about this as a war and talking about "increasingly bitter rear-guard battles". By using and accepting this terminology, we, as a project, poison ourselves. Sam Hartman puts this better than me:

I'm hoping that we can all take a few minutes to gain empathy for those who disagree with us. Then I'm hoping we can use that understanding to reassure them that they are valued and respected and their concerns considered even when we end up strongly disagreeing with them or valuing different things.

I'd be lying if I said I didn't ever feel the urge to demonise my opponents in discussions. That they're worse, as people, than I am. However, it is imperative never to give in to this, since doing so diminishes us as humans and makes the entire project poorer. Civil disagreements with reasonable discussions lead to better technical outcomes, happier humans and a healthier project.

16 November, 2014 10:55PM

John Goerzen

Contemplative Weather

Sometimes I look out the window and can’t help but feel “this weather is deep.” Deep with meaning, with import. Almost as if the weather is confident of itself, and is challenging me to find some meaning within it.

This weekend brought the first blast of winter to the plains of Kansas. Saturday was chilly and windy, and overnight a little snow fell. Just enough to cover up the ground and let the tops of the blades of grass poke through. Just enough to make the landscape look totally different, without completely hiding what lies beneath. Laura and I stood silently at the window for a few minutes this morning, gazing out over the untouched snow, extending out as far as we can see.

Yesterday, I spent some time with my great uncle and aunt. My great uncle isn’t doing so well. He’s been battling cancer and other health issues for some time, and can’t get out of the house very well. We talked for an hour and a half – about news of the family, struggles in life now and in the past, and joys. There were times when all three of us had tears in our eyes, and times when all of us were laughing so loudly. My great uncle managed to stand up twice while I was there — this took quite some effort — once to give me a huge hug when I arrived, and another to give me an even bigger hug when I left. He has always been a person to give the most loving hugs.

He hadn’t been able to taste food for awhile, due to treatment for cancer. When I realized he could taste again, I asked, “When should I bring you some borscht?” He looked surprised, then got a huge grin, glanced at his watch, and said, “Can you be back by 3:00?”

His brother, my grandpa, was known for his beef borscht. I also found out my great uncle’s favorite kind of bread, and thought that maybe I would do some cooking for him sometime soon.

Today on my way home from church, I did some shopping. I picked up the ingredients for borscht and for bread. I came home, said hi to the cats that showed up to greet me, and went inside. I turned on the radio – Prairie Home Companion was on – and started cooking.

It takes a long time to prepare what I was working on – I spent a solid two hours in the kitchen. As I was chopping up a head of cabbage, I remembered coming to what is now my house as a child, when my grandpa lived here. I remembered his borscht, zwiebach, monster cookies; his dusty but warm wood stove; his closet with toys in it. I remembered two years ago, having nearly 20 Goerzens here for Christmas, hosted by the boys and me, and the 3 gallons of borscht I made for the occasion.

I poured in some tomato sauce, added some water. The radio was talking about being kind to people, remembering that others don’t always have the advantages we do. Garrison Keillor’s fictional boy in a small town, when asked what advantages he had, mentioned “belonging.” Yes, that is an advantage. We all deal with death, our own and that of loved ones, but I am so blessed by belonging – to a loving family, two loving churches, a wonderful community.

Out came three pounds of stew beef. Chop, chop, slice, plunk into the cast iron Dutch oven. It’s my borscht pot. It looks as if it would be more at home over a campfire than a stovetop, but it works anywhere.

Outside, the sun came up. The snow melts a little, and the cats start running around even though it’s still below freezing. They look like they’re having fun playing.

I’m chopping up parsley and an onion, then wrapping them up in a cheesecloth to make the spice ball for the borscht. I add the basil and dill, some salt, and plonk them in, too. My 6-quart pot is nearly overflowing as I carefully stir the hearty stew.

On the radio, a woman who plays piano in a hospital and had dreamed of being on that particular radio program for 13 years finally was. She played with passion and delight I could hear through the radio.

Then it’s time to make bread. I pour in some warm water, add some brown sugar, and my thoughts turn to Home On The Range. I am reminded of this verse:

How often at night when the heavens are bright
With the light from the glittering stars
Have I stood here amazed and asked as I gazed
If their glory exceeds that of ours.

There’s something about a beautiful landscape out the window to remind a person of all the blessings in life. This has been a quite busy weekend — actually, a busy month — but despite the fact I have a relative that is sick in the midst of it all, I am so blessed in so many ways.

I finish off the bread, adding some yeast, and I remember my great uncle thanking me so much for visiting him yesterday. He commented that “a lot of younger people have no use for visiting an old geezer like me.” I told him, “I’ve never been like that. I am so glad I could come and visit you today. The best gifts are those that give in both directions, and this surely is that.”

Then I clean up the kitchen. I wipe down the counters from all the bits of cabbage that went flying. I put away all the herbs and spices I used, and finally go to sit down and reflect. From the kitchen, the smells of borscht and bread start to seep out, sweeping up the rest of the house. It takes at least 4 hours for the borscht to cook, and several hours for the bread, so this will be an afternoon of waiting with delicious smells. Soon my family will be home from all their activities of the day, and I will be able to greet them with a warm house and the same smells I stepped into when I was a boy.

I remember this other verse from Home On the Range:

Where the air is so pure, the zephyrs so free,
The breezes so balmy and light,
That I would not exchange my home on the range
For all of the cities so bright.

Today’s breeze is an icy blast from the north – maybe not balmy in the conventional sense. But it is the breeze of home, the breeze of belonging. Even today, as I gaze out at the frozen landscape, I realize how balmy it really is, for I know I wouldn’t exchange my life on the range for anything.

16 November, 2014 10:30PM by John Goerzen

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2014/45-46

I was not much at home during the last two weeks, so there is not much to report about RC bug activities. Some general observations:

  1. the RC bug count is still relatively low, even after lucas' archive rebuild.
  2. the release team is extremely fast in handling unblock requests - kudos! – pro tip: file them quickly after uploading, or some{thing,one} else might be faster :)

my small contributions:

  • #765327 – libnet-dns-perl: "lintian fails if the machine has a link-local IPv6 nameserver configured"
    discuss possible fix with upstream (pkg-perl)
  • #768683 – src:libmoosex-storage-perl: "libmoosex-storage-perl: missing runtime dependencies cause (build) failures in other packages"
    move packages from Recommends to Depends (pkg-perl)
  • #768692 – src:libaudio-mpd-perl: "libaudio-mpd-perl: FTBFS in jessie: Failed test '10 seconds worth of music in the db': got: '9'"
    add patch from Simon McVittie (pkg-perl)
  • #768712 – src:libpoe-component-client-mpd-perl: "libpoe-component-client-mpd-perl: FTBFS in jessie: Failed test '10 seconds worth of music in the db': got: '9'"
    add patch from Simon McVittie (pkg-perl)
  • #769003 – libgluegen2-build-java: "libjogl2-java: FTBFS on arm64, ppc64el, s390x"
    add patch from Colin Watson (pkg-java)

16 November, 2014 10:07PM

hackergotchi for Bernhard R. Link

Bernhard R. Link

Enabling Change

Big changes are always complicated to get done, and the bigger or more diverse the organization they take place in, the harder they can be.

Transparency

Ideally every change is communicated early and openly. Leaving people in the dark about what will change and when means they have much less time to get comfortable with it or to come to terms with it mentally. Extending the change later, or shortening transition periods, is especially bad. Letting people think they have some time to transition, only to force them to rush later, will remove any credibility you have and severely reduce their ability to believe you are not crossing them. Making a new way optional is a great way to create security (see below), but making it obligatory before it has even reached them as an option will not make them very willing to embrace change.

Take responsibility

Every transformation has costs. Even if a change only improved things and made nothing worse once implemented (the ideal change you will never meet in reality), deploying it still costs: processes have to be adapted, people have to relearn how to do things, how to detect if something goes wrong and how to fix it, documentation has to be updated, and so on. Even if the change brings more good than costs to the organization as a whole (let's hope it does; I hope you wouldn't try to do something whose total benefit is negative), the benefits, and thus the benefit-to-cost ratio, will differ between the different parts of your organization and the different people within it. It is hardly avoidable that for some people there will not be much benefit, much less perceived benefit, compared to the costs they have to bear. Those are the people whose good will you want to fight for, not the people you want to fight against.

They have to pay with their labor and resources, and thus with their good will, for your benefit, that is, the overall benefit.

This is much easier if you acknowledge that fact. If you blame them for bearing the costs, claim their situation does not even exist, or ridicule them for not embracing change, you only prepare yourself for frustration. You might be able to persuade yourself that everyone not willing to invest in the change is just acting out of malevolent self-interest. But you will hardly be able to persuade people that it is evil not to help your cause if you treat them as enemies.

And once you have ignored or played down costs that later actually occur, your credibility in being able to see the big picture will simply cease to exist for the next change.

Allow different metrics

People have different opinions about priorities, about what is important, about how much something costs and even about what constitutes a problem. If you want to persuade them, try to take that into account. If you do not understand why something is given as a reason, it might be because the given point is stupid. But it might also be that you are missing something. And often there is simply a different valuation of what is important, what the costs are and what the problems are. If you want to persuade people, it is worth trying to understand those valuations.

If all you want to do is persuade some leader or some majority, then ridiculing their concerns might get you somewhere. But how do you want to win people over if you do not even appear to understand their problems? Why should people trust that their costs will be worth the overall benefits if you tell them the costs they clearly see do not exist? How credible is referring to the bigger picture if the part of the picture they can see does not match what you say the bigger picture looks like?

Don't get trolled and don't troll

There will always be people that are unreasonable or even try to provoke you. Don't allow yourself to be provoked. Remember that for successful changes you need to win broad support. Feeling personally attacked, or feeling presented with a large amount of pointless arguments, easily results in not giving proper responses or not actually looking at the arguments. If someone is only trolling and purely malevolent, they will tease you best by bringing up actual concerns of people in a way that tempts you to degrade yourself and your point in answering. Becoming impertinent with the troll is like attacking the annoying little goblin hiding next to the city guards with area damage.

When you are unable to persuade people, it is also far too easy to consider them to be in bad faith and/or to attack them personally. This can only escalate things even more. In the worst case you frustrate someone acting in good faith. In most cases you poison the discussion so much that people actually in good faith will no longer contribute to it. It might be rewarding short term, because after some escalation only obviously unreasonable people will talk against you, but it makes it much harder to find solutions together that could benefit anyone, and almost impossible to persuade those who simply left the discussion.

Give security

Last but not least, remember that humans are quite risk-averse. In general they might invest in (even small) chances to win, but they go a long way to avoid risks. Thus an important part of enabling change is to reduce risks, real and perceived ones, and give people a feeling of security.

In the end, almost every measure boils down to that: You give people security by giving them the feeling that the whole picture is considered in decisions (by bringing them early into the process, by making sure their concerns are understood and part of the global profit/cost calculation, and by making sure their experiences with the change are part of the evaluation). You give people security by allowing them to predict and control things (by transparency about plans, how far the change will go and guaranteed transition periods, and by giving them enough time so they can actually plan and do the transition). You give people security by avoiding early points of no return (by having wide enough tests, rollback scenarios, ...). You give people security by not leaving them alone (by having good documentation, availability of training, ...).

Especially the side-by-side availability of old and new is an extremely powerful tool, as it fits all of the above: It allows people to actually test the new way (and not some little prototype mostly but not quite totally unrelated to reality) so their feedback can be heard. It makes the change more predictable, as all the new ways can be tried before the old ones stop working. It is the ultimate roll-back scenario (just switch off the new). And it allows for learning the new before losing the old.

Of course, giving people a feeling of security needs resources. But it is a very powerful way to get people to embrace the change.

Also, in my experience, people fearing only for themselves will usually be mostly passive, not pushing forward and trying to avoid or escape the changes. (After all, working against something costs energy, so purely egoistic behavior is quite limiting in that regard.) Most people actively pushing back do it because they fear for something larger than just themselves. And any measure making them fear less that you will ruin the overall organization not only avoids unnecessary hurdles in rolling out the change, but also has some small chance of actually avoiding running into disaster with closed eyes.

16 November, 2014 03:51PM

hackergotchi for Vincent Bernat

Vincent Bernat

Staging a Netfilter ruleset in a network namespace

A common way to build a firewall ruleset is to run a shell script calling iptables and ip6tables. This is convenient since you get access to variables and loops. There are three major drawbacks with this method:

  1. While the script is running, the firewall is temporarily incomplete. Even if existing connections can be arranged to be left untouched, the new ones may not be allowed to be established (or unauthorized flows may be allowed). Also, essential NAT rules or mangling rules may be absent.

  2. If an error occurs, you are left with a half-working firewall. Therefore, you should ensure that some rules authorizing remote access are set very early, or implement some kind of automatic rollback system.

  3. Building a large firewall can be slow. Each ip{,6}tables command will download the ruleset from the kernel, add the rule and upload the whole modified ruleset to the kernel.

Using iptables-restore

A classic way to solve these problems is to build a rule file that will be read by iptables-restore and ip6tables-restore1. Those tools send the ruleset to the kernel in one pass. The kernel applies it atomically. Usually, such a file is built with ip{,6}tables-save but a script can fit the task.

The ruleset syntax understood by ip{,6}tables-restore is similar to the syntax of ip{,6}tables, but each table has its own block and chain declarations are different. See the following example:

$ iptables -P FORWARD DROP
$ iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE
$ iptables -N SSH
$ iptables -A SSH -p tcp --dport ssh -j ACCEPT
$ iptables -A INPUT -i lo -j ACCEPT
$ iptables -A OUTPUT -o lo -j ACCEPT
$ iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
$ iptables -A FORWARD -j SSH
$ iptables-save
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE
COMMIT

*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:SSH - [0:0]
-A INPUT -i lo -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -j SSH
-A OUTPUT -o lo -j ACCEPT
-A SSH -p tcp -m tcp --dport 22 -j ACCEPT
COMMIT

As you see, we have one block for the nat table and one block for the filter table. The user-defined chain SSH is declared at the top of the filter block, alongside the builtin chains.

Here is a script diverting ip{,6}tables commands to build such a file (heavily relying on some Zsh-fu2):

#!/bin/zsh
set -e

work=$(mktemp -d)
trap "rm -rf $work" EXIT

# ➊ Redefine ip{,6}tables
iptables() {
    # Intercept -t
    local table="filter"
    [[ -n ${@[(r)-t]} ]] && {
        # Which table?
        local index=${(k)@[(r)-t]}
        table=${@[(( index + 1 ))]}
        argv=( $argv[1,(( $index - 1 ))] $argv[(( $index + 2 )),$#] )
    }
    [[ -n ${@[(r)-N]} ]] && {
        # New user chain
        local index=${(k)@[(r)-N]}
        local chain=${@[(( index + 1 ))]}
        print ":${chain} -" >> ${work}/${0}-${table}-userchains
        return
    }
    [[ -n ${@[(r)-P]} ]] && {
        # Policy for a builtin chain
        local index=${(k)@[(r)-P]}
        local chain=${@[(( index + 1 ))]}
        local policy=${@[(( index + 2 ))]}
        print ":${chain} ${policy}" >> ${work}/${0}-${table}-policy
        return
    }
    # iptables-restore only handles double quotes
    echo ${${(q-)@}//\'/\"} >> ${work}/${0}-${table}-rules #'
}
functions[ip6tables]=${functions[iptables]}

# ➋ Build the final ruleset that can be parsed by ip{,6}tables-restore
save() {
    for table (${work}/${1}-*-rules(:t:s/-rules//)) {
        print "*${${table}#${1}-}"
        [ ! -f ${work}/${table}-policy ] || cat ${work}/${table}-policy
        [ ! -f ${work}/${table}-userchains ] || cat ${work}/${table}-userchains
        cat ${work}/${table}-rules
        print "COMMIT"
    }
}

# ➌ Execute rule files
for rule in $(run-parts --list --regex '^[.a-zA-Z0-9_-]+$' ${0%/*}/rules); do
    . $rule
done

# ➍ Apply the generated rulesets
ret=0
save iptables  | iptables-restore  || ret=$?
save ip6tables | ip6tables-restore || ret=$?
exit $ret

In ➊, a new iptables() function is defined and will shadow the iptables command. It will try to locate the -t parameter to know which table should be used. If such a parameter exists, the table is remembered in the $table variable and removed from the list of arguments. Defining a new chain (with -N) is also handled as well as setting the policy (with -P).

In ➋, the save() function will output a ruleset that should be parseable by ip{,6}tables-restore. In ➌, user rules are executed. Each ip{,6}tables command will call the previously defined function. When no error has occurred, in ➍, ip{,6}tables-restore is invoked. The command will either succeed or fail.
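The rule files executed in ➌ live in a rules directory next to the script and contain plain ip{,6}tables invocations, which the shadowing function intercepts. A hypothetical rules/20-ssh as an illustration:

iptables -N SSH
iptables -A SSH -p tcp --dport ssh -j ACCEPT
iptables -A FORWARD -j SSH
ip6tables -A FORWARD -p tcp --dport ssh -j ACCEPT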

This method works just fine3. However, the second method, described below, is more elegant.

Using a network namespace

A hybrid approach is to build the firewall rules with ip{,6}tables in a newly created network namespace, save them with ip{,6}tables-save and apply them in the main namespace with ip{,6}tables-restore. Here is the gist (still using Zsh syntax):

#!/bin/zsh
set -e

alias main='/bin/true ||'
[ -n $iptables ] || {
    # ➊ Re-execute ourselves in a dedicated network namespace
    iptables=1 unshare --net -- \
        $0 4> >(iptables-restore) 6> >(ip6tables-restore)
    # ➋ In main namespace, disable iptables/ip6tables commands
    alias iptables=/bin/true
    alias ip6tables=/bin/true
    alias main='/bin/false ||'
}

# ➌ In both namespaces, execute rule files
for rule in $(run-parts --list --regex '^[.a-zA-Z0-9_-]+$' ${0%/*}/rules); do
    . $rule
done

# ➍ In test namespace, save the rules
[ -z $iptables ] || {
    iptables-save >&4
    ip6tables-save >&6
}

In ➊, the current script is executed in a new network namespace. Such a namespace has its own ruleset that can be modified without altering the one in the main namespace. The $iptables environment variable tells us which namespace we are in. In the new namespace, we execute all the rule files (➌). They contain classic ip{,6}tables commands. If an error occurs, we stop here and nothing happens, thanks to the use of set -e. Otherwise, in ➍, the rulesets of the new namespace are saved using ip{,6}tables-save and sent to dedicated file descriptors.

Now, the execution in the main namespace resumes in ➊. The results of ip{,6}tables-save are fed to ip{,6}tables-restore. At this point, the firewall is mostly operational. However, we replay the rule files (➌), this time with the ip{,6}tables commands disabled (➋). Additional commands in the rule files, like enabling IP forwarding, will be executed.

The new namespace does not provide the same environment as the main namespace. For example, there is no network interface in it, so we cannot get or set IP addresses. A command that must not be executed in the new namespace should be prefixed by main:

main ip addr add 192.168.15.1/24 dev lan-guest

You can look at a complete example on GitHub.


  1. Another nifty tool is iptables-apply which will apply a rule file and rollback after a given timeout unless the change is confirmed by the user. 

  2. As you can see in the snippet, Zsh comes with some powerful features to handle arrays. Another big advantage of Zsh is it does not require quoting every variable to avoid field splitting. Hence, the script can handle values with spaces without a problem, making it far more robust. 

  3. If I were nitpicking, there are three small flaws with it. First, when an error occurs, it can be difficult to match the appropriate location in your script since you get the position in the ruleset instead. Second, a table can be used before it is defined. So, it may be difficult to spot some copy/paste errors. Third, the IPv4 firewall may fail while the IPv6 firewall is applied, and vice-versa. Those flaws are not present in the next method. 

16 November, 2014 03:28PM by Vincent Bernat

Intel Wireless 7260 as an access point

My home router acts as an access point with an Intel Dual-Band Wireless-AC 7260 wireless card. This card supports 802.11ac (on the 5 GHz band) and 802.11n (on both the 5 GHz and 2.4 GHz bands). While this seems a very decent card to use in managed mode, it is not really a great choice for an access point.

$ lspci -k -nn -d 8086:08b1
03:00.0 Network controller [0280]: Intel Corporation Wireless 7260 [8086:08b1] (rev 73)
        Subsystem: Intel Corporation Dual Band Wireless-AC 7260 [8086:4070]
        Kernel driver in use: iwlwifi

TL;DR: Use an Atheros card instead.

Limitations

First, the card is said to be “dual-band”, but you can only use one band at a time because there is only one radio. Almost all wireless cards have this limitation. If you want to use both the 2.4 GHz band and the less crowded 5 GHz band, two cards are usually needed.

5 GHz band

There is no support for setting up an access point on the 5 GHz band: the firmware doesn’t allow it. This can be checked with iw:

$ iw reg get
country CH: DFS-ETSI
        (2402 - 2482 @ 40), (N/A, 20), (N/A)
        (5170 - 5250 @ 80), (N/A, 20), (N/A)
        (5250 - 5330 @ 80), (N/A, 20), (0 ms), DFS
        (5490 - 5710 @ 80), (N/A, 27), (0 ms), DFS
        (57240 - 65880 @ 2160), (N/A, 40), (N/A), NO-OUTDOOR
$ iw list
Wiphy phy0
[...]
        Band 2:
                Capabilities: 0x11e2
                        HT20/HT40
                        Static SM Power Save
                        RX HT20 SGI
                        RX HT40 SGI
                        TX STBC
                        RX STBC 1-stream
                        Max AMSDU length: 3839 bytes
                        DSSS/CCK HT40
                Frequencies:
                        * 5180 MHz [36] (20.0 dBm) (no IR)
                        * 5200 MHz [40] (20.0 dBm) (no IR)
                        * 5220 MHz [44] (20.0 dBm) (no IR)
                        * 5240 MHz [48] (20.0 dBm) (no IR)
                        * 5260 MHz [52] (20.0 dBm) (no IR, radar detection)
                          DFS state: usable (for 192 sec)
                          DFS CAC time: 60000 ms
                        * 5280 MHz [56] (20.0 dBm) (no IR, radar detection)
                          DFS state: usable (for 192 sec)
                          DFS CAC time: 60000 ms
[...]

While the 5 GHz band is allowed by the CRDA, all frequencies are marked with no IR. Here is the explanation for this flag:

The no-ir flag exists to allow regulatory domain definitions to disallow a device from initiating radiation of any kind and that includes using beacons, so for example AP/IBSS/Mesh/GO interfaces would not be able to initiate communication on these channels unless the channel does not have this flag.

Multiple SSID

This card can only advertise one SSID. Managing several of them is useful for setting up distinct wireless networks, like a public access (routed to Tor), a guest access and a private access. iw can confirm this:

$ iw list
        valid interface combinations:
                 * #{ managed } <= 1, #{ AP, P2P-client, P2P-GO } <= 1, #{ P2P-device } <= 1,
                   total <= 3, #channels <= 1

Here is the output for an Atheros card able to manage 8 SSIDs:

$ iw list
        valid interface combinations:
                 * #{ managed, WDS, P2P-client } <= 2048, #{ IBSS, AP, mesh point, P2P-GO } <= 8,
                   total <= 2048, #channels <= 1
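With such a card, hostapd can advertise several networks from a single configuration file by opening additional bss sections. A hypothetical sketch (interface names and passphrases are made up, and the radio parameters are shared by all BSSes):

interface=wlan0
driver=nl80211
hw_mode=g
channel=11
ssid=private
wpa=2
wpa_passphrase=XXXXXXXXXXXXXXX
wpa_key_mgmt=WPA-PSK

bss=wlan0_guest
ssid=guest
wpa=2
wpa_passphrase=YYYYYYYYYYYYYYY
wpa_key_mgmt=WPA-PSK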

Configuration as an access point

Except for those two limitations, the card works fine as an access point. Here is the configuration that I use for hostapd:

interface=wlan-guest
driver=nl80211

# Radio
ssid=XXXXXXXXX
hw_mode=g
channel=11

# 802.11n
wmm_enabled=1
ieee80211n=1
ht_capab=[HT40-][SHORT-GI-20][SHORT-GI-40][DSSS_CCK-40]

# WPA
auth_algs=1
wpa=2
wpa_passphrase=XXXXXXXXXXXXXXX
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP

Because of the use of channel 11, only the 802.11n HT40- rate can be enabled: the secondary 20 MHz channel has to sit four channels away from the primary one, and channel 15, which HT40+ would require, does not exist in the 2.4 GHz band. Look at the Wikipedia page for 802.11n to check whether you can use HT40-, HT40+ or both.

16 November, 2014 03:27PM by Vincent Bernat

Replacing Swisscom router by a Linux box

I have recently moved to Lausanne, Switzerland. Broadband Internet access is not as cheap as in France: Free, a French ISP, provides FTTH access with a bandwidth of 1 Gbps1 for about 38 € (including TV and phone service), while Swisscom provides roughly the same service for about 200 €2. Swisscom fiber access was available for my apartment and I chose the 40 Mbps contract without phone service for about 80 €.

Like many ISPs, Swisscom provides an Internet box with an additional box for TV. I didn’t unpack the TV box as I have no use for it. The Internet box comes with some nice features like the ability to set up firewall rules, a guest wireless access and some file sharing possibilities. No shell access!

I have bought a small PC to act as router and replace the Internet box. I have loaded the upcoming Debian Jessie on it. You can find the whole software configuration in a GitHub repository.

This blog post only covers the Swisscom-specific setup (and QoS). Have a look at those two blog posts for related topics:

Ethernet

The Internet box is packed with a Siligence-branded 1000BX SFP3. This SFP receives and transmits data on the same fiber using a different wavelength for each direction.

Instead of using a network card with an SFP port, I bought a Netgear GS110TP which comes with 8 gigabit copper ports and 2 fiber SFP ports. It is a cheap switch bundled with many interesting features like VLAN and LLDP. It works fine if you don’t expect too much from it.

IPv4

IPv4 connectivity is provided over VLAN 10. A DHCP client is mandatory. Moreover, the DHCP vendor class identifier option (option 60) needs to be advertised. This can be done by adding the following line to /etc/dhcp/dhclient.conf when using the ISC DHCP client:

send vendor-class-identifier "100008,0001,,Debian";

The first two numbers are here to identify the service you are requesting. I suppose this can be read as requesting the Swisscom residential access service. You can put whatever you want after that. Once you get a lease, you need to use a browser to identify yourself to Swisscom on the first use.

IPv6

Swisscom provides IPv6 access through the 6rd protocol. This is a tunneling mechanism to facilitate IPv6 deployment across an IPv4 infrastructure. This kind of tunnel has been natively supported by Linux since kernel version 2.6.33.

To set up IPv6, you need the base IPv6 prefix and the 6rd gateway. Some ISPs provide those values through DHCP (option 212), but this is not the case for Swisscom. The gateway is 6rd.swisscom.com and the prefix is 2a02:1200::/28. After appending the 32-bit IPv4 address to the prefix, you still get 4 bits for internal subnets.
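For example, if the public address were (hypothetically) 193.5.29.99, the 28 bits of 2a02:1200::/28 would be followed by the 32 bits of the IPv4 address (0xC1051D63), yielding the delegated prefix 2a02:120c:1051:d630::/60; the remaining 4 bits of the fourth group are available for internal subnets.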

Swisscom doesn’t provide a fixed IPv4 address. Therefore, it is not possible to precompute the IPv6 prefix. When installed as a DHCP hook (in /etc/dhcp/dhclient-exit-hooks.d/6rd), the following script configures the tunnel:

sixrd_iface=internet6
sixrd_mtu=1472                  # This is 1500 - 20 - 8 (PPPoE header)
sixrd_ttl=64
sixrd_prefix=2a02:1200::/28     # No way to guess, just have to know it.
sixrd_br=193.5.29.1             # That's "6rd.swisscom.com"

sixrd_down() {
    ip tunnel del ${sixrd_iface} || true
}

sixrd_up() {
    ipv4=${new_ip_address:-$old_ip_address}

    sixrd_subnet=$(ruby <<EOF
require 'ipaddr'
prefix = IPAddr.new "${sixrd_prefix}", Socket::AF_INET6
prefixlen = ${sixrd_prefix#*/}
ipv4 = IPAddr.new "${ipv4}", Socket::AF_INET
ipv6 = IPAddr.new (prefix.to_i + (ipv4.to_i << (64 + 32 - prefixlen))), Socket::AF_INET6
puts ipv6
EOF
)

    # Let's configure the tunnel
    ip tunnel add ${sixrd_iface} mode sit local $ipv4 ttl $sixrd_ttl
    ip tunnel 6rd dev ${sixrd_iface} 6rd-prefix ${sixrd_prefix}
    ip addr add ${sixrd_subnet}1/64 dev ${sixrd_iface}
    ip link set mtu ${sixrd_mtu} dev ${sixrd_iface}
    ip link set ${sixrd_iface} up
    ip route add default via ::${sixrd_br} dev ${sixrd_iface}
}

case $reason in
    BOUND|REBOOT)
        sixrd_down
        sixrd_up
        ;;
    RENEW|REBIND)
        if [ "$new_ip_address" != "$old_ip_address" ]; then
            sixrd_down
            sixrd_up
        fi
        ;;
    STOP|EXPIRE|FAIL|RELEASE)
        sixrd_down
        ;;
esac

The computation of the IPv6 prefix is offloaded to Ruby instead of trying to use the shell for that. Even if the ipaddr module is pretty “basic”, it suits the job.

Swisscom uses the same MTU for all clients. Because some of them use PPPoE, the usable MTU is 1492 (1500 minus the 8-byte PPPoE header); subtracting the 20-byte IPv4 header of the 6rd encapsulation then gives 1472 instead of 1480. You can easily check your MTU with this handy online MTU test tool.

It is not uncommon for PMTUD to be broken on some parts of the Internet. While not ideal, setting up TCP MSS clamping will alleviate any problems you may run into with an MTU of less than 1500:

ip6tables -t mangle -A POSTROUTING -o internet6 \
          -p tcp --tcp-flags SYN,RST SYN \
          -j TCPMSS --clamp-mss-to-pmtu

QoS

UPDATED: Unfortunately, this section is incorrect, including its premise. Have a look at Dave Taht's comment for more details.

Once upon a time, QoS was a tricky subject. The Wonder Shaper was a common way to get a somewhat working setup. Nowadays, thanks to the work of the Bufferbloat project, there are two simple steps to get something quite good:

  1. Reduce the queue of your devices to something like 32 packets. This helps TCP to detect congestion and act accordingly while still being able to saturate a gigabit link.

    ip link set txqueuelen 32 dev lan
    ip link set txqueuelen 32 dev internet
    ip link set txqueuelen 32 dev wlan
    
  2. Change the root qdisc to fq_codel. A qdisc receives packets to be sent from the kernel and decides how they are handed to the network card. Packets can be dropped, reordered or rate-limited. fq_codel is a queuing discipline combining fair queuing and controlled delay. Fair queuing means that all flows get an equal chance to be served; another way to put it is that a high-bandwidth flow won’t starve the queue. Controlled delay means that the queue size will be limited to ensure the latency stays low. This is achieved by dropping packets more aggressively when the queue grows.

    tc qdisc replace dev lan root fq_codel
    tc qdisc replace dev internet root fq_codel
    tc qdisc replace dev wlan root fq_codel
    

  1. Maximum download speed is 1 Gbps, while maximum upload speed is 200 Mbps. 

  2. This is the standard Vivo XL package rated at CHF 169.– plus the 1 Gbps option at CHF 80.–. 

  3. There are two references on it: SGA 441SFP0-1Gb and OST-1000BX-S34-10DI. It transmits to the 1310 nm wave length and receives on the 1490 nm one. 

16 November, 2014 03:26PM by Vincent Bernat

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Introducing RcppAnnoy

A new package RcppAnnoy is now on CRAN.

It wraps the small, fast, and lightweight C++ template header library Annoy written by Erik Bernhardsson for use at Spotify.

While Annoy is set up for use from Python, RcppAnnoy offers the exact same functionality from R via Rcpp.

A new page for RcppAnnoy provides some more detail, example code and further links. See a recent blog post by Erik for a performance comparison of different approximate nearest neighbours libraries for Python.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 November, 2014 02:36PM

Stefano Zacchiroli

Debsources Participation in FOSS Outreach Program

Jingjie Jiang selected as OPW intern for Debsources

I'm glad to announce that Jingjie Jiang, AKA sophiejjj, has been selected as an intern to work on Debsources as part of the FOSS Outreach Program (formerly known as Outreach Program for Women, or OPW). I'll co-mentor her work together with Matthieu Caneill.

I've just added sophiejjj's blog to Planet Debian, so you will soon hear about her work in the Debian blogosphere.

I've been impressed by the interest that the Debsources proposal in this round of OPW has spawned. Together with Matthieu I have interacted with more than a dozen OPW applicants. Many of them have contributed useful patches during the application period, and those patches have been in production at http://sources.debian.net for quite a while now (see the commit log for details). A special mention goes to Akshita Jha, who has shown a lot of determination in tackling both simple and complex issues affecting Debsources. I hope there will be other chances to work with her in the future.

OPW internship will begin December 9th, fasten your seat belts for a boost in Debsources development!

16 November, 2014 12:45PM

hackergotchi for Matthew Palmer

Matthew Palmer

A benefit of running an alternate init in Debian Jessie

If you’re someone who doesn’t like Debian’s policy of automatically starting daemons on install (or its heinous cousin, the RUN or ENABLE variable in /etc/default/<service>), then running an init system other than systemd should work out nicely.

16 November, 2014 05:00AM by Matt Palmer (mpalmer@hezmatt.org)

Jingjie Jiang

Start the new journey

I’m very excited about being accepted to the Debsources project in OPW. I’ll record everything about my adventure here.

Cheers ^_^


16 November, 2014 03:26AM by sophiejjj