November 27, 2014

Stefano Zacchiroli

CTTE nomination

Apparently, enough fellow developers have been foolish enough to nominate me as a prospective member of the Debian Technical Committee (CTTE) that I've been approached to formally accept/decline the nomination. (Accepted nominees would then go through a selection process and possibly be proposed to the DPL for nomination.)

I'm honored by the nominations and I thank the fellow developers that have thrown my name in the hat. But I've respectfully declined the nomination. Given my current involvement in an ongoing attempt to introduce a maximum term limit for CTTE membership, it would have been highly inappropriate for me to accept the nomination at this time.

I have no doubt that the current CTTE and the DPL will fill the empty seats with worthy project members.

27 November, 2014 08:43PM


Jonathan Dowland

PGP transition statement

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512,SHA1

I'm transitioning from my old, 1024-bit DSA PGP key, FD35 0B0A C6DD 5D91 DB7A 83D1 168B 4E71 7032 F238, to my newer, 4096-bit RSA key, E037 CB2A 1A00 61B9 4336 3C8B 0907 4096 06AA AAAA.

If you have signed my old key, I'd be very grateful if you would consider signing my new key. (Thanks in advance!)

This is long overdue! I've had 06AAAAAA since 2009, but it took me a while to get enough signatures on it for me to consider a transition. I still have far more signatures on my older key, owing to attending more conferences when I was using it than since I switched.

This statement, available in plaintext at http://jmtd.net/log/pgp_transition/statement.txt, has been signed with both keys.

I've marked my old key as expiring in around 72 days time, which coincides with my change of job, and will be just short of ten years since I generated it.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQIcBAEBCgAGBQJUdysuAAoJEAkHQJYGqqqquQ0QALVGcn9Wbasg0oh8JeE8rN7h
k7FeZjmGk+ZwfXwo4Eq4pjxKp+TN+r4xBCFSHDRmMaAm9q7aWF/RqkdNiEtioQfJ
SmfzLsoEnSBmS9dr1LvAoxIlUzs/9Lt8EY2tB4NQne3QIu3VyRBjQDvqff6KJgkV
849+GsrJezkg/2VFZkZp1N9y0YwrmezV/1j0N6zR8Y20LlF4YuD3egUJYvbrQ1pA
krJwhiHskXRinqviYMg4CuTZZ3m2wc4fM6uiY5t9taqK90KFgmg73KvW6Rx2L+XP
BUzTlSYHnK+qI23Cng//DthFww2Wf9RofkfRgXk0zAe4+DcoMZzALKtZQA6GrT46
wzsmu459p5nUwwa6BzUvqBzyVtzbtHN5xaFwbjHCPlemsFlH/8LHackJLsBvr2Gp
q292huYu/HNo+Q6OIslFX6rc5AR3HKdbU2DxYMPAfI+CEtW1XwetK8ADSZc1v54C
+sc+Yb2kAleLX2xMIfS9JqvcKOP2Gbo7nTdRPnS2jZRWOv8JxVfFiSzHVODtlLfk
AmRvms5rQ2nPnB21yOd+DP2sACVKY0MS7/UwMh2hoQLR/bSu/jRh24n3WHDfQWtF
Jp4AD80ABFV2t/pSP1L70HlxwPmOzra8AVzCdXMOT/SdT387rSM9fvue4FY5goT+
/pResJSl9pAbBCtjOFJoiEYEARECAAYFAlR3Ky4ACgkQFotOcXAy8jiLawCgofsp
ggze/iSpfkyeL0vhXi6N/WMAoLQogFQY+VaQrVP3lGl1VVh4jw0W
=JBA4
-----END PGP SIGNATURE-----
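For reference: a statement like this can be clearsigned with both keys in a single pass, which is what produces the two hashes listed in the header; a sketch, assuming the statement text lives in statement.txt and the long key IDs taken from the fingerprints quoted above:

$ gpg --local-user 0x168B4E717032F238 --local-user 0x0907409606AAAAAA --clearsign statement.txt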

27 November, 2014 01:48PM


Keith Packard

Black Friday 2014

Altus Metrum's 2014 Black Friday Event

BLACK FOREST, Colorado USA

Altus Metrum announces two special offers for "Black Friday" 2014.

We are pleased to announce that both TeleMetrum and TeleMega will be back in stock and available for shipment before the end of November. To celebrate this, any purchase of a TeleMetrum, TeleMega, or EasyMega board will include, free of charge, one each of our 160, 400, and 850 mAh Polymer Lithium Ion batteries and a free micro USB cable!

To celebrate NAR's addition of our 1.9 gram recording altimeter, MicroPeak, to the list of devices approved for use in contests and records, and help everyone get ready for NARAM 2015's altitude events, purchase 4 MicroPeak boards and we'll throw in a MicroPeak USB adapter for free!

These deals will be available from 00:00 Friday, 28 November 2014 through 23:59 Monday, 1 December, 2014. Only direct sales through our web store at http://shop.gag.com are included; no other discounts apply.

Find more information on all Altus Metrum products at http://altusmetrum.org.

Thank you for your continued support of Altus Metrum in 2014. We continue to work on more cool new products, and look forward to meeting many of you on various flight lines in 2015!

27 November, 2014 07:47AM


Pau Garcia i Quiles

Reminder: Desktops DevRoom @ FOSDEM 2015

We are less than 10 days away from the deadline for the Desktops DevRoom at FOSDEM 2015, the largest Free and Open Source event in Europe.

Do you think you can fill a room with 200+ people out of 6,000+ geeks? Prove it!

Check the Call for Talks for details on how to submit your talk proposal about anything related to the desktop:

  • Development
  • Deployment
  • Community
  • SCM
  • Software distribution / package managers
  • Why a particular default desktop on a prominent Linux distribution ;-)
  • etc

http://www.elpauer.org/2014/10/fosdem-2015-desktops-devroom-call-for-talks/

27 November, 2014 12:14AM by pgquiles

November 26, 2014


Gunnar Wolf

Guests in the classroom: @Rolman talks about persistent storage and filesystems

On November 14, as a great way to say goodbye to a semester, a good friend came to my class again to present a topic to the group; a good way to sum up the contents of this talk is "everything you ever wondered about persistent storage".

As people who follow my blog know, I like inviting my friends to present selected topics in my Operating Systems class. Many subjects will stick better if presented by more than a single viewpoint, and different experiences will surely enrich the group's learning.

So, here is Rolando Cedillo — A full gigabyte of him, spanning two hours (including two hiccups where my camera hit a per-file limit...).

Rolando is currently a Red Hat engineer, and in his long career, he has worked from so many trenches, it would be a crime not to have him! Of course, one day we should do a low-level hardware session with him, as his passion (and deep knowledge) for 8-bit arcades is beyond any other person I have met.

So, here is the full video on my server. Alternatively, you can get it from The Internet Archive.

26 November, 2014 03:49PM by gwolf

Enrico Zini


Calypso and DAVDroid

calypso and DAVdroid appeal to me. Let's try to make the whole thing work.

Update: radicale seems to also support git as a backend, and I plan to give it a try, too.

A self-signed SSL certificate

Generating the certificate:

$ openssl req -nodes -x509 -newkey rsa:2048 -keyout cal-key.pem -out cal-cert.pem -days 3650
[...]
Country Name (2 letter code) [AU]:IT
State or Province Name (full name) [Some-State]:Bologna
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:enricozini.org
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:cal.enricozini.org
Email Address []:postmaster@enricozini.org

Installing it on my phone:

$ openssl x509 -in cal-cert.pem -outform DER -out cal-cert.crt
$ adb push cal-cert.crt /mnt/sdcard/
$ enrico --follow-instructions http://davdroid.bitfire.at/faq/entry/importing-a-certificate
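
Before trusting the certificate on the phone, it is worth comparing fingerprints with the copy on the server; a quick check (my addition, not part of the original recipe):

$ openssl x509 -in cal-cert.pem -noout -fingerprint -sha256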

Installing calypso in my VPS

An updated calypso package:

$ git clone git://keithp.com/git/calypso
$ cd calypso
$ git checkout debian -b enrico
$ git remote add chrysn  git://prometheus.amsuess.com/calypso-patches
$ git fetch chrysn
$ git merge chrysn/chrysn/integration
$ dch -v 1.4+enrico  "Merged with chrysn integration branch"
$ debuild -us -uc -rfakeroot

Install the package:

# dpkg -i calypso_1.4+enrico_all.deb

Create a system user to run it:

# adduser --system --disabled-password calypso
# chsh calypso  # /bin/dash

Make it run at boot time (based on calypso-init from the git repo):

# cat /etc/default/calypso
CALYPSO_OPTS="-d -P $PIDFILE"
# diff -Nau calypso-init calypso-init.enrico
--- calypso-init        2014-11-26 11:50:35.301001194 +0100
+++ calypso-init.enrico 2014-11-26 12:18:16.564138554 +0100
@@ -62,8 +62,8 @@
        || return 1

    mkdir -p $(dirname $PIDFILE)
-       chown calypso:calypso $(dirname $PIDFILE)
-       start-stop-daemon --start -c $NAME --quiet --pidfile $PIDFILE --exec $DAEMON -- \
+       chown calypso:nogroup $(dirname $PIDFILE)
+       start-stop-daemon --start -c $NAME:nogroup --quiet --pidfile $PIDFILE --exec $DAEMON -- \
        $CALYPSO_OPTS \
        || return 2
    # Add code here, if necessary, that waits for the process to be ready
# cp calypso-init.enrico /etc/init.d/calypso
# update-rc.d calypso defaults

Setting up the database

# su - calypso

Certificates and server setup:

$ mkdir .config/calypso/certs
$ mv cal-key.pem .config/calypso/certs/cal.key
$ mv cal-cert.pem .config/calypso/certs/cal.pem
$ chmod 0600 .config/calypso/certs/*
$ cat > .config/calypso/config << EOF
[server]
certificate=/home/calypso/.config/calypso/certs/cal.pem
key=/home/calypso/.config/calypso/certs/cal.key

[acl]
type=htpasswd
encryption=sha1
filename=/home/calypso/.config/calypso/htpasswd
EOF

User passwords:

$ htpasswd -s .config/calypso/htpasswd enrico

Database initialization:

$ mkdir -p .config/calypso/calendars
$ cd .config/calypso/calendars
$ git init
$ cat > .calypso-collection << EOF
[collection]
is-calendar = True
is-addressbook = False
displayname = Test
description = Test calendar
EOF
$ git add .calypso-collection
$ git commit --allow-empty -m'initialize new calendar'

Start the server

# /etc/init.d/calypso start
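
A rough smoke test, to check that TLS and authentication answer (the URL path is an assumption; any authenticated DAV response means the server is up):

$ curl -k -u enrico -X PROPFIND -H "Depth: 0" https://localhost:5233/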

DAVdroid configuration

  1. Add a new DAVdroid sync account
  2. Use server/username configuration
  3. For server, use https://<your server hostname>:5233
  4. Add username and password

It should work.


26 November, 2014 11:38AM


Charles Plessy

Browsing debian-private via SSH

I recently realised that one can browse the archives of debian-private via SSH. I find this a good compromise between subscription and ignorance. Here is for instance the command for November.

ssh -t master.debian.org mutt -f /home/debian/archive/debian-private/debian-private.201411

26 November, 2014 10:35AM


Francois Marier

Hiding network disconnections using an IRC bouncer

A bouncer can be a useful tool if you rely on IRC for team communication and instant messaging. The most common use of such a server is to be permanently connected to IRC and to buffer messages while your client is disconnected.

However, that's not what got me interested in this tool. I'm not looking for another place where messages accumulate and wait to be processed later. I'm much happier if people email me when I'm not around.

Instead, I wanted to do to irssi what mosh did to ssh clients: transparently handle and hide temporary disconnections. Here's how I set everything up.

Server setup

The first step is to install znc:

apt-get install znc

Make sure you get the 1.0 series (in jessie or trusty, not wheezy or precise) since it has much better multi-network support.

Then, as a non-root user, generate a self-signed TLS certificate for it:

openssl req -x509 -sha256 -newkey rsa:2048 -keyout znc.pem -nodes -out znc.crt -days 365

and make sure you use something like irc.example.com as the subject name, since that is the address you will be connecting to from your IRC client.

Then install the certificate in the right place:

mkdir ~/.znc
mv znc.pem ~/.znc/
cat znc.crt >> ~/.znc/znc.pem

Once that's done, you're ready to create a config file for znc using the znc --makeconf command, again as the same non-root user:

  • create separate znc users if you have separate nicks on different networks
  • use your nickserv password as the server password for each network
  • enable ssl
  • say no to the chansaver and nickserv plugins

Finally, open the IRC port (tcp port 6697 by default) in your firewall:

iptables -A INPUT -p tcp --dport 6697 -j ACCEPT
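
To keep that rule across reboots, the iptables-persistent package can help; a sketch, assuming a Debian-based server:

apt-get install iptables-persistent
iptables-save > /etc/iptables/rules.v4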

Client setup (irssi)

On the client side, the official documentation covers a number of IRC clients, but the irssi page was quite sparse.

Here's what I used for the two networks I connect to (irc.oftc.net and irc.mozilla.org):

servers = (
  {
    address = "irc.example.com";
    chatnet = "OFTC";
    password = "fmarier/oftc:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  },
  {
    address = "irc.example.com";
    chatnet = "Mozilla";
    password = "francois/mozilla:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  }
);

Of course, you'll need to copy your znc.crt file from the server into ~/.irssi/certs/znc.crt.

Make sure that you're no longer authenticating with the nickserv from within irssi. That's znc's job now.

Wrapper scripts

So far, this is a pretty standard znc+irssi setup. What makes it work with my workflow is the wrapper script I wrote to enable znc before starting irssi and then prompt to turn it off after exiting:

#!/bin/bash
ssh irc.example.com "pgrep znc || znc"
irssi
read -p "Terminate the bouncer? [y/N] " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
  ssh irc.example.com killall -sSIGINT znc
fi

Now, instead of typing irssi to start my IRC client, I use irc.

If I'm exiting irssi before commuting or because I need to reboot for a kernel update, I keep the bouncer running. At the end of the day, I say yes to killing the bouncer. That way, I don't have a backlog to go through when I wake up the next day.

26 November, 2014 10:30AM

November 25, 2014


Sune Vuorela

QImage and QPixmap in a Qt Quick item

For reasons I don’t know, apparently a Qt Quick Item that can show a QImage or a QPixmap is kind of missing. The current Image QML item only works with data that can be represented by a URL.

So I wrote one that kind of works. Comments most welcome.

It is found on git.kde.org: http://quickgit.kde.org/?p=scratch/sune/imageitem.git

Oh, and the KDE End of Year fundraiser is still running. https://www.kde.org/fundraisers/yearend2014/. Go support it if you haven’t already.

25 November, 2014 09:50PM by Sune Vuorela


Holger Levsen


Change

Not many people adapt to fundamental changes easily, but at least people can change at all. I'm sure what looks funny now has also been a painful experience, but... - that's life. Sometimes it sucks. And suddenly...

25 November, 2014 09:48PM

Enrico Zini


A mock webserver to use for unit testing HTTP clients

With python -m SimpleHTTPServer it's easy to bring up an HTTP server to use to test HTTP client code, however it only supports GET requests, and I needed to test an HTTP client that needs to perform a file upload.

It took way more than I originally expected to put this together, so here it is, hopefully saving other people (including future me) some time:

#!/usr/bin/python3

import http.server
import cgi
import socketserver
import hashlib
import json

PORT = 8081

class Handler(http.server.SimpleHTTPRequestHandler):
    def do_POST(self):
        info = {
            "method": "POST",
            "headers": { k: v for k, v in self.headers.items() },
        }

        # From https://snipt.net/raw/f8ef141069c3e7ac7e0134c6b58c25bf/?nice
        form = cgi.FieldStorage(
            fp=self.rfile,
            headers=self.headers,
            environ={'REQUEST_METHOD':'POST',
                     'CONTENT_TYPE':self.headers['Content-Type'],
                     })

        postdata = {}
        for k in form.keys():
            if form[k].file:
                buf = form.getvalue(k)
                postdata[k] = {
                    "type": "file",
                    "name": form[k].filename,
                    "size": len(buf),
                    # json.dumps will not serialize a bytes object, so we
                    # return the shasum instead of the file body
                    "sha256": hashlib.sha256(buf).hexdigest(),
                }
            else:
                vals = form.getlist(k)
                if len(vals) == 1:
                    postdata[k] = {
                        "type": "field",
                        "val": vals[0],
                    }
                else:
                    postdata[k] = {
                        "type": "multifield",
                        "vals": vals,
                    }

        info["postdata"] = postdata

        resbody = json.dumps(info, indent=1)
        print(resbody)

        resbody = resbody.encode("utf-8")

        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.send_header("Content-Length", str(len(resbody)))
        self.end_headers()

        self.wfile.write(resbody)

class TCPServer(socketserver.TCPServer):
    # Allow to restart the mock server without needing to wait for the socket
    # to end TIME_WAIT: we only listen locally, and we may restart often in
    # some workflows
    allow_reuse_address = True

httpd = TCPServer(("", PORT), Handler)

print("serving at port", PORT)
httpd.serve_forever()
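
To exercise it, point any HTTP client at the port; for example with curl (the field names here are arbitrary):

$ curl -F upload=@/etc/hostname -F comment=test http://localhost:8081/

The server prints, and returns as the response body, a JSON summary of the request headers, the form fields, and the SHA-256 of any uploaded file.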

25 November, 2014 05:22PM

hackergotchi for Thorsten Glaser

Thorsten Glaser

d-i preseeding is not the answer

This post details what the d-i team currently shows as the only way.

It has several shortcomings and one missing documentation part.

Shortcoming: --purge is missing from the apt-get invocation. This leaves packages in “rc” state (requiring a manual dpkg --purge to completely remove them later, as they are then invisible to apt).

Worse shortcoming: this still leaves all dependencies pulled in by systemd around on the system, because packages installed by debootstrap are not eligible for “apt-get --purge autoremove”. Additionally, it does not influence debootstrap’s (nōn-existent, see #557322, #668001, #768062) dependency resolver, leading to possibly pessimistic package selections.

Missing: you can just hit Alt-F2 and enter the command…

	in-target apt-get --purge -y install sysvinit-core
 

… there, no need to preseed. But this does not eliminate the aforementioned shortcomings, of course.
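
For the record, the preseeded variant with the missing --purge added would read as follows (a sketch following the documented late_command form):

	d-i preseed/late_command string in-target apt-get --purge -y install sysvinit-core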

25 November, 2014 05:00PM by MirOS Developer tg (tg@mirbsd.org)

Scott Kitterman

On being excellent to each other

There has been a lot of discussion recently where there is strong disagreement, even about how to discuss the disagreement. Here’s a few thoughts on the matter.

The thing I personally find the most annoying is this: when someone thinks what someone else says is inappropriate and says so, the inevitable response seems to be to scream censorship. When people do that, I’m pretty sure they don’t know what the word censorship actually means. Debian/Ubuntu/Insert Project Name Here resources are not public spaces and no government is telling people what they can and can’t say.

When you engage in speech and people respond to that speech, even if you don’t feel all warm and fuzzy after reading the response, it’s not censorship. It’s called discussion.

When someone calls out speech that they think is inappropriate, the proper response is not to blame a Code of Conduct or some other set of rules. Projects that have a code also have a process for dealing with claims that the code has been violated. Unless someone invokes that process (which almost never happens), the code is irrelevant. What’s relevant is that someone is having a problem with what or how you are saying something and is in some way hurt by it.

Let’s focus on that. The rules are irrelevant; what matters is working together in a collegial way. I really don’t think project members actively want other project members to feel bad/unsafe, but it’s hard to get outside one’s own defensive reaction to being called out. So please pay less attention to how you’re feeling about things and try to see things from the other side. If we can all do a bit more of that, then things can be better for all of us.

Final note: If you’ve gotten this far and thought “Oh, that other person is doing this to me”, I have news for you – it’s not just them.

25 November, 2014 04:47PM by skitterman


Chris Lamb

Validating Django model attribute assignment

Ever done the following?

>>> user = User.objects.get(pk=102)
>>> user.superuser = True
>>> user.save()

# Argh, why is this user now not a superuser...

Here's a dirty hack to validate these:

import sys

from django.db import models
from django.conf import settings

FIELDS = {}
EXCEPTIONS = {
    'auth.User': ('backend',),
}

def setattr_validate(self, name, value):
    super(models.Model, self).__setattr__(name, value)

    # Real field names cannot start with underscores
    if name.startswith('_'):
        return

    # Magic
    if name == 'pk':
        return

    k = '%s.%s' % (self._meta.app_label, self._meta.object_name)
    try:
        fields = FIELDS[k]
    except KeyError:
        fields = FIELDS[k] = set(
            getattr(x, y) for x in self._meta.fields
            for y in ('attname', 'name')
        )

    # Field is in allowed list
    if name in fields:
        return

    # Field is in known exceptions
    if name in EXCEPTIONS.get(k, ()):
        return

    # Always allow Django internals to set values (eg. aggregates)
    if 'django/db/models' in sys._getframe().f_back.f_code.co_filename:
        return

    raise ValueError(
        "Refusing to set unknown attribute '%s' on %s instance. "
        "(Did you misspell %s?)" % (name, k, ', '.join(fields))
    )

# Let's assume we have good test coverage
if settings.DEBUG:
    models.Model.__setattr__ = setattr_validate

Now:

>>> user = User.objects.get(pk=102)
>>> user.superuser = True
...
ValueError: Refusing to set unknown attribute 'superuser' on auth.User instance. (Did you misspell 'username', 'first_name', 'last_name', 'is_active', 'email', 'is_superuser', 'is_staff', 'last_login', 'password', 'id', 'date_joined')

(Django can be a little schizophrenic on this — Model.save()'s update_fields keyword argument validates its fields, as does prefetch_related, but it's taking select_related a little while to land.)

25 November, 2014 02:54PM


Dirk Eddelbuettel

Rcpp now used by 300 CRAN packages

[Graph: growth of Rcpp usage over time]

This morning, Rcpp reached another round milestone: 300 packages on CRAN now depend on it (as measured by Depends, Imports and LinkingTo declarations). The graph on the left depicts the growth of Rcpp usage over time. There are 41 more packages on BioConductor (which are not included in the chart).

The first and less detailed part uses manually saved entries; the second half of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of user packages is kept on this page.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We may well hit five per-cent before the end of the year.

300 is a pretty humbling and staggering number. Also interesting is that we cleared 200 only at the end of April, and 250 in early August.

So from everybody behind Rcpp, a heartfelt Thank You! to all the users and of course other contributors.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 November, 2014 12:14PM


DebConf team

DebConf14 final report (Posted by Uli Scholler, and the DebConf Team)

The Final Report for DebConf14 is complete and the DebConf team proudly presents it to the world.

DebConf14, which was held in Portland, Oregon, USA, in August 2014, was a big success. Our final report captures the essence of this year’s conference in pictures and words:

  • talks and how we selected them
  • face-to-face meetings and their effect on building trust
  • events such as the day trip or the infamous cheese & wine party
  • the university venue
  • a selection of attendees' impressions

And of course there are numbers, budget, and statistics.

Read, enjoy, and share!

The DebConf team

25 November, 2014 08:48AM by DebConf Organizers


Erich Schubert

Installing Debian with sysvinit

First let me note that I am using systemd, so these things here are untested by me. See e.g. Petter's and Simon's blog entries on the same overall topic.
According to the Debian installer maintainers, the only accepted way to install Debian with sysvinit is to use preseeding. This can either be done at the installer boot prompt by manually typing the magic spell:
preseed/late_command="in-target apt-get install -y sysvinit-core"
or by using a preseeding file (which is a really nice feature I used for installing my Hadoop nodes) to do the same:
d-i preseed/late_command string in-target apt-get install -y sysvinit-core
If you are a sysadmin, using preseeding can save you a lot of typing. Put all your desired configuration into preseeding files, put them on a webserver (best with a short name resolvable by local DNS). Let's assume you have set up the DNS name d-i.example.com, and your DHCP is configured such that example.com is on the DNS search list. You can also add a vendor extension to DHCP to serve a full URL. Manually enabling preseeding then means adding
auto url=d-i
to the installer boot command line (d-i is the hostname I suggested to set up in your DNS before, and the full URL would then be http://d-i.example.com/d-i/jessie/./preseed.cfg). Preseeding is well documented in Appendix B of the installer manual, but it will nevertheless require a number of iterations to get everything to work as desired for a fully automatic install like I used for my Hadoop nodes.

There might be an easier option.
I have filed a wishlist bug suggesting to use the tasksel mechanism to allow the user to choose sysvinit at installation time. However, it got turned down by the Debian installer maintainers quite rudely with a "No." - essentially a "shut the f... up and go away", which is in my opinion an inappropriate way to discard a reasonable user wishlist request.
Since I don't intend to use sysvinit anymore, I will not be pursuing this option further. It is, as far as I can tell, still untested. If it works, it might be the least-effort, least-invasive option to allow the installation of sysvinit Jessie (except for above command line magic).
If you have interest in sysvinit, you (because I don't use sysvinit) should now test if this approach works.
  1. Get the patch proposed to add a task-sysvinit package.
  2. Build an installer CD with this tasksel (maybe this documentation is helpful for this step).
  3. Test whether the patch works. Report results to above bug report, so that others interested in sysvinit can find them easily.
  4. Find and fix bugs if it didn't work. Repeat.
  5. Publish the modified ("forked") installer, and get user feedback.
If you are then still up for a fight, you can try to convince the maintainers (or go the nasty way, and ask the CTTE for their opinion, to start another flamewar and make more maintainers give up) that this option should be added to the mainline installer. And hurry up, or you may at best get this into Jessie reloaded, 8.1 - chances are that the release manager will not accept such patches this late anymore. The sysvinit supporters should have investigated this option much, much earlier instead of losing time on the GR.
Again, I won't be doing this job for you. I'm happy with systemd. But patches and proof-of-concept is what makes open source work, not GRs and MikeeUSA's crap videos spammed to the LKML...
(And yes, I am quite annoyed by the way the Debian installer maintainers handled the bug report. This is not how open-source collaboration is supposed to work. I tried to file a proper wishlist bug report, suggesting a solution that I could not find discussed anywhere before, and got back just this "No. Shut up." answer. I'm not sure if I will be reporting a bug in debian-installer ever again, if this is the way they handle bug reports ...)
I do care about our users, though. If you look at popcon "vote" results, we have 4179 votes for sysvinit-core and 16918 votes for systemd-sysv (graph) indicating that of those already testing jessie and beyond - neglecting 65 upstart votes, and assuming that there is no bias to not-upgrade if you prefer sysvinit - about 20% appear to prefer sysvinit (in fact, they may even have manually switched back to sysvinit after being upgraded to systemd unintentionally?). These are users that we should listen to, and that we should consider adding an installer option for, too.
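
A quick sanity check of that percentage, from the vote counts quoted above:

$ python3 -c 'print(4179 / (4179 + 16918 + 65))'   # ≈ 0.1975, i.e. about 20%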

25 November, 2014 08:24AM


Gunnar Wolf

10 PRINT CHR$(205.5+RND(1)); : GOTO 10 (also known as #10print )

The line of BASIC code that appears as the subject for this post is the title for a book I just finished reading — and enjoyed thoroughly. The book is available online for download under a CC-BY-NC-SA 3.0 License, so you can take a good look at it before (or instead of) buying it. Still, it's among the books I will enjoy having on my shelf; the printing is of very good quality.

And what is this book about? Well, of course, it analyzes that very simple line of code, as it ran on the Commodore 64 thirty years ago.

And the analysis is made from every possible angle: What do mazes mean in culture? What have they meant in cultures through history? What about regularity in art (mainly 20th century art)? How would this code look (or how would it be adapted) on contemporary non-C64 computers? And in other languages more popular today? What does randomness mean? And what does random() mean? What is BASIC, and how did it come to the C64? What is the C64, and where did it come from? And several other beautiful chapters.

The book was collaboratively written by ten different authors, in a Wiki-like fashion. And... Well, what else is there to say? I enjoyed so much reading through long chapters of my childhood, of what attracted me to computers, of my cultural traits and values... I really hope that, in due time, I can be a part of such a beautiful project!

25 November, 2014 05:41AM by gwolf


Kenshi Muto

Bug #668001

If the bug title of #668001 was not "debootstrap: cant install systemd instead of sysvinit", but was like "debootstrap ignores everything from the first pipe character to the end of Depends/Pre-Depends line.", it would be treated more carefully ;)

My patch posting #20 aims to fix it.

Well, I hope this bug will be solved in jessie+1 or backports.

25 November, 2014 03:00AM by Kenshi Muto


Dirk Eddelbuettel

YATORP -- Yet Another Tutorial on R Packaging

What the world needs right now is yet another tutorial on R packages and their creation. Luckily, this last Friday and Saturday, I had the opportunity to present in a workshop organized by Frank DiTraglia at Penn's shiny new Warren Center, and held at Wharton.

Given the Warren Center's focus, the workshop centered around Big Data and Open Science with R. Yihui Xie and myself alternated on delivering four units on an Introduction to R, Writing R packages, Dynamic Documents with R, and HPC with Rcpp and RcppArmadillo.

So I had to come up with a plan for teaching creating R packages -- and decided to do it from the very bottom up, clearly introducing the underlying R CMD ... commands and only then switching to taking advantage of an environment such as the RStudio IDE.
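
Those underlying commands boil down to something like the following sketch (mypkg and its version number are hypothetical):

$ R CMD build mypkg                 # build the source tarball
$ R CMD check mypkg_1.0.tar.gz      # run the package QA checks
$ R CMD INSTALL mypkg_1.0.tar.gz    # install into the local library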

The resulting slides are now available on my presentations page. The code examples are in a repo subdirectory on GitHub as well. While both were designed to support the parallel live instruction offered in the workshop, I would be interested in feedback (preferably via email) about how useful the slides are by themselves.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 November, 2014 02:42AM

November 24, 2014


Rogério Brito

Problems with Emacs 24.4

This is, essentially, a call for help, as I don't really know which program is at fault here.

Given that Emacs's upstream converted their repository from bzr to git, all the commits in mirror repositories became "invalid" in relation to the official repository.

What does this mean in practical terms, in my case? Well, bear with me while I try to report my steps.

Noticing a regression and reporting a bug

There is a regression with Emacs 24.4 relative to 24.3, which I discovered after Emacs 24.4 became available in Debian's sid.

The regression in particular is that Emacs 24.4 doesn't seem to respect my Xresources, while 24.3 does (and this is 100% reproducible: I kept the binary packages of version 24.3 of emacs24 and I can install and reinstall things).

When I reported this to upstream, I received a reply that it worked fine with another person that was using XFCE with unstable.

Testing various Desktop environments

As I am using the MATE desktop environment, I proceeded to test this assertion by installing XFCE. Emacs 24.4 read my Xresources. I went ahead and installed LXDE. It worked again. I tried once more with GNOME 3, but "regular" GNOME 3 just crashed. I tried with GNOME 3 Classic and Emacs 24.4 just worked again.

Going deep into the rabbit's hole

Then I got more curious and tried to see why things worked the way they did. Given that there was a mirror of the Emacs repo on GitHub, I cloned it and started to git bisect to find the first problematic commit (I have no idea if bzr even offers something like git's bisect and I wouldn't really know how to do it as quickly as I do with git).
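
For the record, the bisect loop looks roughly like this (a sketch; the tag name for the known-good 24.3 release is an assumption):

$ git bisect start
$ git bisect bad master
$ git bisect good emacs-24.3
$ make && src/emacs          # build and test the candidate revision
$ git bisect good            # or "git bisect bad"; repeat until the culprit is found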

To cut a long story short, after many recompiles, many wasted hours, and a lot of wasted electrical energy, I found a bad commit and reported it.

I received no response after that.

The new repo enters in action

Of course, all my hard work bisecting things was completely invalidated after the transition to the new repository went live.

To make things relevant again, I used the awesome powers of git, restricting the changes of the newly cloned repository to the e-mail of the committer in question (Chong Yidong) and, from there, I proceeded to another painful process of git bisects.

And, sure enough, the first bad commit was the same one that I found with the previous tree.

Semi-blindly reverting this commit, and also semi-blindly resolving the conflicts, makes Emacs from master work again on my system, but I highly suspect that (given the way that I did it), it would not really be appropriate for upstream.

But given also that I failed to receive feedback after my original report, I am not too confident that this bug can be solved soon (even if it doesn't qualify for being fixed in Debian 8).

After all this, I don't really know what else to do. I even filed a bug report (more like a request for help) to the Debian MATE maintainers.

As a side note, I would have filed a bug to upstream MATE, but it is not really clear what the proper procedure to report bugs to them is---they seem to use github's issues system, but given that they have separate repositories for each component of the project, and that I don't know precisely what repository to report to (or even if it applies to MATE after all), I am more or less paralyzed.

A side note

I must say that the conversion was well done by Eric Raymond, because the whole .git repository of the new repo is only about 200MB, with history going back to 1985, while the other repository had about 800GB.

24 November, 2014 05:16PM

John Goerzen

My boys love 1986 computing

Yesterday, Jacob (age 8) asked to help me put together a 30-year-old computer from parts in my basement. Meanwhile, Oliver (age 5) asked Laura to help him learn cursive. Somehow, this doesn’t seem odd for a Saturday at our place.

[Photo, 22 November 2014]

Let me tell you how this came about.

I’ve had a project going on for a while now to load data from old floppies. It’s been fun, and had a surprise twist the other day: my parents gave me an old TRS-80 Color Computer II (aka “CoCo 2”). It was, in fact, my first computer, one they got for me when I was in Kindergarten. It is nearly 30 years old.

I have been musing lately about the great disservice Apple did the world by making computers easy to use — namely the fact that few people ever bother to learn about them. Who bothers to learn about them when, on the iPhone for instance, the case is sealed shut, the lifespan is 1 or 2 years for many purchasers, and the platform is closed in lots of ways?

I had forgotten how finicky computers used to be. But after some days struggling with IDE incompatibilities, booting issues, etc., when I actually managed to get data off a machine that had last booted in 1999, I had quite the sense of accomplishment, which I rarely have lately. I did something that was hard to do in a world where most of the interfaces don’t work with equipment that old (even if nominally they are supposed to.)

The CoCo is one of those computers normally used with a floppy drive or cassette recorder to store programs. You type DIR, and you feel the clack of the drive heads through the desk. You type CLOAD and you hear the relay click closed to turn on the tape motor. You wiggle cables around until they make contact just right. You power-cycle for the times when the reset button doesn’t quite do the job. The details of how it works aren’t abstracted away by innumerable layers of controllers, interfaces, operating system modules, etc. It’s all right there, literally vibrating your desk.

So I thought this could be a great opportunity for Jacob to learn a few more computing concepts, such as the difference between mass storage and RAM, plus a great way to encourage him to practice critical thinking. So we trekked down to the basement and came up with handfuls of parts. We brought up the computer, some joysticks, all sorts of tangled cables. We needed adapters, an old TV. Jacob helped me hook everything up, and then the moment of truth: success! A green BASIC screen!

I added more parts, but struck out when I tried to connect the floppy drive. The thing just wouldn’t start up right whenever the floppy controller cartridge was installed. I cleaned the cartridge. I took it apart, scrubbed the contacts, even did a re-seat of the chips. No dice.

So I fired up my CoCo emulator (xroar), and virtually “saved” some programs to cassette (a .wav file). I then burned those .wav files to an audio CD, brought up an old CD player from the basement, connected the “cassette in” plug to the CD player’s headphone jack, and presto — instant programs. (Well, almost. It takes a couple of minutes to load a program from audio codes.)

The picture above is Oliver cackling at one of the very simplest BASIC programs there is: “number find.” The computer picks a random number between 1 and 2000, and asks the user to guess it, giving a “too low” or “too high” clue with each incorrect guess. Oliver delighted in giving invalid input (way too high numbers, or things that weren’t numbers at all) and cackled at the sarcastic error messages built into the program. During Jacob’s turn, he got very serious about it, and is probably going to be learning about how to calculate halfway points before too long.

But imagine my pride when this morning, Jacob found the new CD I had made last night (correcting a couple recordings), found my one-line instruction on just part of how to load a program, and correctly figured out by himself all the steps to do in order (type CLOAD on the CoCo, advance the CD to the proper track, press play on the player, wait for it to load on the CoCo, then type RUN).

I ordered a replacement floppy controller off eBay tonight, and paid $5 for a coax adapter that should fix some video quality issues. I rescued some 5.25″ floppies from my trash can from another project, so they should have plenty of tools for exploration.

It is so much easier for them to learn how a disk drive works, and even what the heck a track is, when you can look at a floppy drive with the cover off and see the heads move. There are other things we can do with more modern equipment — Jacob has shown a lot of interest in Arduino projects — but I have so far drawn a blank on ways to really let kids discover how a modern PC (let alone a modern phone or tablet) works.

Update Nov. 24: Every so often, the world surprises me by deciding to, well, read one of my random blog posts. For the benefit of those of you that don’t already know my boys, you might want to know that among their common play activities are turning trees into pretend trains, typing at a manual typewriter, reading, writing their own books, using a cassette recorder, building a PC and learning to use bash or xmonad, making long paper tapes with an adding machine, playing records on a record player, building electric gizmos, and even making mud balls.

I am often asked about the role of the computer in their lives, given that my hobby and profession involve computers. The answer: less than that of most of their peers. I look for opportunities for them to learn by doing, discovering, playing, or imagining. I make no presumption that they will develop the passion for computers that I did. What I want is for them to have the curiosity and drive to learn everything there is to know about whatever they do develop a passion for, so they will be great at it.

24 November, 2014 03:27AM by John Goerzen

November 23, 2014

Dimitri John Ledkov

Analyzing public OpenPGP keys

OpenPGP Message Format (RFC 4880) well defines key structure and wire formats (openpgp packets). Thus when I looked for public key network (SKS) server setup, I quickly found pointers to dump files in said format for bootstrapping a key server.

I did not feel like experimenting with Python and instead opted for Go, where I found the http://code.google.com/p/go.crypto/openpgp/packet library, which has comprehensive support for parsing openpgp low level structures. I've downloaded the SKS dump, verified its MD5SUM hashes (lolz), and went ahead to process them in Go.

With help from http://github.com/lib/pq and database/sql, I've written a small program to churn through all the dump files, filter for primary RSA keys (not subkeys) and inject them into a database table. The things that I have chosen to inject are fingerprint, N, E. N & E are the modulus of the RSA key pair and the public exponent. Together they form a public part of an RSA keypair. So far, nothing fancy.

Next I've run an SQL query to see how unique things are... and found 92 unique N & E pairs that have from two and up to fifteen duplicates. In total it is 231 unique fingerprints, which use key material with a known duplicate in the public key network. That didn't sound good. And also odd - given that over 940 000 other RSA keys managed to get unique enough entropy to pull out a unique key out of the keyspace haystack (which is humongously huge by the way).
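
The duplicate hunt amounts to a query along these lines (a sketch, assuming a table keys(fingerprint, n, e) in a database named sks):

$ psql sks -c "SELECT n, e, count(*) AS dups
               FROM keys GROUP BY n, e
               HAVING count(*) > 1 ORDER BY dups DESC;"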

Having the list of the keys, I've fetched them and they do not look like regular keys - their UIDs do not have names & emails, instead they look like something from the monkeysphere. The keys look like they are originally used for TLS and/or SSH authentication, but were converted into OpenPGP format and uploaded into the public key server. This reminded me of the Debian's SSL key generation vulnerability CVE-2008-0166. So these keys might have been generated with bad entropy due to affected tools by that CVE and later converted to OpenPGP.

Looking at the openssl-blacklist package, it should be relatively easy for me to generate all possible RSA key-pairs and I believe all other material that is hashed to generate the fingerprint are also available (RFC 4880#12.2). Thus it should be reasonably possible to generate matching private keys, generate revocation certificates and publish the revocation certificate with pointers to CVE-2008-0166. (Or email it to the people who have signed given monkeysphered keys). When I have a minute I will work on generating openpgp-blacklist type of scripts to address this.

If anyone is interested in the Go source code I've written to process openpgp packets, please drop me a line and I'll publish it on github or something.

23 November, 2014 09:15PM by Dimitri John Ledkov (noreply@blogger.com)


Steinar H. Gunderson

Scaling analysis.sesse.net

As I previously mentioned, I've been running live chess analysis during the Carlsen–Anand World Chess Championship match. Now it's all over (congratulations to Magnus!), so I thought I should write a few words about scaling, as we ended up peaking at (I think) 1527 simultaneous users, much more than the system was originally designed for (2–3 or so :-) ).

Let me explain first the requirements. Generally the backend system outputs analysis as soon as it comes in from the two chess engines (although rate-limited so that it doesn't output more than once a second), and we want to push this out to the clients as soon as possible. The clients are regular web browsers (both on mobile and on desktop; I haven't checked the ratio) running a fair amount of JavaScript; they generally request an URL in a loop, and whenever something comes in, they display it on the chessboard, wait 100 ms (just as a safeguard) and then go fetch again.

Of course, I could have just had the clients ask every second, but it seems inelegant and a bit wasteful, especially for mobile. If the analysis has come far, or even has stopped entirely since e.g. the game is over, there's no need to go fetch the same data over and over again. Instead, what I want is a system where, if the client already has the latest data, the HTTP request hangs until there's more data, and then gets it immediately. Together with this, there's also a special header that says how many people are connected (which is also shown to the viewers). If a client has been hanging/sleeping for more than 30 seconds, I just re-send the latest analysis; in this world of NATs, transparent proxies and other unpredictable network conditions, I don't want to have connections hanging for minutes with no idea of whether I can actually answer them when the time comes.

The client tells the server (in a CGI parameter, again for simplicity so that I don't have to deal with caching proxies etc. on the way) in the request the timestamp of the latest data it has. This leads to the following different scenarios:

  1. A client comes in and has no existing data. They should get the latest data, immediately.
  2. A client has old data, and re-asks. They should also get the latest data, immediately.
  3. A client already has the newest data, which causes it to hang, and no new data is ready within 30 seconds. They should get the latest data anew (or I could have returned some other HTTP code, but I decided not to get fancy).
  4. A client already has the newest data, but new data comes in underway. They should get the new data.

Unsurprisingly, this leads to a lot of clients being in the “hanging” state at the same time, and then when new analysis comes in, there's a thundering herd of clients that should have it at the same time (and then come back for more soon after).

Like I wrote earlier, Node.js was pretty much the ideal case model-wise for this; there's only one process to handle all of them, which means the extra memory overhead per hanging client is very low, and when there's new data, we can just send to all of them after each other. Furthermore, since there's only one process, it is also easy to find the viewer count: Simply count all the hanging clients, plus the ones that I think are simply processing the latest data and should come back with a new request soon (the limit was five seconds or so).

However, at around 600–700 clients, I started getting issues with requests not coming through. It turns out that the single Node.js process just couldn't handle all that many clients and started hitting the roof CPU-wise. Everyone who's done a bit of performance work in nontrivial systems knows that you can't really optimize anything without profiling it first, but unfortunately, Node.js was extremely limited here. There were some profiling systems that send lots of data to external services, which I didn't really feel like using. Then, there were some tools that try to interpret V8's debug output logs, and they simply gave out bogus answers.

In particular, they claimed 93% of my time was spent in glibc, but couldn't say where in glibc, and when I pointed perf at them, it was pretty clear that most of the time was actually spent in JavaScript and V8 support functions. I took a guess at my viewer count functions, optimized so I didn't have to count as often, and it helped—but I still wasn't really confident it would scale very far. Usually people start up more Node.js workers and then have some load balancer in front, but it would make the viewer counting much more complicated, and the CPU would need to be shared with the chess engine itself, so I wasn't happy with the “just give it more cores” approach either.

So I turned to everyone's favorite website scaling tool, Varnish. With lots of help from Lasse (Varnish Software) and Tollef (ex-Varnish Software, now Fastly), I got things working; it was a sort of bumpy road, though, especially as I hit two different crash bugs in Varnish 4.0.2 that only manifested themselves under actual production load. Here's what I ended up with (running on git master):

The first thing to realize is that we're not trying to keep backend traffic entirely minimal, just cut it significantly. For instance, if Varnish sees #1 (no existing data) and #2 (old existing data) as different and fires off two different backend requests for them, it doesn't really matter. However, we really want all the hanging clients to get the same backend connection; thankfully, Varnish gives us this entirely by itself with its backend coalescing; if it has a backend request going and another one comes in for the same URL, it simply puts the second one on the sleep list and gives both the same response when it comes back. Also, if a slow or new client doesn't manage to get onto the hanging request (i.e., it comes in after the backend response came), it should simply be served entirely out of Varnish' cache.

A lot of this comes automatically, and some of it comes with some cooperation between the backend and the VCL. In particular, we can let the client tag the response with the timestamp of the data, and once something comes in, simply purge/ban every object with a different timestamp header, causing us to never give out stale data from the Varnish cache. Varnish bans can be a bit tricky since they're checked lazily, and if you're not entirely careful, you can end up with a very long ban list, but it seems to work well in practice.
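
Such a ban can also be issued by hand over the management interface; a sketch, assuming the backend tags responses with a hypothetical X-Analysis-Timestamp header:

$ varnishadm ban obj.http.X-Analysis-Timestamp "!=" 1416772800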

However, the distinction between #3 and #4 gives us a problem. We now have a situation where people ask for an URL, it times out after 30 seconds and gives us a response... which we then should give out to everybody, but the next time they ask, we should hang again! This was incredibly tricky to get right; the combination of TTL, Expires headers, grace, and the problem of clock skew for long-running requests (what exactly is the Date timestamp supposed to mean; request received, first byte of backend response, or something else?) and so on was just too much for me. Eventually I got tired of reading the Varnish source code (which, frankly, I find quite opaque) and decided to just sidestep the problem. Now, instead of the 30-second timeout, the backend simply just touches the data file every 30 seconds if it hasn't been changed, so it gets a new timestamp every time. Problem solved.

Finally, there's the counting problem; the backend doesn't see all the requests anymore, so we need a different way of counting. I solve this by tailing the access logs (using varnishncsa) in a separate process, comparing to the updates of the analysis file, and then trying to figure out if they've fallen out or not. Then I simply inject the viewer count into the backend every second. Problem solved, again. (Well, at low traffic numbers, seemingly there's some sort of buffering somewhere, causing me to see the requests way too late, and this causes the count to oscillate down between 1 and the real number somehow. But I don't care too much right now.)
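
The tailing itself can be as simple as piping varnishncsa into the counting process (a sketch; the format string and the count-viewers script are assumptions):

$ varnishncsa -F '%t "%r" %s' | ./count-viewers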

So, there you have it. Varnish' threaded architecture isn't super for this kind of thing; in a sense, much less than Node.js. However, even in this day and age, optimized C beats JavaScript any day of the week; seemingly by a factor five or so. In the end, the 1500 clients were handled with CPU usage of about 40% of one core. I don't really like the fact that it needs ~1500 worker threads for those 1500 clients (I had to increase it from the default of 1024 in order to keep the HTTP errors away), but I used taskset to restrict it to two physical cores in order not to disturb the chess worker threads too much (they are already rather sensitive to the kernel's scheduling decisions).

So, how far can it go? Well, those 1500 clients needed about 33 Mbit/sec, so we can go to ~45k based on bandwidth (the server is on gigabit). At that point, though, I sincerely doubt that both Varnish and the chess engine could keep going on the same machine, so I'd have to move one of them externally. So next up, maybe Fastly? Well, at least if they start supporting IPv6.

You can find all the code, including the Varnish snippets, in the git repository. Until next time—perhaps WCC 2016! Waiting for Carlsen–Caruana. :-)

Final bonus: Munin graphs. Everyone loves Munin graphs; it's the Comic Sans of system administration.

[Munin graphs: daily and weekly analysis traffic]

23 November, 2014 08:06PM

Iustin Pop

Debian, Debian…

Due to some technical issues, I've been without access to my lists subscription email for a bit more than a week. Once I regained access and proceeded to read the batch of emails, I was - once again - shocked. Shocked at the amount of emails spent on the systemd issue, shocked at the number of people resigning, shocked at the amount of mud thrown.

I just hope that the GR results will finally mean silence and getting over the last 3-6 months.

For the record:

  • I seconded the GR because I believed we were moving too fast (I wanted one full release as a transition period, even if that's a long time)
  • I am quite happy with the result of the GR!
  • I am not happy with the amount of people leaving (I hope they're just taking a break)
  • I am, as usual, behind on my Debian packaging ☹

However, some of the more recent emails on -private give me more hope, so I'm looking forward to the next 6 months. I wonder how this will all look in two years?

(Side-note: emacs-nox shows me the italic word above as italic in text mode: I never saw that before, and didn't know that it's possible to have italic fonts in xterm! What is this trickery⁈ … it seems to be related to the font I use, fun!)

23 November, 2014 09:23AM


Matthew Palmer

You stay classy, Uber

You may have heard that Uber has been under a bit of fire lately for its desires to hire private investigators to dig up “dirt” on journalists who are critical of Uber. From using users’ ride data for party entertainment, putting the assistance dogs of blind passengers in the trunk, adding a surcharge to reduce the number of dodgy drivers, or even booking rides with competitors and then cancelling, or using the ride to try and convince the driver to change teams, it’s pretty clear that Uber is a pretty good example of how companies are inherently sociopathic.

However, most of those examples are internal stupidities that happened to be made public. It’s a very rare company that doesn’t do all sorts of shady things, on the assumption that the world will never find out about them. Uber goes quite a bit further, though, and is so out-of-touch with the world that it blogs about analysing people’s sexual activity for amusement.

You’ll note that if you follow the above link, it sends you to the Wayback Machine, and not Uber’s own site. That’s because the original page has recently turned into a 404. Why? Probably because someone at Uber realised that bragging about how Uber employees can amuse themselves by perving on your one night stands might not be a great idea. That still leaves open the question of what sort of corporate culture makes anyone ever think that inspecting user data for amusement would be a good thing, let alone publicising it. It’s horrific.

Thankfully, despite Uber’s fairly transparent attempt at whitewashing (“clearwashing”?), the good ol’ Wayback Machine helps us to remember what really went on. It would be amusing if Uber tried to pressure the Internet Archive to remove their copies of this blog post (don’t bother, Uber; I’ve got a “Save As” button and I’m not afraid to use it).

In any event, I’ve never used Uber (not that I’ve got one-night stands to analyse, anyway), and I’ll certainly not be patronising them in the future. If you’re not keen on companies amusing themselves with your private data, I suggest you might consider doing the same.

23 November, 2014 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

November 22, 2014

hackergotchi for Steve Kemp

Steve Kemp

Lumail 2.x ?

I've continued to ponder the idea of reimplementing the console mail-client I wrote, lumail, using a more object-based codebase.

For one thing having loosely coupled code would allow testing things in isolation, which is clearly a good thing.

I've written some proof of concept code which will allow the following Lua to be evaluated:

-- Open the maildir.
users = Maildir.new( "/home/skx/Maildir/.debian.user" )

-- Count the messages.
print( "There are " .. users:count() .. " messages in the maildir " .. users:path() )

--
-- Now we want to get all the messages and output their paths.
--
for k,v in ipairs( users:messages()) do
    --
    -- Here we could do something like:
    --
    --   if ( string.find( v:headers["subject"], "troll", 1, true ) ) then v:delete()  end
    --
    -- Instead play-nice and just show the path.
    print( k .. " -> " .. v:path() )
end

This is all a bit ugly, but I've butchered some code together that works, and tried to solicit feedback from lumail users.

I'd write more but I'm tired, and intending to drink whisky and have an early night. Today I mostly replaced pipes in my attic. (Is it "attic", or is it "loft"? I keep alternating!) Fingers crossed this will mean a dry kitchen in the morning.

22 November, 2014 09:39PM

hackergotchi for Sune Vuorela

Sune Vuorela

Is linux about choice?

Occasionally, quotes surface from people with an opinion on whether Linux is about choice or not. Even pages like http://www.islinuxaboutchoice.com/ have shown up.

My short answer is “YES”. Linux is about choice. And you get all your choices directly from your f/loss definition of choice (FSF’s 4 freedoms / OSI’s opensource definition / Debian Free Software Guidelines)

It doesn’t mean that you get all the gui configuration bits that you want. It doesn’t mean that you can switch out any component without any problems. But it does mean that you can get it exactly your way. It might, however, require you to edit some source code and compile some stuff.

22 November, 2014 09:10PM by Sune Vuorela

hackergotchi for Daniel Pocock

Daniel Pocock

rtc.debian.org updated for latest browsers

I've just updated rtc.debian.org with the latest versions of JSCommunicator and JsSIP.

The version of JsSIP that had been on the site was actually quite old, from February 2014 and the browsers have evolved a lot since then.

If you've tried it before and it didn't work consistently, please try again and feel free to share any feedback you have.

22 November, 2014 04:24PM by Daniel.Pocock

Jonathan Wiltshire

Getting things into Jessie (#7)

Keep in touch

We don’t really have a lot of spare capacity to check up on things, so if we ask for more information or send you away to do an upload, please stay in touch about it.

Do remove a moreinfo tag if you reply to a question and are now waiting for us again.

Do ping the bug if you get a green light about an upload, and have done it. (And remove moreinfo if it was set.)

Don’t be afraid of making sure we’re aware of progress.


Getting things into Jessie (#7) is a post from: jwiltshire.org.uk | Flattr

22 November, 2014 10:18AM by Jon

Craig Small

WordPress 4.0.1 for Debian

WordPress recently released 4.0.1, an update to their (then) current version 4.0 that includes multiple important security fixes. The Debian packages were just uploaded; if you are running the Debian-packaged wordpress, you should update to 4.0.1+dfsg-1 or later.

I am going to look at these patches and see if they can and need to be backported to wordpress 3.6.1. Unfortunately I believe they will need to be. I'm also asking for the package to be unblocked into Jessie, as it is a security fix.

There were, at the time of writing, no CVE numbers.

22 November, 2014 08:49AM by Craig Small

Petter Reinholdtsen

How to stay with sysvinit in Debian Jessie

By now, it is well known that Debian Jessie will not be using sysvinit as its boot system by default. But how can one keep using sysvinit in Jessie? It is fairly easy, and here are a few recipes, courtesy of Erich Schubert and Simon McVittie.

If you already are using Wheezy and want to upgrade to Jessie and keep sysvinit as your boot system, create a file /etc/apt/preferences.d/use-sysvinit with this content before you upgrade:

Package: systemd-sysv
Pin: release o=Debian
Pin-Priority: -1

This file content will tell apt and aptitude not to consider installing systemd-sysv as part of any installation or upgrade solution when resolving dependencies, and thus to avoid systemd as the default boot system. The end result should be that the upgraded system keeps using sysvinit.
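
A quick way to check that the pin is in effect before upgrading is to ask apt for the candidate version; with the negative pin it should report none (output abridged and illustrative, not taken from a real Jessie system):

$ apt-cache policy systemd-sysv
systemd-sysv:
  Installed: (none)
  Candidate: (none)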

If you are installing Jessie for the first time, there is no way to get sysvinit installed by default (debootstrap as used by debian-installer has no option for this), but one can tell the installer to switch to sysvinit before the first boot, either by using a kernel argument to the installer or by adding a line to the preseed file used. First, the kernel command line argument:

preseed/late_command="in-target apt-get install --purge -y sysvinit-core"

Next, the line to use in a preseed file:

d-i preseed/late_command string in-target apt-get install --purge -y sysvinit-core

One can of course also do this after the first boot by installing the sysvinit-core package.

I recommend only using sysvinit if you really need it, as the sysvinit boot sequence in Debian has several hardware-specific bugs on Linux, caused by the fact that it is unpredictable when hardware devices show up during boot. But on the other hand, the new default boot system still has a few rough edges I hope will be fixed before Jessie is released.

Update 2014-11-26: Inspired by a blog post by Torsten Glaser, added --purge to the preseed line.

22 November, 2014 12:00AM

November 21, 2014

hackergotchi for Joey Hess

Joey Hess

propelling containers

Propellor has supported docker containers for a "long" time, and it works great. This week I've worked on adding more container support.

docker containers (revisited)

The syntax for docker containers has changed slightly. Here's how it looks now:

example :: Host
example = host "example.com"
    & Docker.docked webserverContainer

webserverContainer :: Docker.Container
webserverContainer = Docker.container "webserver" "joeyh/debian-stable"
    & os (System (Debian (Stable "wheezy")) "amd64")
    & Docker.publish "80:80"
    & Apt.serviceInstalledRunning "apache2"
    & alias "www.example.com"

That makes example.com have a web server in a docker container, as you'd expect, and when propellor is used to deploy the DNS server it'll automatically make www.example.com point to the host (or hosts!) where this container is docked.

I use docker a lot, but I have drunk little of the Docker KoolAid. I'm not keen on using random blobs created by random third parties using either unreproducible methods, or the weirdly underpowered dockerfiles. (As for vast complicated collections of containers that each run one program and talk to one another etc ... I'll wait and see.)

That's why propellor runs inside the docker container and deploys whatever configuration I tell it to, in a way that's both replicatable later and lets me use the full power of Haskell.

Which turns out to be useful when moving on from docker containers to something else...

systemd-nspawn containers

Propellor now supports containers using systemd-nspawn. It looks a lot like the docker example.

example :: Host
example = host "example.com"
    & Systemd.persistentJournal
    & Systemd.nspawned webserverContainer

webserverContainer :: Systemd.Container
webserverContainer = Systemd.container "webserver" chroot
    & Apt.serviceInstalledRunning "apache2"
    & alias "www.example.com"
  where
    chroot = Chroot.debootstrapped (System (Debian Unstable) "amd64") Debootstrap.MinBase

Notice how I specified the Debian Unstable chroot that forms the basis of this container. Propellor sets up the container by running debootstrap, boots it up using systemd-nspawn, and then runs inside the container to provision it.

Unlike docker containers, systemd-nspawn containers use systemd as their init, and it all integrates rather beautifully. You can see the container listed in systemctl status, including the services running inside it, use journalctl to examine its logs, etc.
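
For example, something like the following works from the host ("webserver" being the container name from the example above; standard systemd tooling, nothing Propellor-specific):

machinectl list            # enumerate running nspawn containers
systemctl status           # the container's services appear in the tree
journalctl -M webserver    # read the container's journal from the host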

But no, systemd is the devil, and docker is too trendy...

chroots

Propellor now also supports deploying good old chroots. It looks a lot like the other containers. Rather than repeat myself a third time, and because we don't really run webservers inside chroots much, here's a slightly different example.

example :: Host
example = host "mylaptop"
    & Chroot.provisioned (buildDepChroot "git-annex")

buildDepChroot :: Apt.Package -> Chroot.Chroot
buildDepChroot pkg = Chroot.debootstrapped system Debootstrap.BuildD dir
    & Apt.buildDep pkg
  where
    dir = "/srv/chroot/builddep/" ++ pkg
    system = System (Debian Unstable) "amd64"

Again this uses debootstrap to build the chroot, and then it runs propellor inside the chroot to provision it (btw without bothering to install propellor there, thanks to the magic of bind mounts and completely linux distribution-independent packaging).

In fact, the systemd-nspawn container code reuses the chroot code, and so turns out to be really rather simple. 132 lines for the chroot support, and 167 lines for the systemd support (which goes somewhat beyond the nspawn containers shown above).

Which leads to the hardest part of all this...

debootstrap

Making a propellor property for debootstrap should be easy. And it was, for Debian systems. However, I have crazy plans that involve running propellor on non-Debian systems, to debootstrap something, and installing debootstrap on an arbitrary linux system is ... too hard.

In the end, I needed 253 lines of code to do it, which is barely one order of magnitude less code than the size of debootstrap itself. I won't go into the ugly details, but this could be made a lot easier if debootstrap catered more to being used outside of Debian.

closing

Docker and systemd-nspawn have different strengths and weaknesses, and there are sure to be more container systems to come. I'm pleased that Propellor can add support for a new container system in a few hundred lines of code, and that it abstracts away all the unimportant differences between these systems.

PS

Seems likely that systemd-nspawn containers can be nested to any depth. So, here's a new kind of fork bomb!

infinitelyNestedContainer :: Systemd.Container
infinitelyNestedContainer = Systemd.container "evil-systemd"
    (Chroot.debootstrapped (System (Debian Unstable) "amd64") Debootstrap.MinBase)
    & Systemd.nspawned infinitelyNestedContainer

Strongly typed purely functional container deployment can only protect us against a certain subset of all badly thought out systems. ;)

21 November, 2014 09:33PM

Niels Thykier

Release Team unblock queue flushed

At the start of this week, I wrote that we had 58 unblock requests open (of which 25 were tagged moreinfo).  Thanks to an extra effort from the Release Team, we are now down to 25 open unblocks – of which 18 are tagged moreinfo.

We have now resolved 442 unblock requests (out of a total of 467).  The rate of new requests has also declined to an average of ~18 a day (over 26 days), and our closing rate increased to ~17.

With all of this awesomeness, some of us are now more than ready to have a well-deserved weekend to recharge our batteries.  Meanwhile, feel free to keep the RC bug fixes flowing into unstable.


21 November, 2014 08:46PM by Niels Thykier

Richard Hartmann

Release Critical Bug report for Week 47

There's a BSP this weekend. If you're interested in remote participation, please join #debian-muc on irc.oftc.net.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1213 (Including 210 bugs affecting key packages)
    • Affecting Jessie: 342 (key packages: 152) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 260 (key packages: 119) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 37 bugs are tagged 'patch'. (key packages: 20) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 12 bugs are marked as done, but still affect unstable. (key packages: 3) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 211 bugs are neither tagged patch, nor marked done. (key packages: 96) Help make a first step towards resolution!
      • Affecting Jessie only: 82 (key packages: 33) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 65 bugs are in packages that are unblocked by the release team. (key packages: 26)
        • 17 bugs are in packages that are not unblocked. (key packages: 7)

How do we compare to the Squeeze release cycle?

Week Squeeze Wheezy Jessie
43 284 (213+71) 468 (332+136) 319 (240+79)
44 261 (201+60) 408 (265+143) 274 (224+50)
45 261 (205+56) 425 (291+134) 295 (229+66)
46 271 (200+71) 401 (258+143) 427 (313+114)
47 283 (209+74) 366 (221+145) 342 (260+82)
48 256 (177+79) 378 (230+148)
49 256 (180+76) 360 (216+155)
50 204 (148+56) 339 (195+144)
51 178 (124+54) 323 (190+133)
52 115 (78+37) 289 (190+99)
1 93 (60+33) 287 (171+116)
2 82 (46+36) 271 (162+109)
3 25 (15+10) 249 (165+84)
4 14 (8+6) 244 (176+68)
5 2 (0+2) 224 (132+92)
6 release! 212 (129+83)
7 release+1 194 (128+66)
8 release+2 206 (144+62)
9 release+3 174 (105+69)
10 release+4 120 (72+48)
11 release+5 115 (74+41)
12 release+6 93 (47+46)
13 release+7 50 (24+26)
14 release+8 51 (32+19)
15 release+9 39 (32+7)
16 release+10 20 (12+8)
17 release+11 24 (19+5)
18 release+12 2 (2+0)

Graphical overview of bug stats thanks to azhag:

21 November, 2014 08:31PM by Richard 'RichiH' Hartmann

Jonathan Wiltshire

On kFreeBSD and FOSDEM

Boy I love rumours. Recently I’ve heard two, which I ought to put to rest now everybody’s calmed down from recent events.

kFreeBSD isn’t an official Jessie architecture because <insert systemd-related scare story>

Not true.

Our sprint at ARM (who kindly hosted and caffeinated us for four days) was timed to coincide with the Cambridge Mini-DebConf 2014. The intention was that this would save on travel costs for those members of the Release Team who wanted to attend the conference, and give us a jolly good excuse to actually meet up. Winners all round.

It also had an interesting side-effect. The room we used was across the hall from the lecture theatre being used as hack space and, later, the conference venue, which meant everybody attending during those two days could see us locked away there (and yes, we were in there all day for two days solid, except for lunch times and coffee missions). More than one conference attendee remarked to me in person that it was interesting for them to see us working (although of course they couldn’t hear what we were discussing), and that they hadn’t appreciated before how much time and effort goes into our meetings.

Most of our first morning was taken up with the last pieces of architecture qualification, and that was largely the yes/no decision we had to make about kFreeBSD. And you know what? I don’t recall us talking about systemd in that context at all. Don’t forget kFreeBSD already had a waiver for a reduced scope in Jessie because of the difficulty in porting systemd to it.

It’s sadly impossible to capture the long and detailed discussion we had into a couple of lines of status information in a bits mail. If bits mails were much longer, people would be put off reading them, and we really really want you to take note of what’s in there. The little space we do have needs to be factual and to the point, and not include all the background that led us to a decision.

So no, the lack of an official Jessie release of kFreeBSD has very little, if anything, to do with systemd.

Jessie will be released during (or even before) FOSDEM

Not necessarily true.

Debian releases are made when they’re ready. That sets us apart from lots of other distributions, and is a large factor in our reputation for stability. We may have a target date in mind for a freeze, because that helps both us and the rest of the project plan accordingly. But we do not have a release date in mind, and will not do so until we get much closer to being ready. (Have you squashed an RC bug today?)

I think this rumour originated from the office of the DPL, but it’s certainly become more concrete than I think Lucas intended.

However, it is true that we’ve gone into this freeze with a seriously low bug count, because of lots of other factors. So it may indeed be that we end up in good enough shape to think about releasing near (or even at) FOSDEM. But rest assured, Debian 8 “Jessie” will be released when it’s ready, and even we don’t know when that will be yet.

(Of course, if we do release before then, you could consider throwing us a party. Plenty of the Release Team, FTP masters and CD team will be at FOSDEM, release or none.)


On kFreeBSD and FOSDEM is a post from: jwiltshire.org.uk | Flattr

21 November, 2014 07:16PM by Jon

hackergotchi for Gunnar Wolf

Gunnar Wolf

Status of the Debian OpenPGP keyring — November update

Almost two months ago I posted our keyring status graphs, showing the progress of the transition to >=2048-bit keys for the different active Debian keyrings. So, here are the new figures.

First, the Non-uploading keyring: We were already 100% transitioned. You will only notice a numerical increase: That little bump at the right is our dear friend Tássia finally joining as a Debian Developer. Welcome! \o/

As for the Maintainers keyring: We can see a sharp increase in 4096-bit keys. Four 1024-bit DM keys were migrated to 4096R, but we also had eight new DMs coming in. To them, also, welcome \o/.

Sadly, we had to remove a 1024-bit key, as Peter Miller passed away. So, in a 234-key universe, 12 new 4096R keys is a large bump!

Finally, our current-greatest worry — If for nothing else, for the size of the beast: The active Debian Developers keyring. We currently have 983 keys in this keyring, so it takes considerably more effort to change it.

But we have managed to push it noticeably.

This last upload saw a great deal of movement. We received only one new DD (but hey — welcome nonetheless! \o/ ). 13 DD keys were retired; as one of the maintainers of the keyring, of course this makes me sad — but then again, in most cases it's rather an acknowledgement of fact: Those keys' holders often state they had long not been really involved in the project, and the decision to retire was in fact timely. But the greatest bulk of movement was the key replacements: A massive 62 1024D keys were replaced with stronger ones. And, yes, the graph changed quite abruptly:

We still have a bit over one month to go before our cutoff line, when we will retire all 1024D keys. It is important to say we will not retire the affected accounts, mark them as MIA, or anything like that. If you are a DD and only have a 1024D key, you will still be a DD, but you will be technically unable to do work directly. You can still upload your packages or send announcements to regulated mailing lists via sponsor requests (although you will be unable to vote).

Speaking of votes: We have often said that we believe the bulk of the short keys belong to people not really active in the project anymore. Not all of them, sure, but a big proportion. We just had a big, controversial GR vote with one of the highest voter turnouts in Debian's history. I checked the GR's tally sheet, and the results are interesting: Please excuse my ugly bash, but I'm posting this so you can play with similar runs on different votes and points in time using the public keyring Git repository:

$ git checkout 2014.10.10
$ for KEY in $( for i in $( grep '^V:' tally.txt |
                  awk '{print "<" $3 ">"}' )
                do
                  grep $i keyids | cut -f 1 -d ' '
                done )
  do
    if [ -f debian-keyring-gpg/$KEY -o -f debian-nonupload-gpg/$KEY ]
    then
      gpg --keyring /dev/null --keyring debian-keyring-gpg/$KEY \
          --keyring debian-nonupload-gpg/$KEY --with-colons \
          --list-key $KEY 2>/dev/null \
          | head -2 | tail -1 | cut -f 3 -d :
    fi
  done | sort | uniq -c
     95 1024
     13 2048
      1 3072
    371 4096
      2 8192

So, as of mid-October: 387 out of the 482 votes (80.3%) were cast by developers with >=2048-bit keys, and 95 (19.7%) were cast by short keys.

If we were to run the same vote with the new active keyring, 417 votes would have been cast with >=2048-bit keys (87.2%), and 61 with short keys (12.8%). We would have four fewer votes, as those voters' keys were retired:

     61 1024
     14 2048
      2 3072
    399 4096
      2 8192

So, let's hear it for November/December. How much can we push down that pesky yellow line?

Disclaimer: Any inaccuracy due to bugs in my code is completely my fault!

21 November, 2014 06:29PM by gwolf

hackergotchi for Daniel Pocock

Daniel Pocock

PostBooks 4.7 packages available, xTupleCon 2014 award

I recently updated the PostBooks packages in Debian and Ubuntu to version 4.7. This is the version that was released in Ubuntu 14.10 (Utopic Unicorn) and is part of the upcoming Debian 8 (jessie) release.

Better prospects for Fedora and RHEL/CentOS/EPEL packages

As well as getting the packages ready, I've been in contact with xTuple helping them generalize their build system to make packaging easier. This has eliminated the need to patch the makefiles during the build. As well as making it easier to support the Debian/Ubuntu packages, this should make it far easier for somebody to create a spec file for RPM packaging too.

Debian wins a prize

While visiting xTupleCon 2014 in Norfolk, I was delighted to receive the Community Member of the Year award which I happily accepted not just for my own efforts but for the Debian Project as a whole.

Steve Hackbarth, Director of Product Development at xTuple, myself and the impressive Community Member of the Year trophy

This is a great example of the productive relationships that exist between Debian, upstream developers and the wider free software community and it is great to be part of a team that can synthesize the work from so many other developers into ready-to-run solutions on a 100% free software platform.

Receiving this award really made me think about all the effort that has gone into making it possible to apt-get install postbooks, and about all the people who have collectively done far more work than myself to make this possible.

Here is a screenshot of the xTuple web / JSCommunicator integration. It was one of the highlights of xTupleCon, and gives a preview of the wide range of commercial opportunities that WebRTC is creating for software vendors to displace traditional telecommunications providers.

xTupleCon also gave me a great opportunity to see new features (like the xTuple / Drupal web shop integration) and hear about the success of consultants and their clients deploying xTuple/PostBooks in various scenarios. The product is extremely strong in meeting the needs of manufacturing and distribution and has gained a lot of traction in these industries in the US. Many of these features are equally applicable in other markets with a strong manufacturing industry such as Germany or the UK. However, it is also flexible enough to simply disable many of the specialized features and use it as a general purpose accounting solution for consulting and services businesses. This makes it a good option for many IT freelancers and support providers looking for a way to keep their business accounts in a genuinely open source solution with a strong SQL backend and a native Linux desktop interface.

21 November, 2014 02:12PM by Daniel.Pocock

hackergotchi for Julien Danjou

Julien Danjou

Distributed group management and locking in Python with tooz

With OpenStack embracing the Tooz library more and more over the past year, I think it's a good start to write a bit about it.

A bit of history

A little more than a year ago, with my colleague Yassine Lamgarchal and others at eNovance, we investigated how to solve a problem often encountered inside OpenStack: synchronization of multiple distributed workers. And while many people in our ecosystem continue to drive development by adding new bells and whistles, we made a point of solving new problems with a generic solution able to address the technical debt at the same time.

Yassine wrote down the first ideas of what should become the group membership service that was needed for OpenStack, identifying several projects that could make use of this. I presented this concept at the OpenStack Summit in Hong Kong during an Oslo session. It turned out that the idea was well received, and the week following the summit we started the tooz project on StackForge.

Goals

Tooz is a Python library that provides a coordination API. Its primary goal is to handle groups and membership of these groups in distributed systems.

Tooz also provides another useful feature: distributed locking. This allows distributed nodes to acquire and release locks in order to synchronize themselves (for example to access a shared resource).

The architecture

If you are familiar with distributed systems, you might be thinking that there are a lot of solutions already available to solve these issues: ZooKeeper, the Raft consensus algorithm or even Redis for example.

You'll be thrilled to learn that Tooz is not the result of the NIH syndrome, but is an abstraction layer on top of all these solutions. It uses drivers to provide the real functionality behind the scenes, and does not try to do anything fancy.

Not all drivers have the same amount of functionality or robustness, but depending on your environment, any available driver might suffice. Like most of OpenStack, we let the deployers/operators/developers choose whichever backend they want to use, informing them of the potential trade-offs they will make.

So far, Tooz provides several drivers, including ones based on ZooKeeper, memcached, redis and IPC.

All drivers are distributed across processes. Some can be distributed across the network (ZooKeeper, memcached, redis…) and some are only available on the same host (IPC).

Also note that the Tooz API is completely asynchronous, allowing it to be more efficient, and potentially included in an event loop.

Features

Group membership

Tooz provides an API to manage group membership. The basic operations provided are: the creation of a group, the ability to join it, leave it and list its members. It's also possible to be notified as soon as a member joins or leaves a group.

Leader election

Each group can have a leader elected. Each member can decide if it wants to run for the election. If the leader disappears, another one is elected from the list of current candidates. It's possible to be notified of the election result and to retrieve the leader of a group at any moment.

Distributed locking

When trying to synchronize several workers in a distributed environment, you may need a way to lock access to some resources. That's what a distributed lock can help you with.
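
Putting the group membership and locking pieces together, a minimal usage sketch might look like this (the driver URL, member and group names are illustrative, and error handling, such as catching the "group already exists" error on create_group, is omitted):

from tooz import coordination

# Every membership call returns an asynchronous request; .get() blocks on it.
coordinator = coordination.get_coordinator(
    'memcached://localhost:11211', b'worker-1')
coordinator.start()

# Group membership: create a group, join it, list who is in it.
coordinator.create_group(b'workers').get()
coordinator.join_group(b'workers').get()
print(coordinator.get_members(b'workers').get())

# Distributed locking: serialize access to a shared resource.
with coordinator.get_lock(b'shared-resource'):
    pass  # ... work on the shared resource ...

coordinator.leave_group(b'workers').get()
coordinator.stop()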

Adoption in OpenStack

Ceilometer is the first project in OpenStack to use Tooz. It has replaced part of the old alarm distribution system, where RPC was used to detect active alarm evaluator workers. The group membership feature of Tooz was leveraged by Ceilometer to coordinate between alarm evaluator workers.

Another new feature, part of the Juno release of Ceilometer, is the distribution of the central agent's polling tasks among multiple workers. There's again a group membership issue in knowing which nodes are online and available to receive polling tasks, so Tooz is also being used here.

The Oslo team has accepted the adoption of Tooz during this release cycle. That means that it will be maintained by more developers, and will be part of the OpenStack release process.

This opens the door to pushing Tooz further into OpenStack. Our next candidate would be to write a service group driver for Nova.

The complete documentation for Tooz is available online and has examples for the various features described here; go read it if you're curious and adventurous!

21 November, 2014 12:10PM by Julien Danjou

Jonathan Wiltshire

Getting things into Jessie (#6)

If it’s not in an unblock bug, we probably aren’t reading it

We really, really prefer unblock bugs to anything else right now (at least, for things relating to Jessie). Mails on the list get lost, and IRC is dubious. This includes for pre-approvals.

It’s perfectly fine to open an unblock bug and then find it’s not needed, or the question is really about something else. We’d rather that than your mail get lost between the floorboards. Bugs are easy to track, have metadata so we can keep the status up to date in a standard way, and are publicised in all the right places. They make a great to-do list.

By all means twiddle with the subject line, for example appending “(pre-approval)” so it’s clearer – though watch out for twiddling too much, or you’ll confuse udd.

(to continue my theme: asking you to file a bug instead costs you one round-trip; don’t forget we’re doing it at scale)

 


Getting things into Jessie (#6) is a post from: jwiltshire.org.uk | Flattr

21 November, 2014 10:30AM by Jon

November 20, 2014

hackergotchi for Steve McIntyre

Steve McIntyre

UEFI Debian CDs for Jessie...

So, my work for Wheezy gave us working amd64 UEFI installer images. Yay! Except: there were a few bugs that remained, and also places where we could deal better with some of the more crappy UEFI implementations out there. But things have improved since then, and we should be better for Jessie in quite a few ways.

First of all, Colin and the other Grub developers have continued working hard and quite a lot of the old bugs in this area look to be fixed. I'm hoping we're not going to see so many "UEFI boot gives me a blank black screen" type of problems now.

For those poor unfortunates with Windows 7 on their machines, using BIOS boot despite having UEFI support in their hardware, I've fixed a long-standing bug (#763127) that could leave people with broken systems, unable to dual boot.

We've fixed a silly potential permissions bug in how the EFI System Partition is mounted: (#770033).

Next up, I'm hoping to add a workaround for some of the broken UEFI implementations, by adding support in our Grub packages (and in d-i) for forcing the installation of a copy of grub-efi in the removable media path. See #746662 for more of the details. It's horrid to be doing this, but it's just about the best thing we can do to support people with broken firmware.
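
The workaround itself amounts to keeping a copy of the boot loader at the fallback path that such firmware looks for. Done by hand, it is roughly this (a sketch, assuming a standard amd64 install with the EFI System Partition mounted at /boot/efi; the Grub/d-i support will automate it):

# Copy GRUB to the removable media path, which broken firmware
# falls back to when it ignores the configured boot entries.
mkdir -p /boot/efi/EFI/BOOT
cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/BOOT/bootx64.efi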

Finally, I've been getting lots of requests for adding i386 (32-bit x86) UEFI support in our official images. Back in the Wheezy development cycle, I had test images that worked on i386, but decided not to push that support into the release. There were worries about potentially critical bugs that could be tickled on some hardware, plus there were only very few known i386 UEFI platforms at the time; the risk of damage outweighed the small proportion of users, IMHO. However, I'm now revisiting that decision. The potentially broken machines are now 2 years older, and so less likely to be in use. Also, Intel have released some horrid platform concoction around the Bay Trail CPU: a 64-bit CPU (that really wants a 64-bit kernel), but running a 32-bit UEFI firmware with no BIOS Compatibility Mode. Recent kernels are able to cope with this mess, but at the moment there is no sensible way to install Debian on such a machine. I'm hoping to fix that next (#768461). It's going to be awkward again, needing changes in several places too.

You can help! Same as 2 years ago, I'll need help testing some of these images. Particularly for the 32-bit UEFI support, I currently have no relevant hardware myself. That's not going to make it easy... :-/

I'll start pushing unofficial Jessie EFI test images shortly.

20 November, 2014 09:59PM

Tiago Bortoletto Vaz

Things to celebrate

Turning 35 today, then I get the great news that the person with whom I share my dreams has just become a Debian member! Isn't it beautiful? Thanks Tássia, thanks Debian! I should also thank the friends who make an ideal ambience for tonight's fun.

20 November, 2014 08:32PM by Tiago Bortoletto Vaz

hackergotchi for Neil McGovern

Neil McGovern

Barbie the Debian Developer

Some people may have seen recently that the Barbie series has a rather sexist book out about Barbie the Computer Engineer. Fortunately, there’s a way to improve this by making your own version.

Thus, I made a short version about Barbie the Debian Developer and init system packager.

(For those who don’t know me, this is satirical. Any resemblance to people is purely coincidental.)

Edit: added text in alt tags. Also, hai reddit!

One day, Debian Developer Barbie decided to package and upload a new init system to Debian, called 'systemd'. I hope everyone else will find it useful, she thought.
Oh no says Skipper! You'll never take my init system away from me! It's horrendous and Not The Unix Way! Oh dear said Barbie, What have I let myself in to?
Skipper was most upset, and decided that this would not do. It's off to the technical committee with this. They'll surely see sense.
Oh no! What's this? The internet decided that the Technical Committee needed to also know everyone's individual views! Bad Internet!
There was much discussion and consideration. Opinions were reviewed, rows were had, and months passed. Eventually, a decision was agreed upon.
Barbie was successful! The will of the Technical Committee was that systemd would be the default! But wait...
Skipper still wasn't happy. We need to make sure this never affects me! I'm going to call for a General Resolution!
And so, Ms Devotee was called in to look at the various options. She said that the arguments must stop, and we should all accept the result of the general resolution.
The numbers turned and the vote was out. We should simply be most excellent to each other said Ms Devotee. I'm not going to tell you what you should or should not do.
Over the next year, the project was able to heal itself and eventually Barbie and Skipper decided to make amends. Now let's work at making Debian better!

20 November, 2014 08:00PM by Neil McGovern

hackergotchi for Gunnar Wolf

Gunnar Wolf

UNAM. Viva México, viva en paz.

UNAM. Viva México, viva en paz.

We have had terrible months in Mexico; I don't know how much has appeared about our country in the international media. The last incidents started on the last days of September, when 43 students at a school for rural teachers were forcefully disappeared (in our Latin American countries, this means they were taken by force and no authority can yet prove whether they are alive or dead; forceful disappearance is one of the saddest and most recognized traits of the brutal military dictatorships South America had in the 1970s) in the Iguala region (Guerrero state, South of the country), and three were killed on site. An Army regiment was stationed a few blocks from there and refused to help.

And yes, we live in a country where (incredibly) this news by itself would not seem so unheard of... But in this case, there is ample evidence they were taken by the local police forces, not by a gang of (assumed) wrongdoers. And they were handed over to a very violent gang afterwards. Several weeks later, with far from a thorough investigation, we were told they were killed, burnt and thrown to a river.

The Iguala city mayor ran away, and was later captured, but it's not clear why he was captured at two different places. The Guerrero state governor resigned and a new governor was appointed. But this was not the result of a single person behaving far from what their voters would expect — It's a symptom of a broken society where policemen will kill when so ordered, where military personnel will look away when the obvious is pointed out to them, and where the drug dealers have captured vast regions of the country, in which they are stronger than the formal powers.

And then, instead of dealing with the issue personally as everybody would expect, the president goes on a commercial mission to China. Oh, to fix some issues with a building company that, coincidentally or not, was selling a super-luxury house to his wife. A house that she, several days later, decided to sell because it was tarnishing her family's honor and image.

And while the president is in China, the person who dealt with the social pressure, and who told us about the probable (but not proven!) horrible crime in which the "bad guys", for some strange and yet unknown reason (even with tens of them captured already), decided to kill, burn, dissolve and disappear 43 future rural teachers, presents his version and finishes his speech saying that "I'm already tired of this topic".

Of course, our University is known for its solidarity with social causes; students in our different schools are the first activists in many protests, and we have had a very tense time as the protests are at home here at the university. This last weekend, supposed policemen entered our main campus with a stupid, unbelievable argument (they were looking for a phone reported as stolen three days earlier), got into an argument with some students, and ended up firing shots at the students; one of them was wounded in the leg.

And the university is now almost under siege: There are policemen surrounding us. We are working as usual, and will most likely finish the semester with normality, but the intimidation (in a country where seeing a policeman is practically never a good sign) is strong.

And... Oh, I could go on a lot. Things feel really desperate and out of place.

Today I will join probably tens or hundreds of thousands of Mexicans sick of this simulation, sick of this violence, in a demonstration downtown. What will this achieve? Very little, if anything at all. But we cannot just sit here watching how things go from bad to worse. I do not accept to live in a state of exception.

So, this picture is just right: A bit over a month ago, two dear friends from Guadalajara city came, and we had a nice walk in the University. Our national university is not only huge, it's also beautiful and loaded with sights. And being so close to home, it's our favorite place to go with friends to show around. This is a fragment of the beautiful mural in the Central Library. And, yes, the University stands for "Viva México". And the university stands for "Peace". And we need it all. Desperately.

20 November, 2014 06:38PM by gwolf

hackergotchi for Steve Kemp

Steve Kemp

An experiment in (re)building Debian

I've rebuilt many Debian packages over the years, largely to fix bugs which affected me, or to add features which didn't make the cut in various releases. For example I made a package of fabric available for Wheezy, since it wasn't in the release. (Happily in that case a wheezy-backport became available. Similar cases involved repackaging gtk-gnutella when the protocol changed and the official package in the lenny release no longer worked.)

I generally release a lot of my own software as Debian packages, although I'll admit I've started switching to publishing Perl-based projects on CPAN instead - from which they can be debianized via dh-make-perl.

One thing I've not done for many years is a mass-rebuild of Debian packages. I did that once upon a time when I was trying to push for the stack-smashing-protection inclusion all the way back in 2006.

Having had a few interesting emails this past week I decided to do the job for real. I picked a random server of mine, rsync.io, which stores backups, and decided to rebuild it using "my own" packages.

The host has about 300 packages installed upon it:

root@rsync ~ # dpkg --list | grep ^ii | wc -l
294

I got the source to every package, patched the changelog to bump the version, and rebuilt every package from source. That took about three hours.
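
The loop itself is nothing special; a minimal sketch looks something like this (assuming build-dependencies are already installed, and glossing over binary packages whose source package has a different name):

#!/bin/sh
# For every installed package: fetch the source, bump the version
# with a "skx" local suffix, and rebuild.
for pkg in $(dpkg --list | awk '/^ii/ {print $2}'); do
    apt-get source "$pkg" || continue
    dir=$(find . -maxdepth 1 -type d -name "${pkg%%:*}-*" | head -n1)
    ( cd "$dir" &&
      dch --local skx "Local mass rebuild." &&
      dpkg-buildpackage -us -uc )
done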

Every package has a "skx1" suffix now, and all the build-dependencies were also determined by magic and rebuilt:

root@rsync ~ # dpkg --list | grep ^ii | awk '{ print $2 " " $3}'| head -n 4
acpi 1.6-1skx1
acpi-support-base 0.140-5+deb7u3skx1
acpid 1:2.0.16-1+deb7u1skx1
adduser 3.113+nmu3skx1

The process was pretty quick once I started getting more and more of the packages built. The only shortcut was not explicitly updating the dependencies to rely upon my updates. For example bash has a Debian control file that contains:

Depends: base-files (>= 2.1.12), debianutils (>= 2.15)

That should have been updated to say:

Depends: base-files (>= 2.1.12skx1), debianutils (>= 2.15skx1)

However I didn't do that, because I suspect that if I did want to do this decently, and I wanted to share the source trees and the generated packages, the way to go would not be messing about with Debian versions; instead I'd create a new Debian release "alpha-apple", "beta-banana", "crunchy-carrot", "dying-dragonfruit", "easy-elderberry", or similar.

In conclusion: Importing Debian packages into git, much like Ubuntu did with bzr, is a fun project, and it doesn't take much to mass-rebuild if you're not making huge changes. Whether it is worth doing is an entirely different question of course.

20 November, 2014 01:28PM

hackergotchi for Daniel Pocock

Daniel Pocock

Is Amnesty giving spy victims a false sense of security?

Amnesty International is getting a lot of attention with the launch of a new tool to detect government and corporate spying on your computer.

I thought I would try it myself. I went to a computer running Microsoft Windows, an operating system that does not publish its source code for public scrutiny. I used the Chrome browser; users often express concern about Chrome sending data back to the vendor about the web sites they look for.

Without even installing the app, I would expect the Amnesty web site to recognise that I was accessing the site from a combination of proprietary software. Instead, I found a different type of warning.

Beware of Amnesty?

Instead, the only warning I received was from Amnesty's own cookies:

Even before I install the app to find out if the government is monitoring me, Amnesty is keen to monitor my behaviour themselves.

While cookies are used widely, their presence on a site like Amnesty's only further desensitizes Internet users to the downside risks of tracking technologies. By using cookies, Amnesty is effectively saying a little bit of tracking is justified for the greater good. Doesn't that sound eerily like the justification we often hear from governments too?

Is Amnesty part of the solution or part of the problem?

Amnesty is a well known and widely respected name when human rights are mentioned.

However, their advice that you can install an app onto a Windows computer or iPhone to detect spyware is like telling people that putting a seatbelt on a motorbike will eliminate the risk of death. It would be much more credible for Amnesty to tell people to start by avoiding cloud services altogether, browse the web with Tor and only use operating systems and software that come with fully published source code under a free license. Only when 100% of the software on your device is genuinely free and open source can independent experts exercise the freedom to study the code and detect and remove backdoors, spyware and security bugs.

It reminds me of the advice Kim Kardashian gave after the Fappening, telling people they can continue trusting companies like Facebook and Apple with their private data just as long as they check the privacy settings (reality check: privacy settings in cloud services are about as effective as a band-aid on a broken leg).

Write to Amnesty

Amnesty became famous for their letter writing campaigns.

Maybe now is the time for people to write to Amnesty themselves, thank them for their efforts and encourage them to take more comprehensive action.

Feel free to cut and paste some of the following potential ideas into an email to Amnesty:


I understand you may not be able to respond to every email personally but I would like to ask you to make a statement about these matters on your public web site or blog.

I understand it is Amnesty's core objective to end grave abuses of human rights. Electronic surveillance, due to its scale and pervasiveness, has become a grave abuse in itself, and in a disturbing number of jurisdictions it is an enabler for other types of grave violations of human rights.

I'm concerned that your new app Detekt gives people a false sense of security and that your campaign needs to be more comprehensive to truly help people and humanity in the long term.

If Amnesty is serious about solving the problems of electronic surveillance by government, corporations and other bad actors, please consider some of the following:

  • Instead of displaying a cookie warning on Amnesty.org, display a warning to users who access the site from a computer running closed-source software and give them a link to download a free and open source web browser like Firefox.
  • Redirect all visitors to your web site to use the HTTPS encrypted version of the site.
  • Using free software such as the GNU/Linux operating system (using one of the Debian, Fedora or Ubuntu systems is one of the more common ways to achieve this) and LibreOffice for all Amnesty's own operations, making a public statement about your use of free software and mentioning this in the closing paragraph of all press releases relating to surveillance topics.
  • Encouraging Amnesty donors, members and supporters to choose similar software especially when engaging in any political activities.
  • Make a public statement that Amnesty will not use cloud services such as SalesForce or Facebook to store, manage or interact with data relating to members, donors or other supporters.
  • Encouraging the public to move away from centralized cloud services such as those provided by their smartphone or social networks and use de-centralized or federated services such as XMPP chat.

Given the immense threat posed by electronic surveillance, I'd also like to call on Amnesty to allocate at least 10% of annual revenue towards software projects releasing free and open source software that offers the public an alternative to the centralized cloud.


While publicity for electronic privacy is great, I hope Amnesty can go a step further and help people use trustworthy software from the ground up.

20 November, 2014 12:48PM by Daniel.Pocock

Jonathan Wiltshire

Getting things into Jessie (#5)

Don’t assume another package’s unblock is a precedent for yours

Sometime we’ll use our judgement when granting an unblock to a less-than-straightforward package. Lots of factors go into that, including the regression risk, desirability, impact on other packages (of both acceptance and refusal) and trust.

However, a judgement call on one package doesn’t automatically mean that the same decision will be made for another. Every single unblock request we get is called on its own merits.

Do by all means ask about your package in light of another. There may be cross-over that makes your change desirable as well.

Don’t take it personally if the judgement call ends up being not what you expected.


Getting things into Jessie (#5) is a post from: jwiltshire.org.uk | Flattr

20 November, 2014 10:30AM by Jon

Stefano Zacchiroli

Thoughts on the Init System Coupling GR

on perceived hysteria and silent sanity

As you probably already know by now, the results of the Debian init system coupling general resolution (GR) look like this:

Init system coupling GR: results (arrow from A to B means that voters preferred A to B by that margin)

Some random thoughts about them:

  • The turnout has been the highest since 2010 DPL elections and the 2nd highest among all GRs (!= DPL elections) ever. The highest among all GRs dates back to 2004 and was about dropping non-free. In absolute terms this vote scores even better: it is the GR with the highest number of voters ever.

    Clearly there was a lot of interest within the project about this vote. The results appear to be as representative of the views of project members as we have been able to get in the second half of Debian history.

  • There is a total ordering of options (which is not always the case with our voting system). Starting with the winning option, each option in the results beats every subsequent option. The winning option ("General resolution is not required") beats the runner-up ("Support for other init systems is recommended, i.e., "you SHOULD NOT require a specific init") by a large margin: 100 votes, ~20.7% of the voters. The winning option wins over further options by increasingly large margins: 173 votes (~35.8%) against "Packages may require specific init systems if maintainers decide" (the MAY option); 176 (~36.4%) against "Packages may not require a specific init system" (the MUST NOT option); 263 (~54.5%) against "Further discussion" (the "let's keep on flaming" option).

    While judging from Debian mailing lists and news sites you might have gotten the impression that the project was evenly split on init system matters, at least w.r.t. the matter on the ballot that doesn't seem to be the case.

  • The winning option is not as crazy as its label might imply (voting to declare that the vote was not required? WTH?). What the winning option actually says is more articulated than that; quoting from the ballot (highlight mine):

    Regarding the subject of this ballot, the Project affirms that the procedures for decision making and conflict resolution are working adequately and thus a General Resolution is not required.

    With this GR the Debian Project affirms that the procedures we have used to decide the default init system for Jessie and to arbitrate the ensuing conflicts are just fine. People might flame and troll debian-devel as much as they want (actually, I'm pretty sure we would all like them to stop, but that matter wasn't on the ballot so you'll have to take my word for it). People might write blog posts and make headlines corroborating the impression that Debian is still being torn apart by ongoing init system battles. But this vote says instead that the large majority of project members thinks our decision making and conflict-arbitration procedures, which most prominently include the Debian Technical Committee, have served us "adequately" well over the past troubled months.

    That of course doesn't mean that everyone in Debian is happy about every single recent decision, otherwise we wouldn't have had this GR in the first place. But it does mean that we consider our procedures good enough to (a) avoid getting in their way with a project-wide vote, and (b) keep on trusting them for the foreseeable future.

  • [ It is not the main focus of this post, but if you care specifically about the implications of this GR on systemd adoption in Debian, I recommend reading this excellent GR commentary by Russ Allbery. ]

My take home message is that we are experiencing a huge gap between the public perception of the state of Debian (both from within and from without the project) and the actual beliefs of the silent majority of people that make Debian with their work, day after day.

In part this is old news. The most "senior" members of the project will remember that the topic of "vocal minorities vs silent majority" was a recurrent one in Debian 10+ years ago, when flames were periodically ravaging the project. Since then Debian has grown a lot though, and we are now part of a much larger and varied ecosystem. We are now at a scale at which there are plenty of FOSS "mass-media" covering daily what happens in Debian, inducing feedback loops with our own perception of ourselves which we do not fully grok yet. This is a new factor in the perception gap. This situation is not intrinsically bad, nor is there blame to assign here: after all, influential bloggers, news sites, etc., just do their job. And their attention also testifies to the huge interest that there is around Debian and our choices.

But we still need to adapt and learn to take perceived hysteria with a pinch (or two) of salt. It might just be time for our decennial check-up. Time to remind ourselves that our ways of doing things might in fact still be much more sane than sometimes we tend to believe.

We moved on 10+ years ago, after monumental flames. It looks like we are now ready to move on again, putting The Era of the Great systemd Hysteria™ behind us.

20 November, 2014 08:59AM

hackergotchi for Matthew Palmer

Matthew Palmer

Multi-level prefix delegation is not a myth! I've seen it!

Unless you’ve been living under a firewalled rock, you know that IPv6 is coming. There’s also a good chance that you’ve heard that IPv6 doesn’t have NAT. Or, if you pay close attention to the minutiae of IPv6 development, you’ve heard that IPv6 does have NAT, but you don’t have to (and shouldn’t) use it.

So let’s say we’ll skip NAT for IPv6. Fair enough. However, let’s say you have this use case:

  1. A bunch of containers that need Internet access…

  2. That are running in a VM…

  3. On your laptop…

  4. Behind your home router!

For IPv4, you’d just layer on the NAT, right? While SIP and IPsec might have kittens trying to work through three layers of NAT, for most things it’ll Just Work.

In the Grand Future of IPv6, without NAT, how the hell do you make that happen? The answer is “Prefix Delegation”, which allows routers to “delegate” management of a chunk of address space to downstream routers, and allow those downstream routers to, in turn, delegate pieces of that chunk to downstream routers.

In the case of our not-so-hypothetical containers-in-VM-on-laptop-at-home scenario, it would look like this:

  1. My “border router” (a DNS-323 running Debian) asks my ISP for a delegated prefix, using DHCPv6. The ISP delegates a /56 [1]. One /64 out of that is allocated to the network directly attached to the internal interface, and the rest goes into “the pool”, as /60 blocks (so I’ve got 15 of them to delegate, if required).

  2. My laptop gets an address on the LAN between itself and the DNS-323 via stateless auto-addressing (“SLAAC”). It also uses DHCPv6 to request one of the /60 blocks from the DNS-323. The laptop puts one /64 from that block as the address space for the “virtual LAN” (actually a Linux bridge) that connects the laptop to all my VMs, and puts the other 15 /64 blocks into a pool for delegation.

  3. The VM that will be running the set of containers under test gets an address on the “all VMs virtual LAN” via SLAAC, and then requests a delegated /64 to use for the “all containers virtual LAN” (another bridge, this one running on the VM itself) that the containers will each connect to themselves.

Now, almost all of this Just Works. The current releases of ISC DHCP support prefix delegation just fine, and a bit of shell script plumbing between the client and server seals the deal – the client needs to rewrite the server’s config file to tell it the netblock from which it can delegate.
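
For illustration, the server side of that plumbing might end up looking something like this in dhcpd.conf (a sketch only; the addresses are from the 2001:db8::/32 documentation range, not my actual configuration):

# Addresses for the LAN itself, plus /64 prefixes to delegate
# to downstream routers that ask for one.
subnet6 2001:db8:0:1::/64 {
        range6 2001:db8:0:1::1000 2001:db8:0:1::ffff;
        prefix6 2001:db8:0:10:: 2001:db8:0:1f:: /64;
}

On the client side, dhclient -6 -P <interface> is what requests a delegated prefix.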

Except for one teensy, tiny problem – routing. When the DHCP server delegates a netblock to a particular machine, the routing table needs to get updated so that packets going to that netblock actually get sent to the machine the netblock was delegated to. Without that, traffic destined for the containers (or the VM) won’t actually make it to its destination, and a one-way Internet connection isn’t a whole lot of use.

I cannot understand why this problem hasn’t been tripped over before. It’s absolutely fundamental to the correct operation of the delegation system. Some people advocate running a dynamic routing protocol, but that’s a sledgehammer to crack a nut if ever I saw one.

Actually, I know this problem has been tripped over before, by OpenWrt. Their solution, however, was to use a PHP script to scan logfiles and add routes. Suffice it to say, that wasn’t an option I was keen on exploring.

Instead, I decided to patch ISC DHCP so that the server can run an external script to add the necessary routes, and perhaps modify firewall rules – and also to reverse the process when the delegation is released (or expired). If anyone else wants to play around with it, I’ve put it up on Github. I don’t make any promises that it’s the right way to do it, necessarily, but it works, and the script I’ve added in contrib/prefix-delegation-routing.rb shows how it can be used to good effect. By the way, if anyone knows how pull requests work over at ISC, drop me a line. From the look of their website, they don’t appear to accept (or at least encourage) external contributions.
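
To give a feel for what the hook has to do, here’s the rough shape of such a script in Python (the one in contrib/ is Ruby; the calling convention used here, action / delegated prefix / next-hop / interface, is my own invention for illustration):

    #!/usr/bin/env python3
    # Hypothetical hook, invoked by the patched dhcpd as:
    #   hook {add|del} <delegated-prefix> <client-link-local> <interface>
    import subprocess
    import sys

    def main():
        action, prefix, nexthop, iface = sys.argv[1:5]
        # "add" installs the route towards the requesting router when the
        # prefix is delegated; "del" removes it again on release or expiry.
        subprocess.check_call(['ip', '-6', 'route', action,
                               prefix, 'via', nexthop, 'dev', iface])

    if __name__ == '__main__':
        main()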

So, that’s one small patch for DHCP, one giant leap for my home network.

  [1] The standard recommendation is for ISPs to delegate each end-user customer a /48 (giving 65,536 /64 networks); my ISP is being a little conservative in “only” giving me 256 /64s. It works fine for my purposes, but if you’re an ISP getting set for deploying IPv6, make life easy on your customers and give them a /48.

20 November, 2014 05:00AM by Matt Palmer (mpalmer@hezmatt.org)

Russ Allbery

Interpreting the Debian init system GR

I originally posted this in a thread on debian-private, but on further reflection it seems appropriate for a broader audience. So I'm posting it here, as well as on debian-project.

There is quite a lot of discussion in various places about what the recent GR result means. Some are concluding that systemd won in some way that implies Debian is not going to support other init systems, or at least that support for other init systems is in immediate danger. A lot of that analysis concludes that the pro-systemd "side" in Debian won some sort of conclusive victory.

I have a different perspective.

I think we just had a GR in which the Debian developer community said that we, as a community, would like to work through all of the issues around init systems together, as a community, rather than having any one side of the argument win unambiguously and impose its views on those who disagree.

There were options on the ballot that clearly required loose coupling and that clearly required tight coupling. The top two options did neither of those things. The second-highest option said, effectively, that we should feel free to exercise our technical judgement for our own packages, but should do so with an eye to enabling people to make different choices, and should merge their changes and contributions where possible. The highest option said that we don't even want to say that, and would instead prefer to work this whole thing out through discussion, respect, consensus, and mutual support, without giving *anyone* a clear mandate or project-wide blessing for their approach.

In other words, the way I choose to look at this GR is that the project as a whole just voted to take away the sticks that we were using to beat each other with.

In a way, we just chose the *hardest* option. We didn't make a simplifying technical decision that provides clear guidance to everyone. Instead, we made a complicating social decision that says that, sorry, there's no short cut to avoid having to talk to each other, respect each other's views, and try to reach workable collaborative compromises. Even though it's really hard, even though everyone is raw and upset, that's what the project as a whole is asking us to do.

Are we up to the challenge?

20 November, 2014 04:42AM

November 19, 2014

hackergotchi for Thomas Goirand

Thomas Goirand

Rotten tomatoes

There are many ways to interpret the last GR. The way I see it reflects how Joey hoped Debian would be: the outcome of the poll shows that we don’t want to make technical decisions by voting. At the beginning of this GR I was supportive of it, and thought it was a good thing to enforce the rule that we care for non-systemd setups. But I have slowly changed my mind. I still think it was a good idea to see what the community thought after such a long debate. I now think that this final outcome is awesome and couldn’t have been better. Science (and computer science) has never been about voting, otherwise the earth would be flat, without drifting continents.

So my hope is that the Debian project as a whole will allow itself to make mistakes, to try things iteratively, to err, and to go back on any technical decision if it no longer makes sense. When asked something, it’s ok to reply: “I don’t know”, and it should be ok for the Debian project to have this as one of the possible answers. I’m convinced that refusing to make a drastic choice at this point in time was exactly what we needed to do. And my hope is that Joey comes back once he realizes that we’ve all understood and embraced his position that science cannot be governed by polls.

For Stretch, I’m sure there are going to be a lot of new alternatives. Maybe uselessd, eudev and others. Maybe I’ll have a bit of time to work on OpenRC Debian integration myself (hum… I’m dreaming here…). Maybe something else. Let’s just wait. We have more than 300 bugs to fix before Jessie can be released. Let’s happily work on that together, and forget about init systems for a while…

P.S.: Just to be on the safe side: the rotten tomatoes image was not meant as criticism of the persons who started the poll, whom I respect a lot, especially Ian, who I am convinced is trying to do his best for Debian (hug).

19 November, 2014 11:57PM by admin

hackergotchi for Jonathan Dowland

Jonathan Dowland

Moving to Red Hat

I'm changing jobs!

From February 2015, I will be joining Red Hat as a Senior Software Engineer. I'll be based in Newcastle and working with the Middleware team. I'm going to be working with virtualisation, containers and Docker in particular. I know a few of the folks in the Newcastle office already, thanks to their relationship with the School of Computing Science, and I'm very excited to work with them, as well as the wider company. It's also going to be great to be contributing to the free software community as part of my day job.

This October marked my tenth year working for Newcastle University. I've had a great time, learned a huge amount, and made some great friends. It's going to be sad to leave, especially the School of Computing Science where I've spent the last four years, but it's the right time to move on. Virtualisation and containers are an area I've been personally interested in for a long time, and I'm very excited to be trying something new.

19 November, 2014 09:56PM

hackergotchi for Miriam Ruiz

Miriam Ruiz

Awesome Bullying Lesson

A teacher in New York was teaching her class about bullying and gave them the following exercise to perform. She had the children take a piece of paper and told them to crumple it up, stamp on it and really mess it up, but not to rip it. Then she had them unfold the paper, smooth it out and look at how scarred and dirty it was. She then told them to tell it they’re sorry. Now even though they said they were sorry and tried to fix the paper, she pointed out all the scars they left behind, and that those scars will never go away no matter how hard they try to fix it. That is what happens when a child bullies another child: they may say they’re sorry, but the scars are there forever. The looks on the faces of the children in the classroom told her the message had hit home.

( Source: http://www.buzzfeed.com/mjs538/awesome-bullying-lesson-from-a-new-york-teacher )

19 November, 2014 09:36PM by Miry

hackergotchi for Erich Schubert

Erich Schubert

What the GR outcome means for the users

The GR outcome is: no GR necessary
This is good news.
Because it says: Debian will remain Debian, as it was the last 20 years.
For 20 years, we have tried hard to build the "universal operating system", and give users a choice. We've often had alternative software in the archive. Debian has come up with various tools to manage alternatives over time, and for example allows you to switch the system-wide Java.
You can still run Debian with sysvinit. There are plenty of Debian Developers who will fight for this to remain possible in the future.
The outcome of this resolution says:
  • Using a GR to force others is the wrong approach to getting compatibility.
  • We've offered choice before, and we trust our fellow developers to continue to work towards choice.
  • Write patches, not useless GRs. We're coders, not bureaucrats.
  • We believe we can do this, without making it a formal MUST requirement. Or even a SHOULD requirement. Just do it.
The sysvinit proponents may perceive this decision as having "lost". But they just don't realize they won, too. Because the GR may easily have backfired on them. The GR was not just "every package must support sysvinit"; it was also "every sysvinit package must support systemd". Here is an example: eudev, a non-systemd fork of udev. It is not yet in Debian, but I'm fairly confident that someone will make a package of it after the release, for the next Debian. Given the text of the GR, this package might have been inappropriate for Debian unless it also supported systemd. But systemd has its own udev - there is no reason to force eudev to work with systemd, is there?
Debian is about choice. This includes the choice to support different init systems as appropriate. Not accepting a proper patch that adds support for a different init would be perceived as a major bug, I'm assured.
A GR doesn't ensure choice. It is only a hammer to annoy others. But it doesn't write the necessary code to actually ensure compatibility.
If GNOME at some point decides that systemd as pid 1 is a must, the GR would only have left us three options: A) fork the previous version, B) remove GNOME altogether, C) remove all other init systems (so that GNOME is compliant). Does this add choice? No.
Now, we can preserve choice: if GNOME decides to go systemd-pid1-only, we can include both a forked GNOME and the new GNOME (depending on systemd, which is allowed without the GR). Or any other solution that someone codes and packages...
Don't fear that systemd will magically become a must. Trust that the Debian Developers will continue what they have been doing for the last 20 years. Trust that there are enough Debian Developers who don't run systemd. Because they do exist, and they'll file bugs where appropriate. Bugs and patches are the appropriate tools, not GRs (or trolling).

19 November, 2014 07:58PM

hackergotchi for EvolvisForge blog

EvolvisForge blog

Valid UTF-8 but invalid XML

Another PSA: something surprising about XML.

As you might all know, XML must be valid UTF-8 (or UTF-16 (or another encoding supported by the parser, but one which yields valid Unicode codepoints when read and converted)). Some characters, such as the ampersand ‘&’, must be escaped (“&#38;” or “&#x26;”, although “&amp;” may also work, depending on the domain) or put into a CDATA section (“<![CDATA[&]]>”).

A bit surprisingly, a literal backspace character (ASCII 08h, Unicode U+0008) is not allowed in the text. I filed a bug report against libxml2, asking it to please encode these characters.

A bit more research followed. Surprisingly, there are characters that are not valid in XML “documents” in any way, not even as entities or in CDATA sections. (xmlstarlet, by the way, errors out somewhat nicely for an unescaped literal or entity-escaped backspace, but behaves absolutely hilariously for a literal backspace in a CDATA section.) Basically, XML contains a whitelist of the following Unicode codepoints:

  • U+0009
  • U+000A
  • U+000D
  • U+0020‥U+D7FF
  • U+E000‥U+FFFD
  • U-00010000‥U-0010FFFF

Additionally, a certain number of codepoints is discouraged: U+007F‥U+0084 (IMHO wise), U+0086‥U+009F (also wise, but why allow U+0085?), U+FDD0‥U+FDEF (a bit surprisingly, but consistent with disallowing the backspace character), and the last two codepoints of every plane (U+FFFE and U+FFFF were already disallowed, but U-0001FFFE, U-0001FFFF, …, U-0010FFFF weren’t; this is extremely wise).

The suggestion seems to be to just strip these characters silently from the XML “document”.
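
If you end up having to do that stripping yourself, before the data ever reaches the XML layer, the whitelist above translates directly into a small filter. A sketch in Python (the function name is mine):

    import re

    # Everything NOT in the XML 1.0 whitelist quoted above.
    _XML_ILLEGAL = re.compile(
        '[^\u0009\u000A\u000D\u0020-\uD7FF\uE000-\uFFFD\U00010000-\U0010FFFF]')

    def strip_invalid_xml_chars(text):
        # Silently drop codepoints that may not appear in XML at all,
        # not even escaped or inside a CDATA section (e.g. the backspace).
        return _XML_ILLEGAL.sub('', text)

    print(strip_invalid_xml_chars('foo\bbar'))   # prints "foobar"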

I’m a bit miffed about this, as I don’t even use XML directly (I’m extending a PHP “webapplication” that is a SOAP client and talks to a Java™ SOAP-WS) and would expect this to preserve my strings, but, oh my. I’ve forwarded the suggestion to just strip them silently to the libxml2 maintainers in the aforementioned bug report, for now, and may even hack that myself (on customer-paid time). More robust than hacking the PHP thingy to strip them first, anyway – I’ve got no control over the XML after all.

Sharing this so that more people know that not all UTF-8 is valid in XML. Maybe it saves someone else some time. (Now wondering whether to address this in my xhtml_escape shell function. Probably should. Meh.)

19 November, 2014 02:18PM by Thorsten Glaser

hackergotchi for Thorsten Glaser

Thorsten Glaser

Debian init system freedom of choice GR worst possible outcome

Apparently (the actual results have not yet been published by the Secretary), the GR is over, and the worst possible option has won. This is an absolutely ambiguous result, while at the same time sending a clear signal that Debian is not to be trusted wrt. investing anything into it, right now.

Why is this? Simply: “GR not required” means that “whatever people do is probably right”. Besides this, we have one statement from the CTTE (“systemd is default init system for jessie. Period.”) and nothing else. This means that runit, or upstart, or file-rc, or uselessd, can be the default init system for zurg^H^H^H^Hstretch, or even the only one. It also means that the vast majority of Debian Developers are sheeple, neither clearly voting to preserve freedom of choice between init systems for its users, nor clearly voting to unambiguously support systemd and progress over compatibility and choice, nor clearly stating that systemd is important but supporting other init systems is still recommended. (I’ll not go into detail on how the proposer of the apparently winning choice recommends others to ignore ftpmaster constraints and licences, and even suggests to run a GR to soften up the DFSG interpretation.) I’d have voted this as “no, absolutely not” if it was possible to do so more strongly.

Judging from the statistics, the only option I voted above NOTA/FD was the one least accepted by DDs, although the only other proposal I considered was the highest-rated of them: support for other init systems is recommended, but not required. What made me vote it below NOTA/FD was: “The Debian Project makes no statement at this time on sysvinit support beyond the jessie release.” This sentence made even this proposal unbearable, unacceptable, for people wanting to invest (time, money, etc.) into Debian.

Update: The formal result has been announced. So 358 out of 483 voting DDs decided to be sheeple (if I understand the eMail correctly). We had 1006 DDs with voting rights, yet only 48.01% of them voted, which is a bit shameful as well. I wonder what’s worse.

This opens up a very hard problem: I’m absolutely stunned by this and wondering what to do now. While there is no real alternative to Debian at $dayjob, I can always create customised packages in my own APT repository, and – while it was great when those were eventually (3.1.17-1) accepted into Debian, even replacing the previous packages completely – it is simpler and quicker not to do so. While $dayjob benefits from having packages I work on inside Debian itself, even though I cannot always test all scenarios Debian users would need, some work reduction due to… reactions… already led to Debian losing out on Mediawiki for jessie and some additional suffering. With my own package repository, I can – modulo installing/debootstrap – serve my needs for $dayjob much more quickly and easily, and only miss out on absolutely delightful user feedback. But then, others could always package software I’m upstream of for Debian. Or, if I do not leave the project, I can continue doing so via QA uploads.

I’m also disappointed because I have invested quite some effort into trying to make Debian better (my idea to join as DD was “if I’ve got to use it, it better be damn good!”), into packaging software and convincing people at work that developing software as Debian packages instead of (or not) thinking of packaging later was good. I’ve converted our versions of FusionForge and d-push to Debian packages, and it works pretty damn well. Sometimes it needs backports of my own, but that’s the corporate world, and no problem to an experienced DD. (I just feel bad we ($orkplace) lost some people, an FTP master among them, before this really gained traction.)

I’d convert to OpenBSD because, despite MirBSD’s history with them, they’re the only technically sound alternative, but apparently tedu (whom I respect technically, and who used to offer good advice to even me when asked, and who I think wouldn’t choose systemd himself) still (allying with the systemd “side” (I’m not against people being able to choose systemd, for the record, I just don’t want to be forced into it myself!)) has some sort of grudge against me. Plus, it’d be hard to get customers to follow. So, no alternative right now. But I’m used to managing my own forks of software; I’m doomed to basically hack and fix anything I use (I recently got someone who owns a licence to an old-enough Visual Studio version to transfer that to me, so I can hack on the Windows Mobile 6 version of Cachebox, to fix bugs in one of the geocaching applications I use. Now I “just” need to learn C# and the .NET Compact Framework. So I’m also used to some amount of pain.)

I’m still unresolved wrt. the attitude I should show the Debian project now. I had decided to just continue to live on, and work on the things I need done, but that was before this GR non-result. I absolutely cannot recommend that anyone “invest” in Debian (without sounding like a hypocrite), but I cannot recommend anything else either. I cannot justify leaving but don’t know if I want to stay. I think I should sleep over it.

One thing I promised, and thus will do, is to organise a meeting of the Debian/m68k people soonish. But then, major and important and powerful forces inside Debian still insist that Debian-Ports are not part of it… [Update: yes, DSA is moving it closer, thanks for that by the way, but that doesn’t mean anything to certain maintainers or the Release Team, although, the latter is actually understandable and probably sensible.] Yet, all forks of Debian now suffer from the systemd adoption in it instead of having a freedom-of-choice upstream. I’ve said, and I still feel, that systemd adoption should have been done in a Debian downstream / (pure?) blend, and maybe (parts of) GNOME removed from Debian itself for it. (Adding cgroups support to the m68k kernel to support systemd was done. I advised against it, on the grounds of memory and code size. But no downstream can remove it now.)

19 November, 2014 12:44PM by MirOS Developer tg (tg@mirbsd.org)

hackergotchi for Rhonda D'Vine

Rhonda D'Vine

The Pogues

Actually I was already working on a different music blog entry, but I want to get this one out first. I was invited to join the Organic Dancefloor last Thursday, and it was a really great experience. A lot of nice people enjoying a dance evening of sort-of improvisational traditional folk dancing with influences from different parts of Europe. Three bands playing throughout the evening. I definitely plan to go there again. :)

Which brings me to the band I want to present to you now. They also play sort-of traditional songs, or at least with traditional instruments, and their music is also quite good to dance to. This is about The Pogues. And these are the songs that I do enjoy listening to every now and then:

  • Medley: Don't meddle with the Medley. Rather dance to it.
  • Fairytale of New York: Well, we're almost in the season for it. :)
  • Streams of Whiskey: Also quite the style of song that they are known for and party with at concerts.

Like always, enjoy!


19 November, 2014 11:10AM by Rhonda

Jonathan Wiltshire

Getting things into Jessie (#4)

Make sure bug metadata is accurate

We use the metadata on the bugs you claim to have closed, as well as reading the bug report itself. You can help us out with severities, tags (e.g. blocks), and version information.

Don’t fall into the trap of believing that an unblock is a green light into Jessie. Britney still follows her validity rules, so if an RC bug appears to affect the unblocked version, it won’t migrate. Versions matter, not only the bug state (closed or open).



19 November, 2014 10:12AM by Jon

hackergotchi for Bastian Venthur

Bastian Venthur

General Resolution is not required

The result of the General Resolution about the init system coupling is out, and it is, not surprisingly, “General Resolution is not required”.

When skimming over -devel or -private from time to time, one easily gets the impression that we are all a bunch of zealots, all too eager for fighting. People argue in the worst possible ways. People make bold statements about the future of Debian if solution X is preferred over Y. People call each other names. People leave the project.

At some point you realize that we’re not all a bunch of zealots; it is usually only the same small subset of people who are involved in those discussions. It’s reassuring that we still seem to have a silent majority in Debian that, without much fuss, just does what it can to make Debian better. In this sense: a General Resolution is not required.

19 November, 2014 08:21AM by Bastian

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

R / Finance 2015 Call for Papers

Earlier today, Josh sent the text below to the R-SIG-Finance list, and I updated the R/Finance website, including its Call for Papers page, accordingly.

We are once again very excited about our conference, thrilled about the four confirmed keynotes, and hope that many R/Finance users will not only join us in Chicago in May 2015, but also submit an exciting proposal.

So read on below, and see you in Chicago in May!

Call for Papers:

R/Finance 2015: Applied Finance with R
May 29 and 30, 2015
University of Illinois at Chicago, IL, USA

The seventh annual R/Finance conference for applied finance using R will be held on May 29 and 30, 2015 in Chicago, IL, USA at the University of Illinois at Chicago. The conference will cover topics including portfolio management, time series analysis, advanced risk tools, high-performance computing, market microstructure, and econometrics. All will be discussed within the context of using R as a primary tool for financial risk management, portfolio construction, and trading.

Over the past six years, R/Finance has included attendees from around the world. It has featured presentations from prominent academics and practitioners, and we anticipate another exciting line-up for 2015. This year will include invited keynote presentations by Emanuel Derman, Louis Marascio, Alexander McNeil, and Rishi Narang.

We invite you to submit complete papers in pdf format for consideration. We will also consider one-page abstracts (in txt or pdf format) although more complete papers are preferred. We welcome submissions for both full talks and abbreviated "lightning talks." Both academic and practitioner proposals related to R are encouraged.

All slides will be made publicly available at conference time. Presenters are strongly encouraged to provide working R code to accompany the slides. Data sets should also be made public for the purposes of reproducibility (though we realize this may be limited due to contracts with data vendors). Preference may be given to presenters who have released R packages.

The conference will award two (or more) $1000 prizes for best papers. A submission must be a full paper to be eligible for a best paper award. Extended abstracts, even if a full paper is provided by conference time, are not eligible for a best paper award. Financial assistance for travel and accommodation may be available to presenters, however requests must be made at the time of submission. Assistance will be granted at the discretion of the conference committee.

Please make your submission online at this link. The submission deadline is January 31, 2015. Submitters will be notified via email by February 28, 2015 of acceptance, presentation length, and financial assistance (if requested).

Additional details will be announced via the R/Finance conference website as they become available. Information on previous years' presenters and their presentations are also at the conference website.

For the program committee:

Gib Bassett, Peter Carl, Dirk Eddelbuettel, Brian Peterson, Dale Rosenthal,
Jeffrey Ryan, Joshua Ulrich

19 November, 2014 12:59AM