December 18, 2014


Wouter Verhelst

Introducing libjoy

I've owned a Logitech Wingman Gamepad Extreme since pretty much forever, and although it's been battered over the years, it's still mostly functional. As a gamepad, it has 10 buttons. What's special about it, though, is that the device also has a mode in which a gravity sensor kicks in and produces two extra axes, allowing me to pretend I'm really talking to a joystick. It looks a bit weird though, since you end up playing your games by wobbling the gamepad around a bit.

About 10 years ago, I first learned how to write GObjects by writing a GObject-based joystick API. Unfortunately, I lost the code at some point due to an overzealous rm -rf call. I had planned to rewrite it, but that never really happened.

About a year back, I needed to write a user interface for a customer where a joystick would be a major part of the interaction. The code there was written in Qt, so I wrote an event-based joystick API in Qt. As it happened, I also noticed that jstest would output names for the actual buttons and axes; I had never noticed this, because my 10 buttons and 4 axes produce so much output by default that jstest would just scroll the names off my screen whenever I plugged the device in. But the names are there, and reading them is not too difficult.

Refreshing my memory on the joystick API made me remember how much fun it is, and I wrote the beginnings of what I (at the time) called "libgjs", for "Gobject JoyStick". I didn't really finish it though, until today. I did notice in the meantime that someone else released GObject bindings for JavaScript and also called that gjs, so in the interest of avoiding confusion I decided to rename my library to libjoy. Not only will this allow me all kinds of interesting puns like "today I am releasing more joy", it also makes for a more compact API (compare joy_stick_open() against gjs_joystick_open()).

The library also comes with a libjoy-gtk that creates a GtkListStore* which is automatically updated as joysticks are added to and removed from the system; and joytest, a graphical joystick test program which also serves as an example of how to use the API.

still TODO:

  • Clean up the API a bit. There's a bit too much use of GError in there.
  • Improve the UI. I suck at interface design. Patches are welcome.
  • Differentiate between JS_EVENT_INIT kernel-level events, and normal events.
  • Improve the documentation to the extent that gtk-doc (and, thus, GObject-Introspection) will work.

What's there is functional, though.

18 December, 2014 11:29PM


Gregor Herrmann

GDAC 2014/18

what constantly fascinates me in debian is that people sit at home, have an idea, work on it, & then suddenly present it to an unsuspecting public; all without prior announcements or discussions, & totally apart from any hot discussion-du-jour. the last example I encountered & tried out just now is the option to edit source packages online & submit patches. - I hope we as a project can keep up with this creativity!


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

18 December, 2014 10:01PM

John Goerzen

Aerial Photos: Our Little House on the Prairie

DJI00812

This was my first attempt to send up the quadcopter in winter. It’s challenging to take good photos of a snowy landscape anyway. Add to that the fact that the camera is flying, and it’s cold, which is hard on batteries and motors. I was rather amazed at how well it did!

DJI00816

DJI00830

18 December, 2014 06:37PM by John Goerzen


Mario Lang

deluXbreed #2 is out!

The third installment of my crossbreed digital mix podcast is out!

This time, I am featuring Harder & Louder and tracks from Behind the Machine and the recently released Remixes.

  1. Apolloud - Nagazaki
  2. Apolloud - Hiroshima
  3. SA+AN - Darksiders
  4. Im Colapsed - Cleaning 8
  5. Micromakine & Switch Technique - Ascension
  6. Micromakine - Cyberman (Dither Remix)
  7. Micromakine - So Good! (Synapse Remix)

How was DarkCast born and how is it done?

I always loved 175BPM music. It is an old thing that is not going away soon :-). I recently found that there is quite an active culture going on, at least on Bandcamp. But single tracks are just that: not really fun to listen to, in my opinion. This sort of music needs to be mixed to be fun. In the past, I had most tracks I like/love on vinyl, so I did some real-world vinyl mixing myself. But these days, most fun music is only available digitally, or at least most easily obtained that way. Some people still do vinyl releases, but they are rare.

So for my personal enjoyment, I started to digitally mix tracks I really love, such that I can listen to them without "interruption". And since I have been an iOS user for three years now, using the podcast format to get stuff onto my devices was quite a natural choice.

I use SoX and a very small shell script to create these mixes. Here is a pseudo-template:

sox --combine mix-power \
"|sox \"|sox 1.flac -p\" \"|sox 3.flac -p speed 0.987 delay 2:28.31 2:28.31\" -p" \
"|sox \"|sox 2.flac -p delay 2:34.1 2:34.1\" -p" \
mix.flac

As you can imagine, it is quite a bit of fiddling to get these scripts to do what you want. But it is a non-graphical method to get things done. If you know of a better tool, possibly with a bit of real-time control, to get the same job done without having to resort to a damn GUI, let me know.
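To unpack the template a little: each inner "|sox FILE -p" subcommand decodes one track to a piped stream, the speed and delay effects line the tracks up in tempo and time, and the outer sox overlays all the streams. A minimal commented sketch of the same idea (file names and offsets are placeholders, not from an actual mix):

# "|sox FILE -p" decodes one track to a stream for the outer sox;
# "delay 2:34.1 2:34.1" delays both channels of the second track
# by 2 min 34.1 s before mixing, and "speed 0.987" (as in the
# template above) would adjust a track's tempo to match;
# --combine mix-power overlays the streams with power-preserving gain
sox --combine mix-power \
    "|sox 1.flac -p" \
    "|sox 2.flac -p delay 2:34.1 2:34.1" \
    mix.flac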

18 December, 2014 09:47AM by Mario Lang

December 17, 2014


Gregor Herrmann

GDAC 2014/17

my list of IRC channels (& the list of people I'm following on micro-blogging platforms) has a heavy debian bias. a thing I noticed today is that I had read (or at least: seen) messages in 6 languages (English, German, Castilian, Catalan, French, Italian). – thanks guys for the free language courses :) (& the opportunity to at least catch a glimpse into other cultures)


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

17 December, 2014 10:30PM


Keith Packard

MST-monitors

Multi-Stream Transport 4k Monitors and X

I'm sure you've seen a 4k monitor on a friend's desk running Mac OS X or Windows and are all ready to go get one so that you can use it under Linux.

Once you've managed to acquire one, I'm afraid you'll discover that when you plug it in, you're limited to 30Hz refresh rates at the full size, unless you're running a kernel that is version 3.17 or later. And then...

Good Grief! What Is My Computer Doing!

Ok, so now you're running version 3.17 and when X starts up, it's like you're using a gigantic version of Google Cardboard. Two copies of a very tall but very narrow screen greet you.

Welcome to MST island.

In order to drive these giant new panels at full speed, there isn't enough bandwidth in the display hardware to individually paint each pixel once during each frame. So, like all good hardware engineers, they invented a clever hack.

This clever hack paints the screen in parallel. I'm assuming that they've got two bits of display hardware, each one hooked up to half of the monitor. Now, each paints only half of the pixels, avoiding costly redesign of expensive silicon, at least that's my surmise.

In the olden days, if you did this, you'd end up running two monitor cables to your computer, and potentially even having two video cards. Today, thanks to the magic of Display Port Multi-Stream Transport, we don't need all of that; instead, MST allows us to pack multiple cables-worth of data into a single cable.

I doubt the inventors of MST intended it to be used to split a single LCD panel into multiple "monitors", but hardware engineers are clever folk and are more than capable of abusing standards like this when it serves to save a buck.

Turning Two Back Into One

We've got lots of APIs that expose monitor information in the system, and across which we might be able to wave our magic abstraction wand to fix this:

  1. The KMS API. This is the kernel interface which is used by all graphics stuff, including user-space applications and the frame buffer console. Solve the problem here and it works everywhere automatically.

  2. The libdrm API. This is just the KMS ioctls wrapped in a simple C library. Fixing things here wouldn't make fbcons work, but would at least get all of the window systems working.

  3. Every 2D X driver. (Yeah, we're trying to replace all of these with the one true X driver). Fixing the problem here would mean that all X desktops would work. However, that's a lot of code to hack, so we'll skip this.

  4. The X server RandR code. More plausible than fixing every driver, this also makes X desktops work.

  5. The RandR library. If not in the X server itself, how about over in user space in the RandR protocol library? Well, the problem here is that we've now got two of them (Xlib and xcb), and the xcb one is auto-generated from the protocol descriptions. Not plausible.

  6. The Xinerama code in the X server. Xinerama is how we did multi-monitor stuff before RandR existed. These days, RandR provides Xinerama emulation, but we've been telling people to switch to RandR directly.

  7. Some new API. Awesome. Ok, so if we haven't fixed this in any existing API we control (kernel/libdrm/X.org), then we effectively dump the problem into the laps of the desktop and application developers. Given how long it's taken them to adopt current RandR stuff, providing yet another complication in their lives won't make them very happy.

All Our APIs Suck

Dave Airlie merged MST support into the kernel for version 3.17 in the simplest possible fashion -- pushing the problem out to user space. I was initially vaguely tempted to go poke at it and try to fix things there, but he eventually convinced me that it just wasn't feasible.

It turns out that all of our fancy new modesetting APIs describe the hardware in more detail than any application actually cares about. In particular, we expose a huge array of hardware objects:

  • Subconnectors
  • Connectors
  • Outputs
  • Video modes
  • Crtcs
  • Encoders

Each of these objects exposes intimate details about the underlying hardware -- which of them can work together, and which cannot; what kinds of limits are there on data rates and formats; and pixel-level timing details about blanking periods and refresh rates.

To make things work, some piece of code needs to actually hook things up, and explain to the user why the configuration they want just isn't possible.

The sticking point we reached was that when an MST monitor gets plugged in, it needs two CRTCs to drive it. If one of those is already in use by some other output, there's just no way you can steal it for MST mode.

Another problem -- we expose EDID data and actual video mode timings. Our MST monitor has two EDID blocks, one for each half. They happen to describe how they're related, and how you should configure them, but if we want to hide that from the application, we'll have to pull those EDID blocks apart and construct a new one. The same goes for video modes; we'll have to construct ones for MST mode.

Every single one of our APIs exposes enough of this information to be dangerous.

Every one, except Xinerama. All it talks about is a list of rectangles, each of which represents a logical view into the desktop. Did I mention we've been encouraging people to stop using this? And that some of them listened to us? Foolishly?

Dave's Tiling Property

Dave hacked up the X server to parse the EDID strings and communicate the layout information to clients through an output property. Then he hacked up the gnome code to parse that property and build a RandR configuration that would work.

Then, he changed the RandR Xinerama code to also parse the TILE properties and to fix up the data seen by applications from that.

This works well enough to get a desktop running correctly, assuming that desktop uses Xinerama to fetch this data. Alas, gtk has been "fixed" to use RandR if you have RandR version 1.3 or later. No biscuit for us today.

Adding RandR Monitors

RandR doesn't have enough data types yet, so I decided that what we wanted to do was create another one; maybe that would solve this problem.

Ok, so what clients mostly want to know is which bits of the screen are going to be stuck together and should be treated as a single unit. With current RandR, that's some of the information included in a CRTC. You pull the pixel size out of the associated mode, physical size out of the associated outputs and the position from the CRTC itself.

Most of that information is available through Xinerama too; it's just missing physical sizes and any kind of labeling to help the user understand which monitor you're talking about.

The other problem with Xinerama is that it cannot be configured by clients; the existing RandR implementation constructs the Xinerama data directly from the RandR CRTC settings. Dave's Tiling property changes edit that data to reflect the union of associated monitors as a single Xinerama rectangle.

Allowing the Xinerama data to be configured by clients would fix our 4k MST monitor problem as well as solving the longstanding video wall, WiDi and VNC troubles. All of those want to create logical monitor areas within the screen under client control.

What I've done is create a new RandR datatype, the "Monitor", which is a rectangular region of the screen treated as a single unit. Each monitor has the following data:

  • Name. This provides some way to identify the Monitor to the user. I'm using X atoms for this as it made a bunch of things easier.

  • Primary boolean. This indicates whether the monitor is to be considered the "primary" monitor, suitable for placing toolbars and menus.

  • Pixel geometry (x, y, width, height). These locate the region within the screen and define the pixel size.

  • Physical geometry (width-in-millimeters, height-in-millimeters). These let the user know how big the pixels will appear in this region.

  • List of outputs. (I think this is the clever bit)

There are three requests to define, delete and list monitors. And that's it.

Now, we want the list of monitors to completely describe the environment, and yet we don't want existing tools to break completely. So, we need some way to automatically construct monitors from the existing RandR state while still letting the user override portions of it as needed to explain virtual or tiled outputs.

So, what I did was to let the client specify a list of outputs for each monitor. All of the CRTCs which aren't associated with an output in any client-defined monitor are then added to the list of monitors reported back to clients. That means that clients need only define monitors for things they understand, and they can leave the other bits alone and the server will do something sensible.

The second tricky bit is that if you specify an empty rectangle at 0,0 for the pixel geometry, then the server will automatically compute the geometry using the list of outputs provided. That means that if any of those outputs get disabled or reconfigured, the Monitor associated with them will appear to change as well.
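For a feel of how this might look from the command line, here's a hypothetical session with a Monitors-aware xrandr; the flag names and the pixels/millimeters geometry syntax are assumptions based on the protocol text later in this post, not a settled interface:

# join the two halves of the tiled panel into one logical monitor;
# geometry here is width/mm-width x height/mm-height + x + y
xrandr --setmonitor DP-1-tiled 3840/600x2160/340+0+0 DP-1,DP-2
# list client- and server-defined monitors
xrandr --listmonitors
# and remove the client-defined one again
xrandr --delmonitor DP-1-tiled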

Current Status

Gtk+ has been switched to use RandR for RandR versions 1.3 or later. Locally, I hacked libXrandr to override the RandR version through an environment variable, set that to 1.2 and Gtk+ happily reverts back to Xinerama and things work fine. I suspect the plan here will be to have it use the new Monitors when present as those provide the same info that it was pulling out of RandR's CRTCs.

KDE appears to still use Xinerama data for this, so it "just works".

Where's the code

As usual, all of the code for this is in a collection of git repositories in my home directory on fd.o:

git://people.freedesktop.org/~keithp/randrproto master
git://people.freedesktop.org/~keithp/libXrandr master
git://people.freedesktop.org/~keithp/xrandr master
git://people.freedesktop.org/~keithp/xserver randr-monitors

RandR protocol changes

Here's the new sections added to randrproto.txt

                  ❧❧❧❧❧❧❧❧❧❧❧

1.5. Introduction to version 1.5 of the extension

Version 1.5 adds monitors

 • A 'Monitor' is a rectangular subset of the screen which represents
   a coherent collection of pixels presented to the user.

 • Each Monitor is associated with a list of outputs (which may be
   empty).

 • When clients define monitors, the associated outputs are removed from
   existing Monitors. If removing the output causes the list for that
   monitor to become empty, that monitor will be deleted.

 • For active CRTCs that have no output associated with any
   client-defined Monitor, one server-defined monitor will
   automatically be defined for the first Output associated with them.

 • When defining a monitor, setting the geometry to all zeros will
   cause that monitor to dynamically track the bounding box of the
   active outputs associated with it.

This new object separates the physical configuration of the hardware
from the logical subsets of the screen that applications should
consider as single viewable areas.

1.5.1. Relationship between Monitors and Xinerama

Xinerama's information now comes from the Monitors instead of directly
from the CRTCs. The Monitor marked as Primary will be listed first.

                  ❧❧❧❧❧❧❧❧❧❧❧

5.6. Protocol Types added in version 1.5 of the extension

MONITORINFO { name: ATOM
          primary: BOOL
          automatic: BOOL
          x: INT16
          y: INT16
          width: CARD16
          height: CARD16
          width-in-millimeters: CARD32
          height-in-millimeters: CARD32
          outputs: LISTofOUTPUT }

                  ❧❧❧❧❧❧❧❧❧❧❧

7.5. Extension Requests added in version 1.5 of the extension.

┌───
    RRGetMonitors
    window : WINDOW
     ▶
    timestamp: TIMESTAMP
    monitors: LISTofMONITORINFO
└───
    Errors: Window

    Returns the list of Monitors for the screen containing
    'window'.

    'timestamp' indicates the server time when the list of
    monitors last changed.

┌───
    RRSetMonitor
    window : WINDOW
    info: MONITORINFO
└───
    Errors: Window, Output, Atom, Value

    Create a new monitor. Any existing Monitor of the same name is deleted.

    'name' must be a valid atom or an Atom error results.

    'name' must not match the name of any Output on the screen, or
    a Value error results.

    If 'info.outputs' is non-empty, and if x, y, width, height are all
    zero, then the Monitor geometry will be dynamically defined to
    be the bounding box of the geometry of the active CRTCs
    associated with them.

    If 'name' matches an existing Monitor on the screen, the
    existing one will be deleted as if RRDeleteMonitor were called.

    Each output in 'info.outputs' is removed from all pre-existing
    Monitors. If removing the output causes the list of outputs for
    that Monitor to become empty, then that Monitor will be deleted
    as if RRDeleteMonitor were called.

    Only one monitor per screen may be primary. If 'info.primary'
    is true, then the primary value will be set to false on all
    other monitors on the screen.

    RRSetMonitor generates a ConfigureNotify event on the root
    window of the screen.

┌───
    RRDeleteMonitor
    window : WINDOW
    name: ATOM
└───
    Errors: Window, Atom, Value

    Deletes the named Monitor.

    'name' must be a valid atom or an Atom error results.

    'name' must match the name of a Monitor on the screen, or a
    Value error results.

    RRDeleteMonitor generates a ConfigureNotify event on the root
    window of the screen.

                  ❧❧❧❧❧❧❧❧❧❧❧

17 December, 2014 09:36AM

December 16, 2014


Gregor Herrmann

GDAC 2014/16

today I met with a young friend (attending the final year of technical high school) for coffee. he's been exploring free software for one or two years, & he's been running debian jessie on his laptop for some time. it's really amazing to see how exciting this journey into the free software cosmos is for him; & it's good to see that linux & debian are not only appealing to greybeards like me :)


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

16 December, 2014 10:46PM

Raphael Geissert

Editing Debian online with sources.debian.net

How cool would it be to fix that one bug you just found without having to download a source package? And without leaving your browser?

Inspired by GitHub's online code editing, during DebConf 14 I worked on integrating an online editor into debsources (the software behind sources.debian.net). Long story short: it is available today, for users of Chromium (or anything supporting Chrome extensions).

After installing the editor for sources.debian.net extension, go straight to sources.debian.net and enjoy!

Go from simple debsources:


To debsources on steroids:


All in all, it brings:
  • Online editing of all of Debian
  • In-browser patch generation, available for download
  • Downloading the modified file
  • Sending the patch to the BTS
  • Syntax highlighting for over 120 file formats!
  • More hidden gems from Ace editor that can be integrated thanks to patches from you

Clone it or fork it:
git clone https://github.com/rgeissert/ace-sourced.n.git

For example, head to apt's source code, find a typo and correct it online: open apt.cc, click on edit, make the changes, click on email patch. Yes! It can generate a mail template for sending the patch to the BTS: just add a nice message and your patch is ready to be sent.

Didn't find any typo to fix? How sad; head to codesearch and search Debian for a spelling mistake, click on any result, edit, correct, email! You will have contributed to Debian in less than 5 minutes without leaving your browser.

The editor was meant to be integrated into debsources itself, without the need of a browser extension. This is expected to be done when the requirements imposed by debsources maintainers are sorted out.

Kudos to Harlan Lieberman who helped debug some performance issues in the early implementations of the integration and for working on the packaging of the Ace editor.

16 December, 2014 08:00AM by Raphael Geissert (noreply@blogger.com)

December 15, 2014


Gustavo Noronha Silva

Web Engines Hackfest 2014

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept for both Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL – a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME’s web browser a pretty good experience for the Raspberry Pi. It took a not so small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play youtube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it at the last day of the hackfest! It provides new signals that can be used to authorize notifications, show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance and reject the notification if it cannot find it. Besides making this a pain to test (our test browser would need a .desktop file to be installed), it would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I’d like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia

15 December, 2014 11:20PM by kov


Gregor Herrmann

GDAC 2014/15

nothing exciting today in my debian life. just yet another nice example of collaboration around an RC bug where the bug submitter, the maintainer & me investigated via the BTS, & the maintainer also got support on IRC from others. – now we just need someone to come up with an actual fix for the problem :)


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

15 December, 2014 09:58PM


Holger Levsen

20121214-not-everybody-is-equal

We ain't equal in Debian neither and wishful thinking won't help.

"White people think calling them white is racist." - "White people think calling them racist is racist."

(Thanks to and via 2damnfeisty and blackgirlsparadise!)

Posted here (in this white male community...) as food for thought. What else is invisible for whom? Or hardly visible or distorted or whatever shade of (in)visible... - and how can we know about things we cannot (yet) see...

15 December, 2014 07:25PM


Thomas Goirand

Supporting 3 init systems in OpenStack packages

tl;dr: Providing support for all 3 init systems (sysv-rc, Upstart and systemd) isn’t hard, and generating the init scripts / Upstart jobs / systemd units using a template system is a lot easier than I previously thought.

As always when writing this kind of blog post, I expect that others will not like what I did. But that’s the point: give me your opinion in a constructive way (please be polite even if you don’t like what you see… I have had to read harsh comments too many times), and I’ll implement your ideas if I find them nice.

History of the implementation: how we came to the idea

I had no plan to do this. I don’t believe what I wrote can be generalized to all of the Debian archive. It’s just that I started doing things, and it made sense when I did it. Let me explain how it happened.

Since it’s clear that many, and especially the most advanced one, may have an opinion about which init system they prefer, and because I also support Ubuntu (at least Trusty), I though it was a good idea to support all the “main” init system: sysv-rc, Upstart and systemd. Though I have counted (for the sake of being exact in this blog) : OpenStack in Debian contains currently 64 init scripts to run daemons in total. That’s quite a lot. A way too much to just write them, all by hand. Though that’s what I was doing for the last years… until this the end of this last summer!

So, doing it all by hand, I first started implementing Upstart. Its support was there only when building in Ubuntu (which isn’t the correct thing to do; this is now fixed, read further…). Then we thought about adding support for systemd. Gustavo Panizzo, one of the contributors to the OpenStack packages, started implementing it in Keystone (the auth server for OpenStack) for the Juno release, which came out this October. He did that last summer, early enough that we didn’t expect anyone to be using the Juno branch of Keystone yet. After some experiments, we had it kind of working. What he did was invoke “/etc/init.d/keystone start-systemd”, which was still using start-stop-daemon. Yes, that’s not perfect, and it’s better to use systemd foreground process handling, but at least we had a unique place to write the startup scripts, where we check /etc/default for the logging configuration, configure the log file, and so on.

Then, around October, I took a step backward to see the whole picture with the sysv-rc scripts, and saw the mess, with all the tiny, small differences between them. It became clear that I had to do something to make sure they were all the same, with support for the same things (like which log system to use, where to store the PID, creating /var/lib/project, /var/run/project and so on…).

Last, this December, I was able to fix the remaining issues for systemd support, thanks to the awesome contribution of Mikael Cluseau on the Alioth OpenStack packaging list. Now, the systemd unit file is still invoking the init script, but it’s not using start-stop-daemon anymore, no PID file is involved, and daemons run as systemd foreground processes. Finally, daemon service files are also activated on installation (they were not previously).

Implementation

So I took the simplistic approach of always using the same template for the sysv-rc switch/case and the start and stop functions, appending it at the end of all debian/*.init.in scripts. I started trying to reduce the number of variables, and I was surprised by the result: only a very small part of each init script needs to change from daemon to daemon. For example, for nova-api, here’s the init script (LSB header stripped out):

DESC="OpenStack Compute API"
PROJECT_NAME=nova
NAME=${PROJECT_NAME}-api

That is it: only 3 lines, defining only the name of the daemon, the name of the project it belongs to (e.g. nova, cinder, etc.), and a long description. There are of course more complicated init scripts (see the one for neutron-server in the Debian archive, for example), but the vast majority only need the above.
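For comparison, a more involved script might preset additional variables; everything the template below tests with -z can be overridden this way. A hypothetical sketch (the extra config path is invented for illustration):

DESC="OpenStack Networking Server"
PROJECT_NAME=neutron
NAME=${PROJECT_NAME}-server
# optional overrides honoured by the template below:
DAEMON=/usr/bin/neutron-server
DAEMON_ARGS="--config-file=/etc/neutron/plugin.ini"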

Here’s the sysv-rc init script template that I currently use:

#!/bin/sh
# The content after this line comes from openstack-pkg-tools
# and has been automatically added to a .init.in script, which
# contains only the descriptive part for the daemon. Everything
# else is standardized as a single unique script.

# Author: Thomas Goirand <zigo@debian.org>

# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin

if [ -z "${DAEMON}" ] ; then
	DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
	SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
	SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
	SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
	STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
	CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
if [ -z "${NO_OPENSTACK_CONFIG_FILE_DAEMON_ARG}" ] ; then
	DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"
fi

# Exit if the package is not installed
[ -x $DAEMON ] || exit 0

# If ran as root, create /var/lock/X, /var/run/X, /var/lib/X and /var/log/X as needed
if [ "x$USER" = "xroot" ] ; then
	for i in lock run log lib ; do
		mkdir -p /var/$i/${PROJECT_NAME}
		chown ${SYSTEM_USER} /var/$i/${PROJECT_NAME}
	done
fi

# This defines init_is_upstart which we use later on (+ more...)
. /lib/lsb/init-functions

# Manage log options: logfile and/or syslog, depending on user's choosing
[ -r /etc/default/openstack ] && . /etc/default/openstack
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
[ "x$USE_SYSLOG" = "xyes" ] && DAEMON_ARGS="$DAEMON_ARGS --use-syslog"
[ "x$USE_LOGFILE" != "xno" ] && DAEMON_ARGS="$DAEMON_ARGS --log-file=$LOGFILE"

do_start() {
	start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
			--test > /dev/null || return 1
	start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
			-- $DAEMON_ARGS || return 2
}

do_stop() {
	start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE
	RETVAL=$?
	rm -f $PIDFILE
	return "$RETVAL"
}

do_systemd_start() {
	exec $DAEMON $DAEMON_ARGS
}

case "$1" in
start)
	init_is_upstart > /dev/null 2>&1 && exit 1
	log_daemon_msg "Starting $DESC" "$NAME"
	do_start
	case $? in
		0|1) log_end_msg 0 ;;
		2) log_end_msg 1 ;;
	esac
;;
stop)
	init_is_upstart > /dev/null 2>&1 && exit 0
	log_daemon_msg "Stopping $DESC" "$NAME"
	do_stop
	case $? in
		0|1) log_end_msg 0 ;;
		2) log_end_msg 1 ;;
	esac
;;
status)
	status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
systemd-start)
	do_systemd_start
;;  
restart|force-reload)
	init_is_upstart > /dev/null 2>&1 && exit 1
	log_daemon_msg "Restarting $DESC" "$NAME"
	do_stop
	case $? in
	0|1)
		do_start
		case $? in
			0) log_end_msg 0 ;;
			1) log_end_msg 1 ;; # Old process is still running
			*) log_end_msg 1 ;; # Failed to start
		esac
	;;
	*) log_end_msg 1 ;; # Failed to stop
	esac
;;
*)
	echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload|systemd-start}" >&2
	exit 3
;;
esac

exit 0

Nothing particularly fancy here… You’ll notice that it’s really OpenStack-centric (see the LOGFILE and CONFIG_FILE things…). You may also have noticed the call to “init_is_upstart”, which is needed for Upstart support. I’m not sure if it’s at the correct place in the init script. Should I put it at the top of the script? Was I right with the exit values for it? Please send me your comments…

Then I thought about generalizing all of this, because not only the sysv-rc scripts needed to be squared up, but also the Upstart jobs. The approach here was to source the debian/*.init.in script and then generate the Upstart job accordingly, using the above 3 variables (or more as needed). The fun part is that, instead of calculating everything at runtime as the sysv-rc script does, for Upstart jobs many things are calculated at build time. For each debian/*.init.in script that debian/rules finds, pkgos-gen-upstart-job is called. Here’s pkgos-gen-upstart-job:

#!/bin/sh

INIT_TEMPLATE=${1}
UPSTART_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.upstart/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

## Find out what should go in After=
#SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`
#
#if [ -n "${SHOULD_START}" ] ; then
#	AFTER="After="
#	for i in ${SHOULD_START} ; do
#		AFTER="${AFTER}${i}.service "
#	done
#fi

if [ -z "${DAEMON}" ] ; then
        DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
	SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
	SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
	SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
	STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
	CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"

echo "description \"${DESC}\"
author \"Thomas Goirand <zigo@debian.org>\"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
	for i in lock run log lib ; do
		mkdir -p /var/\$i/${PROJECT_NAME}
		chown ${SYSTEM_USER} /var/\$i/${PROJECT_NAME}
	done
end script

script
	[ -x \"${DAEMON}\" ] || exit 0
	DAEMON_ARGS=\"${DAEMON_ARGS}\"
	[ -r /etc/default/openstack ] && . /etc/default/openstack
	[ -r /etc/default/\$UPSTART_JOB ] && . /etc/default/\$UPSTART_JOB
	[ \"x\$USE_SYSLOG\" = \"xyes\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --use-syslog\"
	[ \"x\$USE_LOGFILE\" != \"xno\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --log-file=${LOGFILE}\"

	exec start-stop-daemon --start --chdir /var/lib/${PROJECT_NAME} \\
		${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} \\
		--exec ${DAEMON} -- --config-file=${CONFIG_FILE} \${DAEMON_ARGS}
end script
" >${UPSTART_FILE}

The only thing which I don’t know how to do, is how to implement the Should-Start / Should-Stop in an Upstart job. Can anyone shoot me a mail and tell me the solution?

Then I wanted to add support for systemd. Here we cheated: we just call the sysv-rc script from the systemd unit; however, the systemd-start target uses exec, so the process stays in the foreground. It’s also much smaller than the Upstart thing. On the other hand, here I could implement the “After” directive, corresponding to Should-Start:

#!/bin/sh

INIT_TEMPLATE=${1}
SERVICE_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.service/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

if [ -z "${SCRIPTNAME}" ] ; then
	SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
	SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
	SYSTEM_GROUP=${PROJECT_NAME}
fi

# Find out what should go in After=
SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`

if [ -n "${SHOULD_START}" ] ; then
	AFTER="After="
	for i in ${SHOULD_START} ; do
		AFTER="${AFTER}${i}.service "
	done
fi

echo "[Unit]
Description=${DESC}
$AFTER

[Service]
User=${SYSTEM_USER}
Group=${SYSTEM_GROUP}
WorkingDirectory=/var/lib/${PROJECT_NAME}
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStartPre=/bin/chown ${SYSTEM_USER}:${SYSTEM_GROUP} /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStart=${SCRIPTNAME} systemd-start
Restart=on-failure

[Install]
WantedBy=multi-user.target
" >${SERVICE_FILE}

As you can see, it’s calling /etc/init.d/${SCRIPTNAME} sytemd-start, which isn’t great. I’d be happy to have comments from systemd user / maintainers on how to fix it to make it better.

Integrating in debian/rules

To integrate with the Debian package build system, we only had to write this:

override_dh_installinit:
	# Create the init scripts from the template
	for i in `ls -1 debian/*.init.in` ; do \
		MYINIT=`echo $$i | sed s/.init.in//` ; \
		cp $$i $$MYINIT.init ; \
		cat /usr/share/openstack-pkg-tools/init-script-template >>$$MYINIT.init ; \
		pkgos-gen-systemd-unit $$i ; \
	done
	# If there's an upstart.in file, use that one instead of the generated one
	for i in `ls -1 debian/*.upstart.in` ; do \
		MYPKG=`echo $$i | sed s/.upstart.in//` ; \
		cp $$MYPKG.upstart.in $$MYPKG.upstart ; \
	done
	# Generate the upstart job if there's no already existing .upstart.in
	for i in `ls debian/*.init.in` ; do \
		MYINIT=`echo $$i | sed s/.init.in/.upstart.in/` ; \
		if ! [ -e $$MYINIT ] ; then \
			pkgos-gen-upstart-job $$i ; \
		fi \
	done
	dh_installinit --error-handler=true
	# Generate the systemd unit file
	# Note: because dh_systemd_enable is called by the
	# dh sequencer *before* dh_installinit, we have
	# to process it manually.
	for i in `ls debian/*.init.in` ; do \
		pkgos-gen-systemd-unit $$i ; \
		MYSERVICE=`echo $$i | sed 's/debian\///'` ; \
		MYSERVICE=`echo $$MYSERVICE | sed 's/.init.in/.service/'` ; \
		dh_systemd_enable $$MYSERVICE ; \
	done

As you can see, it’s possible to use a debian/*.upstart.in and not use the templating system, in the more complicated case (I needed it mostly for neutron-server and neutron-plugin-openvswitch-agent).

Conclusion

I do not pretend that what I wrote in openstack-pkg-tools is the ultimate solution. But I’m convinced that it answers our own needs as the OpenStack maintainers in Debian. There’s a lot of room for improvement (like implementing Should-Start in Upstart jobs, or no longer calling the sysv-rc script in the systemd units), but moving to templates and generated scripts was a very good move, as the init scripts are way easier to maintain now, in a much more unified way. Even though I’m not completely satisfied with the systemd and Upstart implementations, I’m sure there’s already a huge improvement in sysv-rc script maintainability.

Last and again: please send your comments and help improving the above! :)

15 December, 2014 08:15AM by Goirand Thomas

December 14, 2014


Gregor Herrmann

GDAC 2014/14

I just got a couple of mails from the BTS. like almost every day, several times per day. now it made me realize how much I like the BTS, & how happy I am that it works so well & even gets new features. – thanks to the BTS maintainers for their continuous work!


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

14 December, 2014 09:27PM


Mario Lang

Data-binding MusicXML

My long-term free software project (Braille Music Compiler) just produced some offspring! xsdcxx-musicxml is now available on GitHub.

I used CodeSynthesis XSD to generate a rather complete object model for MusicXML 3.0 documents. Some of the classes needed a bit of manual adjustment, to make the client API really nice and tidy.

During the process, I have learnt (as is almost always the case when programming) quite a lot. I have to say, once you get the hang of it, CodeSynthesis XSD is really a very powerful tool. I definitely prefer having these 100k lines of code auto-generated from an XML Schema, instead of having to implement small parts of it by hand.

If you are into MusicXML for any reason, and you like C++, give this library a whirl. At least to me, it is what I was always looking for: Rather type-safe, with a quite self-explanatory API.

For added ease of integration, xsdcxx-musicxml is sub-project friendly. In other words, if your project uses CMake and Git, adding xsdcxx-musicxml as a subproject is as easy as using git submodule add and putting add_subdirectory(xsdcxx-musicxml) into your CMakeLists.txt.
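Concretely, that might look like the following; the repository URL is my guess for illustration, substitute the real one:

# add the library as a submodule (URL assumed)
git submodule add https://github.com/mlang/xsdcxx-musicxml.git xsdcxx-musicxml
# then, in CMakeLists.txt:
#   add_subdirectory(xsdcxx-musicxml)
#   target_link_libraries(your-target xsdcxx-musicxml)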

Finally, if you want to see how this library can be put to use: The MusicXML export functionality of BMC is all in one C++ source file: musicxml.cpp.

14 December, 2014 08:30PM by Mario Lang

Enrico Zini

html5-sse

HTML5 Server-sent events

I have a Django view that runs a slow script server-side, and streams the script output to Javascript. This is the bit of code that runs the script and turns the output into a stream of events:

import fcntl
import os
import select


def stream_output(proc):
    '''
    Take a subprocess.Popen object and generate its output, line by line,
    annotated with "stdout" or "stderr". At process termination it generates
    one last element: ("result", return_code) with the return code of the
    process.
    '''
    fds = [proc.stdout, proc.stderr]
    bufs = [b"", b""]
    types = ["stdout", "stderr"]
    # Set both pipes as non-blocking
    for fd in fds:
        fcntl.fcntl(fd, fcntl.F_SETFL, os.O_NONBLOCK)
    # Multiplex stdout and stderr with different prefixes
    while len(fds) > 0:
        s = select.select(fds, (), ())
        for fd in s[0]:
            idx = fds.index(fd)
            buf = fd.read()
            if len(buf) == 0:
                # EOF on this pipe: pop all three parallel lists together
                # so the indices stay aligned, and flush any partial line
                fds.pop(idx)
                leftover = bufs.pop(idx)
                if len(leftover) != 0:
                    yield types[idx], leftover
                types.pop(idx)
            else:
                bufs[idx] += buf
                lines = bufs[idx].split(b"\n")
                bufs[idx] = lines.pop()
                for l in lines:
                    yield types[idx], l
    res = proc.wait()
    yield "result", res

I used to just serialize its output and stream it to JavaScript, then monitor onreadystatechange on the XMLHttpRequest object browser-side, but then it started failing on Chrome, which won't trigger onreadystatechange until something like a kilobyte of data has been received.

I didn't want to stream a kilobyte of padding just to work around this, so it was time to try out Server-sent events. See also this.

This is the Django view that sends the events:

from django import http
from django.utils.decorators import method_decorator
from django.views.decorators.cache import never_cache
from django.views.generic import View

# run_script() and utils.stream_output() are project-local helpers
class HookRun(View):
    def get(self, request):
        proc = run_script(request)
        def make_events():
            for evtype, data in utils.stream_output(proc):
                if evtype == "result":
                    yield "event: {}\ndata: {}\n\n".format(evtype, data)
                else:
                    yield "event: {}\ndata: {}\n\n".format(evtype, data.decode("utf-8", "replace"))

        return http.StreamingHttpResponse(make_events(), content_type='text/event-stream')

    @method_decorator(never_cache)
    def dispatch(self, *args, **kwargs):
        return super().dispatch(*args, **kwargs)
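On the wire, the whole thing is just a long-lived text/event-stream response, so it can be eyeballed with curl; a quick sanity check might look like this (the URL is made up, adjust it to your routing):

# -N disables curl's buffering so events show up as they arrive
curl -N http://localhost:8000/hooks/run
# event: stdout
# data: some line printed by the script
#
# event: result
# data: 0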

And this is the template that renders it:

{% extends "base.html" %}
{% load i18n %}

{% block head_resources %}
{{block.super}}
<style type="text/css">
.out {
    font-family: monospace;
    padding: 0;
    margin: 0;
}
.stdout {}
.stderr { color: red; }
.result {}
.ok { color: green; }
.ko { color: red; }
</style>
{# Polyfill for IE, typical... https://github.com/remy/polyfills/blob/master/EventSource.js #}
<script src="{{ STATIC_URL }}js/EventSource.js"></script>
<script type="text/javascript">
$(function() {
    // Manage spinners and other ajax-related feedback
    $(document).nav();
    $(document).nav("ajax_start");

    var out = $("#output");

    var event_source = new EventSource("{% url 'session_hookrun' name=name %}");
    event_source.addEventListener("open", function(e) {
      //console.log("EventSource open:", arguments);
    });
    event_source.addEventListener("stdout", function(e) {
      out.append($("<p>").attr("class", "out stdout").text(e.data));
    });
    event_source.addEventListener("stderr", function(e) {
      out.append($("<p>").attr("class", "out stderr").text(e.data));
    });
    event_source.addEventListener("result", function(e) {
      if (+e.data == 0)
          out.append($("<p>").attr("class", "result ok").text("{% trans 'Success' %}"));
      else
          out.append($("<p>").attr("class", "result ko").text("{% trans 'Script failed with code' %} " + e.data));
      event_source.close();
      $(document).nav("ajax_end");
    });
    event_source.addEventListener("error", function(e) {
      // There is an annoyance here: e does not contain any kind of error
      // message.
      out.append($("<p>").attr("class", "result ko").text("{% trans 'Error receiving script output from the server' %}"));
      console.error("EventSource error:", arguments);
      event_source.close();
      $(document).nav("ajax_end");
    });
});
</script>
{% endblock %}

{% block content %}

<h1>{% trans "Processing..." %}</h1>

<div id="output">
</div>

{% endblock %}

It's simple enough, it seems reasonably well supported besides needing a polyfill for IE and, astonishingly, it even works!

14 December, 2014 03:32PM

Daniel Leidert

Issues with Server4You vServer running Debian Stable (Wheezy)

I recently acquired a vServer hosted by Server4You and decided to install a Debian Wheezy image. Usually I boot any new device in backup mode and first install a fresh Debian copy over the provided image using debootstrap, to have a clean system. In this case I did not, and I came across a few glitches I want to talk about. So hopefully, if you are running the same system image, this saves you some time figuring out why the h*ll some things don't work as expected :)

Cron jobs not running

I installed unattended-upgrades and adjusted all configuration files to enable unattended upgrades. But I never received any mail about an update, although, looking at the system, I saw updates waiting. I checked with

# run-parts --list /etc/cron.daily

and apt was not listed, although /etc/cron.daily/apt was there. After spending some time figuring out what was going on, I found the rather simple cause: several scripts were missing the executable bit, and thus did not run. So it seems, for whatever reason, the image authors have tampered with file permissions, and of course not by using dpkg-statoverride :( It was easy to fix the file permissions for everything under /etc/cron*, but that still leaves a very bad feeling that there are more files that have been tampered with! I'm not speaking about customizations: those are easy to find using debsums. I'm speaking about file permissions and ownership.
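A quick way to spot and fix such scripts (GNU find; check each file before blindly flipping bits):

# list cron scripts lacking any executable bit
find /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly \
    -maxdepth 1 -type f ! -perm /111
# then restore the bit, e.g. for apt:
chmod 755 /etc/cron.daily/apt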

Now there seems to be no easy way to check for changed permissions or ownership. The only solution I found is to get a list of all packages installed on the system, install them into a chroot environment, and get all permission and ownership information from that fresh system. Then compare the file permissions/ownership of the installed system with this list. Not fun.
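A rough sketch of that comparison (mirror URL and paths are illustrative, and the chroot needs a configured sources.list before the install step):

# build a pristine reference system with the same package set
debootstrap wheezy /srv/ref http://ftp.de.debian.org/debian
dpkg --get-selections | awk '$2 == "install" { print $1 }' > /tmp/pkgs
chroot /srv/ref apt-get install $(cat /tmp/pkgs)
# dump mode/owner/group for every file on both systems and diff
find / -xdev -printf '%m %u %g %p\n' | sort -k 4 > /tmp/live.list
chroot /srv/ref find / -xdev -printf '%m %u %g %p\n' | sort -k 4 > /tmp/ref.list
diff /tmp/live.list /tmp/ref.list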

init from testing / upstart on hold

Today I discovered that apt-get wanted to install the init package. Of course I was curious why unattended-upgrades hadn't already done so. It turns out that init is only in testing/unstable, and essential there. I purged it, but apt-get keeps bugging me to install this package. I really began to wonder what is going on here, because this is a plain stable system:

  • no sources listed for backports, volatile, multimedia etc.
  • sources listed for testing and unstable
  • only packages from stable/stable-updates installed
  • sets APT::Default-Release "stable";

First I checked with aptitude:

# aptitude why init
Unable to find a reason to install init.

Ok, so why:

# apt-get dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]?

JFTR: I see a stable system bugging me to install systemd for no obvious reason. The issue might be similar! I'm still investigating. (not reproducible anymore)

Now I tried to debug this:

# apt-get -o  Debug::pkgProblemResolver="true" dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Starting
Starting 2
Investigating (0) upstart [ amd64 ] < 1.6.1-1 | 1.11-5 > ( admin )
Broken upstart:amd64 Conflicts on sysvinit [ amd64 ] < none -> 2.88dsf-41+deb7u1 | 2.88dsf-58 > ( admin )
Conflicts//Breaks against version 2.88dsf-58 for sysvinit but that is not InstVer, ignoring
Considering sysvinit:amd64 5102 as a solution to upstart:amd64 10102
Added sysvinit:amd64 to the remove list
Fixing upstart:amd64 via keep of sysvinit:amd64
Done
Done
The following NEW packages will be installed:
init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]?

Eh, upstart?

# apt-cache policy upstart
upstart:
  Installed: 1.6.1-1
  Candidate: 1.6.1-1
  Version table:
     1.11-5 0
        500 http://ftp.de.debian.org/debian/ testing/main amd64 Packages
        500 http://ftp.de.debian.org/debian/ sid/main amd64 Packages
 *** 1.6.1-1 0
        990 http://ftp.de.debian.org/debian/ stable/main amd64 Packages
        100 /var/lib/dpkg/status
# dpkg -l upstart
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version       Architecture  Description
+++-==============-=============-=============-===============================
hi  upstart        1.6.1-1       amd64         event-based init daemon

Ok, so at least one package is on hold. This is another questionable customization, but in this case easy to fix. However, I still don't understand apt-get's behaviour and how it differs from aptitude's. Can someone please enlighten me?
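Releasing the hold itself is a one-liner, shown below with dpkg; wheezy's apt-mark works as well:

echo upstart install | dpkg --set-selections
# or, alternatively:
apt-mark unhold upstart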

Customized files

This isn't really an issue, but just for completion: several files have been customized. debsums easily shows which ones:

# debsums -ac
I don't have the original list anymore - please check yourself

14 December, 2014 01:58PM by Daniel Leidert (noreply@blogger.com)


Dirk Eddelbuettel

rfoaas 0.0.4.20141212

A new version of rfoaas is now on CRAN. The rfoaas package provides an interface for R to the most excellent FOAAS service -- which provides a modern, scalable and RESTful web service for the frequent need to tell someone to eff off.

The FOAAS backend gets updated in spurts, and yesterday a few pull requests were integrated, including one from yours truly. So with that, it was time for an update to rfoaas. As the version number upstream did not change (bad, bad practice), I appended the date to the version number.

CRANberries also provides a diff to the previous release. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 December, 2014 12:20AM

December 13, 2014


Holger Levsen

20141213-on-having-fun-in-debian

On having fun in Debian

(Thanks to cardboard-crack.com for this awesome comic!)

13 December, 2014 04:11PM


Keith Packard

present-compositor

Present and Compositors

The current Present extension is pretty unfriendly to compositing managers, causing an extra frame of latency between the application's operation and the scanout buffer. Here's how I'm fixing that.

An extra frame of lag

When an application uses PresentPixmap, that operation is generally delayed until the next vblank interval. When using X without compositing, this ensures that the operation will get started in the vblank interval, and, if the rendering operation is quick enough, you'll get the frame presented without any tearing.

When using a compositing manager, the operation is still delayed until the vblank interval. That means that the CopyArea and subsequent Damage event generation don't occur until the display has already started the next frame. The compositing manager receives the damage event and constructs a new frame, but it also wants to avoid tearing, so that frame won't get displayed immediately, instead it'll get delayed until the next frame, introducing the lag.

Copy now, complete later

While away from the keyboard this morning, I had a sudden idea -- what if we performed the CopyArea and generated Damage right when the PresentPixmap request was executed, but delayed the PresentComplete event until vblank happened?

With the contents updated and damage delivered, the compositing manager can immediately start constructing a new scene for the upcoming frame. When that is complete, it can also use PresentPixmap (either directly or through OpenGL) to queue the screen update.

If it's fast enough, that will all happen before vblank and the application contents will actually appear at the desired time.

Now, at the appointed vblank time, the PresentComplete event will get delivered to the client, telling it that the operation has finished and that its contents are now on the screen. If the compositing manager was quick, this event won't even be a lie.

We'll be lying less often

Right now, the CopyArea, Damage and PresentComplete operations all happen after the vblank has passed. As the compositing manager delays the screen update until the next vblank, every single PresentComplete event will have the wrong UST/MSC values in it.

With the CopyArea happening immediately, we've a pretty good chance that the compositing manager will get the application contents up on the screen at the target time. When this happens, the PresentComplete event will have the correct values in it.

How can we do better?

The only way to do better is to have the PresentComplete event generated when the compositing manager displays the frame. I've talked about how that should work, but it's a bit twisty, and will require changes in the compositing manager to report the association between their PresentPixmap request and the applications' PresentPixmap requests.

Where's the code

I've got a set of three patches, two of which restructure the existing code without changing any behavior and a final patch which adds this improvement. Comments and review are encouraged, as always!

git://people.freedesktop.org/~keithp/xserver.git present-compositor

13 December, 2014 08:28AM

December 12, 2014

hackergotchi for Thorsten Glaser

Thorsten Glaser

WTF is Jessie; PA4 paper size

My personal APT repository now has a jessie suite – currently just a clone of the sid suite, but so, people can get on the correct “upgrade channel” already.

Besides that, the usual small updates to my metapackages, bugfixes, etc. – You might have noticed that it’s now on a (hopefully permanent) location. I’ve put a donated eee-pc from my father to good use and am now running a Debian system at home. (Fun, as I’m emeritus now, officially, and haven’t had one during my time as active uploading DD.) I’ve created a couple of cowbuilder chroots (pbuilderrc to achieve that included in the repo) and can build packages, but for i386 only (amd64 is still done on the x32 desktop at work), but, more importantly, I can build, sign and publish the repo, so it may grow. (popcon data is interesting. More than double the amount of machines I have installed that stuff on.)

Update: I’ve started writing a NEWS file and cobbled together an RSS 2.0 feed from that… still plaintext content, but at least signalling in feedreaders upon updates.


Installing gimp and inkscape, I’m asked for a default paper size by libpaper1. PA4 is still not an option, I wonder why. I also haven’t managed to get MirPorts GNU groff and Artifex Ghostscript to use that paper size, so the various PDF manpages I produce are still using DIN ISO A4, rendering e.g. Mexicans unable to print them. Help welcome.

12 December, 2014 11:56PM by MirOS Developer tg (tg@mirbsd.org)

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

a10n for l10n

The abbreviated title above means "Appreciation for Localization" :)

I wanted to say a word of thanks for the awesome work done by debian localization teams. I speak English, and my other language skills are weak. I'm lucky: most software I use is written by default in a language that I can already understand.

The debian localization teams do great work in making sure that packages in debian gets translated into many other languages, so that many more people around the world can take advantage of free software.

I was reminded of this work recently (again) with the great patches submitted to GnuPG and related packages. The changes were made by many different people, and coordinated with the debian GnuPG packaging team by David Prévot.

This work doesn't just help debian and its users. These localizations make their way back upstream to the original projects, which in turn are available to many other people.

If you use debian, and you speak a language other than english, and you want to give back to the community, please consider joining one of the localization teams. They are a great way to help out our project's top priorities: our users and free software.

Thank you to all the localizers!

(this post was inspired by gregoa's debian advent calendar. i won't be posting public words of thanks as frequently or as diligently as he does, any more than i'll be fixing the number of RC bugs that he fixes. These are just two of the ways that gregoa consistently leads the community by example. He's an inspiration, even if living up to his example is a daunting challenge.)

12 December, 2014 11:00PM by Daniel Kahn Gillmor (dkg)

Jingjie Jiang

Week1

Down the rabbit hole

Starting from this week, my OPW period officially begins.
I am thankful and very grateful for this chance. One reason is that I get the opportunity to contribute to a beneficial, working, meaningful, real-world piece of software. The other is that I can learn a great deal of experience and design philosophy from my mentors zack and matthieu. :)

This week my fix was for bug #763921. It’s basically making the folder page rendering provide more information, specifically the ls -l format. This offers information such as filetype, permissions, size, etc.

I learned some new things about “man 2 stat”, and also got more familiar (actually, confident) with front-end stuff, namely CSS.

I also got myself familiar with the Python test coverage tooling. Next week, I will try to increase the test coverage a bit. Tests are an essential part of software: they ensure correctness and robustness. And more importantly, with tests in place we can easily debug the software. The so-called 磨刀不误砍柴工: "sharpening the axe does not delay the cutting of firewood".

The trello cards for next week are interesting (in case you dunno the site, it’s here: http://trello.com).

Let’s see it.


12 December, 2014 02:37PM by sophiejjj

hackergotchi for EvolvisForge blog

EvolvisForge blog

Tip of the day: don’t use –purge when cross-grading

A surprise to see my box booting up with the default GRUB 2.x menu, followed by “cannot find a working init”.

What happened?

Well, grub:i386 and grub:x32 are distinct packages, so APT helpfully decided to purge the GRUB config. OK. Manual boot menu entry editing later, re-adding “GRUB_DISABLE_SUBMENU=y” and “GRUB_CMDLINE_LINUX="syscall.x32=y"” to /etc/default/grub, removing “quiet” again from GRUB_CMDLINE_LINUX_DEFAULT, and uncommenting “GRUB_TERMINAL=console”… and don’t forget to “sudo update-grub”. There. This should work.

On the plus side, nvidia-driver:i386 seems to work… but not with boinc-client:x32 (why, again? I swear, its GPU detection has been driving me nuts on >¾ of all systems I installed it on, already!).

On the minus side, I now have to figure out why…

tglase@tglase:~ $ sudo ifup -v tap1
Configuring interface tap1=tap1 (inet)
run-parts --exit-on-error --verbose /etc/network/if-pre-up.d
run-parts: executing /etc/network/if-pre-up.d/bridge
run-parts: executing /etc/network/if-pre-up.d/ethtool
ip addr add 192.168.0.3/255.255.255.255 broadcast 192.168.0.3 peer 192.168.0.4 dev tap1 label tap1
Cannot find device "tap1"
Failed to bring up tap1.

… this happens. This used to work before the cktN kernels.

12 December, 2014 09:04AM by Thorsten Glaser

hackergotchi for Joey Hess

Joey Hess

a brainfuck monad

Inspired by "An ASM Monad", I've built a Haskell monad that produces brainfuck programs. The code for this monad is available on hackage, so cabal install brainfuck-monad.

Here's a simple program written using this monad. See if you can guess what it might do:

import Control.Monad.BrainFuck

demo :: String
demo = brainfuckConstants $ \constants -> do
        add 31
        forever constants $ do
                add 1
                output

Here's the brainfuck code that demo generates: >+>++>+++>++++>+++++>++++++>+++++++>++++++++>++++++++++++++++++++++++++++++++<<<<<<<<[>>>>>>>>+.<<<<<<<<]

If you feed that into a brainfuck interpreter (I'm using hsbrainfuck for my testing), you'll find that it loops forever and prints out each character, starting with space (32), in ASCIIbetical order.

The implementation is quite similar to the ASM monad. The main differences are that it builds a String, and that the BrainFuck monad keeps track of the current position of the data pointer (as brainfuck lacks any sane way to manipulate its instruction pointer).

newtype BrainFuck a = BrainFuck (DataPointer -> ([Char], DataPointer, a))

type DataPointer = Integer

-- Gets the current address of the data pointer.
addr :: BrainFuck DataPointer
addr = BrainFuck $ \loc -> ([], loc, loc)

Having the data pointer address available allows writing some useful utility functions like this one, which uses the next (brainfuck opcode >) and prev (brainfuck opcode <) instructions.

-- Moves the data pointer to a specific address.
setAddr :: Integer -> BrainFuck ()
setAddr n = do
        a <- addr
        if a > n
                then prev >> setAddr n
                else if a < n
                        then next >> setAddr n
                        else return ()

Of course, brainfuck is a horrible language, designed to be nearly impossible to use. Here's the code to run a loop, but it's really hard to use this to build anything useful..

-- The loop is only entered if the byte at the data pointer is not zero.
-- On entry, the loop body is run, and then it loops when
-- the byte at the data pointer is not zero.
loopUnless0 :: BrainFuck () -> BrainFuck ()
loopUnless0 a = do
        open
        a
        close

To tame brainfuck a bit, I decided to treat data addresses 0-8 as constants, which will contain the numbers 0-8. Otherwise, it's very hard to ensure that the data pointer is pointing at a nonzero number when you want to start a loop. (After all, brainfuck doesn't let you set data to some fixed value like 0 or 1!)

I wrote a little brainfuckConstants that runs a BrainFuck program with these constants set up at the beginning. It just generates the brainfuck code for a series of ASCII art fishes: >+>++>+++>++++>+++++>++++++>+++++++>++++++++>

With the fishes^Wconstants in place, it's possible to write a more useful loop. Notice how the data pointer location is saved at the beginning, and restored inside the loop body. This ensures that the provided BrainFuck action doesn't stomp on our constants.

-- Run an action in a loop, until it sets its data pointer to 0.
loop :: BrainFuck () -> BrainFuck ()
loop a = do
    here <- addr
    setAddr 1
    loopUnless0 $ do
        setAddr here
        a

I haven't bothered to make sure that the constants are really constant, but that could be done. It would just need a Control.Monad.BrainFuck.Safe module, that uses a different monad, in which incr and decr and input don't do anything when the data pointer is pointing at a constant. Or, perhaps this could be statically checked at the type level, with type level naturals. It's Haskell, we can make it safer if we want to. ;)

So, not only does this BrainFuck monad allow writing brainfuck code using crazy haskell syntax, instead of crazy brainfuck syntax, but it allows doing some higher-level programming, building up a useful(!?) library of BrainFuck combinators and using them to generate brainfuck code you'd not want to try to write by hand.

Of course, the real point is that "monad" and "brainfuck" so obviously belonged together that it would have been a crime not to write this.

12 December, 2014 05:02AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.2

A new release 0.4.2 of RProtoBuf is now on CRAN. RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

Murray and Jeroen did almost all of the heavy lifting. Many changes were triggered by two helpful referee reports, and we are slowly getting to the point where we will resubmit a much improved paper. Full details are below.

Changes in RProtoBuf version 0.4.2 (2014-12-10)

  • Address changes suggested by anonymous reviewers for our Journal of Statistical Software submission.

  • Make Descriptor and EnumDescriptor objects subsettable with "[[".

  • Add length() method for Descriptor objects.

  • Add names() method for Message, Descriptor, and EnumDescriptor objects.

  • Clarify order of returned list for descriptor objects in as.list documentation.

  • Correct the definition of as.list for EnumDescriptors to return a proper list instead of a named vector.

  • Update the default print methods to use cat() with fill=TRUE instead of show() to eliminate the confusing [1] since the classes in RProtoBuf are not vectorized.

  • Add support for serializing function, language, and environment objects by falling back to R's native serialization with serialize_pb and unserialize_pb to make it easy to serialize into a Protocol Buffer all of the more than 100 datasets which come with R.

  • Use normalizePath instead of creating a temporary file with file.create when getting absolute path names.

  • Add unit tests for all of the above.

CRANberries also provides a diff to the previous release. More information is on the RProtoBuf page, which has a draft package vignette, a 'quick' overview vignette, and a unit test summary vignette. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 December, 2014 02:19AM

December 11, 2014

Enrico Zini

ssl-protection

SSL "protection"

In my experience with my VPS, setting up pretty much any service exposed to the internet, even something as simple as putting a calendar on my phone, requires an SSL certificate, which costs money, which needs to be given to some corporation or another.

When the only way to get protection from a threat is to give money to some big fish, I feel like I'm being forced to pay protection money.

I look forward to this.

11 December, 2014 02:35PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s fourth report about Debian Long Term Support

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November 42.5 work hours have been equally split among 3 paid contributors. Their reports are available:

  • Thorsten Alteholz did his share as usual.
  • Raphaël Hertzog worked 18 hours (catching up the remaining 4 hours of October).
  • Holger Levsen did his share but did not manage to catch up with the backlog of the previous months. As such, those unused work hours have been redispatched among other contributors for the month of December.

New paid contributors

Last month we mentioned the possibility to recruit more paid contributors to better share the work load and this has already happened: Ben Hutchings and Mike Gabriel join the list of paid contributors.

Ben, as a kernel maintainer, will obviously take care of releasing Linux security updates. We are glad to have him on board because backporting kernel fixes really needs some skills that nobody else had within the team of paid contributors.

Evolution of the situation

Compared to last month, the number of paid work hours has barely increased (we are at 45.7 hours per month), but we are in the process of adding a few more sponsors: Roche Diagnostics International AG, Misal-System, Bitfolk LTD. And we are still in contact with a couple of other companies which have announced their willingness to contribute but which are waiting for the new fiscal year.

But even with those new sponsors, we still have some way to go to reach our minimal goal of funding the equivalent of a half-time position. So consider asking your company representative to join this project!

In terms of security updates waiting to be handled, the situation looks better than last month: the dla-needed.txt file lists 27 packages awaiting an update (6 less than last month), the list of open vulnerabilities in Squeeze shows about 58 affected packages in total. Like last month, we’re a bit behind in terms of CVE triaging and there are still many packages using SSLv3 where we have no clear plan (in response to the POODLE issues).

The good side is that even though the kernel update took a large chunk of Holger's and Raphaël's time, we still managed to further reduce the backlog of security issues.

Thanks to our sponsors


11 December, 2014 11:32AM by Raphaël Hertzog

hackergotchi for Steve Kemp

Steve Kemp

An anniversary and a retirement

On this day last year we got married.

This morning my wife cooked me breakfast in bed for the second time in her life, the first being this time last year. In thanks I will cook a three course meal this evening.

 

In unrelated news the BlogSpam service will be retiring the XML/RPC API come 1st January 2015.

This means that any/all plugins which have not been updated to use the JSON API will start to fail.

Fingers crossed nobody will hate me too much..

11 December, 2014 10:56AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.6 (and 0.6.5)

A new release 0.6.6 of the digest package is now on CRAN and in Debian.

This release brings the xxHash non-cryptographic hash function by Yann Collet, thanks to several pull requests by Jim Hester. After the upload of version 0.6.5 we uncovered another lovely non-standardness of Windoze: you cannot format unsigned long long via printf() format strings. Great. Luckily Jim found a quick (and portable) fix via the inttypes.h header, and that went into the 0.6.6 release.

The release also contains an earlier extension for hmac() to also cover crc32 hashes, kindly provided by Suchen Jin.

I also made a number of small internal changes such as

  • switching (compiled) function registration to package load via the useDynLib() declaration in NAMESPACE,
  • (finally!!) formatting code to proper four-space indentation,
  • adding some documentation around Jim's pull request, and
  • adding a few GPL copyright headers.

CRANberries provides the usual summary of changes to the previous version.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 December, 2014 01:27AM

RcppRedis 0.1.3

A very minor bugfix release of RcppRedis is now on CRAN. The zcount function now returns the correct type.

Changes in version 0.1.3 (2014-12-10)

  • Bug fix setting correct return type of zcount

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppRedis page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 December, 2014 12:59AM

December 10, 2014

hackergotchi for Clint Adams

Clint Adams

In Uganda, a popular marbles game is called dool.

Sophie stood before me. “I'm leaving with that guy,” she gestured.

“Yes, I thought that would happen,” I chuckled.

She hugged me. The guy, whose name we managed to never utter, did not hug me, though he usually does. They went home together.

That was the last time I saw Sophie.

The rest of us sat down, finished our drinks, and split up. I went with Sophie's ex-girlfriend and the guy who sometimes serves as her ironic beard.

They smoked their disgusting light cigarettes, the kind with very little tobacco but lots of horrible chemicals that make me cough and hopefully fail to give me lung cancer, because watching someone else die of that was excruciating enough.

So we get to our next destination and there is a Peruvian girl sitting on a stool and shopping for shoes on her phone. I am fascinated. Phone app developers had told me that people actually did this but I thought it was just wishful thinking on their part.

The Peruvian girl, who is named something that sounds like it was uttered accidentally by Tommy Gnosis, complains to Sophie's ex-girlfriend that some guy keeps harassing her. We instinctively form a human barrier to shield her from this alleged transgressor, who, it turns out, is the pompous drug dealer with whom Sophie's ex-girlfriend is just about to conduct business.

“I'll be right back,” she says. “Hit on her.”

“What‽ Why‽” I shout after her. There is no response.

Sophie's ex-girlfriend and the drug dealer return from the darkness, having swapped possessions.

The drug dealer is a blowhard and proceeds to regale us with stories of so little interest to me that I can't even remember what they were about, but, as drug dealers are wont to do, he abuses the power of his possession to maintain the delusion that people would tolerate his presence even if he didn't have illegal commodities to sell them.

When the beard and Sophie's ex-girlfriend go out for a smoke break, I went home.

10 December, 2014 08:13PM

hackergotchi for Chris Lamb

Chris Lamb

Starting IPython automatically from zsh

Instead of a calculator, I tend to use IPython for those quotidian bits of "mental" arithmetic:

In  [1]: 17 * 22.2
Out [1]: 377.4

However, I often forget to actually start IPython, resulting in me running the following in my shell:

$ 17 * 22.2
zsh: command not found: 17

Whilst I could learn to do this maths within Zsh itself, I would prefer to dump myself into IPython instead — being able to use "_" and Python modules generally is just too useful.

After following this pattern too many times, I put together the following snippet that will detect whether I have prematurely attempted a calculation inside zsh and pretend that I ran it in IPython all along:

zmodload zsh/pcre

math_regex='^[\d\-][\d\.\s\+\*\/\-]*$'

function math_precmd() {
    # The last command succeeded, so it was not a failed calculation.
    if [ "${?}" = 0 ]
    then
        return
    fi

    # Nothing was recorded by the preexec hook.
    if [ -z "${math_command}" ]
    then
        return
    fi

    # The line was a real command that merely failed; leave it alone.
    if whence -- "$math_command" >/dev/null 2>&1
    then
        return
    fi

    # The line looks like arithmetic: replay it inside IPython.
    if [ "${math_command}" -pcre-match "${math_regex}" ]
    then
        echo
        ipython -i -c "_=${math_command}; print _"
    fi
}

function math_preexec() {
    typeset -g math_command="${1}"
}

typeset -ga precmd_functions
typeset -ga preexec_functions

precmd_functions+=math_precmd
preexec_functions+=math_preexec

For example:

lamby@seriouscat:~% 17 * 22.2
zsh: command not found: 17

377.4

In  [1]: _ + 1
Out [1]: 378.4

(Canonical version from my zshrc.d)

10 December, 2014 06:07PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Wilco!!

Wilco at The Riviera

With a bit of luck due to a collegue having a spare ticket, I managed to make it to an awesome Wilco show at The Riviera in Uptown.

This concert was part of a set of six shows. Tweedy and the band were fast, and loose, and wonderful, and totally beloved by the home crowd. A truly outstanding show, and a great evening.

Also: I should get out more often. Last blog entry about Wilco was from 2005. Ouch.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 December, 2014 12:47AM

December 09, 2014

hackergotchi for Joey Hess

Joey Hess

podcasts that don't suck, 2014 edition

  • The Memory Palace: This is the way history should be taught, but rarely is. Nate DiMeo takes past events and puts you in the middle of them, in a way that makes you empathise so much with people from the past. Each episode is a little short story, and they're often only a few minutes long. A great example is this description of when Niagara Falls stopped. I have listened to the entire back archive, and want more. Only downside is it's a looong time between new episodes.

  • The Haskell Cast: Panel discussion with a guest, there is a lot of expertise amoung them and I'm often scrambling to keep up with the barrage of ideas. If this seems too tame, check out The Type Theory Podcast instead..

  • Benjamen Walker's Theory of Everything: Only caught 2 episodes so far, but they've both been great. Short, punchy, quirky, geeky. Astoundingly good production values.

  • Lightspeed magazine and Escape Pod blur together for me. Both feature 20-50 minute science fiction short stories, and occasionally other genre fictions. They seem to get all the award-winning short stories. I sometimes fall asleep to these which can make for strange dreams. Two strongly contrasting examples: "Observations About Eggs from the Man Sitting Next to Me on a Flight from Chicago, Illinois to Cedar Rapids, Iowa" and "Pay Phobetor"

  • Serial: You probably already know about this high profile TAL spinoff. If you didn't before: You're welcome. :) Nuff said.

  • Redecentralize: Interviews with creators of decentralized internet tools like Tahoe-LAFS, Ethereum, Media Goblin, TeleHash. I just wish it went into more depth on protocols and how they work.

  • Love and Radio: This American Life squared and on acid.

  • Debian & Stuff: My friend Asheesh and that guy I ate Thai food with once in Portland in a marvelously unfocused podcast that somehow connects everything up in the end. Only one episode so far; what are you guys waiting on? :P

  • Hacker Public Radio: Anyone can upload an episode, and multiple episodes are published each week, which makes this a grab bag to pick and choose from occasionally. While mostly about Linux and Free Software, the best episodes are those that veer far afield, such as the 40 minute river swim recording featured in Wildswimming in France.

Also, out of the podcasts I listed previously, I still listen to and enjoy Free As In Freedom, Off the Hook, and the Long Now Seminars.

PS: A nice podcatcher, for the technically inclined is git-annex importfeed. Featuring list of feeds in a text file, and distributed podcatching!

09 December, 2014 07:05PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

Playing with ExtreMon

Munin is a great tool. If you can script it, you can monitor it with munin. Unfortunately, however, munin is slow; that is, it will take snapshots once every five minutes, and not look at systems in between. If you have a short load spike that takes just a few seconds, chances are pretty high munin missed it. It also comes with a great webinterfacefrontendthing that allows you to dig deep in the history of what you've been monitoring.

By the time munin tells you that your Kerberos KDCs are all down, you've probably had each of your users call you several times to tell you that they can't log in. You could use nagios or one of its brethren, but it takes about a minute before such tools will notice these things, too.

Maybe use CollectD then? Rather than check once every several minutes, CollectD will collect information every few seconds. Unfortunately, however, due to the performance requirements to accomplish that (without causing undue server load), writing scripts for CollectD is not as easy as it is for Munin. In addition, webinterfacefrontendthings aren't really part of the CollectD code (there are several, but most that I've looked at are lacking in some respect), so usually if you're using CollectD, you're missing out on some of that.

And collectd doesn't do the nagios thing of actually telling you when things go down.

So what if you could see it when things go bad?

At one customer, I came in contact with Frank, who wrote ExtreMon, an amazing tool that allows you to visualize the CollectD output as things are happening, in a full-screen fully customizable visualization of the data. The problem is that ExtreMon is rather... complex to set up. When I tried to talk Frank into helping me getting things set up for myself so I could play with it, I got a reply along the lines of...

well, extremon requires a lot of work right now... I really want to fix foo and bar and quux before I start documenting things. Oh, and there's also that part which is a dead end, really. Ask me in a few months?

which is fair enough (I can't argue with some things being suboptimal), but the code exists, and (as I can see every day at $CUSTOMER) actually works. So I decided to just figure it out by myself. After all, it's free software, so if it doesn't work I can just read the censored code.

As the manual explains, ExtreMon is a plugin-based system; plugins can add information to the "coven", read information from it, or both. A typical setup will run several of them; e.g., you'd have the from_collectd plugin (which parses the binary network protocol used by collectd) to get raw data into the coven; you'd run several aggregator plugins (which take that raw data and interpret it, allowing you do express things along the lines of "if the system's load gets above X, set load.status to warning"; and you'd run at least one output plugin so that you can actually see the damn data somewhere.

While setting up ExtreMon as is isn't as easy as one would like, I did manage to get it to work. Here's what I had to do.

You will need:

  • A monitor with a FullHD (or better) resolution. Currently, the display frontend of ExtreMon assumes it has a FullHD display at all times. Even if you have a lower resolution. Or a higher one.
  • Python3
  • OpenJDK 6 (or better)

First, we clone the ExtreMon git repository:

git clone https://github.com/m4rienf/ExtreMon.git extremon
cd extremon

There's a README there which explains the bare necessities on getting the coven to work. Read it. Do what it says. It's not wrong. It's not entirely complete, though; it fails to mention that you need to

  • install CollectD (which is required for its types.db)
  • Configure CollectD to have a line like Hostname "com.example.myhost" rather than the (usual) FQDNLookup true. This is because extremon uses the java-style reverse hostname, rather than the internet-style FQDN.

Make sure the dump.py script outputs something from collectd. You'll know when it shows something not containing "plugin" or "plugins" in the name. If it doesn't, fiddle with the #x3. lines at the top of the from_collectd file until it does. Note that ExtreMon uses inotify to detect whether a plugin has been added to or modified in its plugins directory; so you don't need to do anything special when updating things.

Next, we build the java libraries (which we'll need for the display thing later on):

cd java/extremon
mvn install
cd ../client/
mvn install

This will download half the Internet, build some java sources, and drop the precompiled .jar files in your $HOME/.m2/repository.

We'll now build the display frontend. This is maintained in a separate repository:

cd ../..
git clone https://github.com/m4rienf/ExtreMon-Display.git display
cd display
mvn install

This will download the other half of the Internet, and then fail, because Frank forgot to add a few repositories. Patch (and pull request) on github.

With that patch, it will build, but things will still fail when trying to sign a .jar file. I know of four ways to fix that particular problem:

  1. Add your passphrase for your java keystore, in cleartext, to the pom.xml file. This is a terrible idea.
  2. Pass your passphrase to maven, in cleartext, by using some command line flags. This is not much better.
  3. Ensure you use the maven-jarsigner-plugin 1.3.something or above, and figure out how the maven encrypted passphrase store thing works. I failed at that.
  4. Give up on trying to have maven sign your jar file, and do it manually. It's not that hard, after all.

If you're going with 1 through 3, you're on your own. For the last option, however, here's what you do. First, you need a key:

keytool -genkeypair -alias extremontest

after you enter all the information that keytool will ask for, it will generate a self-signed code signing certificate, valid for six months, called extremontest. Producing a code signing certificate with longer validity and/or one which is signed by an actual CA is left as an exercise to the reader.
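
For instance (not from the original post; a sketch using keytool's -validity option, which takes a number of days):

keytool -genkeypair -alias extremontest -validity 3650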

Now, we will sign the .jar file:

jarsigner target/extremon-console-1.0-SNAPSHOT.jar extremontest

There. Who needs help from the internet to sign a .jar file? Well, apart from this blog post, of course.

You will now want to copy your freshly-signed .jar file to a location served by HTTPS. Yes, HTTPS, not HTTP; ExtreMon-Display will fail on plain HTTP sites.

Download this SVG file, and open it in an editor. Find all references to be.grep as well as those to barbershop and replace them with your own prefix and hostname. Store it along with the .jar file in a useful directory.

Download this JNLP file, and store it on the same location (or you might want to actually open it with "javaws" to see the very basic animated idleness of my system). Open it in an editor, and replace any references to barbershop.grep.be by the location where you've stored your signed .jar file.

Add the chalice_in_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.1.3 of the manual (or something functionally equivalent) to your webserver's configuration. Make sure to have authentication—chalice_in_http is an input mechanism.

Add the chalice_out_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.2.1 of the manual (or something functionally equivalent) to your webserver's configuration. Authentication isn't strictly required for the output plugin, but you might wish for it anyway if you care whether the whole internet can see your monitoring.

Now run javaws https://url/x3console.jnlp to start Extremon-Display.

At this point, I got stuck for several hours. Whenever I tried to run x3mon, this java webstart thing would tell me simply that things failed. When clicking on the "Details" button, I would find an error message along the lines of "Could not connect (name must not be null)". It would appear that the Java people believe this to be a proper error message for a fairly large number of constraints, all of which are slightly related to TLS connectivity. No, it's not the keystore. No, it's not an API issue, either. Or any of the loads of other rabbit holes that I dug myself in.

Instead, you should simply make sure you have Server Name Indication enabled. If you don't, the defaults in Java will cause it to refuse to even try to talk to your webserver.

The ExtreMon github repository comes with a bunch of extra plugins; some are special-case for the place where I first learned about it (and should therefore probably be considered "examples"), others are general-purpose plugins which implement things like "is the system load within reasonable limits". Be sure to check them out.

Note also that while you'll probably be getting most of your data from CollectD, you don't actually need to do that; you can write your own plugins, completely bypassing collectd. Indeed, the from_collectd thing we talked about earlier is, simply, also a plugin. At $CUSTOMER, for instance, we have one plugin which simply downloads a file every so often and checks it against a checksum, to verify that a particular piece of nonlinear software hasn't gone astray yet again. That doesn't need collectd.

The example above will get you a small white bar, the width of which is defined by the cpu "idle" statistic, as reported by CollectD. You probably want more. The manual (chapter 4, specifically) explains how to do that.

Unfortunately, in order for things to work right, you need to pretty much manually create an SVG file with a fairly strict structure. This is the one thing which Frank tells me is a dead end and needs to be pretty much rewritten. If you don't feel like spending several days manually drawing a schematic representation of your network, you probably want to wait until Frank's finished. If you don't mind, or if you're like me and you're impatient, you'll be happy to know that you can use inkscape to make the SVG file. You'll just have to use the XML editor dialog behind Ctrl+Shift+X. A lot.

Once you've done that though, you can see when your server is down. Like, now. Before your customers call you.

09 December, 2014 06:43PM

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

MySQL Meet-up 20141208

I had an enjoyable time last night at Twitter with local MySQL DBAs and developers. We had an attendee who has no experience with SQL or programming at all. She is interested in organizing her collection of recipes and had heard a rumor that MySQL was a good tool to use for this task. She indicated that her desktop runs Windows 7. I think I’m going to encourage her to turn her concept in to a community project, as she is not the first person I’ve met who wants to organize recipes!

We were hosted by Rob at Twitter, who used to work with Lisa back before she retired. He’s a member of the site reliability team and keeps the fail whale from rearing its blubbery head.

Pizza was provided by my dear friend and long-time open source buddy Gerry Narvaja with the assistance of the folks in the kitchen at Zeek’s.

We discussed new techniques in the areas of load balancing and high availability. Five nines is no longer the thing that people talk about, instead it’s six nines. It’s a brave new world out there!

I was not the only person who was excited about one of the latest features in MariaDB / MySQL to come out of HP, the high resolution time data types.

One of the attendees is an old hand at COBOL and was asking if anyone knows where one can get a COBOL runtime environment. I’ve never thought about that before… Let me ask the googs… Looks like there’s an active project called GNU COBOL which is officially part of the GNU project.

09 December, 2014 05:31PM by C.J. Adams-Collier

Enrico Zini

radicale-davdroid

Radicale and DAVDroid

radicale and DAVdroid appeal to me. Let's try to make the whole thing work.

A self-signed SSL certificate

Generating the certificate:

    openssl req -nodes -x509 -newkey rsa:2048 -keyout cal-key.pem -out cal-cert.pem -days 3650
    [...]
    Country Name (2 letter code) [AU]:IT
    State or Province Name (full name) [Some-State]:Bologna
    Locality Name (eg, city) []:
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:enricozini.org
    Organizational Unit Name (eg, section) []:
    Common Name (e.g. server FQDN or YOUR name) []:cal.enricozini.org
    Email Address []:postmaster@enricozini.org

Installing it on my phone:

    openssl x509 -in cal-cert.pem -outform DER -out cal-cert.crt
    adb push cal-cert.crt /mnt/sdcard/
    enrico --follow-instructions http://davdroid.bitfire.at/faq/entry/importing-a-certificate

Installing radicale in my VPS

An updated radicale package, with this patch to make it work with DAVDroid:

    apt-get source radicale
    # I reviewed 063f7de7a2c7c50de5fe3f8382358f9a1124fbb6
    git clone https://github.com/Kozea/Radicale.git
    # move the python code from git into the Debian source tree
    dch -v 0.10~enrico  "Pulled in the not yet released 0.10 work from upstream"
    debuild -us -uc -rfakeroot

Install the package:

    # dpkg -i python-radicale_0.10~enrico0-1_all.deb
    # dpkg -i radicale_0.10~enrico0-1_all.deb

Create a system user to run it:

    # adduser --system --disabled-password radicale

Configure it for mod_wsgi with auth done by Apache:

    # For brevity, this is my config file with comments removed

    [storage]
    # Storage backend
    # Value: filesystem | multifilesystem | database | custom
    type = filesystem

    # Folder for storing local collections, created if not present
    filesystem_folder = /var/lib/radicale/collections

    [logging]
    config = /etc/radicale/logging

Create the wsgi file to run it:

    # mkdir /srv/radicale
    # cat <<EOT > /srv/radicale/radicale.wsgi
    import radicale
    radicale.log.start()
    application = radicale.Application()
    EOT
    # chown radicale.radicale /srv/radicale/radicale.wsgi
    # chmod 0755 /srv/radicale/radicale.wsgi

Make radicale commit to git

    # apt-get install python-dulwich
    # cd /var/lib/radicale/collections
    # git init
    # chown radicale.radicale -R /var/lib/radicale/collections/.git

Apache configuration

Add a new site to apache:

    $ cat /etc/apache2/sites-available/cal.conf
    # For brevity, this is my config file with comments removed
    <IfModule mod_ssl.c>
    <VirtualHost *:443>
            ServerName cal.enricozini.org
            ServerAdmin enrico@enricozini.org

            Alias /robots.txt /srv/radicale/robots.txt
            Alias /favicon.ico /srv/radicale/favicon.ico

            WSGIDaemonProcess radicale user=radicale group=radicale threads=1 umask=0027 display-name=%{GROUP}
            WSGIProcessGroup radicale
            WSGIScriptAlias / /srv/radicale/radicale.wsgi

            <Directory /srv/radicale>
                    # WSGIProcessGroup radicale
                    # WSGIApplicationGroup radicale
                    # WSGIPassAuthorization On
                    AllowOverride None
                    Require all granted
            </Directory>

            <Location />
                    AuthType basic
                    AuthName "Enrico's Calendar"
                    AuthBasicProvider file
                    AuthUserFile /usr/local/etc/radicale/htpasswd
                    Require user enrico
            </Location>

            ErrorLog ${APACHE_LOG_DIR}/cal-enricozini-org-error.log
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/cal-enricozini-org-access.log combined

            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/cal.pem
            SSLCertificateKeyFile /etc/ssl/private/cal.key
    </VirtualHost>
    </IfModule>

Then enable it:

    # a2ensite cal.conf
    # service apache2 reload

Create collections

DAVdroid seems to want to see existing collections on the server, so we create them:

    $ apt-get install cadaver
    $ cat <<EOT > /tmp/empty.ics
    BEGIN:VCALENDAR
    VERSION:2.0
    END:VCALENDAR
    EOT
    $ cat <<EOT > /tmp/empty.vcf
    BEGIN:VCARD
    VERSION:2.1
    END:VCARD
    EOT
    $ cadaver https://cal.enricozini.org
    WARNING: Untrusted server certificate presented for `cal.enricozini.org':
    [...]
    Do you wish to accept the certificate? (y/n) y
    Authentication required for Enrico's Calendar on server `cal.enricozini.org':
    Username: enrico
    Password: ****
    dav:/> cd enrico/contacts.vcf/
    dav:/> put /tmp/empty.vcf
    dav:/> cd ../calendar.ics/
    dav:/> put /tmp/empty.ics
    dav:/enrico/calendar.ics/> ^D
    Connection to `cal.enricozini.org' closed.
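
As a quick sanity check from the command line, a PROPFIND against a collection should now return a DAV response (a sketch; -k because of the self-signed certificate):

    $ curl -k -u enrico -X PROPFIND -H "Depth: 0" https://cal.enricozini.org/enrico/calendar.ics/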

DAVdroid configuration

  1. Add a new DAVdroid sync account
  2. Use server/username configuration
  3. For server, use the base URL under which radicale serves your collections (https://cal.enricozini.org/enrico/ in this setup)
  4. Add username and password

It should work.

Related links

09 December, 2014 03:35PM

hackergotchi for Tanguy Ortolo

Tanguy Ortolo

Using bsdtar to change an archive format

Streamable archive formats

Package icon

Archive formats such as tar(5) and cpio(5) have the advantage of being streamable, so you can use them for transferring data with pipes and remote shells, without having to store the archive in the middle of the process, for instance:

$ cd public_html/blog
$ rgrep -lF "archive" data/articles \
      | pax -w \
      | ssh newserver "mkdir public_html/blog ;
                       cd public_html/blog ;
                       pax -r"

Turning a ZIP archive into tarball

Unfortunately, many people will send you data in non-streamable archive formats such as ZIP¹. For such cases, bsdtar(1) can be useful, as it is able to convert an archive from one format to another:

$ bsdtar -cf - @archive.zip \
      | COMMAND

These arguments tell bsdtar to:

  • create an archive;
  • write it to stdout (contrary to GNU tar which defaults to stdout, bsdtar defaults to a tape device);
  • put into it the files it will find in the archive archive.zip.

The result is a tape archive, which is easier to manipulate in a stream than a ZIP archive.
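
For example, to convert a ZIP archive and unpack it on a remote host in one pass (hypothetical host and directory names):

$ bsdtar -cf - @archive.zip \
      | ssh newserver "cd destination ;
                       tar -xf -"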

Notes

  1. Some will say that although ZIP is based on a file index, it can be streamed because that index is placed at the end of the archive. In fact, that characteristic only allows streaming the creation of the archive; the full archive still has to be stored before it can be extracted.

09 December, 2014 03:00PM by Tanguy

Hideki Yamane

ThinkPad X121e with UFEI boot

I have a ThinkPad X121e and recently exchanged its HDD for an SSD, then tried to boot via UEFI but couldn't. I suspected something was wrong with the old BIOS version and that a newer one might improve the situation, so I tried to update it. Steps are below.
  1. get iso image file from Lenovo (Japanese site) (release note)
  2. put iso image into /boot
  3. add a custom grub file as /etc/grub.d/99_bios (note: I don't have a separate /boot partition; you may have to adjust the paths below if your setup differs).
    $ sudo sh -c "touch /etc/grub.d/99_bios; chmod +x /etc/grub.d/99_bios"
    and edit the /etc/grub.d/99_bios file.
    #!/bin/sh
    exec tail -n +3 $0
    # everything below is copied verbatim into grub.cfg (same trick as /etc/grub.d/40_custom)
    menuentry "BIOS Update" {
        linux16 /boot/memdisk iso
        initrd16 /boot/xxxxxxxxxx.iso
    }
  4. update the grub menu and check the resulting /boot/grub/grub.cfg file
    $ sudo update-grub
    $ tail /boot/grub/grub.cfg
  5. make sure the memdisk binary is installed (see the note after this list)
    $ sudo apt-get install syslinux
  6. just reboot and select bios update menu
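
One detail worth double-checking (my assumption, not spelled out in the steps above; the path may vary with the syslinux version): the menu entry refers to /boot/memdisk, so the memdisk binary shipped by syslinux has to be copied there first:
    $ sudo cp /usr/lib/syslinux/memdisk /boot/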
Looks okay: the firmware update succeeded, but I still couldn't boot Debian via UEFI (the installation itself went fine). Hmm...

As Matthew Garrett blogged before, the ThinkPad X121e's firmware probably doesn't allow booting any UEFI entry except "Windows Boot Manager" :-(

 ...So I have to go back to legacy BIOS. *sigh*

09 December, 2014 01:09PM by Hideki Yamane (noreply@blogger.com)

hackergotchi for Thorsten Glaser

Thorsten Glaser

The colon in the shell: corrigenda

Bernhard’s article on Plänet Debian about the “colon” command in the shell could use a clarification and a security-relevant correction.

There is, indeed, no difference between the : and true built-in commands.

Stéphane Chazelas points out that writing : ${VARNAME:=default} is bad, : "${VARNAME:=default}" is correct. Reason: someone could preset $VARNAME with, for example, /*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/*, which the unquoted expansion will then glob, potentially exhausting memory.
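
A quick way to see the difference (a sketch; xtrace shows the expanded arguments):

$ VARNAME='/*/*' sh -c 'set -x; : ${VARNAME:=default}'
+ : /bin/bash /bin/bunzip2 [...]
$ VARNAME='/*/*' sh -c 'set -x; : "${VARNAME:=default}"'
+ : /*/*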

Besides that, the article is good. Thanks Bernhard for posting it!

PS: I sometimes use the colon as comment leader in the last line of a script or function, because it, unlike the octothorpe, sets $? to 0, which can be useful.

Update: As jilles pointed out in IRC, “colon” (‘:’) is a POSIX special built-in (most importantly, it keeps assignments), whereas “true” is a regular built-in utility.

09 December, 2014 10:40AM by MirOS Developer tg (tg@mirbsd.org)

Russ Allbery

wallet 1.2

wallet is a system for secure credential management and distribution.

This release renames the duo object type to duo-pam (since it really only handles PAM integrations) and adds new object types duo-radius, duo-ldap, and duo-rdp to handle other types of Duo Security integrations.

It also adds a rename command, which can be used to rename existing objects without destroying them and recreating them. Currently, this only supports file objects.

My only role in this release was to do the final release management and a bit of release testing. The new code was implemented by Jon Robertson (who's also done a lot of work on wallet in the past).

You can get the latest release from the wallet distribution page.

09 December, 2014 05:39AM

December 08, 2014

hackergotchi for Bernhard R. Link

Bernhard R. Link

The Colon in the Shell.

I was recently asked about some construct in a shell script starting with a colon (:), leading me into a long monologue about it. Afterwards I realized I had forgotten to mention half of the nice things. So here, for your amusement, are some uses of the colon in the shell:

To find the meaning of ":" in the bash manpage[1], you have to look at the start of the SHELL BUILTIN COMMANDS section. There you find:

: [arguments]
	No effect; the command does nothing beyond expanding arguments and performing any specified redirections.  A zero exit code is returned.

If you wonder what the difference to true is: I don't know any difference (except that there is no /bin/:)

So what is the colon useful for? You can use it if you need a command that does nothing, but still is a command.

  • For example, if you want to avoid using a negation (for fear of history expansion still being on by default in an interactive bash, or wanting to support ancient shells), you cannot simply write
    if condition ; then
    	# this will be an error
    else
    	echo condition is false
    fi
    
    but need some command there, for which the colon can be used:
    if condition ; then
    	: # nothing to do in this case
    else
    	echo condition is false
    fi
    
    To confuse your reader, you can use the fact that the colon ignores its arguments and you only have normal words there:
    if condition ; then
    	: nothing to do in this case # <- this works but is not good style
    else
    	echo condition is false
    fi
    
    though I strongly recommend against it (exercise: why did I use a # there for my remark?).
  • This of course also works in other cases:
    while processnext ; do
    	:
    done
    
  • The ability to ignore the actual arguments (while still processing them, as with every command that ignores its arguments) can also be used, like in:
    : ${VARNAME:=default}
    
    which sets VARNAME to a default if unset or empty. (One could also set the default where the variable is first used, or write ${VARNAME:-default} everywhere, but this form can be more readable.)
  • In other cases you do not strictly need a command, but using the colon can clear things up, like creating or truncating a file using a redirection:
    : > /path/to/file
    

Then there is more things you can do with the colon, most I'd put under "abuse":

    misusing it for comments:
    : ====== here
    
    While it has the advantage of also showing up in -x output, the confusion it is bound to cause readers and the danger of using any shell-active character make this generally a bad idea.
  • As it is practically the same as true, it can be used as a shorter form of true. Given that true is more readable, that is a bad idea. (At least it isn't as evil as using the empty string to denote true.)
    # bad style!
    if condition ; then doit= ; doit2=: ; else doit=false ; doit2=false ; fi
    if $doit ; then echo condition true ; fi
    if $doit2 && true ; then echo condition true ; fi
    
  • Another way to scare people:
    ignoreornot=
    $ignoreornot echo This you can see.
    ignoreornot=:
    $ignoreornot echo This you cannot see.
    
    While it works, I recommend against it: easily confusing, and any > in there or $(...) will likely rain havoc over you.
  • Last and least, one can shadow the built-in colon with a different one. Only useful for obfuscation, and thus likely always evil. :(){:&:};: anyone?

This is of course not a complete list. But unless I missed something else, those are the most common cases I run into.

[1] <rant>If you never looked at it, better don't start: the bash manpage is legendary for being quite useless, as it hides all information within other information, in a quite absurd order. Unless you are looking for documentation on how to write a shell script parser; then the bash manpage is really what you want to read.</rant>

08 December, 2014 07:35PM

hackergotchi for Andrew Pollock

Andrew Pollock

[tech] A geek Dad goes to Kindergarten with a box full of Open Source and some vegetables

Zoe's Kindergarten encourages parents to come in and spend some time with the kids. I've heard reports of other parents coming in and doing baking with the kids or other activities at various times throughout the year.

Zoe and I had both wanted me to come in for something, but it had taken me until the last few weeks of the year to get my act together and do something.

I'd thought about coming in and doing some baking, but that seemed rather done to death already, and it's not like baking is really my thing, so I thought I'd do something technological. I just wracked my brains for something low effort and Kindergarten-age friendly.

The Kindergarten has a couple of eduss touch screens. They're just some sort of large-screen with a bunch of inputs and outputs on them. I think the Kindergarten mostly uses them for showing DVDs and hooking up a laptop and possibly doing something interactive on them.

As they had HDMI input, and my Raspberry Pi had HDMI output, it seemed like a no-brainer to do something using the Raspberry Pi. I also thought hooking up the MaKey MaKey to it would make for a more fun experience. I just needed to actually have it all do something, and that's where I hit a bit of a creative brick wall.

I thought I'd just hack something together where, based on different inputs on the MaKey MaKey, a picture would get displayed and a sound played. Nothing fancy at all. I really struggled to display a picture full screen in a time-efficient manner. My Pi was running Raspbian, so it was relatively simple to configure LightDM to auto-login and auto-start something. I used triggerhappy to invoke a shell script, which took care of playing a sound and showing an image.
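
For the curious, the triggerhappy side of that abandoned approach was just a bindings file along these lines (event names and paths are hypothetical):

# /etc/triggerhappy/triggers.d/makey.conf
# <event name> <value> <command>  (value 1 = key pressed)
KEY_LEFT	1	/home/pi/bin/show-and-play.sh left
KEY_RIGHT	1	/home/pi/bin/show-and-play.sh right
KEY_SPACE	1	/home/pi/bin/show-and-play.sh fire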

Playing a sound was easy. Displaying an image less so, especially if I wanted the image loaded fast. I really wanted to avoid having to execute an image viewer every time an input fired, because that would be just way too slow. I thought I'd found a suitable application in Geeqie, because it supported being managed out of band, but its problem was that it also responded to the inputs from the MaKey MaKey, so it became impossible to predictably display the right image with the right input.

So the night before I was supposed to go to Kindergarten, I was up beating my head against it, and decided to scrap it and go back to the drawing board. I was looking around for a Kindergarten-friendly game that used just the arrow keys, and I remembered the trusty old Frozen Bubble.

This ended up being absolutely perfect. It had enough flags to control automatic startup, so I could kick it straight into a dumbed-down, full-screen, one-player game (--fullscreen --solo --no-time-limit).

The kids absolutely loved it. They were cycled through in groups of four and all took turns having a little play. I brought a couple of heads of broccoli, a zucchini and a potato with me. I started out using the two broccoli as left and right and the zucchini to fire, but as it turns out, not all the kids were as good with the "left" and "right" as Zoe, so I swapped one of the broccoli for a potato and that made things a bit less ambiguous.

The responses from the kids were varied. Quite a few clearly had their minds blown and wanted to know how the broccoli was controlling something on the screen. Not all of them got the hang of the game play, but a lot did. Some picked it up after having a play and then watching other kids play and then came back for a more successful second attempt. Some weren't even sure what a zucchini was.

Overall, it was a very successful activity, and I'm glad I switched to Frozen Bubble, because what I'd originally had wouldn't have held up to the way the kids were using it. There was a lot of long holding/touching of the vegetables, which would have fired hundreds of repeat events, and just totally overwhelmed triggerhappy. Quite a few kids wanted to pick up and hold the vegetables instead of just touch them to send an event. As it was, the Pi struggled to play Frozen Bubble enough as it was.

The other lesson I learned pretty quickly was that an aluminium BBQ tray worked a lot better as the grounding point for the MaKey MaKey than having to tether an anti-static strap around each kid's ankle as they sat down in front of the screen. Once I switched to the tray, I could rotate kids through the activity much faster.

I just wish I were a bit more creative, or that there were more Kindergarten-friendly, arrow-key-driven Linux applications out there, but I was happy with what I managed to hack together with a fairly minimal amount of effort.

08 December, 2014 05:04PM

Niels Thykier

Jessie has half the number of RC bugs compared to Wheezy

In the last 24 hours, the number of RC bugs currently affecting Jessie dropped to just under half of the corresponding number for Wheezy.

There are still a lot of bugs to be fixed, but please keep up the good work. :)

08 December, 2014 07:08AM by Niels Thykier

hackergotchi for Clint Adams

Clint Adams

I don't care about the sunshine, yeah

Rhoda is guarded. She is secretly in love with her brother. When he gets a girlfriend, she finds a boyfriend. She tells no one what she's really thinking.

Rhoda likes to seize the day. In the midst of evening conversation, she will excuse herself to “use the bathroom” or “come right back”, then, within literally two minutes, she will go home with an acquaintance or stranger. Often these encounters or the aftermaths thereof do not go according to her liking, and she will make veiled remarks of hostility, should she see those people again.

Rhoda was unhappy so she changed “everything” in her life. Though multiple things remained constant, she concluded that she was, in fact, the problem.

Rhoda does not make enough money on which to live. She works part-time, and turns down all other job offers. Other people make up for her financial shortcomings so that she is not homeless and starving.

Rhoda is certain that it is more difficult to be female than male.

08 December, 2014 02:32AM

December 07, 2014

hackergotchi for Ana Beatriz Guerrero Lopez

Ana Beatriz Guerrero Lopez

Keysigning, it’s never too late

I managed to sign all the keys from DebConf14 within a month after the conference ended, but I still had a pile of keys to sign
going back to DebConf11, from several MiniDebConfs and other small events. Today it was frakking cold and I finally managed to sign something close to 100 keys. I didn't sign any key with less than 2048 bits, or any that were expired or revoked.
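
Checking those criteria can be scripted, by the way; a minimal sketch (the $KEYID is of course illustrative):

# in --with-colons output, field 2 of the "pub" record is the validity
# (e = expired, r = revoked) and field 3 is the key length in bits
gpg --with-colons --list-keys "$KEYID" | awk -F: '/^pub:/ { print $2, $3 }'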

If you got your key signed by me (or get it in the next hours), sorry it took so long!

07 December, 2014 11:30PM by ana

hackergotchi for Junichi Uekawa

Junichi Uekawa

Got nasne.

Got nasne. Got it working with Android, PlayStation Vita, and a Sony TV. Each platform seems to require some form of payment in addition to buying the Nasne itself. Something about the ecosystem and lock-in that I don't like.

07 December, 2014 10:10PM by Junichi Uekawa

Sven Hoexter

Failing with F5: Implementing HTTP session persistence

In the old days people deployed the Apache HTTPD together with mod_jk or mod_proxy_balancer in front of Apache Tomcat instances to achieve scalability and resilience. Sooner or later that setup turns out to lack features you'd nowadays like to have to run a service 24/7. So it happened that I had to jump into the sometimes cold sea of managing services fronted by F5 BigIP devices, a somewhat common load balancing solution.

Soon after you introduce load balancing, because you've grown beyond the "one fat webserver" stage, you'll need to implement some form of session persistence. In the old days you might have added a unique jvmRoute value to your mod_proxy_balancer pool and to your Tomcat configuration to match the connection to the right pool member. If you're clever you've set it on the Tomcat side as a variable that automatically builds the jvmRoute from the hostname, so you can add new instances without having to touch the Tomcat configuration. In case you were not that clever (like we were when we started) you might have added placeholders in the server.xml to set the jvmRoute value in your deployment script.

I think in a somewhat better world your first step usually is to introduce a shared session store, be it the PHP session handler backed by memcached, or Hazelcast for the Java aficionados, to avoid the persistence requirement on your balancer. But if you grow even bigger the requirement will soon reappear, because you might have to add some form of sharding and now have to send users to the same cluster serving a specific shard.

In the end all of that is not cool and your developers have to be aware of those issues. For all cases where fault tolerance within the session is not a priority we aimed for something that we can enable or disable on the balancer without application code changes.

The "let the box do its job" solution

The nicest solution from our point of view is letting the F5 inject a cookie that directs subsequent requests to the right pool member. If you like you can define the cookie name, a timeout, force the injection on replies and some other tunables. Take a look at the help for ltm persistence cookie if you'd like to see the details.
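
For illustration, creating and attaching such a profile from the shell might look roughly like this - a sketch from memory, so treat the attribute names and value formats as assumptions and verify them against the help text mentioned above:

# cookie-insert persistence profile with a custom cookie name, a timeout,
# and forced (re)injection of the cookie on every response
tmsh create ltm persistence cookie app_cookie_persist \
    cookie-name APP_STICKY expiration 1800 always-send enabled

# make it the default persistence profile of the virtual server
tmsh modify ltm virtual vs_app persist replace-all-with { app_cookie_persist }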

The "let's do it by hand but on the box" solution

Most Java frameworks provide a session cookie called JSESSIONID. Why not use that one and add a persistence table entry on the BigIP with the cookie as the lookup key? As always if there's no solution provided out of the box you can implement an iRule for it. Kind of an F5 mantra.

There are many examples out there, here is what we ended up with:

when HTTP_REQUEST {
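    # if the client already presents a session id, send it to the pool
    # member recorded for that id (uie = universal persistence, 1800s timeout)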
    if { [HTTP::cookie "JSESSIONID"] ne "" }{
        persist uie [string tolower [HTTP::cookie "JSESSIONID"]] 1800 
    }
}

when HTTP_RESPONSE {
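    # learn (or refresh) the mapping whenever the backend hands out or
    # echoes a session id in its response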
    if { [HTTP::cookie "JSESSIONID"] ne "" }{
        persist add uie [string tolower [HTTP::cookie "JSESSIONID"]] 1800
    }
}

And here is where I failed

Now imagine you're running an application that so far handles only machine-to-machine traffic without session persistence. A few months later you're approached to enable session persistence, because someone introduced a management console for the application as a web user interface. Without further ado you jump on board and provide - showcasing the flexibility of the new balancer - session persistence based on cookies injected by the BigIP. You've done it before, you enable it for the test environment, everything looks fine, you enable it for the production setup. Everything is fine (at least for now ...), the WUI works fine, sessions are persistent to the node the user hit first.

A month or two later someone from the development team approaches you again, this time asking if the balancing setup has some issues, because over 90% of the live traffic hits one of the two machines in the pool until it goes down due to a crash or a deployment. Then about 90% hit the other node, even after the crashed one is resurrected. But every time the developer tests the balancing with a "while true" loop sending several hundred requests via curl, the balancing is perfectly equal.

After being puzzled by the behaviour myself, we started to look at the live traffic with tcpdump, and suddenly it was all clear when I saw the BigIP cookie being passed around. Actually everything behaved as we had configured it. The application used a proper HTTP library, and that one implemented cookies: if you receive one you're polite and pass it back to the sender.

The original sender of course also implements HTTP with cookie support, and now, due to the nature of the traffic, we have long-lived connections between two applications passing around the same cookie - on every single HTTP request sent through this connection. And every time the balancer looks at the cookie and happily bases the routing decision on the pool member noted in this cookie.

When we tested with curl, cookies were not on our minds, because our false assumption was that the applications would not use cookies for machine-to-machine traffic. Nobody, including myself, actively remembered the decision to base the session persistence on cookies injected by the balancer.
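
In hindsight a curl test that keeps a cookie jar - which is what every proper HTTP library effectively does - would have shown the stickiness immediately. A sketch (hostname and request count are made up):

# every request is a brand-new client without cookies, so the
# balancer distributes them evenly across the pool
for i in $(seq 1 200); do curl -s -o /dev/null http://app.example.com/; done

# closer to the real clients: keep the cookies from the first response
# and replay them - now every request sticks to one pool member
curl -s -c /tmp/jar -o /dev/null http://app.example.com/
for i in $(seq 1 200); do curl -s -b /tmp/jar -o /dev/null http://app.example.com/; done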

We went on and switched to the iRule mentioned above. Luckily we already had it implemented and tested for a different use case that required not injecting additional cookies. The whole episode left us with a few questions:

  1. Can we maybe improve the change process for the balancer to handle a mixed traffic scenario?
  2. Do we need monitoring of how requests are actually balanced?
  3. Is it a wise idea to mix machine-to-machine and user-to-machine traffic on the same port?
  4. Should we maybe, if we separate things properly, place management applications on a different system?
  5. Is our documentation sufficient?

Combine 3. and 4. and ask yourself how resilient you are if someone tries to exhaust the memory of the balancer with a lot of connection attempts that have random JSESSIONID cookies. Or is it even possible to exhaust resources with faked BigIP cookies you inject? Maybe it's even a vector for something like http://events.ccc.de/congress/2011/Fahrplan/attachments/2007_28C3_Effective_DoS_on_web_application_platforms.pdf?

Quite some questions left to think about and draft answers for.

07 December, 2014 09:15PM

hackergotchi for Miriam Ruiz

Miriam Ruiz

Falling Trees, by Robert Fulghum

In the Solomon Islands in the south Pacific some villagers practice a unique form of logging. If a tree is too large to be felled with an ax, the natives cut it down by yelling at it. (Can’t lay my hands on the article, but I swear I read it.) Woodsmen with special powers creep up on a tree just at dawn and suddenly scream at it at the top of their lungs. They continue this for thirty days. The tree dies and falls over. The theory is that the hollering kills the spirit of the tree. According to the villagers, it always works.

Ah, those poor naïve innocents. Such quaintly charming habits of the jungle. Screaming at trees, indeed. How primitive. Too bad they don't have the advantages of modern technology and the scientific mind.

Me? I yell at my wife. And yell at the telephone and the lawn mower. And yell at the TV and the newspaper and my children. I’ve been known to shake my fist and yell at the sky at times.

Man next door yells at his car a lot. And this summer I heard him yell at a stepladder for most of an afternoon. We modern, urban, educated folks yell at traffic and umpires and bills and banks and machines–especially machines. Machines and relatives get most of the yelling.

Don’t know what good it does. Machines and things just sit there. Even kicking doesn’t always help. As for people, well, the Solomon Islanders may have a point. Yelling at living things does tend to kill the spirit in them. Sticks and stones may break our bones, but words will break our hearts….

by Robert Fulghum (All I Really Need To Know I Learned In Kindergarten)

07 December, 2014 08:35PM by Miry

hackergotchi for Steve Kemp

Steve Kemp

I eventually installed Debian on a new desktop.

Recently I built a new desktop system. The highlights of the hardware are a pair of 512GB SSDs, which were to be configured in software RAID for additional speed and reliability (I'm paranoid that they'd suddenly stop working one day). From power-on to the (GNOME) login prompt takes approximately 10 seconds.

I had to fight with the Debian installer to get the beast working though, as only the Jessie Beta 2 installer would recognize the SSDs, which are Crucial MX100 devices. Both the daily testing installer and the wheezy installer from my local PXE setup failed to recognize the drives at all.

The biggest pain was installing grub on the devices. I think this was mostly due to UEFI things I didn't understand. I created spare partitions for it, and messed around with grub-efi, but ultimately disabled as much of the "fancy modern stuff" as I could in the BIOS, leaving me with AHCI for the SATA SSDs, and then things worked pretty well. After working through the installer about seven times I also simplified things by partitioning and installing on only a single drive, and only configured the RAID once I had a bootable and working system.

(If you've never done that it's pretty fun. Install on one drive. Ignore the other. Then configure the second drive as part of a RAID array, but mark the other half as missing/failed/dead. Once you've done that you can create filesystems on the various /dev/mdX devices, rsync the data across, and once you boot from the system with root=/dev/md2 you can add the first drive as the missing half. Do it patiently and carefully and it'll just work :)
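
In rough commands the dance looks something like this - device names, RAID level and filesystem here are illustrative, not necessarily exactly what I used:

# build a degraded RAID1 on the second drive, with the first half "missing"
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb2
mkfs.ext4 /dev/md2

# copy the live system across
mount /dev/md2 /mnt
rsync -aHAXx / /mnt/

# point fstab and grub at root=/dev/md2, reboot, then add the original
# drive as the missing half; the kernel rebuilds the mirror in the background
mdadm --add /dev/md2 /dev/sda2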

There were some niggles though:

  • Jessie didn't give me the option of the gnome desktop I know/love. So I had to install gnome-session-fallback. I also had to mess around with ~/.config/autostart because the gnome-session-properties command (which should let you tweak the auto-starting applications) doesn't exist anymore.

  • Setting up custom keyboard-shortcuts doesn't seem to work.

  • I had to use gnome-tweak-tool to get icons, etc, on my desktop.

Because I assume the SSDs will just die at some point, and probably both on the same day, I installed and configured obnam to run backups. There's more error-checking and testing around it, but this is the core of my backup script:

#!/bin/sh

# backup "/" - minus some exceptions.
obnam backup -r /media/backups/storage --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/media /

# keep files for various periods
obnam forget --keep="30d,8w,8m" --repository /media/backups/storage

07 December, 2014 08:12AM

Hideki Yamane

security.debian.org for Asia region

Thanks to the DSA team's deployment, there is now a security.debian.org mirror in Japan. It means users who live in the Asia/Pacific region are no longer annoyed by slow download speeds for stable security updates :)

The new security.debian.org mirror is kindly hosted by Sakura Internet, Inc. (さくらインターネット株式会社), which also hosts the debian-mirror.sakura.ne.jp mirror server in Ishikari, Hokkaido (北海道石狩市).


Kudos to Sakura Internet! (twitter: CEO Kunihiro Tanaka and PR)

Note: it's not for the Pacific region, only the Asia region. Thanks to Paul Wise for pointing that out. But still good for our users :)

07 December, 2014 04:17AM by Hideki Yamane (noreply@blogger.com)

December 06, 2014

hackergotchi for Holger Levsen

Holger Levsen

20141206-distro-collaboration

Distro collaboration...

So it seems I now use h01ger.id.fedoraproject to mentor a Debian applicant for GNOME's Outreach Project for Women to improve Ubuntu's apparmor software :-D

Thanks to the Fedora Project for this nice service to the community!

06 December, 2014 07:21PM

December 05, 2014

hackergotchi for Rhonda D'Vine

Rhonda D'Vine

Fiona Apple

I do hang out in #debian-women on IRC, which shouldn't be much of a surprise after my last blog entry about my Feminist Year. And for readers of my blog it also shouldn't be much of a surprise that Music is an important part of my life. Recently, though, a colleague from Debian asked me in said IRC channel whether I could recommend some female artists or bands. That got me looking through my recommendations so far, and actually, there weren't many of those in here, unfortunately. So I definitely want to work on that, because there are so many female singers, songwriters and bands out there that I totally would like to share with the broader audience.

I want to start out with a strong female voice who was introduced to me by another strong woman—thanks for that! Fiona Apple definitely has her own style and is something special, she stands out. Here are my suggestions:

  • Hot Knife: This was the song I was introduced to her with. And I love the kettledrum rhythm and sound.
  • Criminal: Definitely a different sound, but it was the song that won her a Grammy.
  • Not About Love: Such a lovely composition. I do love the way she plays the piano.

Like always, enjoy!

05 December, 2014 11:16PM by Rhonda

hackergotchi for Joey Hess

Joey Hess

clean OS reinstalls with propellor

You have a machine someplace, probably in The Cloud, and it has Linux installed, but not to your liking. You want to do a clean reinstall, maybe switching the distribution, or getting rid of the cruft. But this requires running an installer, and it's too difficult to run d-i on remote machines.

Wouldn't it be nice if you could point a program at that machine and have it do a reinstall, on the fly, while the machine was running?

This is what I've now taught propellor to do! Here's a working configuration which will make propellor convert a system running Fedora (or probably many other Linux distros) to Debian:

testvm :: Host
testvm = host "testvm.kitenet.net"
        & os (System (Debian Unstable) "amd64")
        & OS.cleanInstallOnce (OS.Confirmed "testvm.kitenet.net")
                `onChange` propertyList "fixing up after clean install"
                        [ User.shadowConfig True
                        , OS.preserveRootSshAuthorized
                        , OS.preserveResolvConf
                        , Apt.update
                        , Grub.boots "/dev/sda"
                                `requires` Grub.installed Grub.PC
                        ]
        & Hostname.sane
        & Hostname.searchDomain
        & Apt.installed ["linux-image-amd64"]
        & Apt.installed ["ssh"]
        & User.hasSomePassword "root"

And here's a video of it in action.


It was surprisingly easy to build this. Propellor already knew how to create a chroot, so from there it basically just has to move files around until the chroot takes over from the old OS.

After the cleanInstallOnce property does its thing, propellor is running inside a freshly debootstrapped Debian system. Then we just need a few more Properties to get from there to a bootable, usable system: install grub and the kernel, turn on shadow passwords, preserve a few config files from the old OS, etc.
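
For context, a run like that is kicked off the usual propellor way - a sketch, with the hostname taken from the config above:

# from the repository containing the config; propellor compiles itself,
# pushes the configuration to the host over ssh, and runs it there as root
propellor --spin testvm.kitenet.net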

It's really astounding to me how much easier this was to build than it was to build d-i. It took years to get d-i to the point of being able to install a working system. It took me a few partial days to add this capability to propellor (it's 200 lines of code), and I've probably spent less than 30 days total developing propellor in its entirety.

So, what gives? Why is this so much easier? There are a lot of reasons:

  • Technology is so much better now. I can spin up cloud VMs for testing in seconds; I use VirtualBox to restore a system from a snapshot. So testing is much much easier. The first work on d-i was done by booting real machines, and for a while I was booting them using floppies.

  • Propellor doesn't have a user interface. The best part of d-i is preseeding, but that was mostly an accident; when I started developing d-i the first thing I wrote was main-menu (which is invisible 99.9% of the time) and we had to develop cdebconf, and tons of other UI. Probably 90% of d-i work involves the UI. Jettisoning the UI entirely thus speeds up development enormously. And propellor's configuration file blows d-i preseeding out of the water in expressiveness and flexibility.

  • Propellor has a much more principled design and implementation. Separating things into Properties, which are composable and reusable gives enormous leverage. Strong type checking and a powerful programming language make it much easier to develop than d-i's mess of shell scripts calling underpowered busybox commands etc. Properties often Just Work the first time they're tested.

  • No separate runtime. d-i runs in its own environment, which is really a little custom linux distribution. Developing linux distributions is hard. Propellor drops into a live system and runs there. So I don't need to worry about booting up the system, getting it on the network, etc etc. This probably removes another order of magnitude of complexity from propellor as compared with d-i.

This seems like the opposite of the Second System effect to me. So perhaps d-i was the second system all along?

I don't know if I'm going to take this all the way to "propellor is d-i 2.0". But in theory, all that's needed now is:

  • Teaching propellor how to build a bootable image, containing a live Debian system and propellor. (Yes, this would mean reimplementing debian-live, but I estimate 100 lines of code to do it in propellor; most of the Properties needed already exist.) That image would then be booted up and perform the installation.
  • Some kind of UI that generates the propellor config file.
  • Adding Properties to partition the disk.

cleanInstallOnce and associated Properties will be included in propellor's upcoming 1.1.0 release, and are available in git now.

Oh BTW, you could parameterize a few Properties by OS, and Propellor could be used to install not just Debian or Ubuntu, but whatever Linux distribution you want. Patches welcomed...

05 December, 2014 08:24PM

Richard Hartmann

Release Critical Bug report for Week 49

Look at that bug count!

At that pace, Jessie will happen before FOSDEM ;)

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1101 (Including 169 bugs affecting key packages)
    • Affecting Jessie: 226 (key packages: 119) That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 147 (key packages: 85) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 28 bugs are tagged 'patch'. (key packages: 22) Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 10 bugs are marked as done, but still affect unstable. (key packages: 6) This can happen due to missing builds on some architectures, for example. Help investigate!
        • 109 bugs are neither tagged patch, nor marked done. (key packages: 57) Help make a first step towards resolution!
      • Affecting Jessie only: 79 (key packages: 34) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 54 bugs are in packages that are unblocked by the release team. (key packages: 18)
        • 25 bugs are in packages that are not unblocked. (key packages: 16)

How do we compare to the Squeeze release cycle?

Week  Squeeze         Wheezy          Jessie
43    284 (213+71)    468 (332+136)   319 (240+79)
44    261 (201+60)    408 (265+143)   274 (224+50)
45    261 (205+56)    425 (291+134)   295 (229+66)
46    271 (200+71)    401 (258+143)   427 (313+114)
47    283 (209+74)    366 (221+145)   342 (260+82)
48    256 (177+79)    378 (230+148)   274 (189+85)
49    256 (180+76)    360 (216+155)   226 (147+79)
50    204 (148+56)    339 (195+144)
51    178 (124+54)    323 (190+133)
52    115 (78+37)     289 (190+99)
1     93 (60+33)      287 (171+116)
2     82 (46+36)      271 (162+109)
3     25 (15+10)      249 (165+84)
4     14 (8+6)        244 (176+68)
5     2 (0+2)         224 (132+92)
6     release!        212 (129+83)
7     release+1       194 (128+66)
8     release+2       206 (144+62)
9     release+3       174 (105+69)
10    release+4       120 (72+48)
11    release+5       115 (74+41)
12    release+6       93 (47+46)
13    release+7       50 (24+26)
14    release+8       51 (32+19)
15    release+9       39 (32+7)
16    release+10      20 (12+8)
17    release+11      24 (19+5)
18    release+12      2 (2+0)

Graphical overview of bug stats thanks to azhag:

05 December, 2014 08:24PM by Richard 'RichiH' Hartmann

Iustin Pop

Jump!

Jump!!!

What

  • Tandem skydive! or alternatively: a real "one-way" plane ticket ☺
  • Start point: ~4'250 meters above the ocean (14'000ft)
  • End point: on the beach
  • Total time: less than ten minutes
  • Time in "real" free fall: according to Wikipedia, around 12 seconds, with most of the terminal velocity being reached at about 8 seconds
  • Terminal velocity: ~190Km/h, ~120mph (again, according to Wikipedia)
  • Time in "fake" free fall until the chute opened: around one minute
  • Time spent going slowly down, with a few fun manoeuvres: don't remember, 3-5 minutes?
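
(A quick sanity check of those numbers, which I of course only did after the fact - assuming simple constant-gravity free fall up to terminal velocity:)

\[
v_t \approx 190\,\mathrm{km/h} \approx 53\,\mathrm{m/s},
\qquad
t_{\mathrm{no\ drag}} = \frac{v_t}{g} \approx \frac{53\,\mathrm{m/s}}{9.81\,\mathrm{m/s^2}} \approx 5.4\,\mathrm{s}
\]

So without air resistance you'd hit terminal velocity in about 5.4 seconds; drag makes the last stretch of the approach much slower, which is why most of it is only reached around the 8 second mark quoted above.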

Why

At the end of November, we had a team offsite planned, with lots of fun and exciting activities in a somewhat exotic location. I was quite looking forward to it, when - less than two weeks before the event - a colleague asked if anyone is interested in going skydiving as an extra activity. Without thinking too much, I said "yes" immediately, because: a) I've never done it before, and b) it sounded really cool! Other people said yes as well, so we were set up to have a really good time!

Of course, as the time counted down and we were approaching the offsite, I was thinking: OK, this sounds cool, but: will I be fine? do I have altitude sickness? All kinds of such, rather logistical, questions. In order to not think too much, I did exactly zero research on the topic (all mentions of Wikipedia above are from post-fact reading).

So, we went on the offsite - which was itself cool - and then, on the last day, right before going back, the skydive event!

How it went

The weather on the day of the jump was nice, the sky not perfectly clear, just a bit of small clouds and some haze. We waited for our turn, got the instructions for what to do (and not to do!), got hooked into the harness, prepared everything, and then boarded the plane; it needed only a very short run before taking off from the ground.

It took around ten minutes or so to get to the jump altitude, which I spent partially looking forward to the jump, partially trying to calm the various emotions I had - a very interesting mix. It was actually annoying just having to wait and wait those ten long minutes; I wished we would get to the jumping altitude faster. The altimeter on the instructor's hand was showing 4'000, then 4'100, 4'150, then he reminded me again what I need to do (or rather, not to do), and then - people were already jumping from the plane! I was third from our team to jump, and I had the opportunity to see how people were not simply "exiting" the plane, but rather - exiting and then immediately disappearing from view!

Finally we were on the edge of the door, a push and then - I'm looking down, more than two and a half miles of nothing between me and the ground. Just air and the thought - "Why did I do this?" - as I start falling. For the first around ten seconds, it's actually a free fall, gaining speed almost at standard acceleration, and the result - weightlessness - is the weirdest feeling ever: all your organs floating in your body, no compression or torsion forces. Much weirder than a roller-coaster that never ends; the most I had on roller-coasters was around one second of such acceleration, and you still are in contact with the chair or the restraints, whereas this long fall was very confusing for my brain - it felt somewhat like when you're tripping and you need to do something to regain balance, except in skydiving you can't do anything, of course. There's nothing to grab, nothing to hold on to, and you keep falling.

After ten long seconds we reached terminal velocity; phase 1 ends, and phase 2 begins, in which - while still falling - the friction of the air exactly compensates the earth's pull and one falls at a constant speed, and it's the most wonderful state ever. Like floating on the air, except that you're actually falling at almost 200kph, and yes, the closest feeling to flying, I guess. It doesn't hurt that you're no longer weightless, which means back to some level of normality.

The location of the skydive was very beautiful: the blue ocean beneath, the blue sky above, somewhere to the side the beach, and the air filling the mouth and lungs without any effort is the only sign that I'm moving really fast. The way this whole thing feels is very alien if you never jumped before, but one gets accustomed to it quite fast - and that means I got too comfortable and excited and forgot the correct position to keep my legs in. The instructor reminded me, and as I put my legs back in the correct position, which is (among others) with the soles of the feet pointing up, I felt again the air going strongly into my shoes, and a thought crossed my mind: what if the air blows off one of my shoes (the right one, more precisely) and I lose it? How do I get to the airport for the trip back? Will I look suspicious at the security check? The banality of this thought, given that I was still up in the air somewhere and travelling quite fast, was so comical that I started laughing ☺

And then, an unexpected noise, the chute opens, and I feel like someone is pulling me strongly up. Of course nobody is pulling "up", I'm just slowing down very fast on this final phase (Wikipedia says: 3 to 4g). And then, once at the new terminal velocity, the lack of wind noise and the quietness of everything around gives a different kind of awesome - more majestic and serene this time, rather than the adrenaline-filled moments before.

Because one is still up and the beach looks small, you actually feel that you're suspended in the air, almost frozen. Of course, that feeling goes away quickly when the instructor starts telling me to pull the strings, and we enter a fast spin - so fast that my body is almost horizontal again - a reminder that we're still in the air, going somewhat fast, and not in "normal" conditions.

I'm again reminded of the speed once we get closer to the earth, the people on the beach start to get bigger fast, and now we're gliding over the beach and finally land in the sand. The adventure is over, but I'm still pumped up and my body is still full of adrenaline, and I feel like I've just been in heaven - which is true, for some definitions of ☺.

The first thing I realise is that the earth is very solid. And not moving at all. Everything is very very slow… which is both good and bad. My body is confused at the very fast sequence of events, and why did everything stop??

Conclusion

I learned all about the terminal velocity, how fast you get there, and so on a day later, from Wikipedia and other sources. It helped explain and clarify the things I experienced during the dive, because there in the air I was quite confused (and my body even more so). Knowing this in advance would have spoiled the surprise, but on the other hand would have allowed me to enjoy the experience slightly better.

Looking back, I can say a few things. First, it was really awesome - not what I was expecting, much more awesome (in the real sense of awe) than I thought, but also not as easy or trivial as I believed from just seeing videos of people "floating" during their dive. Yep, worth doing, and hard to actually put in words (I tried to, but I think this rambling is more confusing than helpful).

Phase one was too long (and a bit scary), phase two was too short (and the best thing), phase three was relaxing (and just the right length).

I also wonder how it is to jump alone - without the complicated and heavy harness, without an instructor, just you up there. Oh, and the parachute. And the reserve parachute ☺, of course. Point is, this was awesome, but I was mostly a passive spectator, so I wonder what it feels like to be actually in control (as much as one can be, falling down) and responsible.

And finally, as we left the offsite location just a couple of hours after the skydive, and we had a 4½-hour flight back, I couldn't believe how slow everything was. I never experienced quite such a thing; I was sitting in this normal airplane flying high and fast, but for me everything was going in slow motion and I was bored out of my mind. Adrenaline aftershock or something like that? Also interesting!

05 December, 2014 06:02PM

Stefano Zacchiroli

Shuttleworth Foundation Flash Grant

I'm glad to announce that I've been awarded a 5,000 USD "Flash Grant" by the Shuttleworth Foundation.

Flash grants are an interesting funding model, which I've just learned about. You don't need to apply for them. Rather, you get nominated by current fellows, and then selected and approached by the foundation for funding. The grant amount is smaller than actual fellowships, but it comes with very few strings attached: furthering open knowledge (which is the foundation's core mission) and being transparent about how you use the money.

I'm lucky enough to already have a full-time job to pay my bills, and I do my Free Software activism mostly in my spare time. So I plan to use the money not to pay my bills, but rather to boost the parts of my Free Software activities that could benefit from some funding. I don't have a fully detailed budget yet but, tentatively: some money will go to fund Debsources development (by others), some into promoting my thoughts on the dark ages of Free Software, and maybe some into helping the upcoming release of Debian. I'll provide a public report at the end of the funding period (~6 months from now).

I'd like to thank the Shuttleworth Foundation for the grant and foundation's fellow Jonas Öberg for making this possible.

05 December, 2014 03:51PM

hackergotchi for Michal Čihař

Michal Čihař

Weblate 2.1

Weblate 2.1 has been released today. It comes with native Mercurial support, user interface cleanup and various other fixes.

Full list of changes for 2.1:

  • Added support for Mercurial repositories.
  • Replaced the Glyphicons icon font with Font Awesome.
  • Added icons for social authentication services.
  • Better consistency of button colors and icons.
  • Documentation improvements.
  • Various bugfixes.
  • Automatic hiding of columns in translation listing for small screens.
  • Changed configuration of filesystem paths.
  • Improved SSH keys handling and storage.
  • Improved repository locking.
  • Customizable quality checks per source string.

You can find more information about Weblate on http://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Ready-to-run appliances will soon be available in the SUSE Studio Gallery.

Weblate is also being used at https://hosted.weblate.org/ as the official translation service for phpMyAdmin, Gammu, Weblate itself and others.

If you are a free software project that would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!

05 December, 2014 02:25PM by Michal Čihař (michal@cihar.com)

Enrico Zini

the-smell-of-email

The smell of email

This was written in response to a message with a list of demotivating behaviours in email interactions, like finger-pointing, aggressiveness, resistance when being called out for misbehaving, public humiliation for mistakes, and so on.

There are times when I stumble on an instance of the set of things that were mentioned, and I think "ok, today I feel like doing some paid work rather than working on Debian".

If another day I wake up deciding to enjoy working on Debian, which I greatly do, I try and make sure that I can focus on bits of Debian where I don't stumble on any instances of the set of things that were mentioned.

Then I stumble on Gregor's GDAC and I feel like I'd happily lose one day of pay right now, and have fun with Debian.

I feel like Debian is this big open kitchen populated by a lot of people:

  • some dump shit
  • some poke the shit with a stick, contributing to the spread of the smell
  • some carefully clean up the shit, which in the short term still contributes to the smell, but makes things better in the long term
  • some prepare and cook, making a nice smell of food and NOMs
  • some try out the food and tell us how good it was

I have fun cooking and trying out the food. I have fun being around people who cook and try out the food.

The fun in the kitchen seems to be correlated to several things, one of which is that it seems to be inversely proportional to the stink.

I find this metaphor interesting, and I will start thinking about the smell of a mailing list post. I expect it should put posts into perspective, and I expect I will develop an instinct for it, so that I won't give a stinky post the same importance as a post that smells of food.

I also expect that the more I learn to tell the smell of food from the smell of shit, the more I can help cleaning it, and the more I can help telling people who repeatedly contribute to the stink to please try cooking instead, or failing that, just try and stay out of the kitchen.

05 December, 2014 10:51AM