September 18, 2014

hackergotchi for Jonathan McDowell

Jonathan McDowell

Automatic inline signing for mutt with RT

I spend a surprising amount of my time as part of keyring-maint telling people their requests are badly formed and asking them to fix them up so I can actually process them. The one that's hardest to fault anyone on is that we require requests to be inline PGP signed (i.e. the same sort of output as you get with "gpg --clearsign"). That's because RT does various pieces of unpacking[0] of MIME messages that mean PGP/MIME signatures that have passed through it are no longer verifiable. Daniel has pointed out that inline PGP is a bad idea and got as far as filing a request that RT handle PGP/MIME correctly (you need a login for that but there's a generic read-only one that's easy to figure out), but until that happens the requirement stands when dealing with Debian's RT instance. So today I finally added the following lines to my .muttrc rather than having to remember to switch Mutt to inline signing for this one special case:

send-hook . "unset pgp_autoinline; unset pgp_autosign"
send-hook rt.debian.org "set pgp_autosign; set pgp_autoinline"

i.e. by default turn off auto inlined PGP signatures, but when emailing anything at rt.debian.org turn them on.
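
For reference, the body of an inline-signed message (the "gpg --clearsign" output mentioned above) looks roughly like this; the key IDs and the signature data here are made up for illustration:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Please replace my old key 0xDEADBEEF with 0xCAFEBABE.
-----BEGIN PGP SIGNATURE-----

[ASCII-armoured signature data]
-----END PGP SIGNATURE-----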

(Most of the other things I tell people to fix are covered by the replacing keys page; I advise anyone requesting a key replacement to read that page. There's even a helpful example request template at the bottom.)

[0] RT sticks a header on the plain text portion of the mail, rather than adding a new plain text part for the header if there are multiple parts (this is something Mailman handles better). It will also re-encode received mail into UTF-8, which I can understand, but Mutt will by default try to find an 8-bit encoding that can handle the mail, because that's more efficient, which tends to mean it picks latin1.

18 September, 2014 10:00AM

hackergotchi for Jaldhar Vyas

Jaldhar Vyas

Scotland: Vote NO

        _  __<;
      </_/ _/__   
     /> >  7   )  
     ~;</7    /   
     /> /   _*<---- Perth    
     ~ </7  7~\_  
        </7     \ 
         /_ _ _ | 

If you don't, the UK will have to rename itself the K. And that's just silly.

Also vote yes on whether Alex Trebek should keep his mustache.

18 September, 2014 04:21AM

September 17, 2014

hackergotchi for Steve Kemp

Steve Kemp

If this goes well I have a new blog engine

Assuming this post shows up, I'll have successfully migrated from Chronicle to a temporary replacement.

Chronicle is awesome, and despite a lack of activity recently it is not dead. (No activity because it continued to do everything I needed for my blog.)

Unfortunately, though, there is a problem with chronicle: it suffers from a bit of a performance problem which has gradually become more and more vexing as the number of entries I have has grown.

When chronicle runs:

  • It reads each post into a complex data-structure.
  • Then it walks this multiple times.
  • Finally it outputs a whole bunch of posts.

In the general case you rebuild a blog because you've made a new entry, or received a new comment. There is some code which tries to use memcached for caching, but in general chronicle just isn't fast and it is certainly memory-bound if you have a couple of thousand entries.

Currently my test data-set contains 2000 entries and to rebuild that from a clean start takes around 4 minutes, which is pretty horrific.

So what is the alternative? What if you could parse each post once, add it to an SQLite database, and then use that for writing your output pages? Instead of the complex data-structure in RAM and the need to parse a zillion files you'd have a standard/simple SQL structure you could use to build a tag-cloud, an archive, etc. If you store the contents of the parsed blog, along with the mtime of the source file, you can update it if the entry is changed in the future, as I sometimes make typos which I only spot once I've run make steve on my blog sources.
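
That approach is easy to sketch. The following is not the actual code, just a minimal illustration of the idea; the table layout and the parse_post() helper are assumptions:

import os
import sqlite3

db = sqlite3.connect("blog.db")
db.execute("""CREATE TABLE IF NOT EXISTS posts (
                path TEXT PRIMARY KEY, mtime INTEGER,
                title TEXT, body TEXT)""")

def update_post(path):
    # Re-parse the source file only if it changed since the last build.
    mtime = int(os.path.getmtime(path))
    row = db.execute("SELECT mtime FROM posts WHERE path = ?",
                     (path,)).fetchone()
    if row and row[0] == mtime:
        return
    title, body = parse_post(path)  # hypothetical parser
    db.execute("INSERT OR REPLACE INTO posts VALUES (?, ?, ?, ?)",
               (path, mtime, title, body))
    db.commit()

The output pages (tag cloud, archive, front page) can then be generated with plain SELECT queries instead of walking the in-RAM structure.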

Not surprisingly the newer code is significantly faster if you have 2000+ posts. If you've imported the posts into SQLite the most recent entries are updated in 3 seconds. If you're starting cold, parsing each entry, inserting it into SQLite, and then generating the blog from scratch the build time is still less than 10 seconds.

The downside is that I've removed features, though obviously nothing that I use myself. Most notably the calendar view is gone, as is the ability to use date-based URLs. Less seriously there is only a single theme, which is what is used on this site.

In conclusion, I wrote something last night which is a stepping stone between the current chronicle and chronicle2, which will appear in due course.

PS. This entry was written in markdown, just because I wanted to be sure it worked.

17 September, 2014 05:23PM

NOKUBI Takatsugu

Met with a debian developer from Germany

Last weekend, I (knok), Hideki (henrich) and Yutaka (gniibe) met with John Paul Adrian Glaubitz (glaubitz).

In the past, I had met with another German developer, Jens Schmalzing (jensen), in Japan. He was a good guy, but unfortunately he passed away in 2005.

I had an old OpenPGP key with his signature on it. It is a record of his activity, but the key is weak nowadays (1024D), so I have stopped using the key but haven't issued a revocation.

Anyway, glaubitz is also a good guy, and he loves old videogame consoles. gniibe gave him five Dreamcast consoles. I brought him to SUPER POTATO, an old videogame shop. He bought some software for the Virtual Boy.

DebConf 2015 will be held in Germany; I want to go if I can.


17 September, 2014 08:22AM by knok

September 16, 2014

hackergotchi for Matthew Garrett

Matthew Garrett

ACPI, kernels and contracts with firmware

ACPI is a complicated specification - the latest version is 980 pages long. But that's because it's trying to define something complicated: an entire interface for abstracting away hardware details and making it easier for an unmodified OS to boot diverse platforms.

Inevitably, though, it can't define the full behaviour of an ACPI system. It doesn't explicitly state what should happen if you violate the spec, for instance. Obviously, in a just and fair world, no systems would violate the spec. But in the grim meathook future that we actually inhabit, systems do. We lack the technology to go back in time and retroactively prevent this, and so we're forced to deal with making these systems work.

This ends up being a pain in the neck in the x86 world, but it could be much worse. Way back in 2008 I wrote something about why the Linux kernel reports itself to firmware as "Windows" but refuses to identify itself as Linux. The short version is that "Linux" doesn't actually identify the behaviour of the kernel in a meaningful way. "Linux" doesn't tell you whether the kernel can deal with buffers being passed when the spec says it should be a package. "Linux" doesn't tell you whether the OS knows how to deal with an HPET. "Linux" doesn't tell you whether the OS can reinitialise graphics hardware.

Back then I was writing from the perspective of the firmware changing its behaviour in response to the OS, but it turns out that it's also relevant from the perspective of the OS changing its behaviour in response to the firmware. Windows 8 handles backlights differently to older versions. Firmware that's intended to support Windows 8 may expect this behaviour. If the OS tells the firmware that it's compatible with Windows 8, the OS has to behave compatibly with Windows 8.

In essence, if the firmware asks for Windows 8 support and the OS says yes, the OS is forming a contract with the firmware that it will behave in a specific way. If Windows 8 allows certain spec violations, the OS must permit those violations. If Windows 8 makes certain ACPI calls in a certain order, the OS must make those calls in the same order. Any firmware bug that is triggered by the OS not behaving identically to Windows 8 must be dealt with by modifying the OS to behave like Windows 8.

This sounds horrifying, but it's actually important. The existence of well-defined[1] OS behaviours means that the industry has something to target. Vendors test their hardware against Windows, and because Windows has consistent behaviour within a version[2] the vendors know that their machines won't suddenly stop working after an update. Linux benefits from this because we know that we can make hardware work as long as we're compatible with the Windows behaviour.

That's fine for x86. But remember when I said it could be worse? What if there were a platform that Microsoft weren't targeting? A platform where Linux was the dominant OS? A platform where vendors all test their hardware against Linux and expect it to have a consistent ACPI implementation?

Our even grimmer meathook future welcomes ARM to the ACPI world.

Software development is hard, and firmware development is software development with worse compilers. Firmware is inevitably going to rely on undefined behaviour. It's going to make assumptions about ordering. It's going to mishandle some cases. And it's the operating system's job to handle that. On x86 we know that systems are tested against Windows, and so we simply implement that behaviour. On ARM, we don't have that convenient reference. We are the reference. And that means that systems will end up accidentally depending on Linux-specific behaviour. Which means that if we ever change that behaviour, those systems will break.

So far we've resisted calls for Linux to provide a contract to the firmware in the way that Windows does, simply because there's been no need to - we can just implement the same contract as Windows. How are we going to manage this on ARM? The worst case scenario is that a system is tested against, say, Linux 3.19 and works fine. We make a change in 3.21 that breaks this system, but nobody notices at the time. Another system is tested against 3.21 and works fine. A few months later somebody finally notices that 3.21 broke their system and the change gets reverted, but oh no! Reverting it breaks the other system. What do we do now? The systems aren't telling us which behaviour they expect, so we're left with the prospect of adding machine-specific quirks. This isn't scalable.

Supporting ACPI on ARM means developing a sense of discipline around ACPI development that we simply haven't had so far. If we want to avoid breaking systems we have two options:

1) Commit to never modifying the ACPI behaviour of Linux.
2) Expose an interface that indicates which well-defined ACPI behaviour a specific kernel implements, and bump that whenever an incompatible change is made. Backward compatibility paths will be required if firmware only supports an older interface.

(1) is unlikely to be practical, but (2) isn't a great deal easier. Somebody is going to need to take responsibility for tracking ACPI behaviour and incrementing the exported interface whenever it changes, and we need to know who that's going to be before any of these systems start shipping. The alternative is a sea of ARM devices that only run specific kernel versions, which is exactly the scenario that ACPI was supposed to be fixing.
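
For illustration, the _OSI mechanism mentioned in [2] works roughly like this on the OS side: the firmware's AML probes for interface strings, and the OS answers whether it implements that behaviour set. A simplified sketch, not the actual kernel code:

#include <stdbool.h>
#include <string.h>

/* Behaviour sets this OS claims to implement.  Windows adds a new
 * string per release, which is what firmware keys its behaviour on. */
static const char *osi_strings[] = {
    "Windows 2006",     /* Vista */
    "Windows 2009",     /* Windows 7 */
    "Windows 2012",     /* Windows 8 */
};

bool acpi_osi(const char *requested)
{
    for (unsigned i = 0; i < sizeof(osi_strings) / sizeof(osi_strings[0]); i++)
        if (strcmp(requested, osi_strings[i]) == 0)
            return true;   /* contract: we will behave like that OS */
    return false;
}

Option (2) would amount to the same mechanism with Linux-specific strings, bumped on every incompatible ACPI change.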

[1] Defined by implementation, not defined by specification
[2] Windows may change behaviour between versions, but always adds a new _OSI string when it does so. It can then modify its behaviour depending on whether the firmware knows about later versions of Windows.

16 September, 2014 10:51PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

The virtues of std::unique_ptr

Among all the changes in C++11, there's one that I don't feel has received enough attention: std::unique_ptr (or just unique_ptr; I'll drop the std:: from here on). The motivation is simple; assume a function like this:

Foo *func() {
        Foo *foo = new Foo;
        if (something_complicated()) {
                // Oops, something wrong happened
                return NULL;
        }
        foo->baz();
        return foo;
}

The memory leak is obvious; if something_complicated() returns false, we presumably leak foo. The classical fix is:

Foo *func() {
        Foo *foo = new Foo;
        if (something_complicated()) {
                delete foo;
                return NULL;
        }
        foo->baz();
        return foo;
}

But this is cumbersome and easy to get wrong. Tools like valgrind have made this a lot easier to detect, but that's a poor substitute; what we want is a coding style where it's deliberately hard to make mistakes. Enter unique_ptr:

Foo *func() {
        unique_ptr<Foo> foo(new Foo);
        if (something_complicated()) {
                // unique_ptr<Foo> destructor deletes foo for us!
                return NULL;
        }
        foo->baz();
        return foo.release();
}

So we have introduced a notion of ownership; the function (or, more precisely, scope) now owns the Foo object. The only way we can leave the function and not have it destroyed is through an explicit call to release() (which returns the raw pointer and clears the unique_ptr). We have smart pointer semantics, so we can use -> just as if we had a regular pointer. In any case, the runtime overhead over a regular pointer is exactly zero.

Ownership does, of course, extend just fine to classes:

class Bar {
public:
        Bar() : foo(new Foo) {}

private:
        unique_ptr<Foo> foo;
};

In this case, the Bar object owns the Foo object, and will destroy it when it goes out of scope without having to do a manual delete in the destructor, operator= and so on; not to mention that it will make your object non-copy-constructible, so you won't get that wrong by mistake. (In this case, you could do the same just by “Foo foo;” instead of using unique_ptr, of course, modulo the copy constructor behavior and heap behavior.)

So far, we could do all of this in C++03. But C++11 includes a very helpful extra piece of the puzzle, namely move semantics. These allow us to transfer the ownership safely:

class Bar {
public:
        Bar(unique_ptr<Foo> arg_foo) : foo(move(arg_foo)) {}

private:
        unique_ptr<Foo> foo;
};

void func() {
        unique_ptr<Foo> foo(new Foo);
        // Do something with foo.
        Bar bar(move(foo));
        // ...
}

Below the Bar constructor line, foo is empty, and bar owns the Foo object! And at no point was the object without an owner; if there's no more code in the function, bar will get immediately destroyed, and the Foo object with it (since it has ownership). It also deals just fine with exception safety.
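
Returning ownership out of a function works the same way, which also removes the need for the release() call in the first example. A quick sketch:

// With C++11 move semantics the function can hand ownership to the
// caller directly; the local unique_ptr is moved out, not copied.
unique_ptr<Foo> func() {
        unique_ptr<Foo> foo(new Foo);
        if (something_complicated()) {
                return nullptr;
        }
        foo->baz();
        return foo;
}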

If you program with unique_ptr, it is genuinely very hard to get memory leaks. And it's much better than Java-style garbage collection; you don't get the RAM overhead GC needs, your objects are destroyed at predictable times, and destructors are run, so you can get reliable behavior on things like file handles, sockets and the like, without having to resort to manual cleanup in a finally block. (In a sense, it's like a refcount that can only ever be 0 or 1.)

It sounds so innocuous on paper, but all great ideas are simple. So, go forth and unique_ptr!

16 September, 2014 10:30PM

hackergotchi for Steve Kemp

Steve Kemp

Applications updating & phoning home

Personally I believe that any application packaged for Debian should not phone home, attempt to download plugins over HTTP at run-time, or update itself.

On that basis I've filed #761828.

As a project we have guidelines for what constitutes a "serious" bug, which generally boil down to a package containing a security issue, causing data-loss, or being unusable.

I'd like to propose that these kinds of tracking "things" are equally bad. If consensus could be reached, that would be a good thing for the freedom of our users.

(Ooops I slipped into "us", "our user", I'm just an outsider looking in. Mostly.)

16 September, 2014 07:42PM

Petter Reinholdtsen

Speeding up the Debian installer using eatmydata and dpkg-divert

The Debian installer could be a lot quicker. When we install more than 2000 packages in Skolelinux / Debian Edu using tasksel in the installer, unpacking the binary packages takes forever. A part of the slow I/O issue was discussed in bug #613428 about too much file system sync-ing done by dpkg, which is the package responsible for unpacking the binary packages. Other parts (like code executed by postinst scripts) might also sync to disk during installation. All this sync-ing to disk does not really make sense to me. If the machine crashes half-way through, I start over; I do not try to salvage the half-installed system. So the failure that sync-ing is supposed to protect against, a hardware or system crash, is not really relevant while the installer is running.

A few days ago, I thought of a way to get rid of all the file system sync()-ing in a fairly non-intrusive way, without the need to change the code in several packages. The idea is not new, but I have not heard anyone propose the approach using dpkg-divert before. It depends on the small and clever package eatmydata, which uses LD_PRELOAD to replace the system functions for syncing data to disk with functions doing nothing, thus allowing programs to live dangerously while speeding up disk I/O significantly. Instead of modifying the implementation of dpkg, apt and tasksel (which are the packages responsible for selecting, fetching and installing packages), it occurred to me that we could just divert the programs away and replace them with a simple shell wrapper calling "eatmydata $program $@", to get the same effect. Two days ago I decided to test the idea, and wrapped up a simple implementation for the Debian Edu udeb.
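
Stripped of the installer glue shown below, the core of the trick is just this (a sketch; dpkg-divert renames the diverted file to dpkg.distrib by default):

# Divert the real dpkg out of the way, then drop in a wrapper that
# runs it under eatmydata.
dpkg-divert --rename --add /usr/bin/dpkg
printf '#!/bin/sh\nexec eatmydata /usr/bin/dpkg.distrib "$@"\n' > /usr/bin/dpkg
chmod 755 /usr/bin/dpkg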

The effect was stunning. In my first test it reduced the running time of the pkgsel step (installing tasks) from 64 to less than 44 minutes (20 minutes shaved off the installation) on an old Dell Latitude D505 machine. I am not quite sure what the optimised time would have been, as I messed up the testing a bit, causing the debconf priority to get low enough for two questions to pop up during installation. As soon as I saw the questions I moved the installation along, but I do not know how long the questions were holding up the installation. I did some more measurements using Debian Edu Jessie, and got these results. The time measured is the time stamp in /var/log/syslog between the "pkgsel: starting tasksel" and the "pkgsel: finishing up" lines, if you want to do the same measurement yourself. In Debian Edu, the tasksel dialog does not show up, so the timing does not depend on how quickly the user handles the tasksel dialog.
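
For example, something along these lines should pull out the relevant time stamps:

grep -E 'pkgsel: (starting tasksel|finishing up)' /var/log/syslog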

Machine/setup                | Original tasksel     | Optimised tasksel     | Reduction
Latitude D505 Main+LTSP LXDE | 64 min (07:46-08:50) | <44 min (11:27-12:11) | >20 min 31%
Latitude D505 Roaming LXDE   | 57 min (08:48-09:45) | 34 min (07:43-08:17)  | 23 min 40%
Latitude D505 Minimal        | 22 min (10:37-10:59) | 11 min (11:16-11:27)  | 11 min 50%
Thinkpad X200 Minimal        | 6 min (08:19-08:25)  | 4 min (08:04-08:08)   | 2 min 33%
Thinkpad X200 Roaming KDE    | 19 min (09:21-09:40) | 15 min (10:25-10:40)  | 4 min 21%

The test is done using a netinst ISO on a USB stick, so some of the time is spent downloading packages. The connection to the Internet was 100Mbit/s during testing, so downloading should not be a significant factor in the measurement. Download typically took a few seconds to a few minutes, depending on the amount of packages being installed.

The speedup is implemented by using two hooks in Debian Installer, the pre-pkgsel.d hook to set up the diverts, and the finish-install.d hook to remove the divert at the end of the installation. I picked the pre-pkgsel.d hook instead of the post-base-installer.d hook because I test using an ISO without the eatmydata package included, and the post-base-installer.d hook in Debian Edu can only operate on packages included in the ISO. The negative effect of this is that I am unable to activate this optimization for the kernel installation step in d-i. If the code is moved to the post-base-installer.d hook, the speedup would be larger for the entire installation.

I've implemented this in the debian-edu-install git repository, and plan to provide the optimization as part of the Debian Edu installation. If you want to test this yourself, you can create two files in the installer (or in an udeb). One shell script needs to go into /usr/lib/pre-pkgsel.d/, with content like this:

#!/bin/sh
set -e
. /usr/share/debconf/confmodule
info() {
    logger -t my-pkgsel "info: $*"
}
error() {
    logger -t my-pkgsel "error: $*"
}
override_install() {
    apt-install eatmydata || true
    if [ -x /target/usr/bin/eatmydata ] ; then
        for bin in dpkg apt-get aptitude tasksel ; do
            file=/usr/bin/$bin
            # Test that the file exists and has not been diverted already.
            if [ -f /target$file ] ; then
                info "diverting $file using eatmydata"
                printf "#!/bin/sh\neatmydata $bin.distrib \"\$@\"\n" \
                    > /target$file.edu
                chmod 755 /target$file.edu
                in-target dpkg-divert --package debian-edu-config \
                    --rename --quiet --add $file
                ln -sf ./$bin.edu /target$file
            else
                error "unable to divert $file, as it is missing."
            fi
        done
    else
        error "unable to find /usr/bin/eatmydata after installing the eatmydata pacage"
    fi
}

override_install

To clean up, another shell script should go into /usr/lib/finish-install.d/ with code like this:

#! /bin/sh -e
. /usr/share/debconf/confmodule
error() {
    logger -t my-finish-install "error: $@"
}
remove_install_override() {
    for bin in dpkg apt-get aptitude tasksel ; do
        file=/usr/bin/$bin
        if [ -x /target$file.edu ] ; then
            rm /target$file
            in-target dpkg-divert --package debian-edu-config \
                --rename --quiet --remove $file
            rm /target$file.edu
        else
            error "Missing divert for $file."
        fi
    done
    sync # Flush file buffers before continuing
}

remove_install_override

In Debian Edu, I placed both code fragments in a separate script edu-eatmydata-install and call it from the pre-pkgsel.d and finish-install.d scripts.

By now you might ask if this change should get into the normal Debian installer too? I suspect it should, but am not sure the current debian-installer coordinators find it useful enough. It also depends on the side effects of the change. I'm not aware of any, but I guess we will see if the change is safe after some more testing. Perhaps there is some package in Debian depending on sync() and fsync() having effect? Perhaps it should go into its own udeb, to allow those of us wanting to enable it to do so without affecting everyone.

16 September, 2014 12:00PM

Hideki Yamane

Intel 910 SSD 400GB - $420


Intel SSD 910 (400GB, SSDPEDOX400G301) is cheaper than ever in Japan - only $420 (and its spec sheet says "Recommended Customer Price BULK: $1929.00", wow).

16 September, 2014 08:10AM by Hideki Yamane (noreply@blogger.com)

September 15, 2014

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

apt-offline 1.5

I am very pleased to announce the release of apt-offline, version 1.5.

In version 1.4, the offline bug report functionality had to be dropped. In version 1.5, it is back again. apt-offline now uses the new Debian native BTS library. Thanks to its developers, this library is much more slim and neat. The only catch is that it depends on the SOAPpy library, which currently is not part of stock Python. If you run apt-offline on Debian, you may not have to worry, as I will add a Recommends on that package. For users using it on Microsoft Windows, please ensure that you have the SOAPpy library installed. It is available on PyPI.
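
On Windows, installing it should just be a matter of something like this (assuming pip is available):

pip install SOAPpy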

The old bundled magic library has been replaced with the version of the Python magic library that Debian ships. This library is derived from the file package and is portable across almost all Unixes. For Debian users, there will be a Recommends on it too.

There were also a bunch of old, outstanding, and annoying bugs that have been fixed in this release. For a full list of changes, please refer to the git logs.

With this release, apt-offline should be in good shape for the Jessie release.

apt-offline is available on Alioth @ https://alioth.debian.org/projects/apt-offline/

15 September, 2014 06:17PM by Ritesh Raj Sarraf

hackergotchi for Keith Packard

Keith Packard

xserver-pending-fixes

A Forest of X Server Changes

We’ve got about another month left in the X server merge window for 1.17 and I’ve written a small set of fixes which haven’t been reviewed yet for merging. I thought I’d advertise them a bit and see if I couldn’t encourage a few of you to take a look and see if they’re useful, correct and complete.

All of these are in my personal X server repository:

git://people.freedesktop.org/~keithp/xserver.git

Cleaning up the X Registry

Branch: registry-fixes

I’ll bet most of you don’t even know about this code. It serves as a database mapping various X enumerations to strings to aid in diagnostics. For the security extensions, SECURITY and XSELinux, it holds names for all of the request, event and errors in the core protocol and all registered extensions. For X-Resource, it has the names of the registered resource types.

The X registry gets the request, event and error data from a file, “protocol.txt”, which is installed in /usr/lib/xorg/protocol.txt on my machine. It gets the resource names as a part of resource type allocation.

So, what’s wrong with this? Three basic things:

  1. A simple bug — protocol.txt is left open while the server runs. This consumes a file descriptor for no good reason.

  2. protocol.txt is read and parsed even if the security extensions aren’t available. This wastes time and memory.

  3. The resource names are kept even if X-Resource isn’t in use.

The fixes remove the configure options for including the registry code; these functions are only used by the above extensions, so we can tell whether to include the code based solely on whether the extensions are being built.

Getting rid of the TCP listener by default

Branch: listen-fixes

We’ve had the ‘-nolisten’ option for a while now to disable inbound TCP connections. It’s useful for security reasons, but we’ve never enabled this by default. This patch sequence provides configure options for each of the listen sockets (tcp, unix and local), leaves unix and local enabled by default and disables tcp by default.

A new option, ‘-listen’, is added which allows the user to override the -nolisten defaults in case they actually want to use TCP connections to X.
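
So a user who actually does want remote X connections would, with these patches, presumably start the server along these lines:

X :0 -listen tcp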

Glamor bug fixes

branch: glamor-fixes

This branch fixes two bugs:

  1. Scale a large pixmap down to a small pixmap. This happens when you display enormous images in a web page. Iceweasel sends the whole huge image to X and uses Render to scale it to the screen. If the image is larger than a single texture, the X server splits it up into tiles, but the code which tries to perform the merged scale is just broken. Five patches fix this.

  2. Shader-based trapezoids. This code uses area coverage to compute trapezoids. That violates the Render spec, which requires point sampling. Further, the performance of these trapezoids is lower than software (by a lot). This one patch removes the code.

Present bug fixes

branch: present-fixes

A selection of small bug fixes:

  1. Clear pending flips at CloseScreen. This removes a reference to any pending flip pixmap, allowing it to be freed. Otherwise, we’ll leak memory across server reset.

  2. Add support for PresentOptionCopy. This has been in the protocol spec for a while, and was completely trivial to implement. However, it never got done. One tiny little patch.

  3. Expose the Present API to drivers via sdksyms.sh. Until now, the present extension APIs have only been available inside the X server. This exposes them to drivers. This took a few cleanup patches first.

Use Present for Glamor XV

branch: glamor-present-xv

Painting XV to the screen should be done at vblank time to avoid tearing. Present offers vblank synchronized operations. Hooking those two together required a few new present APIs to expose the vblank functionality outside of the present code, then a bit of glamor code to hook up that new API to the XV bits.

Switching Glamor to a GL core profile context

branch: glamor-core-profile

This patch set is still in progress, but demonstrates how close we are. We’ll be requiring OpenGL 3.3 for this so that we get texture swizzling, which is required for our single channel objects.

The changes present on the branch are:

  1. Switch single channel surfaces from GL_ALPHA to GL_RED.

  2. Use vertex array objects.

  3. Switch ephyr over to using a core 3.3 profile.

Still left to do is:

  1. Switch Render code to VBOs

The core code uses VBOs everywhere, but the Render code doesn’t. This means that all Render drawing fails, which makes the resulting server not very useful.

My main objective for getting this done is to reduce memory usage by about 16MB, which is the space allocated for software rendering in Mesa in case someone does something which the hardware doesn't handle, and that can only happen with some legacy OpenGL APIs.

Please help out!

All of these friendly little patches are looking for a bit of review so that they can get merged before the 1.17 window closes.

15 September, 2014 05:14PM

hackergotchi for Thomas Goirand

Thomas Goirand

Backporting libjs-angularjs and libjs-d3 to Wheezy

If you didn’t notice, Javascript isn’t as simple as it used to be… Want to backport these two simple javascript libs? No problem. You then “just” need to backport a bunch of other packages which are build-dependencies… (and file #761670, #761672, and #761674 along the way while rebuilding…). Here’s the short list:

gyp
node-abbrev
node-ansi
node-ansi-color-table
node-archy
node-async
node-block-stream
node-combined-stream
node-contextify
node-cookie-jar
node-cssom
node-delayed-stream
node-diff
node-eyes
node-forever-agent
node-form-data
node-fstream
node-fstream-ignore
node-github-url-from-git
node-glob
node-graceful-fs
node-gyp
node-htmlparser
node-inherits
node-ini
node-jake
node-jsdom
node-json-stringify-safe
node-lockfile
node-lru-cache
node-marked
node-mime
node-minimatch
node-mkdirp
node-mute-stream
node-node-uuid
node-nopt
node-normalize-package-data
node-npmlog
node-once
node-optimist
node-osenv
node-qs
node-queue-async
node-read
node-read-package-json
node-request
node-retry
node-rimraf
node-semver
node-sha
node-sigmund
node-slide
node-smash
node-tar
node-tunnel-agent
node-uglify
node-underscore
node-utilities
node-vows
node-whic
node-which
node-wordwrap
nodejs
npm
ruby-ronn

Yes, that’s 66 packages above… And of course, backporting some ruby stuff makes sense… :)

15 September, 2014 05:13PM by admin

Vincent Sanders

NetSurf 3.2

We recently released a new version of NetSurf. This was largely to address numerous small bugs, but it also includes the persistent caching implementation I have written about previously. A release used to require the release manager (usually me) to perform a lot of manual processes, and while we had a checklist it was far too easy to miss things.

The Continuous Integration (CI) system combined with signed release tags in git has resulted in a greatly simplified process; indeed, it has become almost completely automated. The majority of the manual work is now confined to doing the tasks that require actual decision making and checking we are releasing what was intended.

By having the CI system build release binaries the project now has a much clearer and, importantly, traceable process. I can recommend such a system to any project that produces releases, especially if they release binaries for any of their targets.

I have also managed to package and upload this version of NetSurf ready for the Debian Jessie release. I would like to thank Jonathan Wiltshire for his assistance in ensuring this was a good quality package.

The release incorporates the successfully merged work of Rupinder Singh, who was our GSoC 2014 student. Rupinder mainly made improvements to our core DOM implementation and was very responsive and enthusiastic throughout his time, despite the mentor team sometimes not being available.

This work goes towards improving NetSurf in the future by ensuring the underlying features are present in our core libraries. The GSoC mentors and all project developers are pleased with the results of this year's GSoC participation and would like to thank everyone involved in making our participation possible.

Along with the good news comes a little bad:
PowerPC Mac OS X
Despite repeated calls for assistance with new hardware and Java builds, none has been forthcoming, meaning that from this release we are no longer able to ship PowerPC builds for Mac OS X.

The main issue is that the last version of Mac OS X that runs on PPC is Leopard, and there is no viable Java 1.6 port, which is necessary for our CI system to run. Additionally, the fully loaded PPC Mac mini (kindly donated to us by Mythic Beasts) had become far too slow to keep up with our builds and was causing long delays.
Bugs
NetSurf 3.2 Bug graph
We have a lot of bugs; in fact, just during this release cycle we had 30 more bugs reported than we closed. So while the new bug reporting system has been a success and our users are reporting issues when they find them, the development team is not keeping up.

The failure to keep up stems from the underlying issue of lack of manpower. We have relatively few active developers, which is especially problematic when there are many users for a platform, such as RISC OS, but the maintainer is unable to commit enough time to fixing issues.

If you would like to help make NetSurf a better browser, we are always happy to work with new contributors.

15 September, 2014 03:08PM by Vincent Sanders (noreply@blogger.com)

hackergotchi for Junichi Uekawa

Junichi Uekawa

ARM assembly.

ARM assembly. I was reading up on some docs about Unified Assembly Language (UAL), and ran into confusion. I don't seem to be able to find a comprehensive doc about what works and what doesn't. Heh.

15 September, 2014 11:51AM by Junichi Uekawa

hackergotchi for Julien Danjou

Julien Danjou

Python bad practice, a concrete case

A lot of people read up on good Python practice, and there's plenty of information about that on the Internet. Many tips are included in the book I wrote this year, The Hacker's Guide to Python. Today I'd like to show a concrete case of code that I don't consider being the state of the art.

In my last article, where I talked about my new project Gnocchi, I wrote about how I tested, hacked and then ditched whisper. Here I'm going to explain part of my thought process and a few things that raised my eyebrows when hacking this code.

Before I start, please don't get the spirit of this article wrong. It's in no way a personal attack on the authors and contributors (whom I don't know). Furthermore, whisper is a piece of code that is in production in thousands of installations, storing metrics for years. While I can argue that I consider the code not to be following best practice, it definitely works well enough and is valuable to a lot of people.

Tests

The first thing that I noticed when trying to hack on whisper is the lack of tests. There's only one file containing tests, named test_whisper.py, and the coverage it provides is pretty low. One can check that using the coverage tool.

$ coverage run test_whisper.py
...........
----------------------------------------------------------------------
Ran 11 tests in 0.014s
 
OK
$ coverage report
Name           Stmts   Miss  Cover
----------------------------------
test_whisper     134      4    97%
whisper          584    227    61%
----------------------------------
TOTAL            718    231    67%


While one would think that 61% is "not so bad", taking a quick peek at the actual test code shows that the tests are incomplete. What I mean by incomplete is that they, for example, use the library to store values into a database, but they never check if the results can be fetched and if the fetched results are accurate. Here's a good reason one should never blindly trust the test coverage percentage as a quality metric.

When I tried to modify whisper, as the tests do not check the entire cycle of the values fed into the database, I ended up making wrong changes but still had the tests pass.

No PEP 8, no Python 3

The code doesn't respect PEP 8. A run of flake8 + hacking shows 732 errors… While this does not impact the code itself, it's more painful to hack on than most Python projects.

The hacking tool also shows that the code is not Python 3 ready as there is usage of Python 2 only syntax.

A good way to fix that would be to set up tox and add a few targets for PEP 8 checks and Python 3 tests. Even if the test suite is not complete, starting by having flake8 run without errors and the few unit tests working with Python 3 should put the project in a better light.
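
A minimal tox.ini for that could look something like this (a sketch; the environment list and the flake8 invocation are just one plausible setup):

[tox]
envlist = py27,py34,pep8

[testenv]
commands = python test_whisper.py

[testenv:pep8]
deps = flake8
commands = flake8 whisper.py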

Not using idiomatic Python

A lot of the code could be simplified by using idiomatic Python. Let's take a simple example:

def fetch(path,fromTime,untilTime=None,now=None):
  fh = None
  try:
    fh = open(path,'rb')
    return file_fetch(fh, fromTime, untilTime, now)
  finally:
    if fh:
      fh.close()


That piece of code could be easily rewritten as:

def fetch(path,fromTime,untilTime=None,now=None):
  with open(path, 'rb') as fh:
    return file_fetch(fh, fromTime, untilTime, now)


This way, the function actually looks so simple that one can even wonder why it should exist – but why not.

Usage of loops could also be made more Pythonic:

for i,archive in enumerate(archiveList):
  if i == len(archiveList) - 1:
    break


could be actually:

for i, archive in enumerate(itertools.islice(archiveList, len(archiveList) - 1)):


That reduces the code size and makes it easier to read through the code.

Wrong abstraction level

Also, one thing that I noticed in whisper, is that it abstracts its features at the wrong level.

Take the create() function, it's pretty obvious:

def create(path,archiveList,xFilesFactor=None,aggregationMethod=None,sparse=False,useFallocate=False):
  # Set default params
  if xFilesFactor is None:
    xFilesFactor = 0.5
  if aggregationMethod is None:
    aggregationMethod = 'average'

  #Validate archive configurations...
  validateArchiveList(archiveList)

  #Looks good, now we create the file and write the header
  if os.path.exists(path):
    raise InvalidConfiguration("File %s already exists!" % path)
  fh = None
  try:
    fh = open(path,'wb')
    if LOCK:
      fcntl.flock( fh.fileno(), fcntl.LOCK_EX )

    aggregationType = struct.pack( longFormat, aggregationMethodToType.get(aggregationMethod, 1) )
    oldest = max([secondsPerPoint * points for secondsPerPoint,points in archiveList])
    maxRetention = struct.pack( longFormat, oldest )
    xFilesFactor = struct.pack( floatFormat, float(xFilesFactor) )
    archiveCount = struct.pack(longFormat, len(archiveList))
    packedMetadata = aggregationType + maxRetention + xFilesFactor + archiveCount
    fh.write(packedMetadata)
    headerSize = metadataSize + (archiveInfoSize * len(archiveList))
    archiveOffsetPointer = headerSize

    for secondsPerPoint,points in archiveList:
      archiveInfo = struct.pack(archiveInfoFormat, archiveOffsetPointer, secondsPerPoint, points)
      fh.write(archiveInfo)
      archiveOffsetPointer += (points * pointSize)

    #If configured to use fallocate and capable of fallocate use that, else
    #attempt sparse if configure or zero pre-allocate if sparse isn't configured.
    if CAN_FALLOCATE and useFallocate:
      remaining = archiveOffsetPointer - headerSize
      fallocate(fh, headerSize, remaining)
    elif sparse:
      fh.seek(archiveOffsetPointer - 1)
      fh.write('\x00')
    else:
      remaining = archiveOffsetPointer - headerSize
      chunksize = 16384
      zeroes = '\x00' * chunksize
      while remaining > chunksize:
        fh.write(zeroes)
        remaining -= chunksize
      fh.write(zeroes[:remaining])

    if AUTOFLUSH:
      fh.flush()
      os.fsync(fh.fileno())
  finally:
    if fh:
      fh.close()


The function is doing everything: checking if the file doesn't exist already, opening it, building the structured data, writing this, building more structure, then writing that, etc.

That means that the caller has to give a file path, even if it just wants a whisper data structure to store itself elsewhere. StringIO() could be used to fake a file handler, but it will fail if the call to fcntl.flock() is not disabled – and it is inefficient anyway.

There's a lot of other functions in the code, such as setAggregationMethod(), that mix the handling of the files – even doing things like os.fsync() – with manipulating structured data. This is definitely not a good design, especially for a library, as it turns out that reusing the functions in different contexts is nearly impossible.

Race conditions

There are race conditions, for example in create() (see added comment):

if os.path.exists(path):
  raise InvalidConfiguration("File %s already exists!" % path)
fh = None
try:
  # TOO LATE I ALREADY CREATED THE FILE IN ANOTHER PROCESS YOU ARE GOING TO
  # FAIL WITHOUT GIVING ANY USEFUL INFORMATION TO THE CALLER :-(
  fh = open(path,'wb')


That code should be:

try:
  fh = os.fdopen(os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL), 'wb')
except OSError as e:
  if e.errno == errno.EEXIST:
    raise InvalidConfiguration("File %s already exists!" % path)


to avoid any race condition.

Unwanted optimization

We saw earlier the fetch() function that is barely useful, so let's take a look at the file_fetch() function that it's calling.

def file_fetch(fh, fromTime, untilTime, now = None):
  header = __readHeader(fh)
  [...]


The first thing the function does is to read the header from the file handler. Let's take a look at that function:

def __readHeader(fh):
  info = __headerCache.get(fh.name)
  if info:
    return info

  originalOffset = fh.tell()
  fh.seek(0)
  packedMetadata = fh.read(metadataSize)

  try:
    (aggregationType,maxRetention,xff,archiveCount) = struct.unpack(metadataFormat,packedMetadata)
  except:
    raise CorruptWhisperFile("Unable to read header", fh.name)
  [...]


The first thing the function does is to look into a cache. Why is there a cache?

It actually caches the header with an index based on the file path (fh.name). Except that if one, for example, decides not to use a file and cheats using StringIO, then the handle does not have any name attribute. So this code path will raise an AttributeError.

One has to set a fake name manually on the StringIO instance, and it must be unique so nobody messes with the cache:

import StringIO
 
packedMetadata = <some source>
fh = StringIO.StringIO(packedMetadata)
fh.name = "myfakename"
header = __readHeader(fh)


The cache may actually be useful when accessing files, but it's definitely useless when not using files. And it's not necessarily true that the complexity (even if small) that the cache adds is worth it. I doubt most whisper-based tools are long-running processes, so the cache that is really used when accessing the files is the one handled by the operating system kernel, and that one is going to be much more efficient anyway, and shared between processes. There's also no expiry of that cache, which could end up with tons of memory used and wasted.

Docstrings

None of the docstrings are written in a parsable syntax like Sphinx. This means you cannot generate documentation in a nice format that a developer using the library could read easily.

The documentation is also not up to date:

def fetch(path,fromTime,untilTime=None,now=None):
  """fetch(path,fromTime,untilTime=None)
  [...]
  """

def create(path,archiveList,xFilesFactor=None,aggregationMethod=None,sparse=False,useFallocate=False):
  """create(path,archiveList,xFilesFactor=0.5,aggregationMethod='average')
  [...]
  """


This is something that could be avoided if a proper format were picked for the docstrings. A tool could then be used to notice when there's a divergence between the actual function signature and the documented one, like a missing argument.

Duplicated code

Last but not least, there's a lot of code that is duplicated in the scripts provided by whisper in its bin directory. These scripts should be very lightweight and use the console_scripts facility of setuptools, but they actually contain a lot of (untested) code. Furthermore, some of that code is partially duplicated from the whisper.py library, which is against DRY.

Conclusion

There are a few more things that made me stop considering whisper, but these are part of the whisper features, not necessarily code quality. One can also point out that the code is very condensed and hard to read, and that's a more general problem about how it is organized and abstracted.

A lot of these defects are actually points that made me start writing The Hacker's Guide to Python a year ago. Running into this kind of code makes me think it was a really good idea to write a book of advice on writing better Python code!

A book I wrote talking about designing Python applications, state of the art, advice to apply when building your application, various Python tips, etc. Interested? Check it out.

15 September, 2014 11:09AM by Julien Danjou

hackergotchi for Cyril Brulebois

Cyril Brulebois

Freelance Debian consultant

I’m not used to talking about my day job but here’s an exception.

Over the past few years I worked in two startups (3 years each). It was nice to spend time in different areas: one job was mostly about research and development in a Linux cluster environment; the other one was about maintaining a highly-customized, Linux-based operating system, managing a small support team, and keeping up with developments in IT security.

In the meanwhile I’ve reached a milestone: 10 years with Debian. I had been wondering for a few months whether I could try my luck going freelance, becoming a Debian consultant. I finally decided to go ahead and started in August!

The idea is to lend a hand for various Debian-related things like systems administration, development/debugging, packaging/repository maintenance, or Debian Installer support, be it one-shot or on a regular basis. I didn’t think about trainings/workshops at first but sharing knowledge is something I’ve always liked, even if I didn’t become a teacher.

For those interested, details can be found on my website: https://mraw.org/.

Of course this doesn’t mean I’m going to put an end to my volunteer activities within Debian, especially as a Debian Installer release manager. Quite the contrary in fact! See the August and September debian-boot@ archives, which have been busy months. :)

15 September, 2014 09:20AM

hackergotchi for Gergely Nagy

Gergely Nagy

Looking ahead

A little more than a year ago, I started working as a syslog-ng OSE developer full-time. That was a tremendously important milestone in my career, as one of the goals I wanted to achieve in life - to work on free software for a living - became a reality. Rocket boots were fired up, and we accomplished quite a lot in the past year, and I'm very, very proud of the work we did - we, the whole community. I enjoyed every bit of it, but as it turns out, some of the other desires I wish to pursue, and new challenges I am looking for, will lead me in a different direction. At the end of August, I handed in my resignation, and this past Friday was my last work day at BalaBit.

There are a few questions that will likely be asked, and I'll try my best to answer them beforehand. Questions such as: What will happen to syslog-ng?, Who will be the new maintainer?, How does this affect your Debian work?, and so on, and so forth.

syslog-ng

What will happen to syslog-ng?

After careful consideration, with the syslog-ng team at BalaBit, we decided that they will take over OSE-related responsibilities as well. They will do releases, they will engage the people on GitHub and the mailing list, they will take care of the @sngOSE twitter account.

During the past few months, we've been working on pushing the team closer to the community: we moved issues to GitHub, the team submitted pull requests, and in general, we moved closer to each other in every possible way.

While they may not have the open source maintainer experience I had, they are all capable folk, and will quickly learn on the job. The team taking over also has the advantage of having more manpower, and better collaboration between the premium edition and OSE.

Who will be the syslog-ng maintainer?

There will be no single bottleneck. This has both good and bad implications, but I believe the good ones outweigh the disadvantages.

What about the roadmap? When will 3.6.1 happen?

The roadmap is already laid out for 3.6, but it is the team's judgement when it will be released. The latest 3.6.0beta2 was released together with the team, too. I expect there will be delays (I planned 3.6.1 to be released on September 27th), but not much; a few weeks, perhaps.

What will your involvement in syslog-ng development be?

With changing jobs, my involvement will drastically decrease. I will remain a syslog-ng user, and I will continue packaging it for Debian and Ubuntu, and will keep my unofficial repository running. I do not see myself contributing much, perhaps bug reports, opinions and an occasional idea.

Nevertheless, my expertise is still considerable, and I will help the team and the community in whatever way I can - but I will be severely time constrained.

Debian

While BalaBit were very free software friendly, and allowed me to work on Debian tasks from time to time, my other activities took up an increasingly big chunk of my paid time, and I had to cut back a little. At the new job, things will be a bit different: while I won't have as much time to do Debian work on the job, I will have a lot more free time, in which I hope to do more for Debian.

I will - as promised above - continue maintaining syslog-ng, and all other packages I currently maintain. Apart from those, I wish to make myself more useful within the Clojure and Hy teams.

Miscellaneous

Where to are you moving?

This is something I will answer in due time. For now, I will have two weeks off, which I wish to spend my way, picking up projects I neglected for far too long.

15 September, 2014 08:40AM by Gergely Nagy

hackergotchi for Martin Pitt

Martin Pitt

autopkgtest 3.5: Reboot support, Perl/Ruby implicit tests

Last week’s autopkgtest 3.5 release (in Debian sid and Ubuntu Utopic) brings several new features which I’d like to announce.

Tests that reboot

For testing low-level packages like init or the kernel it is sometimes desirable to reboot the testbed in the middle of a test. For example, I added a new boot_and_services systemd autopkgtest which configures grub to boot with systemd as pid 1, reboots, and then checks that the most important services like lightdm, D-BUS, NetworkManager, and cron come up as expected. (This test will be expanded a lot in the future to cover other areas like the journal, logind, etc.)

In a testbed which supports rebooting (currently only QEMU) your test will now find an “autopkgtest-reboot” command which the test calls with an arbitrary “marker” string. autopkgtest will then reboot the testbed, save/restore any files it needs to (like the test’s file tree or previously created artifacts), and then re-run the test with ADT_REBOOT_MARK=mymarker.
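
A test skeleton using this could look roughly like the following; the marker name and the checks are made up, see the README section mentioned below for the authoritative example:

#!/bin/sh
set -e
case "$ADT_REBOOT_MARK" in
    "")
        # First run: do the setup that needs a reboot, then ask
        # autopkgtest to reboot and re-run us with the marker set.
        echo "configuring boot loader"
        autopkgtest-reboot mymarker
        ;;
    mymarker)
        # Second run, after the reboot: verify the system state.
        echo "checking services after reboot"
        ;;
esac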

The new “Reboot during a test” section in README.package-tests explains this in detail with an example.

Implicit test metadata for similar packages

The Debian pkg-perl team recently discussed how to add package tests to the ~3,000 Perl packages. For most of these the test metadata looks pretty much the same, so they created a new pkg-perl-autopkgtest package which centralizes the logic. autopkgtest 3.5 now supports an implicit debian/tests/control file to avoid having to modify several thousand packages with exactly the same file.

An initial run already looked quite promising: 65% of the packages pass their tests. There will be a few iterations to identify common failures and fix those in pkg-perl-autopkgtest and autopkgtest itself.

There is still some discussion about how implicit test control files go together with the DEP-8 specification, as other runners like sadt do not support them yet. Most probably we’ll declare those packages XS-Testsuite: autopkgtest-pkg-perl instead of the usual autopkgtest.
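
In practice, declaring a package that way should amount to a single field in the debian/control source stanza, something like this (the package name is made up):

Source: libfoo-bar-perl
XS-Testsuite: autopkgtest-pkg-perl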

In the same vein, Debian’s Ruby maintainer (Antonio Terceiro) added implicit test control support for Ruby packages. We haven’t done a mass test run with those yet, but their structure will probably look very similar.

15 September, 2014 08:23AM by pitti

September 14, 2014

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2014/34-37

the perl 5.20 transition is over, debconf14 is over, so I should have more time for RC bugs? yes & no: I fixed some, but only in "our" (as in: pkg-perl) packages:

  • #711418 – src:libanyevent-dbi-perl: "libanyevent-dbi-perl: FTBFS: Failed test 'Using an unknown function results in error'"
    add patch for newer SQLite (pkg-perl)
  • #756566 – libxml-dt-perl: "libxml-dt-perl: Insecure use of temporary files (CVE-2014-5260)"
    upload new upstream release (pkg-perl)
  • #759838 – src:padre: "padre: FTBFS: Failed test 'no warnings'"
    fix 2/3 of the test failures in git, last one fixed by dod (pkg-perl)
  • #759942 – src:cpanminus: "cpanminus: FTBFS: Can't write to cpanm home '/sbuild-nonexistent/.cpanm': You should fix it with chown/chmod first."
    set HOME once more in debian/rules (pkg-perl)
  • #759964 – src:libhtml-formhandler-model-dbic-perl: "libhtml-formhandler-model-dbic-perl: FTBFS: dh_auto_test: make -j1 test returned exit code 2"
    upload new upstream release (pkg-perl)
  • #761312 – libdbix-class-resultset-recursiveupdate-perl: "libdbix-class-resultset-recursiveupdate-perl: missing dependency on liblist-moreutils-perl"
    add missing dependency (pkg-perl)
  • #761313 – libgeo-google-mapobject-perl: "libgeo-google-mapobject-perl: missing dependency on libjson-perl"
    add missing dependency (pkg-perl)
  • #761315 – libfile-read-perl: "libfile-read-perl: missing dependency on libfile-slurp-perl"
    add missing dependency (pkg-perl)
  • #761319 – libpod-wordlist-hanekomu-perl: "libpod-wordlist-hanekomu-perl: missing dependency on libtest-spelling-perl"
    add missing dependency (pkg-perl)
  • #761526 – src:liblocale-maketext-gettext-perl: "liblocale-maketext-gettext-perl: FTBFS: Tests failures"
    skip test which needs internet and a writable $HOME (pkg-perl)
  • #761558 – src:libposix-strptime-perl: "libposix-strptime-perl: FTBFS: Tests failures"
    skip test which needs internet and a writable $HOME (pkg-perl)

14 September, 2014 09:13PM

September 13, 2014

hackergotchi for Ben Armstrong

Ben Armstrong

Bluff Wilderness Trail Hike, Summer 2014

Happy to be back from our yearly hike with my friend, Ryan Neily, on the Bluff Wilderness Trail. We’re proud of our achievement, hiking all four loops. Including the trip to and from the head of the trail, that was 30 km in all. Exhausting, but well worth it.

On the trip we bumped into one of the people from WRWEO who helps to maintain the trail, and stopped for a bit to swap stories and tips about hiking the trail. Kudos to Nancy for helping keep this trail beautiful and accessible. We really appreciate the tireless work of this organization, and the thought they’ve put into it. It’s a treasure!

13 September, 2014 10:30PM by Ben Armstrong

hackergotchi for Keith Packard

Keith Packard

AltOS 1.5

AltOS 1.5 — EasyMega support, features and bug fixes

Bdale and I are pleased to announce the release of AltOS version 1.5.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a major release of AltOS, including support for our new EasyMega board and a host of new features and bug fixes.

AltOS Firmware — EasyMega added, new features and fixes

Our new flight computer, EasyMega, is a TeleMega without any radios:

  • 9 DoF IMU (3 axis accelerometer, 3 axis gyroscope, 3 axis compass).

  • Orientation tracking using the gyroscopes (and quaternions, which are lots of fun!)

  • Four fully-programmable pyro channels, in addition to the usual apogee and main channels.

AltOS Changes

We’ve made a few improvements in the firmware:

  • The APRS secondary station identifier (SSID) is now configurable by the user. By default, it is set to the last digit of the serial number.

  • Continuity of the four programmable pyro channels on EasyMega and TeleMega is now indicated via the beeper. Four tones are sent out after the continuity indication for the apogee and main channels with high tones indicating continuity and low tones indicating an open circuit.

  • Configurable telemetry data rates. You can now select among 38400 (the previous value, and still the default), 9600 or 2400 bps. To take advantage of this, you’ll need to reflash your TeleDongle or TeleBT.

AltOS Bug Fixes

We also fixed a few bugs in the firmware:

  • TeleGPS had separate flight logs, one for each time the unit was turned on. Turning the unit on to test stuff and turning it back off would consume one of the flight log ‘slots’ on the board; once all of the slots were full, no further logging would take place. Now, TeleGPS appends new data to an existing single log.

  • Increase the maximum computed altitude from 32767m to 2147483647m. Back when TeleMetrum v1.0 was designed, we never dreamed we’d be flying to 100k’ or more. Now that’s surprisingly common, and so we’ve increased the size of the altitude data values to fit modern rocketry needs.

  • Continuously evaluate pyro firing condition during delay period. The previous firmware would evaluate the pyro firing requirements, and once met, would delay by the indicated amount and then fire the channel. If the conditions had changed state, the channel would still fire. Now, the conditions are continuously evaluated during the delay period and if they change state, the event is suppressed. (A sketch of this change follows the list.)

  • Allow negative values in the pyro configuration. Now you can select a negative speed to indicate a descent rate or a negative acceleration value to indicate acceleration towards the ground.
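
A sketch of the behavioural change, in Python rather than the AltOS firmware source; conditions_met and fire are hypothetical stand-ins for the real sensor checks and pyro output:

import time

def fire_old(conditions_met, fire, delay_s):
    # pre-1.5 behaviour (sketch): check once, sleep blindly, then fire
    if conditions_met():
        time.sleep(delay_s)
        fire()                      # fires even if conditions changed

def fire_new(conditions_met, fire, delay_s, tick_s=0.01):
    # 1.5 behaviour (sketch): keep re-checking during the delay and
    # suppress the event if the conditions stop holding
    if not conditions_met():
        return
    deadline = time.monotonic() + delay_s
    while time.monotonic() < deadline:
        if not conditions_met():
            return                  # state changed: suppress
        time.sleep(tick_s)
    fire()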

AltosUI and TeleGPS — EasyMega support, OS integration and more

The AltosUI and TeleGPS applications have a few changes for this release:

  • EasyMega support. That was a simple matter of adapting the existing TeleMega support.

  • Added icons for our file types, and hooked up the file manager so that AltosUI, TeleGPS and/or MicroPeak are used to view any of our data files.

  • Configuration support for APRS SSIDs, and telemetry data rates.

13 September, 2014 06:47PM

Laura Arjona

Disabling comments in the blog

I’m getting more spam in this blog than I can stand. Comments are moderated, so the public is not suffering from it, only me. From time to time I go to my dashboard and clean out the spam. I’m afraid that I’ll delete some legit comment in one of these spam-cleaning fevers, or, more probably, that a legit comment will wait in the queue for several days (weeks?), just because I’m too lazy to deal with spam and let days pass by (until the fever comes).

I think I’m going to follow the wisdom of Bradley M. Kuhn and link to a pump.io note for comments on my blog posts (disabling them here in WordPress.com). I usually post a notice when I write something in my blog, so the only task is to update the blog post with the pump.io URL of the thread for comments.

While WordPress.com allows writing comments quickly, without the need for an account (you write just a name and an email, and the comment), in pump you need to have an account and sign in to comment. That looks like a bad thing, a barrier for people to participate. But of course, it stops spam :)

After thinking about it a bit: pump.io is a federated network, you can choose the pump server that you want, you can create a fake account, you don’t need to provide personal information… and it’s another way to promote one of the social networks where I live. Other systems link to Facebook, Twitter, or other places, and nobody complains! Even when those services don’t have any of the advantages of being in a federated, free-software-powered social network :)

If anybody doesn’t want to use pump.io but wants to comment, there are other ways to reach me or the related blog post:

  • Comment in the GNUSocial fediverse: the post announcing the thread for each blog post will be propagated to my quitter.se account too.
  • While I’m still using Twitter, you can comment on the corresponding tweet, but beware that I’m seriously thinking about closing my account there, since I rarely use it and don’t like the platform.
  • Drop me an email and I can post the comment on your behalf (if you want your comment to be “anonymous”, please say so in the email).

So now it’s decided, and this is the first post of this new experiment. This text is posted in pump.io too, and you can comment there :)



13 September, 2014 02:23PM by larjona

September 12, 2014

hackergotchi for Steve Kemp

Steve Kemp

Storing and distributing secrets.

I run a number of hosts, and they are controlled via a server automation tool I wrote called slaughter [Documentation].

The policies I use to control my hosts are public and I don't want to make them private because they serve as good examples.

Because the policies are public I don't want to embed passwords in them, which means I need something to hold secrets securely. In my case secrets are things like plaintext passwords. I want those secrets to be secure and unavailable from untrusted hosts.

The simplest solution I could think of was an IP-address based ACL and a simple webserver. A client requests something like:

  • http://secret.example.com/user-passwords

That returns a JSON object, if the requesting host is permitted to read the data. Otherwise it returns an HTTP 403 error.

The layout is very simple:

|-- secrets
|   |-- 206.190.139.148
|   |   `-- auth.json
|   |-- 127.0.0.1
|   |   `-- example.json
|   `-- 80.68.84.109
|       `-- chat.json

Each piece of data is beneath a directory/symlink which controls the read-only access. If the request comes in from a suitable IP it is granted; if not, it is denied.

For example a failing case:

skx@desktop ~ $ curl  http://sss.steve.org.uk/chat
missing/permission denied

A working case:

root@chat ~ # curl  http://sss.steve.org.uk/chat
{ "steve": "haha", "bot": "notreally" }

(The JSON suffix is added automatically.)
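
The whole design is small enough to sketch. Here is a minimal Python illustration of the idea (not the code actually running on sss.steve.org.uk; the "secrets" layout is the one shown above):

#!/usr/bin/env python3
# Minimal sketch of an IP-ACL secret server: serve
# secrets/<client-ip>/<name>.json if it exists, otherwise deny.
# (Binding port 80 needs root; 8080 is used here for testing.)
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

ROOT = "secrets"

class SecretHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client_ip = self.client_address[0]
        name = os.path.basename(self.path)      # "/chat" -> "chat"
        path = os.path.join(ROOT, client_ip, name + ".json")
        if not os.path.isfile(path):
            self.send_response(403)
            self.end_headers()
            self.wfile.write(b"missing/permission denied\n")
            return
        with open(path, "rb") as fh:
            body = fh.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SecretHandler).serve_forever()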

It is hardly rocket-science, but I couldn't find anything else packaged neatly for this - only things like auth/secstore and factotum. So I'll share if it is useful.

Simple Secret Sharing, or Steve's secret storage.

12 September, 2014 08:10PM

Richard Hartmann

Release Critical Bug report for Week 37

Remember, remember; the fifth of November.

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1422
    • Affecting Jessie: 410 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 355 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 52 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 26 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 277 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 55 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 55 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

12 September, 2014 08:08PM by Richard 'RichiH' Hartmann

hackergotchi for Jonathan McDowell

Jonathan McDowell

Back from DebConf 14

I previously forgot to mention that I was planning to attend DebConf14, having missed DebConf13. This year the conference was held in Portland, OR. This is a city I've been to many times before, and enjoy, but I hadn't spent any time wandering around its city centre as a pedestrian. I have to say I really prefer DebConfs that are held in the middle of a city. It always seems a bit of a shame to travel some distance to somewhere new and spend all the time there in a conference venue. Plus these days I have the added lure of going out and playing Ingress in a new location. DebConf14 didn't disappoint in these respects; the location was super easy to get to from the airport via public transportation, all of the evening social events were within reasonable walking distance (I'll tend to default to walking when possible) and the talk venue/accommodation were close to each other and various eating + drinking options. Throw in the fact that Portland managed to produce some excellent weather (modulo my Ingress session on the last Saturday morning, when it rained on me) and it's impossible to fault the physicalities of DebConf this year.

This year the conference format was a bit different; previous years have had a week-long DebCamp before the week of the conference itself. This year went for a 9 day talk schedule (Saturday -> Sunday) with various gaps of hacking time interspersed. I've found it hard to justify a full two weeks away in the past, so this setup worked a lot better from my viewpoint. Also I rarely go to DebConf with a predetermined list of things to do; the stuff I work on naturally falls out of talks I attend and informal discussions I have. Having hack time throughout the conference helped me avoid feeling I was having to trade off hacking vs talks.

Naturally enough a lot of my involvement at DebConf was around OpenPGP. Gunnar and I spent a fair bit of time getting Daniel up to speed with the keyring-maint team (Gunnar more than I, I'll confess). We finally set a hard timeframe for freeing Debian of older 1024 bit keys. I was introduced to the Gnuk, which is a particularly interesting piece of open specification hardware with a completely Free software stack on top of it that implements the OpenPGP smartcard spec. Currently it's limited to 2K keys but it's hoped that 4K support can be added (and I ended up spending a couple of hours after the closing talk hacking on the source and seeing how much needs to change for 4K support, aided by the very patient Niibe). These are the sort of things that really benefit from the face time that DebConf offers to the Debian project. I've said it before, but I think it's worth saying again: Debian is a bit like a huge telecommuting organization and it's my opinion that any such organization should try and ensure its members actually spend some time together on a regular basis. It improves the ability to work remotely a hell of a lot if you can actually put a face to the entity you're emailing / IRCing and have some sort of idea where they're coming from because you've spent some time with them, whether that's in talks or over dinner or just casual hallway chats.

For once I also found myself considering alternative employment while at DebConf and it was incredibly useful to be able to have various conversations with both old friends and people who were there with an eye on recruitment. Thanks to all those whose ears I bent about the subject (and more on the outcome in a future post). Thank you also to the many people involved with the organization of DebConf; I've been on the periphery a few times over the years and it's given me a glimpse into the amount of hard work all of the volunteers (be they global team, local organizing team, video team or just random volunteers) put into making DebConf one of my must-attend yearly conferences. If you're at all involved in Debian and haven't attended I strongly urge you to do so - I'll see you all next year at DebConf15 in Heidelberg!

12 September, 2014 02:03PM

Elena 'valhalla' Grandi

Sometimes apologies are the worst part

I am sick of hearing apologies directed to me just after a dirty joke.

Usually, I don't mind dirty jokes in themselves: I *do* have a dirty mind and make my share of those, or laugh for the ones my friends with even dirtier minds make.

There are of course times and places where they aren't appropriate, but I'd rather live in a world where they are the exception (although a common one) rather than the norm.

Then there is the kind of dirty jokes strongly based on the assumptions that women are sexual objects, a nuisance that must be tolerated because of sex or both: of course I don't really find them fun, but that's probably because I'm a nuisance and I don't even give them reason to tolerate me. :)

Even worse than those, however, is being singled out as the only woman in the group with an empty apology for the joke (often of the latter kind): I know you are not really sorry, you probably only want to show that your parents taught you not to speak that way in front of women, but since I'm intruding in a male-only group I have to accept that you are going to talk as appropriate for it.

P.S. no, the episode that triggered this post didn't happen in a Debian environment, it happened in Italy, in a tech-related environment (and not in a wanking club for penis owners, where they would have every right to consider me an intruder).

12 September, 2014 11:53AM by Elena ``of Valhalla''

John Goerzen

The Thrill and Stress of Too Many Hobbies

Today, 4PM. Jacob and Oliver excitedly peer at the box in our kitchen – a really big box, taller than them. Inside is the first model airplane I’d ever purchased. The three of us hunkered down on the kitchen floor, opened the box, unpacked the parts, examined the controller, and found the manual with cryptic assembly directions. Oliver turned some screws while Jacob checked out the levers on the controllers. Then they both left for a bit to play with their toy buses.

A little while later, the three of us went outside. It was too windy to fly. I had never flown an RC plane before — only RC quadcopters (much easier to fly), and some practice time on an RC simulator. But the excitement was too much. So out we went, and the plane took off perfectly, climbed, flew over the trees, and circled above our heads at my command. I even managed a good landing in the wind, despite about 5 aborted attempts due to coming in too high, wrong angle, too fast, or last-minute gusts of wind throwing everything off. I am not sure how I pulled that all off on my first flight, but somehow I did! It was thrilling!

I’ve had a lot of hobbies in my life. Computers have run through many of them; I learned Pascal (a programming language) at about the same time I learned cursive handwriting and started with C at around age 10. It was all fun. I’ve been a Debian developer for some 18 years now, and have written a lot of code, and even books about code, over the years.

Photography, music, literature, history, philosophy, and theology have been interests for quite some time as well. In the last few years, I’ve picked up amateur radio, model aircraft, etc. And last month, Laura led me into Ada’s Technical Books during our visit to Seattle, resulting in me getting interested in Arduino. (The boys and I have already built a light-activated crossing gate for their HO-gauge model trains, and Jacob can now say he’s edited a few characters of C!)

Sometimes I find ways to merge hobbies; I’ve set up all sorts of amateur radio systems on Linux, take aerial photographs, and set up systems to stream music in my house.

But I also have a lot less time for hobbies overall than I once did; other things in life, such as my children, are more important. Some of the code I once actively worked on I no longer use or maintain, and I feel guilty about that when people send bug reports that I have no interest in fixing anymore.

Sometimes I feel a need to cut down, and perhaps have; and then, I get an interest in RC aircraft and find an airplane that is great for a beginner and fairly inexpensive.

Perhaps it is the curse of being a curious person living in an interesting world. Do any of the rest of you have a large number of hobbies? How do you feel about that?

12 September, 2014 03:10AM by John Goerzen

September 11, 2014

Stefano Zacchiroli

debsources bugs and easy hacks


My ongoing quest for lowering the barrier for contributing to Debsources continues.
In this chapter:

  • I've migrated bug reports from the previous ad-hoc text file in the Git repo to the Debian BTS, under the umbrella of the qa.debian.org pseudo-package.
    From now on this is the recommended (and documented) way of reporting bugs against http://sources.debian.net. (An example command follows this list.)

    Look ma, it also has one of those newfangled short URLs: http://deb.li/debsrcbugs!

  • While at it, I've also properly tagged the current easy hacks on Debsources using the gift tag. There are definitely opportunities for new contributors there, and there might be more if you submit your own Debsources' pet peeves to the BTS.

    Again, mandatory mnemonic/short URL: http://deb.li/debsrceasy.
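
Filing a report is a single command with Debian's standard bug-reporting tool (mention sources.debian.net in the subject so it is easy to triage):

reportbug qa.debian.org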

What's your excuse for not contributing to Debsources, again?

11 September, 2014 05:31PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

pkgKitten 0.1.2: Still creating R Packages that purr

A brown bag release 0.1.2 of pkgKitten is now on CRAN, following yesterday's 0.1.1 upload.

Next time I'll try to remember that when I have parameters name and path, it won't work so well to call them as path and name ...

Changes in version 0.1.2 (2014-09-11)

  • Brown-bag fix of calling the new helper function with the correct order of arguments.

More details about the package are at the pkgKitten webpage and the pkgKitten GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 September, 2014 02:41PM

Enrico Zini

suspend-i-really-mean-it

Laptop, I demand that you suspend!

Dear Lazyweb,

Sometimes some application prevents suspend on my laptop. I want to disable that feature: how?

I understand that there may exist some people who like that feature. I, on the other hand, consider a scenario like this inconceivable:

  1. I'm on a plane working with my laptop, the captain announces preparations for landing, so I quickly hit the suspend button (or close the lid) on my laptop and stow it away.
  2. One connecting flight later, I pick up my backpack, I feel it unusually hot and realise that my laptop has been on all along, and is now dead from either running out of battery or thermal protection.
  3. I think things that, if spoken aloud in front of a pentacle, might invoke major lovecraftian horrors.

I do not want this scenario to ever be possible. I want my suspend button to suspend the laptop no matter what. If a process does not agree, I'm fine with suspending it anyway, or killing it.

If I want my laptop to suspend, I generally have a good enough real-world reason for it, and I cannot conceive that a software could ever be allowed to override my command.

How do I change this? I don't know if I should look into systemd, upowerd, pm-utils, the kernel, the display manager or something else entirely. I worry that I cannot even figure where to start looking for a solution.

This happened to me multiple times already, and I consider it ridiculous. I know that it can cause me data loss. I know that it can cause me serious trouble in case I was relying on having some battery or state left at my arrival. I know that depending on what is in my backpack, this could also be physically dangerous.

So, what knob do I tweak for this? How do I make suspend reliable?
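
(One plausible knob, offered with no warranty: if systemd's logind is what handles the suspend key and lid switch, its inhibitor handling is configurable. The options below come from logind.conf(5), and "systemctl suspend -i" likewise ignores inhibitors for a one-off suspend.)

# /etc/systemd/logind.conf, assuming logind handles the key/lid
[Login]
HandleSuspendKey=suspend
HandleLidSwitch=suspend
SuspendKeyIgnoreInhibited=yes
LidSwitchIgnoreInhibited=yes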

11 September, 2014 12:32PM

Sylvestre Ledru

Rebuild of Debian using Clang 3.5.0

Clang 3.5.0 has just been released. A new rebuild has been done to highlight the progress in getting Debian built with clang.

tl;dr: Great progress. We decreased from 9.5% to 5.7% of failures. Full results are available on http://clang.debian.net

At the time of the rebuild with 3.4.2, we had 2040 packages failing to build with clang. With 3.5.0, this dropped to 1261 packages.

Fixes

With Arthur Marble and Alexander Ovchinnikov, both GSoC students, we worked on various ways to decrease the number of errors.

Upstream fixes

First, the most obvious way, we fixed programming bugs/mistakes in upstream sources. Basically, we took categories of failure and fixed issues one after the other. We started with simple bugs like 'Wrong main declaration', 'non-void function should return a value' or 'Void function should not return a value'.

They are trivial to fix. We continued with harder fixes like 'Undefined reference' or 'Variable length array for a non-POD (plain old data) element'.

So, besides these, we worked on:


In total, we reported 295 bugs with patches. 85 of them have been fixed (meaning that the Debian maintainer uploaded a new version with the fix).

In parallel, I think that the switch by FreeBSD and Mac OS X to Clang also helped upstreams fix various issues.

Hacking in clang

As a parallel approach, we started to implement a suggestion from Linus Torvalds and a few others. Instead of trying to fix everything upstream, we tried, where we could, to update clang to improve its gcc compatibility.

gcc has many flags to disable or enable optimizations. Some of them are legacy, others make no sense in clang, etc. Instead of failing in clang with an error, we created a new category of warnings (showing optimization flag '%0' is not supported) and moved all relevant flags into it. Some examples: r212805, r213365, r214906 or r214907.

We also updated clang to silently ignore some redundant arguments like -finput_charset=UTF-8 (r212110), clang being UTF-8 compliant.

Finally, we worked on the forwarding of linker flags. Clang and gcc behave very differently here: when gcc does not know an argument, it forwards the argument to the linker. Clang, in this case, rejects the argument and fails with an error. In clang, we have to explicitly declare which arguments are going to be transferred to the linker. Of course, the correct way to pass arguments to the linker is to use -Xlinker or -Wl, but the Debian rebuild proved that these shortcuts are used. Two of these arguments are now forwarded (an example follows the list):

  • -z keyword - r213198
  • -u Force symbol to be entered in the output file as an undefined symbol - r211756. This one fixed most of the haskell build failures. It fixed the most common issue that we had (701 occurrences, but this does not mean that all these packages build fine now; some haskell-based packages fail later in the process).
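
For illustration (demo.c and the relro keyword here are just stand-ins):

# gcc forwards flags it does not recognise, such as -z relro, to the linker:
gcc -z relro -o demo demo.c
# clang used to reject the same invocation; with the changes above, -z is
# now forwarded as well:
clang -z relro -o demo demo.c
# the explicit spelling that both compilers always accepted:
clang -Wl,-z,relro -o demo demo.c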

New errors

Just like in other releases, new warnings are added in clang. With (bad) usage of -Werror by upstream software, this causes new build failures:

I also took the opportunity to add some further categorizations in the list of errors. Some examples:

Next steps

With the Debile project being close to ready thanks to Clément Schreiner's GSoC, we will soon have an automatic and transparent way to rebuild packages using clang.

Conclusion

As stated, we can see a huge drop in terms of the number of failures over time:

Hopefully, with Clang getting better and better, and more and more projects adopting it as the default compiler or as a base for plugin/extension development, this percentage will continue to decrease.
Having some kind of release goal with clang for Jessie+1 can now be considered potentially reachable.

Want to help?

There are several things which can be done to help:

  • Point me to common error patterns in the Not categorized list of errors to create new categories
  • Report and fix packages
  • As an upstream, integrate clang as part of your continuous integration system
  • Hack on cqa-scanlogs, the error detection tool, to detect more error patterns (example: Undetected error). This tool is also used for the regular rebuilds of the archive.
  • Improve clang.debian.net website

Acknowledgments

Thanks to David Suarez for the rebuilds of the archive, Arthur Marble and Alexander Ovchinnikov for their GSoC work, and Nicolas Sévelin-Radiguet for the few fixes.

11 September, 2014 12:17PM by Sylvestre

hackergotchi for Steve Kemp

Steve Kemp

A small email utility and other updates.

Last night I was looking for an image I knew a model had mailed me a few months ago, as we were talking about rescheduling a shoot at the weekend. I couldn't find it, even with my awesome mail client and filing system.

With some free time I figured I could write a little utility to dump all attachments from email folders, and find it that way.

It did cross my mind that there is the simple mail-utility for dumping headers, etc, called formail, which is distributed alongside procmail, but it doesn't handle attachments ..

I was tempted to write a general purpose script to dump attachments, email header values, etc, etc but given the lack of time I merely solved my own problem.

I suspect there is room for a "mail utilities" package, similar to Joey's "moreutils" and my "sysadmin utils". However I note that there is a GNU Mailutils which does things differently than I'd expect - i.e. it contains a POP3 server.

Still if you want to dump attachments from emails, have GMIME installed, and want to filter by attachment-name, or MIME-type, you might look at my trivial attachment-dump program.
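
attachment-dump itself builds on GMime, but the idea fits in a short sketch using Python's standard library instead (this is an illustration, not the tool itself; the usage line is made up):

#!/usr/bin/env python3
# Walk a Maildir and save every attachment, optionally keeping only
# a given MIME type. Usage: dump.py ~/Maildir /tmp/out [image/jpeg]
import mailbox
import os
import sys

def dump_attachments(maildir, outdir, want_type=None):
    os.makedirs(outdir, exist_ok=True)
    for key, msg in mailbox.Maildir(maildir).items():
        for part in msg.walk():
            name = part.get_filename()
            if not name:
                continue              # inline part, not an attachment
            if want_type and part.get_content_type() != want_type:
                continue
            payload = part.get_payload(decode=True)
            if payload is None:
                continue
            out = os.path.join(outdir, "%s-%s" % (key, name))
            with open(out, "wb") as fh:
                fh.write(payload)

if __name__ == "__main__":
    dump_attachments(sys.argv[1], sys.argv[2],
                     sys.argv[3] if len(sys.argv) > 3 else None)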

Related to that I spent some time last night updating my photography site, so the animals & pets section has updated images at least.

During the course of that I found a bug in my static-site generator, templer which stopped it from automatically populating image height/widths when called in a glob:

Title: Pets &amp; Animals
Images: file_glob( "*.jpg" )
---

This is the page body, it now has access to a variable called 'images'
which is a HTML::Template loop-structure containing name/height/width/etc
for each image in the current directory.

That should now be resolved, and life should once again be good.

11 September, 2014 10:28AM

Matthias Klumpp

Listaller: Back to the future!

It is time for another report on Listaller, the cross-distro 3rd-party package installer, which has now been in development for – depending how you count – 5-6 years. This will become a longer post, so you might grab some coffee or tea ;-)

The original idea

The Listaller project was initially started with the goal to make application deployment on Linux distributions as simple as possible, by providing a unified package installation format and tools which make building apps for multiple distributions easier and deployment of updates simple. The key ideas were:

  • Seamless integration of all installation steps into the system – users shouldn’t care about the origin of their application, they just handle all installed apps with the same tool and update all apps with the same interface they use for updating the system.
  • Out-of-the-box sandboxing for all 3rd-party apps
  • Easy signing and key-validation for Listaller packages
  • Simple creation of updates for developers
  • Resource-sharing: It should always be clear which application uses which library, duplicates should be avoided. The distribution-provided software should take priority, since it is often well-maintained and receives security updates.

The current state

The current release of Listaller handles all of this with a plugin for PackageKit, the cross-distro package-management abstraction layer. It hooks into PackageKit and reads information passing through to the native distributor backend, and if it encounters Listaller software, it handles it appropriately. It can also inject update information. This results in all Listaller software being shown in any PackageKit frontends, and people can work with it just like if the packages were native packages. Listaller package installations are controlled by a machine policy, so the administrator can decide that e.g. only packages from a trusted source (= GPG signature in trusted database) can be installed. Dependencies can be pulled from the distributor’s repositories, or optionally from external sources, like the PyPI.

This sounds good on paper, but the current implementation has various problems.

The issues

The current Listaller approach has some problems. The biggest one lies in the future: Soon, there will be no PackageKit plugins anymore! PackageKit 1.0 will remove support for them, because they appear to be a major source of crashes; even the in-tree plugins cause problems. Also, the PackageKit service itself is currently being trimmed of unneeded features and less-used code. These changes in PackageKit are great and needed for the project (and I support these efforts), but they cause a pretty huge problem for Listaller: The project relies on the PackageKit plugin – if used without it, you lose the system-integration part, which is one of the key concepts of Listaller, and a primary goal.

But this issue is not the only one. There are more. One huge problem for Listaller is dependency-solving: It needs to know where to get software from in case it isn’t installed already. And that has to be done in a cross-distributional way. This is an incredibly complex task, and Listaller contains lots of workarounds for various quirks. It contains so many hacks for distro-specific stuff that it became really hard to understand. The Listaller dependency model also became very complex, because it tried to handle many corner-cases. This is bad, of course. But the workarounds weren’t added for fun, but because it was assumed to be easier than fixing the root cause, which would have required collaboration between distributors and some changes on the stack, which seemed unlikely to happen at the time the code was written.

The systemd effort

Another thing which affects Listaller is the latest push from the systemd team to allow cross-distro 3rd-party installations to happen. I definitely recommend reading the linked blogpost from Lennart, if you have some spare time! The identified problems are the same as for Listaller, but the solution they propose is completely different, and about three orders of magnitude more invasive than whatever the Listaller project had in mind (I make these numbers up, so don’t ask!). There are also a few issues I see with Lennart’s approach; I will probably go into detail about that in another blogpost (e.g. it requires multiple copies of a library lying around, where one version might have a security vulnerability, and another one doesn’t – it’s hard to ensure everything is up to date and secure that way, even if you have a top-notch sandbox). I have great respect for the systemd crew and especially Lennart, and I hope they succeed with their efforts. However, I also think Listaller can achieve similar things with a less-invasive solution, at least for the 3rd-party app-installations (Listaller is one of the partial-fix solutions with strict focus, so not a direct competitor to the holistic systemd approach. Both solutions could happily live together.)

A step into the future

Some might have guessed it already: There are some bigger changes coming to Listaller! The most important one is that there will be no Listaller anymore, at least not in its old form.

Since the current code relies heavily on the PackageKit plugin, and contains some ugly workarounds, it doesn’t make much sense to continue working on it.

Instead, I started the Listaller.NEXT project, which is a rewrite of Listaller in C. There are some goals for the rewrite:

  • No stupid hacks and workarounds: We will not add any workaround. If there is a problem, we will fix it at its source, even if that might be more invasive.
  • Trimmed down project: The new incarnation of Listaller will only support installations of statically linked software at the beginning. We will start with a very small, robust core, and then add more features (like dependency-solving) gradually, but only if they are useful. There will be no feature-creep like in the previous version.
  • Faster development cycle: Releases will happen much faster, not only two or three times a year
  • Integration: Since there is no PackageKit plugin anymore, but integration is still one of Listaller’s key concepts, we will integrate Listaller into downstream tools, ranging from Apper to GNOME-Software. Richard Hughes will help with the integration and user interfaces, so Listaller applications get displayed properly.
  • AppStream-first: AppStream is the ultimate tool for Listaller to detect dependencies. With the 0.6 release, the Listaller component-concept was merged into it, which makes it a very powerful and non-hackish solution for dependency-detection. We will advance the use of its metadata, and probably use it exclusively, which would restrict Listaller to only work properly on distributions which ship AppStream metadata.
  • No desktop-only focus: The previous Listaller was focused only on desktop GUI apps. The new version will be developed with a much larger target audience in mind, including server deployments (“Can I use it to deploy my server app” is one of the most frequently asked questions about Listaller – with the new version, the answer is yes)
  • We will continue to improve the static-linking and cross-distro development toolchain (libuild, with ligcc, lig++ and binreloc), to make building portable apps easier.

I made a last release of the 0.5.x series of Listaller, to work with PackageKit 0.9.x – the future lies in the C port.

If you are using Listaller (and I know of people who do, for example some deploy statically-linked stuff on internal test-setups with it), stay tuned. The packaging format will stay mostly compatible with the current version, so you will not see many changes there (the plan is to freeze it very soon, so no backwards-incompatible changes are made anymore). The 0.5.x series will receive critical bugfixes if necessary.

Help needed!

As always, there is help needed! Writing C is not that difficult ;-) But user feedback is welcome as well, in case you have an idea. The new code will be hosted on Github in the new listaller-next branch (currently not that much to find there). Long-term, we will completely migrate away from Launchpad.

You can expect more blogposts about the Listaller concepts and progress in the next months (as soon as I am done with some AppStream-related things, which take priority).

11 September, 2014 08:14AM by Matthias

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

pkgKitten 0.1.1: Still creating R Packages that purr

A maintenance release 0.1.1 of pkgKitten is now on CRAN.

It has only one small change: the function playWithPerPackageHelpPage() was factored out of the main function kitten() as I happened to be needing something just like playWithPerPackageHelpPage() to make packages created by the Rcpp function Rcpp.package.skeleton() a little nicer.

We also added a NEWS.Rd file which restates major release features. As it is so short, we include it in its entirety.

Changes in version 0.1.1 (2014-09-09)

  • New (exported) function playWithPerPackageHelpPage() which lets other packages create a non-complaint-generating package help page

Changes in version 0.1.0 (2014-06-13)

  • Initial public version and CRAN upload

More details about the package are at the pkgKitten webpage and the pkgKitten GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 September, 2014 01:12AM

September 10, 2014

hackergotchi for Lucas Nussbaum

Lucas Nussbaum

Will the packages you rely on be part of Debian Jessie?

The start of the jessie freeze is quickly approaching, so now is a good time to ensure that packages you rely on will be part of the upcoming release. Thanks to automated removals, the number of release critical bugs has been kept low, but this was achieved by removing many packages from jessie: 841 packages from unstable are not part of jessie, and won’t be part of the release if things don’t change.

It is actually simple to check if you have packages installed locally that are part of those 841 packages:

  1. apt-get install how-can-i-help (available in backports if you don’t use testing or unstable)
  2. how-can-i-help --old
  3. Look at packages listed under Packages removed from Debian ‘testing’ and Packages going to be removed from Debian ‘testing’

Then, please fix all the bugs :-) Seriously, not all RC bugs are hard to fix. A good starting point to understand why a package is not part of jessie is tracker.d.o.

On my laptop, the two packages that are not part of jessie are the geeqie image viewer (which looks likely to be fixed in time), and josm, the OpenStreetMap editor, due to three RC bugs. It seems much harder to fix… If you fix it in time for jessie, I’ll offer you a $drink!

10 September, 2014 07:28PM by lucas

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s first report about Debian Long Term Support

When we set up Freexian’s offer to bring together funding from multiple companies in order to sponsor the work of multiple developers on Debian LTS, one of the rules that I imposed is that all paid contributors must provide a public monthly report of their paid work.

While the LTS project officially started in June, the first month in which contributors were actually paid was July. Freexian sponsored Thorsten Alteholz and Holger Levsen for 10.5 hours each in July and for 16.5 hours each in August. Here are their reports:

It’s worth noting that Freexian sponsored Holger’s work to fix the security tracker to support squeeze-lts. It’s my belief that using the money of our sponsors to make it easier for everybody to contribute to Debian LTS is money well spent.

As evidenced by the progress bar on Freexian’s offer page, we have not yet reached our minimal goal of funding the equivalent of a half-time position. And it shows in the results: dla-needed.txt still lists around 30 open issues. This is slightly better than the state two months ago but we can improve a lot on the average time to push out a security update…

To have an idea of the relative importance of the contributions of the paid developers, I counted the number of uploads made by Thorsten and Holger since July: of 40 updates, they took care of 19 of them, so about half.

I also looked at the other contributors: Raphaël Geissert stands out with 9 updates (I believe that he is contracted by Électricité de France to do this) and most of the other contributors look like regular Debian maintainers taking care of their own packages (Paul Gevers with cacti, Christoph Berg with postgresql, Peter Palfrader with tor, Didier Raboud with cups, Kurt Roeckx with openssl, Balint Reczey with wireshark) except Matt Palmer and Luciano Bello who (likely) are volunteer members of the LTS team.

There are multiple things to learn here:

  1. Paid contributors already handle almost 70% of the updates. Counting only on volunteers would not have worked.
  2. Quite a few companies that promised help (and got mentioned in the press release) have not delivered the promised help yet (neither through Freexian nor directly).

Last but not least, this project wouldn’t exist without the support of multiple companies and organizations. Many thanks to them:

Hopefully this list will expand over time! Any help to reach out to new companies and organizations is more than welcome.


10 September, 2014 11:30AM by Raphaël Hertzog

Petter Reinholdtsen

Good bye subkeys.pgp.net, welcome pool.sks-keyservers.net

Yesterday, I had the pleasure of attending a talk with the Norwegian Unix User Group about the OpenPGP keyserver pool sks-keyservers.net, and was very happy to learn that there is a large set of publicly available key servers to use when looking for people's public keys. So far I have used subkeys.pgp.net, and sometimes wwwkeys.nl.pgp.net when the former was misbehaving, but those days are over. The servers I have used up until yesterday have been slow and sometimes unavailable. I hope those problems are gone now.

Behind the round robin DNS entry of the sks-keyservers.net service there is a pool of more than 100 keyservers which are checked every day to ensure they are well connected and up to date. It must be better than what I have used so far. :)

Yesterday's speaker told me that the service is the default keyserver provided by the default configuration in GnuPG, but this does not seem to be used in Debian. Perhaps it should?

Anyway, I've updated my ~/.gnupg/options file to now include this line:

keyserver pool.sks-keyservers.net

With GnuPG version 2 one can also locate the keyserver using SRV entries in DNS. Just for fun, I did just that at work, so now every user of GnuPG at the University of Oslo should find an OpenPGP keyserver automatically should they need it:

% host -t srv _pgpkey-http._tcp.uio.no
_pgpkey-http._tcp.uio.no has SRV record 0 100 11371 pool.sks-keyservers.net.
%
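
For anyone wanting to publish the same kind of record, the entry behind that lookup would look something like this in a BIND-style zone file (a sketch matching the output above, not a copy of the uio.no zone):

; HKP keyserver discovery via SRV, as used by GnuPG version 2
_pgpkey-http._tcp    IN SRV    0 100 11371 pool.sks-keyservers.net.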

Now if only the HKP lookup protocol supported finding signature paths, I would be very happy. It can look up a given key or search for a user ID, but what I normally want is to find a trust path from my key to another key. Given a user ID or key ID, I would like to find (and download) the keys representing a signature path from my key to the key in question, to be able to get a trust path between the two keys. This is, as far as I can tell, not possible today. Perhaps something for a future version of the protocol?

10 September, 2014 11:10AM

Ian Donnelly

New Release: Elektra 0.8.8

Hi Everybody!

Great news! I am very happy to announce that we have reached a new milestone for Elektra and released a new version, 0.8.8! This release comes right on the tail of the 0.8.7 release and it might just be our biggest release yet! We already have a great article covering all the changes from the previous release on our News documentation on GitHub. I just wanted to focus on a few of those changes on this blog, especially the ones that pertain to my Google Summer of Code Project.

First of all, Felix has worked to greatly improve the ini plug-in. This is the plug-in I used in my technical demo for mounting Samba’s smb.conf file. It now works even better with complex ini files such as smb.conf which means the automatic merging of files like smb.conf is even better now! That really goes to show one of the greatest strengths of the design of Elektra. Just by improving plug-ins, all the functions of Elektra can improve as well. The merge code was not changed in this release, yet because of an updated plug-in, the merge has improved.

Secondly, there have been some good improvements to the kdb command-line tool. Many of these improvements were used in my technical demo, but now they are actually a part of the release (and a little more refined than they were then). We added a new command called kdb remount which allows a user (or script) to mount a file to the Elektra Key Database using an existing backend. An example of this command is:
kdb remount [new filename] [new path] [existing mountpoint]

This command mounts the new file to the new path in the Key Database using an existing backend. This works with the conffile merging by allowing us to mount the various versions of the conffile without having to specify which backend to use (it will use the same backend as the currently used conffile). Additionally, the umount command was updated to allow users to umount using the current mountpath (allowing commands such as kdb umount system/smb.conf) as opposed to backends. Moreover, we added an option to the kdb import command to specify a merge strategy using the -s option. Now you can import a file into the Key Database and merge the content of that file with the current Keys in the Database.
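
A hypothetical invocation tying these pieces together (the strategy name "preserve" is a placeholder; check the kdb-import documentation for the strategy names actually shipped in 0.8.8):

# merge the keys from an updated smb.conf into what is already mounted
# at system/smb.conf, instead of overwriting them
kdb import -s preserve system/smb.conf ini < smb.conf.new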

Thirdly, we added some new scripts to Elektra to help with the ucf integration. These scripts were used in my technical demo, but now they are part of the release. elektra-mount and elektra-umount are wrappers for the commands kdb mount and kdb umount respectively. They are designed to be used in debian package scripts and are adapted for easier use than the generic commands. For instance, running elektra-mount will check to see if a file is already mounted at that location in the Key Database. Similarly, elektra-umount will not produce an error if the file was already unmounted. This is because maintainer scripts can be run multiple times in a row and producing an error will stop dpkg even when it shouldn’t. Additionally, we added a script called elektra-merge which can be used as a method for ucf to merge configuration files. This script acts as a liaison between ucf and elektra allowing automatic merges to be done by ucf using Elektra’s merge features in a seamless manner. For information on how these scripts work, check out my tutorial on integrating elektra-merge into a debian package.

The last bit of news I would like to share is the great progress of the Debian package. Thanks to Pino Toscano, version 0.8.7-4 of Elektra is now available in the Debian testing repo! This is great news as we are now that much closer to replacing the outdated Elektra 0.7 versions that are currently the latest versions of Elektra in the stable repo. Once the 0.8.X versions of Elektra make it to stable it will be much easier for us to keep the latest versions of Elektra in Debian, and that’s key to allowing Elektra to help improve users’ lives.

You can download the release from:
http://www.markus-raab.org/ftp/elektra/releases/elektra-0.8.8.tar.gz

• size: 1644441
• md5sum: fe11c6704b0032bdde2d0c8fa5e1c7e3
• sha1: 16e43c63cd6d62b9fce82cb0a33288c390e39d12
• sha256: ae75873966f4b5b5300ef5e5de5816542af50f35809f602847136a8cb21104e2

And the API-Documentation can be found here:

http://doc.libelektra.org/api/0.8.8/html/

Hope you enjoy the new release!

Sincerely,
Ian S. Donnelly

10 September, 2014 09:19AM by Ian Donnelly

hackergotchi for Steve Kemp

Steve Kemp

kvm-hosting will be ceasing, soon.

Seven years ago I wanted to move on from the small virtual machine I had to a larger one. Looking at the options available it seemed the best approach was to rent a big host, and divide it up into virtual machines myself.

Renting a machine with 8Gb of RAM and 500Gb of disk-space, then dividing that into eighths would give a decent spec and assuming that I found enough users to pay for the other slots/shares it would be economically viable too.

After a few weeks I took the plunge, advertised here, and found users.

I had six users:

  • 1/8th for me.
  • 1/8th left empty/idle for the host machine.
  • 6/8th for other users.

There were some niggles: one user seemed to suffer from connectivity problems more than the others, but on the whole the experiment worked out well.

These days, thanks to BigV, Digital Ocean, and all the newcomers, there is less need for this kind of thing, so last December I announced that the service would cease - and gave all current users 1 year of free service to give them time to migrate away.

The service was due to terminate in December, but triggered by some upcoming downtime where our host would have been moved, in the back of a van, from Manchester to York, I've taken the decision to stop it early.

It was a fun experiment, it provided me with low cost hosting (subsidized by the other paying users), and provided some other people with hosting of their own that was set up nicely.

The only outstanding question is what to do with the domain-names? I could let them expire, I could try to sell them, or I could donate them to other people running hosting setups.

If anybody reading this has a use for kvm-hosting.org, kvm-hosting.net, or kvm-hosting.com, then do feel free to get in touch. No promises, obviously, but it'd be a shame for them to end up hosting adverts in a year or two's time..

10 September, 2014 08:17AM

September 09, 2014

hackergotchi for Holger Levsen

Holger Levsen

20140909-lts-august-2014

Debian LTS - feedback about the feedback from my LTS talk at DebConf14

So, I'm more or less back from dc14 and today, five days later, I think I might have mostly overcome jetlag. Probably...

So, at DebConf14 I gave a talk about LTS and while I'm sorry that I was that tired, I'm more or less happy with how the talk went. Thankfully at least I was calm and relaxed...

There are a couple of things I learned from the talk: a.) LTS has been really, really well received b.) it fits a demand c.) people already take it for granted (eg plan for Wheezy LTS) d.) people expect the same non-intrusive changes as currently done for security updates.

To explain the last point: when I explained the - so far - rather theoretical problem that ''squeeze-lts'' has no gatekeeper mechanisms whatsoever (eg no ''proposed-updates'', no NEW queue..) the reaction in the audience was basically "something like this should exist, else how can we deploy this in large scale / on important setup?!". Also currently there is no, well-documented, easily to be found policy for what kind of updates are acceptable. I said that we basically follow the same rules as there are for debian-security updates, but this should really be documented properly. This doesn't seem very hard to fix, just like many things it "just" needs someone to do the work.

IOW: we explain how to use LTS, we explain how to contribute to LTS (through uploads or financially) but we lack a simple explanation of what LTS is and what kind of updates to expect. It's kinda self evident, but only kinda.

So since giving the talk I changed one thing in my personal usage of LTS: I don't use my personal LTS repo anymore, where I made sure only good packages got in. This is for two reasons: a.) I had to add new packages too often and b.) if it really is a problem that LTS has no gatekeeping mechanism (which I'm not sure anymore it is, after all, the updates are prepared by reasonable people with a common goal...) then I want to suffer this first hand, so I can build solutions which benefit everyone, not just me. That personal LTS repo only helped me.

On the technical side I prepared five DLAs, for lzo2, libwpd, squid3, lua5.1 and bind9. Not much to see here, they all were very smooth. I still enjoyed the challenge of digging in unknown sourcecode, as described in my previous post.

Then, more interestingly, and with the help of Raphael Geissert and Salvatore Bonaccorso, I fixed the security-tracker to also know about oldstable, after waiting for more than 8 weeks for someone else to do it. I'm very glad that this is done now, as without it it was really tedious to check which issues applied to oldstable.

Oh, and another afterthought from giving the talk: currently at least parts of the security-tracker codebase assume that there won't ever be support for oldoldstable, but once jessie has been released this won't be true anymore. Then we will support stable, oldstable and oldoldstable. And oldstable will be wheezy, not squeeze. We have something like 6 months to fix this, hopefully we won't have much more time... ;-) Oh, and surely there are other places than just the security-tracker which will need to be taught about this.

09 September, 2014 09:34PM

hackergotchi for Daniel Pocock

Daniel Pocock

xTupleCon WebRTC talk schedule change, new free event

As mentioned in my earlier blog, I'm visiting several events in the US and Canada in October and November. The first of these, the talk about WebRTC in CRM at xTupleCon, has moved from the previously advertised timeslot to Wednesday, 15 October at 14:15.

WebRTC meeting, Norfolk, VA

Later that day, there will be a WebRTC/JavaScript meetup in Norfolk hosted at the offices of xTuple. It is not part of xTupleCon and is free to attend. Please register using the Eventbrite page created by xTuple.

This will be a hands-on event for developers and other IT professionals, especially those in web development, network administration and IP telephony. Please bring laptops and mobile devices with the latest versions of both Firefox and Chrome to experience WebRTC.

Free software developers at xTupleCon

If you do want to attend xTupleCon itself, please contact xTuple directly through this form for details about the promotional tickets for free software developers.

09 September, 2014 06:51PM by Daniel.Pocock


September 08, 2014

hackergotchi for Ana Beatriz Guerrero Lopez

Ana Beatriz Guerrero Lopez

DebConf14 and ten years contributing to Debian

It has been one week since I got back from DebConf14 and I’m still recovering and catching up with things. DebConf14 has been amazing, it has been great to be back after missing it for two years. Thanks a lot to everybody who helped to make it real. On my side, I helped a bit in the talks team.

During DebConf14, I got the opportunity to discuss with Rene Mayorga about the MIA work-flow and we also got some feedback in the MIA BoF. We have plenty of ideas to implement and we’re aiming to improve things during this next year.

This summer also marked 10 years since I started contributing to Debian. It’s hard to believe. Ten years ago I barely knew where to start helping and now I have an endless TODO list of things I would like to do. And always during DebConf this list seems to grow ten times faster than usual. Thankfully, motivation also increases a lot :)

08 September, 2014 08:59PM by ana

hackergotchi for Joey Hess

Joey Hess

propellor is d-i 2.0

I think I've been writing the second system to replace d-i with in my spare time for a couple of months, and never noticed.

I'm as surprised as you are, but consider this design:

  • Installation system consists of debian live + haskell + propellor + web browser.

  • Entire installation UI consists of a web-based (and entirely pictographic and prompt based, so does not need to be translated) selection of the installation target.

  • Installation target can be local disk, remote system via ssh (wiping out crufty hacked-up pre-installed debian), local VM, live ISO, etc.

  • Really, no other questions. Not even user name/password! The installed system will only allow login via the same method that was used to install it. So a locally installed system will accept console/X login with no password and then a forced password change. Or a system installed via ssh will only allow login using the same ssh key that was used to install it.

  • The entire installation process consists of a disk format, followed by debootstrap, followed by running propellor in the target system (see the rough shell sketch after this list). This also means that the installed system includes a propellor config file which now describes the properties of the system as installed (so can be edited to tweak the installation, or reused as starting point for next installation).

  • Users who want to configure installation in any way write down properties of system using a simple propellor config file. I suppose some people still use more than one partition or gnome or some such customization, so they'd use:

main :: IO ()
main = Installer.main
    & Installer.partition First "/boot" Ext3 (MiB 256)
    & Installer.partition Next "/" Ext4 (GiB 5)
    & Installer.partition Next "/home" Ext4 FreeSpace
    & Installer.grubBoots "hd0"
    & os (System (Debian Stable) "amd64")
    & Apt.stdSourcesList
    & Apt.installed ["task-gnome-desktop"]
  • The installation system is itself built using propellor. A free feature given the above design, so basically all it will take to build an installation iso is this code:
main :: IO ()
main = Installer.main
    & Installer.target CdImage "installer.iso"
    & os (System (Debian Stable) "amd64")
    & Apt.stdSourcesList
    & Apt.installed ["task-xfce-desktop", "ghc", "propellor"]
    & User.autoLogin "root"
    & User.loginStarts "propellor --installer"
  • Propellor has a nice display of what it's doing so there is no freaking progress bar.
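
For the curious, here is a rough shell equivalent of the install sequence from the list above, under heavy assumptions: the device, suite and mirror are placeholders, and the final propellor invocation is hypothetical rather than propellor's actual interface:

# sketch only: format, debootstrap, then configure inside the target
mkfs.ext4 /dev/sda1
mount /dev/sda1 /target
debootstrap stable /target http://ftp.debian.org/debian
chroot /target propellor    # hypothetical invocation applying the config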

Well, now I know where propellor might end up if I felt like spending a month and adding a few thousand lines of code to it.

08 September, 2014 09:32AM

hackergotchi for Jaldhar Vyas

Jaldhar Vyas

Debconf 14 - Days 1 and 2

Unfortunately I was not able to attend debconf this year but thanks to the awesome video team, all the talks are available for your viewing pleasure.

In order to recreate an authentic Portland experience, I took my laptop into the shower along with a vegan donut and had my children stand outside yelling excerpts from salon.com in whiny Canadianesque accents. Here are some notes I took as I watched the talks.

Welcome Talk

  • Why is everyone on stage wearing shorts? Is this a thing now?
  • Langasek is pronounced with a 'sh' I did not know that.
  • Kees is pronounced 'case' I also did not know that.
  • Steve missed a good opportunity for a "Who moved my cheese?" joke. (Hey, it's not any more obscure than the "white Chevy Nova" joke.)
  • A well-deserved award to Russ Allbery from some of the UK people for being the voice of reason on the mailing lists. See Vincent Sanders blog for details.

Debian in the Dark Ages of Free Software - Stefano Zacchiroli

  • More shorts. I am starting to feel overdressed.
  • Stefano reminisces about how he got involved in Free Software and his philosophy of the same. A good introduction for anyone wondering what
    makes a Debian hacker tick.
  • We should be concerned about software freedom in the new "cloud" environments. Debian can play an important role in this by making it really simple for
    users to set up their own cloud environments. My take: the focus should
    be on free standards and protocols. The power that Google, Facebook etc.
    have is drastically reduced if it is easy to jump ship.

Weapons of the Geek - Gabriella Coleman

  • pro: no shorts con: women's slacks
  • Intriguing anthropological investigation into Anonymous and how it/they relate to Free Software.
  • CoC acknowledged, then ignored. Why do we need it again?
  • "Anonymous has cabals ... [that] make the cabals within Debian look like childs play."

bugs.debian.org -- Database Ho! - Don Armstrong

  • I would settle for a lungi at this point but noooo shorts again. (plaid to boot.)
  • The return of my yearly guilt about undertaking to Don to add RSS support to debbugs way back at debconf 10 and not following up on it. Damn it, I am going to get this done now.
  • Sadly, the initial part of the stream is missing and it begins right in the middle of Don saying something interesting.

  • "I have to admit my primary motivation for giving a talk was to try to force myself into actually doing the work I'm talking about."
  • Stats porn. The BTS is growing at a phenomenal rate. bugs opened 142/day. But only 95/day closed.

Grub Ancient and Modern - Colin Watson

  • pro: no shorts con: kilt
  • I was interested in this talk because one of these days I want to get GRUB 2 running on Debian Minix but a lot went over my head so I'll have to do some
    more research first.

One year of fedmsg in Debian - Nicolas Dandrimont

  • trouser status: undetermined
  • Problem: There are many different services providing information in Debian but they do not interop very well.
  • fedmsg is a unified message bus originally developed by Fedora who were facing a similar problem.

  • It has now been implemented in Debian.

Coming of Age: My Life with Debian - Christine Spang

  • trouser sta- oh the hell with this.
  • Another talk which is more biographical than technical. Again, useful to help understand the motivations of hackers.
  • What interests me is that for younger generations it was open source which was a novel idea, whereas for those of us who grew up in the 8-bit era, having
    to hack on a computer was expected (whether you wanted to or not) and it was
    proprietary software you couldn't share which was considered new and strange.

Status report of the Debian Printing Team - Didier Raboud

  • Ou est les pantalons? Je ne sais pas. (Where are the trousers? I don't know. Apologies to Mme Terzini.)
  • Kudos to Didier for taking up this Augean task mostly on his own. Despite the long-promised "paperless" office I need to be able to print and I was
    pleasantly surprised that my new xerox all-in-one worked under Debian with
    very little hassle.
  • Brother sucks. Don't buy their printers. Ditto for Epson and Samsung.
  • Buy HP instead.

08 September, 2014 03:47AM

September 07, 2014

Craig Small

How not to get Galaxy Tab into Safe Mode

For weeks my Galaxy Tab 10.1 had been going into safe mode reasonably consistently. It wasn't booting into it; I'd use it fine, then put it away, and the next time I looked at it, Safe Mode was there. It wasn't every time, but it averaged out to about every second time.

So the first thing was a bit of googling to see what this Safe Mode was. Most of the suggestions were about how to put the device into safe mode during the boot process, but my problem was the opposite: it wasn't happening during booting, and I wanted something to stop safe mode, not put the device into it. The closest I got was the idea that some misbehaving program had kicked the thing into safe mode.

The problem was, I checked several times and there were no running programs. I really did start to worry I had a hardware fault or something wrong deep within the OS.

When you have problems in IT, you're usually asked "What's new? What's changed?". The answer is generally "Nothing", which quickly gets switched to "No really, what did change?". The only answer I could come up with was a hardware keyboard. This slim aluminium keyboard uses Bluetooth to communicate with the tablet and clips onto the front screen to protect it when not in use. Could this be the change I was looking for?

The clue was that sometimes when you boot Android, if you hold down some keys it boots into safe mode. It seems that holding down some combination of keys (volume up/down, power) puts it into safe mode. The keyboard can clip onto the tablet in two ways: one long edge has some raised edges while the other doesn't. If the raised edge was clipped onto the same side as the buttons, I'd sometimes get safe mode as the edge pushed some of those buttons. More importantly, putting the raised edge on the side with no buttons meant no more safe mode.

Not really a software or electrical fault, more one of just mechanics.


07 September, 2014 12:31PM by Craig Small

September 06, 2014

hackergotchi for Joachim Breitner

Joachim Breitner

ICFP 2014

Another on-the-journey-back blog post; this time from the Frankfurt Airport train station – my flight was delayed (had I known that earlier, I could have watched the remaining Lightning Talks), and so was my train, but despite 5 minutes of running through the airport it was just not enough. And now that the free 30 minutes of railway station internet are used up, I have nothing else to do but blog...

Last week I was attending ICFP 2014 in Gothenburg, followed by the Haskell Symposium and the Haskell Implementors Workshop. The justification to attend was the paper on Safe Coercions (joint work with Richard Eisenberg, Simon Peyton Jones and Stephanie Weirich), although Richard got to give the talk, and did so quite well. So I got to leisurely attend the talks, while fighting the jet-lag that I brought from Portland.

There were – as expected – quite a few interesting talks. Among them the first keynote, Kathleen Fisher on the need for formal methods in cars and toy-quadcopters and unmanned battle helicopters, which made me conclude that my Isabelle skills might eventually become relevant in practical applications. And did you know that if someone gains access to your car’s electronics, they can make the seat belt pull you back hard?

Stephanie Weirich’s keynote (and the subsequent related talks by Jan Stolarek and Richard Eisenberg) on what a dependently typed Haskell would look like and what we could use it for was mouth-watering. I am a bit worried that Haskell will become a bit obscure for newcomers and people that simply don’t want to think about types too much; on the other hand it seems that Haskell as we know it will always stay there, just as a subset of the language.

Similarly interesting were refinement types for Haskell (talks by Niki Vazou and by Eric Seidel), in the form of LiquidTypes, something that I have not paid attention to yet. It seems to be a good way for more high assurance in Haskell code.

Finally, the Haskell Implementors Workshop had a truckload of exciting developments in and around Haskell: More on GHCJS, Partial type signatures, interactive type-driven development like we know it from Agda, the new Haskell module system and amazing user-defined error messages – the latter unfortunately only in Helium, at least for now.

But it’s not the case that I only sat and listened. During the Haskell Implementors Workshop I gave a talk, “Contributing to GHC”, with a live demo of me fixing a (tiny) bug in GHC, with the aim of getting more people to hack on GHC (slides, video). The main message here is that it is not that big of a deal. And despite me not actually saying much interesting in the talk, I got good feedback afterwards. So if it now actually motivates someone to contribute to GHC, I’m even happier.

And then there is of course the Hallway Track. I discussed the issues with fusing a left fold (unfortunately, without a great solution). In order to tackle this problem more systematically, John Wiegley and I created the beginning of a “List Fusion Lab”, i.e. a bunch of list benchmarks and the possibility to compare various implementations (e.g. with different RULES) and various compilers. With that we can hopefully better assess the effect of a change to the list functions.

PS: The next train is now also delayed, so I’ll likely miss my tram and arrive home even later...

PPS: I really have to update my 10 year old picture on my homepage (or redesign it completely). Quite a few people knew my name, but expected someone with shoulder-long hair...

PPPS: Haskell is really becoming mainstream: I just talked to a randomly chosen person (the boy sitting next to me in the train), and he is a Haskell enthusiast, building a structured editor for Haskell together with his brother. And all that as a 12th-grader...

06 September, 2014 10:46PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Jonathan McDowell

Jonathan McDowell

Breaking up with America

Back in January I changed jobs. This took me longer to decide to do than it should have. My US visa (an L-1B) was tied to the old job, and not transferable, so leaving the old job also meant leaving the US. That was hard to do; I'd had a mostly fun 3 and a half years in the SF Bay Area.

The new job had an office in Belfast, and HQ in the Bay Area. I went to work in Belfast, and got sent out to the US to meet coworkers and generally get up to speed. During that visit the company applied for an H-1B visa for me. This would have let me return to the US in October 2014 and start working in the US office; up until that point I'd have continued to work from Belfast. Unfortunately there were 172,500 applications for 85,000 available visas and mine was not selected for processing.

I'm disappointed by this. I've enjoyed my time in the US. I had a green card application in process, but after nearly 2 years it still hadn't completed the initial hurdle of the labor certification stage (a combination of a number of factors, human, organizational and governmental). However the effort of returning to live here seems too great for the benefits gained. I can work for a US company with a non-US office and return on an L-1B after a year. And once again have to leave should I grow out of the job, or the job change in some way that doesn't suit me, or the company hit problems and have to lay me off. Or I can try again for an H-1B next year, aiming for an October 2015 return, and hope that this time my application gets selected for processing.

Neither really appeals. Both involve putting things on hold in the hope that the longer term pans out as I hope. And to be honest I'm bored of that. I've loved living in America, but I ended up spending at least 6 months longer in the job I left in January than I'd have done if I'd been freely able to change employer without having to change continent. So it seems the time has come to accept that America and I must part ways, sad as that is. Which is why I'm currently sitting in SFO waiting for a flight back to Belfast and for the first time in 5 years not having any idea when I might be back in the US.

06 September, 2014 10:38PM

hackergotchi for Thomas Goirand

Thomas Goirand

Debconf 14 activity

Before I start a short listing of (some of) the stuff I did during Debconf 14, I’d like to say how much I enjoyed everyone there. You guys (all of you, really!) are just awesome, and it’s always a real pleasure to see you all, each time.

Anyway, here are bits of the stuff I did.

1/ packaging of Google Cloud Engine client tools.

Thanks to the presence of Eric and Jimmy, I was able to finish the work I started at Debconf 13 last year. All python modules are packaged and uploaded. Only the final client (the “gcloud” command line utility) isn’t uploaded, even though it’s already packaged. The reason is that this client downloads “stuff” from the internet, so I need to get the full, bundled version of it to avoid this. Eric gave me the link, I just didn’t have time to finish it yet. Though the (unfinished) package is already in Git on Alioth.

2/ Tasksel talks

We discussed improvements in Tasksel both during the conference, and later (in front of beers…). I was able to add a custom task on a modified version of the Tasksel package for my own use. I volunteered to add a “more tasks” option in Tasksel for Jessie+1 because I really would like to see this feature and nobody else raised their hand, but honestly, I have no idea how to do it, and therefore, I’m not sure I’ll be able to do so. We’ll see… Anyway, before this happens, we must make sure that we know what kind of tasks we want in this “more tasks” screen, otherwise it’d be useless work for nothing. Therefore, I have set up a wiki page. Please edit the page and drop your ideas there. I’ve already added entries for desktops and Debian blends, but I’m sure there’s more that we could add.
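
If it helps anyone experiment, a task boils down to a stanza in a .desc file under /usr/share/tasksel/descs/. The following is only a sketch: the task and package names are invented, and the fields mirror the layout of tasksel's own debian-tasks.desc.

Task: openstack-proxy-node
Section: user
Description: OpenStack proxy node
 Everything needed to run this machine as an OpenStack proxy node.
Key:
 openstack-proxy-node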

3/ Custom Debian CD

I started experimenting with building my own Debian Wheezy CD image (well, DVD, since the resulting image is nearly 2GB). This was fun, but I am still having the issue that the installer fails to install dash, so the CD is still unusable. I’ll try to debug it. Oh…  I nearly forgot… “of course”, the ISO image aims at including all OpenStack Icehouse packages backported to Wheezy, and the goal was to include the above custom Tasksel task, with an “OpenStack proxy node” task and an “OpenStack compute” task. Let’s hope I can figure out what the issue is, and finally release it.

4/ OpenStack talk

Nothing special to say, just watch the video. I hope my talk was interesting enough. Of course, after watching myself, I hate everything I see, and would like to correct so many mistakes, but that’s the usual, I guess.

5/ Some RC fixing

Thanks to the nice work of our DPL rebuilding the whole archive, I had to fix a couple of FTBFS issues in my own packages. 3 of them were easy to fix (2 missing build-dependencies which I missed because my automated build environment has them by default, and a unit test failure); I still don’t understand what’s going on with Ceilometer. I also NMU-ed transmission (switching from 2.82 to 2.84, as upstream had the bugfix, and the current maintainer was not responsive), which was the last blocker for the miniupnpc transition to Jessie. After the 5-day delay of the upload, it went into Sid, then migrated to Jessie, together with the miniupnpc library. I also fixed a trivial RC bug with python3-webob.

6/ Python team meeting

It was nice to see everyone, and hopefully, we’ll soon implement what we discussed. I hope to start migrating some of my OpenStack dependencies to the team once we move to Git (though please don’t expect this to happen before the Juno release, which keeps me very busy these days).

There’s probably more stuff which I did during Debconf 14 (hacking or otherwise), but either it’s not worth sharing, or I can’t remember… :)

06 September, 2014 05:42PM by admin

Russ Allbery

Accumulated hauls

I haven't made one of these in a long time, so I have some catching up to do from random purchases, which include a (repurposed) nice parting gift from my previous employer and a trip to Powell's since I was in the area for DebConf14. This also includes the contents of the Hugo voter's packet, which contained a wide variety of random stuff even if some of the novels were represented only by excerpts.

John Joseph Adams (ed.) — The Mad Scientist's Guide to World Domination (sff anthology)
Roger MacBride Allen — The Ring of Charon (sff)
Roger MacBride Allen — The Shattered Sphere (sff)
Iain M. Banks — The Hydrogen Sonata (sff)
Julian Barnes — The Sense of an Ending (mainstream)
M. David Blake (ed.) — 2014 Campbellian Anthology (sff anthology)
Algis Budrys — Benchmarks Continued (non-fiction)
Algis Budrys — Benchmarks Revisited (non-fiction)
Algis Budrys — Benchmarks Concluded (non-fiction)
Edgar Rice Burroughs — Carson of Venus (sff)
Wesley Chu — The Lives of Tao (sff)
Ernest Cline — Ready Player One (sff)
Larry Correia — Hard Magic (sff)
Larry Correia — Spellbound (sff)
Larry Correia — Warbound (sff)
Sigrid Ellis & Michael Damien Thomas (ed.) — Queer Chicks Dig Time Lords (non-fiction)
Neil Gaiman — The Ocean at the End of the Lane (sff)
Max Gladstone — Three Parts Dead (sff)
Max Gladstone — Two Serpents Rise (sff)
S.L. Huang — Zero Sum Game (sff)
Robert Jordan & Brandon Sanderson — The Wheel of Time (sff)
Drew Karpyshyn — Mass Effect: Revelation (sff)
Justin Landon & Jared Shurin (ed.) — Speculative Fiction 2012 (non-fiction)
John J. Lumpkin — Through Struggle, the Stars (sff)
L. David Marquet — Turn the Ship Around! (non-fiction)
George R.R. Martin & Raya Golden — Meathouse Man (graphic novel)
Ramez Naam — Nexus (sff)
Eiichiro Oda — One Piece Volume 1 (manga)
Eiichiro Oda — One Piece Volume 2 (manga)
Eiichiro Oda — One Piece Volume 3 (manga)
Eiichiro Oda — One Piece Volume 4 (manga)
Alexei Panshin — New Celebrations (sff)
K.J. Parker — Devices and Desires (sff)
K.J. Parker — Evil for Evil (sff)
Sofia Samatar — A Stranger in Olondria (sff)
John Scalzi — The Human Division (sff)
Jonathan Strahan (ed.) — Fearsome Journeys (sff anthology)
Vernor Vinge — The Children of the Sky (sff)
Brian Wood & Becky Cloonan — Demo (graphic novel)
Charles Yu — How to Live Safely in a Science Fictional Universe (sff)

A whole bunch of this is from the Hugo voter's packet, and since the Hugos are over, much of that probably won't get prioritized. (I was very happy with the results of the voting, though.)

Other than that, it's a very random collection of stuff, including a few things that I picked up based on James Nicoll's reviews. Now that I have a daily train commute, I should pick up the pace of reading, and as long as I can find enough time in my schedule to also write reviews, hopefully there will be more content in this blog shortly.

06 September, 2014 04:38AM

September 05, 2014

Craig Small

WordPress 4.0 for Debian

Yesterday WordPress released version 4.0, or “Benny”, of WordPress. I have now downloaded it and packaged it up for Debian users. The files just hit ftp-master a few minutes ago and will then be distributed out to the various Debian mirrors.

The upgrade should go smoothly, but you will probably need to upgrade the twentytwelve/twentyfourteen themes if you have them installed. It seems that with release 4.0 they also updated these themes.

My next Debian task for wordpress is to re-examine the permissions and locations of wp-content to see if we can have something that permits online updates of the plugins and themes but is still FHS compliant. I’ve also had some people report installation problems, mainly around configuration and directories, so let’s see if that can get fixed too.


05 September, 2014 11:35AM by Craig Small

hackergotchi for Wouter Verhelst

Wouter Verhelst

ASCII art Wouter

                          _____o****o__                               
                        dQ@@@@@WQ@Q&WWbo*,                            
                     __o@@@@@@@@@@@@@@@@&*b_,                         
                    .dQ@@@@@@@@@@@@@@@@@@@@&*bo__                     
                   .*@@@@@@@@@@@@@@@@@@@@@@@@&Q@bo                    
                   d@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@b                   
                  <]@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#b                  
                 _d#@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&**,                
              _,.dd@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@*b,               
              d"Q@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@b*b_              
             <*Q@@@@@V?$@@@@@@@@V$F$@@@@@@@@@@@@@@@@@[**_,            
            .*Q@@@@?*"***VV$V?"******"V@@@@@@@@FV@@@@@&W**>           
            <#@@@@"`---<"*****?'`''?''**"VVV@"****$@@@@@&b,           
           .d@@@@"`'-------''`--------''"****?'--'"$@@@@@*>           
           ]#@@#"`'----------------------'?'-------"@@@@@*">          
          <*@@@*`  ,--------------------------------"@@@@&*,          
          d#@@V>   '--------------------------------'"@@@?"*,         
         <d@@$"`   .--------'------------------------<$@@&b*b,        
         <@@@"`     ---------  -----------------------"@@@[**>        
         d@@@"      '-----`     -      '--------------'#@@@***        
        <@@@"`       '-`-               --- -----------*@@@***>       
        <]@""                             -,-----------<$@@[**>       
         ][F[        ` '`                 '-------------]@@[**>       
        <Q@[`                               -`----------*@@&**>       
        <#"">                                   --------*@@@**>       
        <@"*                                    ----`---*@@@&*>       
        ]Q#[                                      ------]#@@&*        
        ]@#[                                          <-<#@@@*>       
        ]@#b                                          .-<]@@@*        
        "@@[                                         ---<#@@@*        
        <@@"                                         ----]@@@*        
        ]#@[                                           -.]@@&"        
        ]#@b                                           -<]@@@[        
        <@@[                                       <   -<]@@"[        
        <@@[                                           -<]@@V>        
        <@@>     ,                                 ..  -<]@@b,        
        d@#>   -._----___ _                 _ _-_   -- -<#@@#-        
        ]@*> _-d*****obo_----       __--------___-, ----]@@@*,        
        d@[> -**?"**@@@@@o*>--,  -,---.o*ooW******_,,.--]@@#"`        
        #@[> -???'''--''?"**----<----<*@@@@@V******[----]@@@b>        
        *@[  -------------------<----*"'----''----'--->-<@@@*>        
        *@[  ------ooWWo,------- ------------_ '--------<@@@*-        
        "]>  <---bd?"@@@"`.----,  -------dQ@@&b-.-----'''$@[*`        
        *">   '--*",'$@#> -,'---  ---`--''$@@@?&_---->  -]@[*-        
        "*`    --'--.'"` `--.---  --- --, ]@@"-?*----   -"@*`>        
        <*>      <------`   ----  ---  ''--'",------    -]"*->        
        <*,       -----,   -----'----,  ---------`      -]#*-         
        '">        ---`    -----  <---                  '][[          
         <`                -----------,                 <d"[          
         <>               .------------                 -**-          
          >               <-----  -----                <-`--          
          -               .-----  -----                .--'           
          -               ------,.-----                -'-            
          ->              -------------                -.-            
          --              ---  -  -----                --`            
          --              ---      -----              --->            
          --             .->       -----              ---             
          -->            -----    .-----              ---             
          <-> '         ----`_-----__---            ' ---             
           --_.        ------'-----'----           ...---             
           ---`       ---------''------.           <----`             
           ---,     --------`-------`--,-          -----              
           <-----  --------_--------,-----        .-----              
            --------------*?--------`--------,    -----`              
            -----'------_?`--`'------'b--------- .-----               
            ----- ---"""`---.   '  ---'bb-------.------               
            ----------d,-------___-----'*o"----`` -----               
            '--------''`-.,-------------<*[-`----.----                
             -----------'------'--------'"*----> -----                
             -----------, '---   ---------------------                
             -------------,_---->----`'_-------------                 
             ',---------------.__.,.-----------------                 
              '-------------''?*"?''----------------`                 
              '>------------------------------------                  
              'b`-----------------------------------                  
              <<,---------------------------------,-                  
               '*,-------------- --------------.--`                   
               -"*_----------------------------'d>-                   
           .o, --**>.o----------------------,-.**`-                   
         .@@@[ .-"*****_-_----------------o******--                   
         Q@@F`  -'******d*--------------,-******`--                   
        .@@F`   '-<*******-.-.,------.o*********---,_o@W,             
        ]@[--    --"***d***od*--****dbd********"----@@@@[             
      .d@@#,-    --'***"@o*************oWb****"-----'$@@&,            
    _oQ&@@@>    ----'"**]@@o********doQ@?****"--------"@@b            
 dQ@@@@@@@@b      '---**"$@@@@@@@@@@@@F"*****---------<]@@d,,         
@@@@@@@@@@@[,      ---'***V?""??VVV"********>---------<Q@@@@@&b,      
@@@@@@@@@@@@><      ---'*******************?----------d@@@@@@@@@F_    
@@@@@@@@@@@@[.      <----"****************"----------]Q@@@@@@@@@@@@bo_
@@@@@@@@@@@@@>-      -----'**************'----------.#@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@[>        ------**********`------------d@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@[        '-------"****?`-------------.Q@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@&,         ------'''''---------------d@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@o,         ------------------------.Q@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@[_         '----------------------d@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@[,          --------------------]@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@b-_          '----------------.Q@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@b-            '--------------Q@@@@@@@@@@@@@@@@@@@@@

You know you're doing a fun gig when you get to do things like the above on billable hours.

Full story: writing a test suite for reading data from eID cards. It makes sense to decode the JPEG data which you read from the card, so that you know there's no error in the lower-layer subroutines (which would result in corruption). And since we've decoded it anyway, why not show it in the test suite log? Right.

05 September, 2014 10:20AM

September 04, 2014

hackergotchi for Junichi Uekawa

Junichi Uekawa

Bluetooth network error.

Bluetooth network error. I think it's a network-manager feature to be able to use bluetooth tethering. I think it's a network-manager bug that when bluetooth tethering fails due to some error, it does not report that error. Yesterday I finally figured out what was going wrong after staring at hcidump. It was obvious once I did. I had reset my tablet, so the Bluetooth PIN was wrong. If only the GUI had told me that.
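
(For anyone hitting the same wall: hcidump comes from the bluez-hcidump package, and watching the HCI traffic while retrying the connection is as simple as the following.)

$ sudo hcidump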

04 September, 2014 09:19PM by Junichi Uekawa

I wanted to file a bug but Debian BTS doesn't seem to receive my SMTP mail for some reason.

I wanted to file a bug but Debian BTS doesn't seem to receive my SMTP mail for some reason. Somewhere between the MTA and the server something is wrong.

04 September, 2014 09:18PM by Junichi Uekawa

Joseph Bisch

My First Package

I got my first package uploaded to Debian this week. That package is winetricks. It was orphaned and I adopted it. Now the latest version (0.0+20140818+svn1202) is available in sid and should migrate to testing in nine days.

I moved the vcs from collab-maint to a personal repo, since I don’t have access to collab-maint.

I also have a sponsor for slowaes. It is also an orphaned package that I am adopting. The changes I made are more minor than those for winetricks. Besides adding myself as a maintainer, I just fixed some lintian warnings. Slowaes should be uploaded soon.

04 September, 2014 02:14PM

Jakub Wilk

Joys of East Asian encodings

In i18nspector I try to support all the encodings that were blessed by gettext, but it turns out to be more difficult than I anticipated:

$ roundtrip() { c=$(echo $1 | iconv -t $2); printf '%s -> %s -> %s\n' $1 $c $(echo $c | iconv -f "$2"); }

$ roundtrip ¥ EUC-JP
¥ -> \ -> \

$ roundtrip ¥ SHIFT_JIS
¥ -> \ -> ¥

$ roundtrip ₩ JOHAB
₩ -> \ -> ₩

Now let's do the same in Python:

$ python3 -q
>>> roundtrip = lambda s, e: print('%s -> %s -> %s' % (s, s.encode(e).decode('ASCII', 'replace'), s.encode(e).decode(e)))
>>> roundtrip('¥', 'EUC-JP')
¥ -> \ -> \
>>> roundtrip('¥', 'SHIFT_JIS')
¥ -> \ -> \
>>> roundtrip('₩', 'JOHAB')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <lambda>
UnicodeEncodeError: 'johab' codec can't encode character '\u20a9' in position 0: illegal multibyte sequence

So is 0x5C a backslash or a yen/won sign? Or both?

And what if 0x5C could be a second byte of a two-byte character? What could possibly go wrong?
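
Here is a concrete illustration of that hazard, sketched with GNU iconv (the exact error message may differ between implementations). In Shift_JIS the character 表 (U+8868) is encoded as the two bytes 0x95 0x5C, so anything that naively strips "backslashes" bytewise corrupts it:

$ echo 表 | iconv -t SHIFT_JIS | od -An -tx1
 95 5c 0a
$ echo 表 | iconv -t SHIFT_JIS | tr -d '\\' | iconv -f SHIFT_JIS
iconv: illegal input sequence at position 0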

04 September, 2014 12:01PM

hackergotchi for Adnan Hodzic

Adnan Hodzic

Debian PPA Utility

Debian remains my favorite distribution; however, there’s one thing missing, and that thing is called PPAs.

There were numerous discussions on this topic inside of Debian, but AFAIK without any visible movement. Thus, I decided to publish a utility I’ve been using for some time now.

PPA’s

Since their introduction, PPA’s have been exclusively connected to Ubuntu and its derivatives (Mint, Elementary, etc …). But over time, a number of interesting projects appeared whose whole development happens inside of PPA’s. To name a few, I’m talking about TLP, Geary, Oracle Java Installer, Elementary OS and so on … Some of these projects have been sitting in WNPP without much happening for a long time, e.g. TLP.

One option was to repackage these packages and then have them uploaded to Debian, or just go rogue and install them directly from their PPA’s. The title of this post might hint at which path I took.

In theory, adding Ubuntu packages to your Debian system is a bad idea, and adding their PPA’s is probably even worse. But I’ve been using a couple of these PPA’s (TLP, Geary, a couple of custom icon sets) on my personal/work boxes, and to be honest, have never had a single problem. Also, setting the Pinning priority to low for the PPA you added is never a bad idea.

Most of the PPA’s I use, are usually fairly simple packages with single binary and dependencies which are found in Debian itself. Of course, I don’t recommend adding PPA’s on production boxes, or even PPA’s such as GNOME3 Team PPA’s, but rather add Apt Pinning on your system and fetch those packages directly from Debian.

Debian PPA Utility

This is a very simple utility which adds an “add-apt-repository” binary script that allows you to add PPA’s on Debian. The code is available on GitHub and is licensed as GPLv3, so feel free to fork it, improve it, use it and abuse it.

How to use it?

Download/Build package

You can download my signed package (the source and changes files are in the same directory).

Or you can build your own by running "dpkg-buildpackage -uc -us" inside of the debian-ppa source directory.

Install/Add PPA’s

After you install the package, you’re able to run “add-apt-repository” and add PPA’s, e.g.:

sudo add-apt-repository ppa:linrunner/tlp


Currently, Debian PPA Utility only works on >= Wheezy.

At this point I have no plans to try pushing this utility into Debian, as I’m sure even this blog post will be labelled as heresy by many.

Update!

It was just pointed out to me that “add-apt-repository” is available in the “software-properties-common” package. However, the “add-apt-repository” binary present in this package, instead of adding Ubuntu codenames to your list file, will add Debian codenames, which without a change will make the whole PPA entry useless.

I believe codename handling is better in “Debian PPA Utility”. I admit my only mistake is that, instead of fixing things in the “software-properties-common” package, I made a completely new utility which aims to do the same thing.

Added “Conflicts/Replaces: software-properties-common” to the debian/control file.

Anyway, enjoy!

04 September, 2014 11:55AM by Adnan Hodzic

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

The problem of distributing applications

A few days ago I watched a Q/A session with Linus Torvalds at Debconf 14. One of Linus’s main complaints towards Linux distributions was the way that distributions end up using different versions of libraries than those used during application development, and the fact that it’s next to impossible to properly support all Linux distributions at the same time due to this kind of difference.

Warning, some internals ahead
And now I just discovered a new proposal of the systemd team that basically tries to address this: Revisiting how we put together Linux Systems.

They suggest making extensive use of btrfs subvolumes to host multiple variants of the /usr tree (which is supposed to contain all the invariant system code/data) that you could combine with multiple runtime/framework subvolumes thanks to filesystem namespaces, and make available to individual applications.
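
To make the mechanics concrete, here is a purely illustrative sketch; the subvolume naming scheme and the application name are made up, not the proposal’s actual scheme:

# one /usr variant, stored as a btrfs subvolume (needs root)
btrfs subvolume create /sysroot/usr:debian:amd64:8
# run an application in a private mount namespace, with that variant
# bind-mounted over /usr (myapp is a placeholder)
unshare --mount sh -c 'mount --bind /sysroot/usr:debian:amd64:8 /usr && exec myapp'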

This way of grouping libraries in “runtime subvolumes” reminds me a bit of the concepts of baserock (they are using git instead of btrfs) and while I was a bit dubious of all this (because it goes against quite a few of the principles of distribution integration) I’m beginning to believe that there’s room for both models to work together.

It would be nice if Debian could become the reference distribution that upstream developers use to develop against Linux. This would in turn mean that when upstreams distribute their applications in this new form, they will provide (or reference) Debian-based subvolumes ready for use by users (even those who are not using Debian as their main OS). And those subvolumes would be managed by the Debian project (probably automatically built from our collection of .debs).

We’re still quite far from this goal, but it will be interesting to see this idea mature and become reality. There are plenty of challenges facing us.


04 September, 2014 08:29AM by Raphaël Hertzog

Russell Coker

Inteltech/Clicksend SMS Script

USER=username
API_KEY=1234ABC
OUTPUTDIR=/var/spool/sms
LOG_SERVICE=local1

I’ve just written the below script to send SMS via the inteltech.com/clicksend.com service. It takes the above configuration in /etc/sms-pass.cfg, where the username is assigned via the clicksend web page and the API key is a long hex string that clicksend provides as a password. The LOG_SERVICE is which syslog facility to use for the log messages; on systems that are expected to send many messages I use "local1" and I use "user" for development systems.

I hope this is useful to someone, and if you have any ideas for improvement then please let me know.

#!/bin/sh
# $1 is destination number
# text is on standard input
# standard output gives message ID on success, and 0 is returned
# standard error gives error from server on failure, and 1 is returned

. /etc/sms-pass.cfg
OUTPUT=$OUTPUTDIR/out.$$
# read the message from stdin, turn whitespace into '+' for the URL and
# truncate to 159 characters
TEXT=`tr "[:space:]" + | cut -c 1-159`

logger -t sms -p $LOG_SERVICE.info "sending message to $1"
wget -O $OUTPUT "https://api.clicksend.com/http/v2/send.php?method=http&username=$USER&key=$API_KEY&to=$1&message=$TEXT" > /dev/null 2> /dev/null

if [ "$?" != "0" ]; then
  echo "Error running wget" >&2
  logger -t sms -p $LOG_SERVICE.err "failed to send message \"$TEXT\" to $1 - wget error"
  exit 1
fi

if ! grep -q ^.errortext.Success $OUTPUT ; then
  cat $OUTPUT >&2
  echo >&2
  ERR=$(grep ^.errortext $OUTPUT | sed -e s/^.errortext.// -e s/..errortext.$//)
  logger -t sms -p $LOG_SERVICE.err "failed to send message \"$TEXT\" to $1 - $ERR"
  rm $OUTPUT
  exit 1
fi

# extract the message ID from the <messageid> element of the response
ID=$(grep ^.messageid $OUTPUT | sed -e s/^.messageid.// -e s/..messageid.$//)
rm $OUTPUT

logger -t sms -p $LOG_SERVICE.info "sent message to $1 with ID $ID"
# report the message ID on stdout, as promised in the header comment
echo $ID
exit 0
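
Usage then looks like this, assuming you save the script as /usr/local/bin/sms and make it executable; the phone number and the returned message ID here are placeholders:

$ echo 'RAID array degraded on server3' | /usr/local/bin/sms 0412345678
12345678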

04 September, 2014 04:19AM by etbe

September 03, 2014

Stefano Zacchiroli

interview for the gnu linux setup

my setup, take #1

Among the various things I've caught up with during the summer, I've finally managed to set aside some time to answer a pending interview request for The [GNU/]Linux Setup: a blog run by Steven Ovadia that collects interviews about how people use GNU/Linux-based desktops.

In the interview I discuss my day-to-day work-flow, from GNOME Shell to Mutt, from Emacs to Notmuch, and the various glue code tools I've written to integrate them.

Enjoy!

Feedback is most welcome.

03 September, 2014 08:48AM