September 04, 2015

Guido Günther

Debian work in August 2015

Debian LTS

August was the fourth month I contributed to Debian LTS under the Freexian umbrella. In total I spent four hours working on:

  • pykerberos: Fixed a regression in DLA-265-1 affecting the default parameters of checkPassword(). This resulted in DLA-265-2.

  • wordpress: Craig Small updated wordpress to fix CVE-2015-2213 CVE-2015-5622 CVE-2015-5731 CVE-2015-5732 and CVE-2015-5734. I did some testing and released DLA-294-1.

Besides that, I triaged 9 CVEs to check if and how they affect oldoldstable, as part of my LTS front desk work.

Debconf 15 was a great opportunity to meet some of the other LTS contributors in person and to work on some of my packages:

Git-buildpackage

git-buildpackage gained buildpackage-rpm based on the work by Markus Lehtonen, and merging of mock support is hopefully around the corner.

Debconf had two gbp skill shares hosted by dkg and a BoF by myself. A summary is here. Integration with dgit (as discussed with Ian) looks doable, and I have parts of that on my todo list as well.

Among other things gbp import-orig gained a --merge-mode option so you can replace the upstream branches verbatim on your packaging branch but keep the contents of the debian/ directory.
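
For illustration, that could be invoked along these lines when importing a new upstream tarball (the mode value and the tarball name are assumptions for the example):

gbp import-orig --merge-mode=replace ../foo_1.0.orig.tar.gz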

Libvirt

I prepared an update for libvirt in Jessie fixing a crasher bug and QEMU error reporting. apparmor support now works out of the box in Jessie (thanks to intrigeri and Felix Geyer for that).

Speaking of apparmor, I learned enough at Debconf to use it by default now, so we will hopefully see less breakage in this area when new libvirt versions hit the archive. The bug count went down quite a bit and we have a new version of virt-manager in unstable now as well.

As usual I prepared the release candidates of libvirt 1.2.19 in experimental, and 1.2.19 final is now in unstable.

04 September, 2015 05:25PM

Julien Danjou

Data validation in Python with voluptuous

Continuing my post series on the tools I use these days in Python, this time I would like to talk about a library I really like, named voluptuous.

It's no secret that when a program receives data from the outside, handling it properly is a big deal. Indeed, most of the time your program has no guarantee that the incoming stream is valid and that it contains what is expected.

The robustness principle says you should be liberal in what you accept, though that is not always a good idea either. Whatever policy you choose, you need to process the incoming data and implement a policy that will work – lax or not.

That means that the program needs to look into the data received, check that it finds everything it needs, complete what might be missing (e.g. set some defaults), transform some data, and maybe reject the data in the end.

Data validation

The first step is to validate the data, which means checking that all the fields are there and all the types are right or understandable (parseable). Voluptuous provides a single interface for all that, called a Schema.

>>> from voluptuous import Schema
>>> s = Schema({
...     'q': str,
...     'per_page': int,
...     'page': int,
... })
>>> s({"q": "hello"})
{'q': 'hello'}
>>> s({"q": "hello", "page": "world"})
voluptuous.MultipleInvalid: expected int for dictionary value @ data['page']
>>> s({"q": "hello", "unknown": "key"})
voluptuous.MultipleInvalid: extra keys not allowed @ data['unknown']


The argument to voluptuous.Schema should be the data structure that you expect. Voluptuous accepts any kind of data structure, so it could also be a simple string or an array of dicts of arrays of integers. You get the idea. Here it's a dict with a few keys that, if present, should be validated as certain types. By default, Voluptuous does not raise an error if some keys are missing. However, extra keys in a dict are invalid by default. If you want to allow extra keys, it is possible to specify that.

>>> from voluptuous import Schema
>>> s = Schema({"foo": str}, extra=True)
>>> s({"bar": 2})
{"bar": 2}


It is also possible to make some keys mandatory.

>>> from voluptuous import Schema, Required
>>> s = Schema({Required("foo"): str})
>>> s({})
voluptuous.MultipleInvalid: required key not provided @ data['foo']
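
Markers like Required can also carry a default, which ties in with the "complete what might be missing" step mentioned earlier. A minimal sketch, assuming the default= keyword of the Required marker:

>>> from voluptuous import Schema, Required
>>> s = Schema({Required("per_page", default=10): int})
>>> s({})
{'per_page': 10}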


You can create custom data types very easily. Voluptuous data types are actually just functions that are called with one argument, the value, and that should either return the value or raise an Invalid or ValueError exception.

>>> from voluptuous import Schema, Invalid
>>> def StringWithLength5(value):
...     if isinstance(value, str) and len(value) == 5:
...         return value
...     raise Invalid("Not a string with 5 chars")
...
>>> s = Schema(StringWithLength5)
>>> s("hello")
'hello'
>>> s("hello world")
voluptuous.MultipleInvalid: Not a string with 5 chars


Most of the time though, there is no need to create your own data types. Voluptuous provides logical operators that, combined with a few other provided primitives such as voluptuous.Length or voluptuous.Range, can create a large range of validation schemes.

>>> from voluptuous import Schema, Length, All
>>> s = Schema(All(str, Length(min=3, max=5)))
>>> s("hello")
"hello"
>>> s("hello world")
voluptuous.MultipleInvalid: length of value must be at most 5
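
voluptuous.Range works the same way for numeric bounds. A minimal sketch (the exact error wording shown is an approximation):

>>> from voluptuous import Schema, Range
>>> s = Schema(Range(min=1, max=10))
>>> s(5)
5
>>> s(12)
voluptuous.MultipleInvalid: value must be at most 10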


The voluptuous documentation has a good set of examples that you can check to have a good overview of what you can do.

Data transformation

What's important to remember is that each data type you use is a function that is called and returns a value, if the value is considered valid. The returned value is what is actually used and returned after the schema validation:

>>> import uuid
>>> from voluptuous import Schema
>>> def UUID(value):
...     return uuid.UUID(value)
...
>>> s = Schema({"foo": UUID})
>>> data_converted = s({"foo": "uuid?"})
voluptuous.MultipleInvalid: not a valid value for dictionary value @ data['foo']
>>> data_converted = s({"foo": "8B7BA51C-DFF5-45DD-B28C-6911A2317D1D"})
>>> data_converted
{'foo': UUID('8b7ba51c-dff5-45dd-b28c-6911a2317d1d')}


By defining a custom UUID function that converts a value to a UUID, the schema converts the string passed in the data to a Python UUID object – validating the format at the same time.

Note a little trick here: it's not possible to use uuid.UUID directly in the schema, otherwise Voluptuous would check that the data is actually an instance of uuid.UUID:

>>> from voluptuous import Schema
>>> s = Schema({"foo": uuid.UUID})
>>> s({"foo": "8B7BA51C-DFF5-45DD-B28C-6911A2317D1D"})
voluptuous.MultipleInvalid: expected UUID for dictionary value @ data['foo']
>>> s({"foo": uuid.uuid4()})
{'foo': UUID('60b6d6c4-e719-47a7-8e2e-b4a4a30631ed')}


And that's not what is wanted here.

That mechanism is really neat to transform, for example, strings to timestamps.

>>> import datetime
>>> from voluptuous import Schema
>>> def Timestamp(value):
...     return datetime.datetime.strptime(value, "%Y-%m-%dT%H:%M:%S")
...
>>> s = Schema({"foo": Timestamp})
>>> s({"foo": '2015-03-03T12:12:12'})
{'foo': datetime.datetime(2015, 3, 3, 12, 12, 12)}
>>> s({"foo": '2015-03-03T12:12'})
voluptuous.MultipleInvalid: not a valid value for dictionary value @ data['foo']


Recursive schemas

So far, Voluptuous has one limitation: it has no direct support for recursive schemas. The simplest way to circumvent that is to use another function as an indirection.

>>> from voluptuous import Schema, Any
>>> def _MySchema(value):
...     return MySchema(value)
...
>>> MySchema = Schema({"foo": Any("bar", _MySchema)})
>>> MySchema({"foo": {"foo": "bar"}})
{'foo': {'foo': 'bar'}}
>>> MySchema({"foo": {"foo": "baz"}})
voluptuous.MultipleInvalid: not a valid value for dictionary value @ data['foo']['foo']


Usage in REST API

I started to use Voluptuous to validate data in the REST API provided by Gnocchi. So far it has been a really good tool, and we've been able to create a complete REST API that is very easy to validate on the server side. I would definitely recommend it for that. It blends easily with any Web framework.

One of the upsides compared to solutions like JSON Schema is the ability to create or re-use your own custom data types while converting values at validation time. It is also very Pythonic and extensible – it's pretty great to use for all of that. And it's not tied to any serialization format.

On the other hand, JSON Schema is language agnostic and is serializable itself as JSON. That makes it easy to be exported and provided to a consumer so it can understand the API and validate the data potentially on its side.

04 September, 2015 12:30PM by Julien Danjou

Norbert Preining

How to rename a local OfflineIMAP managed folder

For quite some time now I have been using OfflineIMAP to keep the mailboxes from various servers locally available on my machine. I more or less followed the excellent guide A unix style mail setup, but didn't use folder name translation right from the beginning. So I got stuck with four folders named INBOX: acc1/INBOX, acc2/INBOX, etc. This wouldn't be a big problem, but when using Mutt with the folder sidebar patch (as available in the mutt-patched package in Debian), only the INBOX part is shown.

So I decided to rename all the local folders to the scheme described in the above guide. At the end I want to have a folder layout as follows:

  Maildir/acc1/acc1
  Maildir/acc1/acc1.folder1
  Maildir/acc1/acc1.folder2

where acc1 corresponds to the INBOX of the remote account acc1, and the others are folders on the remote end.

Unfortunately, renaming is not supported in offlineimap; at best, it would simply download everything again! So I checked what needs to be done, and it turned out to be not as complicated as I had thought. The necessary steps are:

  • rename the actual maildir folders
  • rename the status files in ~/.offlineimap/Account-acc1/LocalStatus/
  • rename the status files in ~/.offlineimap/Repository-acc1local/FolderValidity/
  • add the correct nametrans functions to your configuration file
  • do a dry-run test

To make this more specific, let us assume I want to reorganize the account acc1 in this way, and there are two folders synced. That is, before we start the Maildir folder contains:

Maildir/acc1/INBOX
Maildir/acc1/folder1
Maildir/acc1/folder2

and we want to achieve that the layout is

Maildir/acc1/acc1
Maildir/acc1/acc1.folder1
Maildir/acc1/acc1.folder2

Since both the LocalStatus files and the FolderValidity files are named in the same way as the folders, the following trivial script does all the necessary renaming for this simple case:

for d in ~/Maildir/acc1 ~/.offlineimap/Account-acc1/LocalStatus ~/.offlineimap/Repository-acc1local/FolderValidity ; do
  mv $d/INBOX $d/acc1
  mv $d/folder1 $d/acc1.folder1
  mv $d/folder2 $d/acc1.folder2
done

Finally, the correct name translation has to be set up by adding nametrans rules to the configuration. The functions oimaptransfolder_acc1 and localtransfolder_acc1 as defined in the above tutorial can be used as is; they go into the helper script and only need to be referenced from the respective Repository sections in the configuration file.
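
For reference, here is an approximation of what such helper functions could look like – a sketch based on the tutorial's approach, not a verbatim copy (the file containing them is referenced from the configuration via the pythonfile option):

def oimaptransfolder_acc1(foldername):
    # remote -> local: INBOX becomes "acc1", any other folder "acc1.<name>"
    if foldername == "INBOX":
        return "acc1"
    return "acc1." + foldername.replace("/", ".")

def localtransfolder_acc1(foldername):
    # local -> remote: the inverse of the mapping above
    if foldername == "acc1":
        return "INBOX"
    return foldername.replace("acc1.", "", 1).replace(".", "/")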

After having done the necessary renaming, what remains is to give it a good test. Best way to do it is with

offlineimap --dry-run -o -a acc1

which runs the synchronization in dry-run mode (so nothing is actually copied), only once (-o), and only for the account acc1 (-a acc1).

If nothing strange shows up during this test run – such as many emails being deleted on either end – then you are fine.

Enjoy.

04 September, 2015 12:18PM by Norbert Preining

Sven Hoexter

wildcard SubjectAlternativeNames

After my experiment with a few more SANs than usual, I received the advice that multiple wildcards should in theory work as well. In practice I could not get something like 'DNS:*.*.sven.stormbind.net' to be accepted by any decent browser. The other obstacle would be finding a CA to sign it for you. As far as I can tell, everyone sticks to the wildcard definition given in the CAB BR.

Wildcard Certificate: A Certificate containing an asterisk (*) in the left-most position of any of the
Subject Fully-Qualified Domain Names contained in the Certificate.

That said, I could reduce the number of SANs from 1960 to 90 by adding a wildcard SAN per person. That's also a number small enough for Internet Explorer not to fail during the handshake. I had rejected that option initially because I thought that multiple wildcards on one certificate were not accepted by browsers. In practice it just seems to be an option rarely available on the market.

04 September, 2015 12:17PM

Dirk Eddelbuettel

RcppArmadillo 0.5.500.2.0

Once again time for the monthly upstream Armadillo update -- version 5.500.2 was released earlier today by Conrad. And a new and matching RcppArmadillo release 0.5.500.2.0 is already on CRAN and will go to Debian shortly.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab.

This release contains mostly bug fixes and some internal code refactoring:

Changes in RcppArmadillo version 0.5.500.2.0 (2015-09-03)

  • Upgraded to Armadillo 5.500.2 ("Molotov Cocktail")

    • expanded object constructors and generators to handle size() based specification of dimensions

    • faster handling of submatrix rows

    • faster clamp()

    • fixes for handling sparse matrices

Courtesy of CRANberries, there is also a diffstat report for the most recent CRAN release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 September, 2015 01:02AM

September 03, 2015

Vincent Fourmond

Releases 1.19.1 of Tioga and 0.13.1 of ctioga2

I've just released the versions 1.19.1 of Tioga and 0.13.1 of ctioga2. They both fix installation problems with recent versions of Ruby. Update as usual, though that isn't strictly necessary if you've managed to install them properly.
~ gem update tioga ctioga2

03 September, 2015 07:29PM by Vincent Fourmond (noreply@blogger.com)

John Goerzen

There’s still a chance to save WiFi

You may not know it, but wifi is under assault in the USA due to proposed FCC regulations about modifications to devices with modular radios. In short, these regulations would make it illegal for vendors to sell devices with firmware that users can replace. This is of concern to everyone, because Wifi routers are notoriously buggy and insecure. It is also of special concern to amateur radio hobbyists, due to the use of these devices in the Amateur Radio Service (FCC Part 97).

I submitted a comment to the FCC about this, which I am pasting in here. This provides a background and summary of the issues for those that are interested. Here it is:

My comment has two parts: one, the impact on the Amateur Radio service; and two, the impact on security. Both pertain primarily to the 802.11 (“Wifi”) services typically operating under Part 15.

The Amateur Radio Service (FCC part 97) has long been recognized by the FCC and Congress as important to the nation. Through it, amateurs contribute to scientific knowledge, learn skills that bolster the technological competitiveness of the United States, and save lives through their extensive involvement in disaster response.

Certain segments of the 2.4GHz and 5GHz Wifi bands authorized under FCC Part 15 also fall under the frequencies available to licensed amateurs under FCC Part 97 [1].

By scrupulously following the Part 97 regulations, many amateur radio operators are taking devices originally designed for Part 15 use and modifying them for legal use under the Part 97 Amateur Radio Service. Although the uses are varied, much effort is being devoted to fault-tolerant mesh networks that provide high-speed multimedia communications in response to a disaster, even without the presence of any traditional infrastructure or Internet backbone. One such effort [2] causes users to replace the firmware on off-the-shelf Part 15 Wifi devices, reconfiguring them for proper authorized use under Part 97. This project has many vital properties, particularly the self-discovery of routes between nodes and self-healing nature of the mesh network. These features are not typically available in the firmware of normal Part 15 devices.

It should also be noted that there is presently no vendor of Wifi devices that operate under Part 97 out of the box. The only route available to amateurs is to take Part 15 devices and modify them for Part 97 use.

Amateur radio users of these services have been working for years to make sure they do not cause interference to Part 15 users [3]. One such effort takes advantage of the modular radio features of consumer Wifi gear to enable communication on frequencies that are within the Part 97 allocation, but outside (though adjacent) to the Part 15 allocation. For instance, the chart at [1] identifies frequencies such as 2.397GHz or 5.660GHz that will never cause interference to Part 15 users because they lie entirely outside the Part 15 Wifi allocation.

If the FCC prevents the ability of consumers to modify the firmware of these devices, the following negative consequences will necessarily follow:

1) The use of high-speed multimedia or mesh networks in the Amateur Radio service will be sharply curtailed, relegated to only outdated hardware.

2) Interference between the Amateur Radio service — which may use higher power or antennas with higher gain — and Part 15 users will be expanded, because Amateur Radio service users will no longer be able to intentionally select frequencies that avoid Part 15.

3) The culture of inventiveness surrounding wireless communication will be curtailed in an important way.

Besides the impact on the Part 97 Amateur Radio Service, I also wish to comment on the impact to end-user security. There have been a terrible slew of high-profile situations where very popular consumer Wifi devices have had incredible security holes. Vendors have often been unwilling to address these issues [4].

Michael Horowitz maintains a website tracking security bugs in consumer wifi routers [5]. Sadly these bugs are both severe and commonplace. Within just the last month, various popular routers have been found vulnerable to remote hacking [6] and platforms for launching Distributed Denial-of-Service (DDoS) attacks [7]. These impacted multiple models from multiple vendors. To make matters worse, most of these issues should have never happened in the first place, and were largely the result of carelessness or cluelessness on the part of manufacturers.

Consumers should not be at the mercy of vendors to fix their own networks, nor should they be required to trust unauditable systems. There are many well-regarded efforts to provide better firmware for Wifi devices, which still keep them operating under Part 15 restrictions. One is OpenWRT [8], which supports a wide variety of devices with a system built upon a solid Linux base.

Please keep control of our devices in the hands of consumers and amateurs, for the benefit of all.

References:

[1] https://en.wikipedia.org/wiki/High-speed_multimedia_radio

[2] http://www.broadband-hamnet.org/just-starting-read-this.html

[3] http://www.qsl.net/kb9mwr/projects/wireless/modify.html

[4] http://www.tomsitpro.com/articles/netgear-routers-security-hole,1-2461.html

[5] http://routersecurity.org/bugs.php

[6] https://www.kb.cert.org/vuls/id/950576

[7] https://www.stateoftheinternet.com/resources-cloud-security-2015-q2-web-security-report.html

[8] https://openwrt.org/

03 September, 2015 07:12PM by John Goerzen

Petter Reinholdtsen

Book cover for the Free Culture book finally done

Creating a good looking book cover proved harder than I expected. I wanted to create a cover looking similar to the original cover of the Free Culture book we are translating to Norwegian, and I wanted it in vector format for high resolution printing. But my inkscape knowledge was not nearly good enough to pull that off.

But thanks to the great inkscape community, I was able to wrap up the cover yesterday evening. I asked on the #inkscape IRC channel on Freenode for help and clues, and Marc Jeanmougin (Mc-) volunteered to try to recreate it based on the PDF of the cover from the HTML version. Not only did he create an SVG document with the original and his vector version side by side, he even provided an instruction video explaining how he did it. The video is not easy to follow for an untrained inkscape user, though: it is a recording of how he did it, and he is obviously very experienced, as the menu selections are very quick. He also mentioned on IRC that he used some keyboard shortcuts that can't be seen in the video. Still, it gives a good idea of the inkscape operations to use to create the stripes with the embossed copyright sign in the center.

I took his SVG file, copied the vector image and re-sized it to fit on the cover I was drawing. I am happy with the end result, and the current English version looks like this:

I am not quite sure about the text on the back, but guess it will do. I picked three quotes from the official site for the book, and hope it will work to trigger the interest of potential readers. The Norwegian cover will look the same, but with the texts and bar code replaced with the Norwegian version.

The book is very close to being ready for publication, and I expect to upload the final draft to Lulu in the next few days and order a final proof reading copy to verify that everything looks like it should before allowing everyone to order their own copy of Free Culture, in English or Norwegian Bokmål. I'm waiting to give the productive proof readers a chance to complete their work.

03 September, 2015 07:00PM

Steinar H. Gunderson

Intel GPU memory bandwidth

Two days ago, I commented that I was seeing only 1/10th or so of the theoretical bandwidth my Intel GPU should have been able to push, and asked if anyone could help me figure out the discrepancy. Now, several people (including the helpful people at #intel-gfx) have helped me understand more of the complete picture, so I thought I'd share:

First of all, my application was pushing (more or less) a 1024x576 texture from one FBO to another, in fp16 RGBA, so eight bytes per pixel. This was measured to take 1.3 ms (well, sometimes longer and sometimes shorter); 1024x576x8 bytes / 1.3 ms = 3.6 GB/sec. Given that the spec sheet for my CPU says 25.6 GB/sec, that's the basis for my “about 1/10th” number. (There's no separate number for the GPU bandwidth, since the CPU and GPU share the memory subsystem and even the bottom level of the cache.)

But it turns out these numbers don't count each direction separately, as I had thought; they cover read and write combined. So I need to include the write bandwidth in the equation as well (and I was writing to a 1280x720 framebuffer). With that in the picture, the number goes up to 9.3 GB/sec. In synthetic benchmarks, I was able to push this to 9.8, but no further.
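
A quick back-of-the-envelope check of those numbers (sizes and timing as given above; the calculation is mine, not from the original measurements):

read_bytes = 1024 * 576 * 8    # fp16 RGBA source texture
write_bytes = 1280 * 720 * 8   # fp16 RGBA destination framebuffer
seconds = 1.3e-3               # measured transfer time
print((read_bytes + write_bytes) / seconds / 1e9)  # prints ~9.3, i.e. GB/sec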

So we're still a bit over a factor 2x off. But lo and behold, the quoted number assumes dual memory channels, and Lenovo has fitted the X240 with only a single RAM chip, with no possibility of adding more! So the theoretical number is 12.8 GB/sec, not 25.6. ~75% of the theoretical memory bandwidth is definitely within what I'd call reasonable.

So, to sum up: Me neglecting to count the writes, and Lenovo designing the X240 with a memory subsystem reminiscent of the Amiga. Thanks to all developers!

03 September, 2015 10:30AM

September 02, 2015

Luca Falavigna

Resource control with systemd

I’m receiving more requests for upload accounts to the Deb-o-Matic servers lately (yay!), but that means the resources need to be monitored and shared between the build daemons to prevent server lockups.

My servers are running systemd, so I decided to give systemd.resource-control a try. My goal was to assign lower CPU shares to the build processes (debomatic itself, sbuild, and all the related tools), in order to avoid blocking other important system services from being spawned when necessary.

I created a new slice, and set a lower CPU share weight:
$ cat /etc/systemd/system/debomatic.slice
[Slice]
CPUAccounting=true
CPUShares=512
$

Then, I assigned the slice to the service unit file controlling the debomatic daemons by adding the Slice=debomatic.slice option under the [Service] section.
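
For reference, the addition to the service unit looks roughly like this (a minimal sketch; the real unit of course contains more directives):

[Service]
Slice=debomatic.slice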

That was not enough, though: as the output of systemd-cgls showed, some processes were assigned to the user slice instead, which groups all the processes spawned by users.

This is probably because schroot spawns a login shell, and systemd considers it to belong to a different process group. So, I had to launch the command systemctl set-property user.slice CPUShares=512, so that all processes belonging to user.slice receive the same share as the debomatic ones. I consider this a workaround, and I'm open to suggestions on how to properly solve this issue :)

I’ll try to explore more options in the coming days, so I can improve my knowledge of systemd a little bit more :)


02 September, 2015 04:31PM by Luca Falavigna

September 01, 2015

Steinar H. Gunderson

Intensity Shuttle Linux driver

I've released bmusb, a free driver for the Intensity Shuttle, a $199 HDMI/component/S-video/composite capture card. (They also have SDI versions that are somewhat more expensive, but I don't have those and haven't tested.) Unfortunately newer firmwares have blocked out 1080p60, but I've done stable 720p60 capture on my X240; for instance, here's a proof-of-concept video of capture from my PS3 being sent through Movit, my library for high-quality, high-performance video filters.

On a related note, does anyone know if Intel's GPU division has a devrel point of contact, like ATI and NVIDIA have? I'm having problems with my Haswell (gen6) mobile GPU getting only 1/10 or so of the main memory bandwidth the specs say (3 GB/sec vs. 25.6 GB/sec), and I don't really understand why; the preliminary analysis is that it somehow can't deal with high latency.

01 September, 2015 09:54PM

Enrico Zini

if-you-know-a-browser-developer

If you happen to know a browser developer...

Do you happen to know a developer of Firefox or Chrome or some other mainstream browser?

If so, can you please talk to them about our experiments with Client Certificate authentication in Debian?

Client Certificate authentication rocks; with just a couple of little tweaks in the interface, it would be pretty close to perfect.

Visiting sites without using a certificate

If I want to browse a site unauthenticated instead of using a certificate, at the moment I can hit "Cancel" on the certificate popup menu, and it works nicely. I feel quite confused when I do that, though, because it's not clear to me if I am canceling use of certificates, or canceling the visit to the site.

Can you please change the wording on the Cancel button to something more descriptive?

See/change current certificate selection

My top wish is, once I choose to use (or not use) a certificate for a site, to be able to see which certificate I'm using and possibly change it.

So far I have not found a way to see which certificate I'm using, and the browser will remember the choice until it is closed and reopened.

At the moment I can use a Private or Incognito window to switch identities or to stop authenticated access and continue anonymously, and that helps me immensely.

I think however that the ultimate solution could be to have the https padlock popup show an indication of which certificate is currently being used, and offer a way to re-trigger certificate selection. That would be so cool.

Also, once the certificate choice can be seen and changed at any time, it could just get remembered so that sites can be visited again without any prompts, even after the browser has been closed and reopened. That would be, to me, the ultimate convenience.

Thanks! <3

Thank you very much for all the work you have already put into this: I have been told that a few years ago using client certificates was unthinkable, and now it seems to be down to just a couple of papercuts. And SPKAC/keygen seriously rocks!

I have been constantly impressed by how well this all works right now.

01 September, 2015 03:25PM

Lunar

Reproducible builds: week 18 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

  • Bdale Garbee uploaded tar/1.28-1 which includes the --clamp-mtime option. Patch by Lunar.

Aurélien Jarno uploaded glibc/2.21-0experimental1 which will fix the issue where locales-all did not behave exactly like locales despite having it in the Provides field.

Lunar rebased the pu/reproducible_builds branch for dpkg on top of the released 1.18.2. This made visible an issue with udebs and automatically generated debug packages.

The summary from the meeting at DebConf15 between ftpmasters, dpkg maintainers and reproducible builds folks has been posted to the relevant mailing lists.

Packages fixed

The following 70 packages became reproducible due to changes in their build dependencies: activemq-activeio, async-http-client, classworlds, clirr, compress-lzf, dbus-c++, felix-bundlerepository, felix-framework, felix-gogo-command, felix-gogo-runtime, felix-gogo-shell, felix-main, felix-shell-tui, felix-shell, findbugs-bcel, gco, gdebi, gecode, geronimo-ejb-3.2-spec, git-repair, gmetric4j, gs-collections, hawtbuf, hawtdispatch, jack-tools, jackson-dataformat-cbor, jackson-dataformat-yaml, jackson-module-jaxb-annotations, jmxetric, json-simple, kryo-serializers, lhapdf, libccrtp, libclaw, libcommoncpp2, libftdi1, libjboss-marshalling-java, libmimic, libphysfs, libxstream-java, limereg, maven-debian-helper, maven-filtering, maven-invoker, mochiweb, mongo-java-driver, mqtt-client, netty-3.9, openhft-chronicle-queue, openhft-compiler, openhft-lang, pavucontrol, plexus-ant-factory, plexus-archiver, plexus-bsh-factory, plexus-cdc, plexus-classworlds2, plexus-component-metadata, plexus-container-default, plexus-io, pytone, scolasync, sisu-ioc, snappy-java, spatial4j-0.4, tika, treeline, wss4j, xtalk, zshdb.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #797027 on zyne by Chris Lamb: switch to pybuild to get rid of .pyc files.
  • #797180 on python-doit by Chris Lamb: sort output when creating completion script for bash and zsh.
  • #797211 on apt-dater by Chris Lamb: fix implementation of SOURCE_DATE_EPOCH.
  • #797215 on getdns by Chris Lamb: fix call to dpkg-parsechangelog in debian/rules.
  • #797254 on splint by Chris Lamb: support SOURCE_DATE_EPOCH for version string.
  • #797296 on shiro by Chris Lamb: remove username from build string.
  • #797408 on splitpatch by Reiner Herrmann: use SOURCE_DATE_EPOCH to set manpage date.
  • #797410 on eigenbase-farrago by Reiner Herrmann: sets the comment style to scm-safe which tells ResourceGen that no timestamps should be included.
  • #797415 on apparmor by Reiner Herrmann: sorting the list of capabilities with the locale set to C.
  • #797419 on resiprocate by Reiner Herrmann: set the embedded hostname to a static value.
  • #797427 on jam by Reiner Herrmann: sorting with the locale set to C.
  • #797430 on ii-esu by Reiner Herrmann: sort source list using C locale.
  • #797431 on tatan by Reiner Herrmann: sort source list using C locale.

Chris Lamb also noticed that binaries shipped with libsilo-bin did not work.

Documentation update

Chris Lamb and Ximin Luo assembled a proper specification for SOURCE_DATE_EPOCH in the hope of convincing more upstreams to adopt it. Thanks to Holger it is published under a non-Debian domain name.

Lunar documented the easiest way to solve issues with file ordering and timestamps in tarballs, using features that came with tar/1.28-1.
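
For instance, a deterministic tarball can be produced along these lines (a sketch combining tar 1.28's --sort=name with the new --clamp-mtime; file names are made up):

$ tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" --clamp-mtime -cf product.tar build-dir/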

Some examples on how to use SOURCE_DATE_EPOCH have been improved to support systems without GNU date.

reproducible.debian.net

armhf is finally being tested, which also means the remote building of Debian packages finally works! This paves the way to perform the tests on even more architectures and doing variations on CPU and date. Some packages even produce the same binary Arch:all packages on different architectures (1, 2). (h01ger)

Tests for FreeBSD are finally running. (h01ger)

As the gcc5 transition seems to have cooled off, we are again scheduling sid more often than testing on amd64. (h01ger)

disorderfs has been built and installed on all build nodes (amd64 and armhf). One issue related to permissions for root and unprivileged users needs to be solved before disorderfs can be used on reproducible.debian.net. (h01ger)

strip-nondeterminism

Version 0.011-1 has been released on August 29th. The new version updates dh_strip_nondeterminism to match recent changes in debhelper. (Andrew Ayer)

disorderfs

disorderfs, the new FUSE filesystem to ease testing of filesystem-related variations, is now almost ready to be used. Version 0.2.0 adds support for extended attributes. Since then Andrew Ayer also added support to reverse directory entries instead of shuffling them, and arbitrary padding to the number of blocks used by files.

Package reviews

142 reviews have been removed, 48 added and 259 updated this week.

Santiago Vila renamed the not_using_dh_builddeb issue into varying_mtimes_in_data_tar_gz_or_control_tar_gz to align better with other tag names.

New issue identified this week: random_order_in_python_doit_completion.

37 FTBFS issues have been reported by Chris West (Faux) and Chris Lamb.

Misc.

h01ger gave a talk at FrOSCon on August 23rd. Recordings are already online.

These reports are being reviewed and enhanced every week by many people hanging out on #debian-reproducible. Huge thanks!

01 September, 2015 12:51PM

Raphaël Hertzog

My Free Software Activities in August 2015

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 6.5 hours on Debian LTS. In that time I did the following:

  • Prepared and released DLA-301-1 fixing 2 CVEs in python-django.
  • Did one week of “LTS Frontdesk” with CVE triaging. I pushed 11 commits to the security tracker.

Apart from that, I gave a talk about Debian LTS at DebConf 15 in Heidelberg and also coordinated a work session to discuss our plans for Wheezy. Have a look at the video recordings:

DebConf 15

I attended DebConf 15 with great pleasure after having missed DebConf 14 last year. While I did not do lots of work there, I participated in many discussions and I certainly came back with a renewed motivation to work on Debian. That’s always good. :-)

For the concrete work I did during DebConf, I can only claim two schroot uploads to fix the lack of support of the new “overlay” filesystem that replaces “aufs” in the official Debian kernel, and some Distro Tracker work (fixing an issue that some people had when they were logged in via Debian’s SSO).

While the numerous discussions I had during DebConf can’t be qualified as “work”, they certainly contribute to build up work plans for the future:

As a Kali developer, I attended multiple sessions related to derivatives (notably the Debian Derivatives Panel).

I was also interested by the “Debian in the corporate IT” BoF led by Michael Meskes (Credativ’s CEO). He pointed out a number of problems that corporate users might have when they first consider using Debian and we will try to do something about this. Expect further news and discussions on the topic.

Martin Krafft, Luca Filipozzi, and I had a discussion with the Debian Project Leader (Neil) about how to revive/transform Debian's Partner program. Nothing is fleshed out yet, but at least the process initiated by the former DPL (Lucas) is moving forward again.

Other Debian work

Sponsorship. I sponsored an NMU of pep8 by Daniel Stender as it was a requirement for prospector… which I also sponsored since all the required dependencies are now available in Debian. \o/

Packaging. I NMUed libxml2 2.9.2+really2.9.1+dfsg1-0.1 fixing 3 security issues and an RC bug that was breaking publican. Since there had been no upstream fix for more than 8 months, I went back to the former version 2.9.1. It's in line with the new requirement of the release managers… a package in unstable should migrate to testing reasonably quickly; it's not acceptable to keep it unfixed for months. With this annoying bug fixed, I could again upload a new upstream release of publican… so I prepared and uploaded 4.3.2-1. It was my first source-only upload. This release was more work than I expected and I filed no less than 3 bugs upstream (new bash-completion install path, request to provide sources of a minified javascript file, drop a .po file for an invalid language code).

GPG issues with smartcard. Back from DebConf, when I wanted to sign some keys, I stumbled again upon the problem which makes it impossible for me to use my two smartcards one after the other without first deleting the stubs for the private key. It's not a new issue but I decided that it was time to report it upstream, so I did: #2079 on bugs.gnupg.org. Some research helped me to find a way to work around the problem. Later in the month, after a dist-upgrade and a reboot, I was no longer able to use my smartcard as an SSH authentication key… again it was already reported but there was no clear analysis, so I tried to do my own and added the results of my investigation in #795368. It looks like the culprit is pinentry-gnome3 not working when started by the gpg-agent, which is started before the DBUS session. The simple fix is to restart the gpg-agent in the session… but I have no idea yet what the proper fix should be (letting systemd manage the graphical user session and start gpg-agent would be my first answer, but that doesn't solve the issue for users of other init systems, so it's not satisfying).

Distro Tracker. I merged two patches from Orestis Ioannou fixing some bugs tagged newcomer. There are more such bugs (I even filed two: #797096 and #797223), go grab them and do a first contribution to Distro Tracker like Orestis just did! I also merged a change from Christophe Siraut who presented Distro Tracker at DebConf.

I implemented in Distro Tracker the new authentication based on SSL client certificates that was recently announced by Enrico Zini. It's working nicely, and this authentication scheme is far easier to support. Good job, Enrico!

tracker.debian.org broke during DebConf, it stopped being updated with new data. I tracked this down to a problem in the archive (see #796892). Apparently Ansgar Burchardt changed the set of compression tools used on some jessie repositories, replacing bz2 by xz. He dropped the old Packages.bz2 but missed some Sources.bz2 files, which were thus stale… and APT reported “Hashsum mismatch” on the uncompressed content.

Misc. I pushed some small improvements to my Salt formulas: schroot-formula and sbuild-formula. They will now auto-detect which overlay filesystem is available with the current kernel (previously “aufs” was hardcoded).

Thanks

See you next month for a new summary of my activities.

01 September, 2015 11:49AM by Raphaël Hertzog

Bits from Debian

New Debian Developers and Maintainers (July and August 2015)

The following contributors got their Debian Developer accounts in the last two months:

  • Gianfranco Costamagna (locutusofborg)
  • Graham Inggs (ginggs)
  • Ximin Luo (infinity0)
  • Christian Kastner (ckk)
  • Tianon Gravi (tianon)
  • Iain R. Learmonth (irl)
  • Laura Arjona Reina (larjona)

The following contributors were added as Debian Maintainers in the last two months:

  • Senthil Kumaran
  • Riley Baird
  • Robie Basak
  • Alex Muntada
  • Johan Van de Wauw
  • Benjamin Barenblat
  • Paul Novotny
  • Jose Luis Rivero
  • Chris Knadle
  • Lennart Weller

Congratulations!

01 September, 2015 11:45AM by Ana Guerrero López

Russ Allbery

Review: Kanban

Review: Kanban, by David J. Anderson

Publisher: Blue Hole
Copyright: 2010
ISBN: 0-9845214-0-2
Format: Trade paperback
Pages: 240

Another belated review, this time of a borrowed book. Which I can now finally return! A delay in the review of this book might be a feature if I had actually used it for, as the subtitle puts it, successful evolutionary change in my technology business. Sadly, I haven't, so it's just late.

So, my background: I've done a lot of variations of traditional project management for IT projects (both development and more operational ones), both as a participant and as a project manager. (Although I've never done the latter as a full-time job, and have no desire to do so.) A while back at Stanford, my team adopted Agile, specifically Scrum, so I did a fair bit of research about Scrum including a couple of training courses. Since then, at Dropbox, I've used a few different variations on a vaguely Agile-inspired planning process, although it's not particularly consistent with any one system.

I've been hearing about Kanban for a while and have friends who swear by it, but I only had a vague idea of how it worked. That seemed like a good excuse to read a book.

And Anderson's book is a good one if, like me, you're looking for a basic introduction. It opens with a basic description and definition, talks about motivation and the expected benefits, and then provides a detailed description of Kanban as a system. The tone is on the theory side, written using the terminology of (I presume) management theory and operations management, areas about which I know almost nothing, but the terminology wasn't so heavy as to make the book hard to read. Anderson goes into lots of detail, and I thought he did a good job of distinguishing between basic principles, optional techniques, and variations that may be appropriate for particular environments.

If you're also not familiar, the basic concept of Kanban is to organize work using an in-progress queue. It eschews time-bounded planning entirely in favor of staging work at the start of a sequence of queues and letting the people working on each queue pull from the previous queue when they're ready to take on a new task. As you might guess from that layout, Kanban was originally invented for assembly-line manufacturing (at Toyota). That was one of the problems that I had with it, or at least the presentation in this book: most of my software development doesn't involve finishing one part of something and handing it off to someone else, which made it hard to identify with the pipeline model. Anderson has clearly spent a lot of time working with large-scale programming shops, including outsourced development firms, with dedicated QA and operations roles. This is not at all how Silicon Valley agile development works, so parts of this book felt like missives from a foreign country.

That said, the key point of Kanban is not the assembly line but the work limit. One of the defining characteristics of Kanban, at least as Anderson presents it, is that one does not try to estimate work and tile it based on estimates, not even to the extent that Scrum and other Agile methodologies do within the sprint. Instead, each person takes as long as they take on the things they're working on, and pulls a new task when they have free capacity. The system instead puts a limit on how many things they can have in progress at a time. The problem of pipeline stalls is dealt with both via continuous improvement and "swarming" of a problem to unblock the line (since other teams may not be able to work until the block is fixed), and with being careful about the sizing of work items (I'm used to calling them stories) that go in the front end.

Predictability, which Scrum uses story sizing and team velocity analysis to try to achieve, is basically statistical. One uses a small number of buckets of sizes of stories, and the whole pipeline will finish some number of work items per unit time, with a variance. The promise made to clients and other teams is that some percentage of the work items will be finished within some time frame from when they enter the system. Most importantly, these are all descriptive properties, determined statistically after the fact, rather than planned properties worked out through story sizing and extensive team discussion. If you, like me, are pretty thoroughly sick of two-hour sprint planning meetings and endless sizing exercises, this is rather appealing.

My problem with most work planning systems is that I think they overplan and put too much weight on our ability to estimate how long work will take. Kanban is very appealing when viewed through that lens: it gives up on things we're bad at in favor of simple measurement and building a system with enough slack that it can handle work of various different sizes. As mentioned at the top, I haven't had a chance to try it (and I'm not sure it's a good fit with the inter-group planning methods in use at my current workplace), but I came away from this book wanting to do so.

If, like me, your experience is mostly with small combined teams or individual programming work, Anderson's examples may seem rather large, rather formal, and bureaucratic. But this is a solid introduction and well worth reading if your only experience with Agile is sprint planning, story writing and sizing, and fast iterations.

Rating: 7 out of 10

01 September, 2015 04:06AM

Norbert Preining

PiwigoPress release 2.31

I just pushed a new release of PiwigoPress (main page, WordPress plugin dir) to the WordPress servers. This release incorporates new features for the sidebar widget, and better interoperability with some Piwigo galleries.

The new features are:

  • Selection of images: Up to now, images for the widget were selected at random. The current version allows selecting images either at random (the default, as before) or in ascending or descending order by various criteria (upload time, availability time, id, name, etc.). With this change it is now possible to always display the most recent image(s) from a gallery.
  • Interoperability: Some Piwigo galleries don't have thumbnail-sized representatives. For these galleries the widget was broken and didn't display any image. We now check for either square- or thumbnail-sized representatives.

That’s all, enjoy, and leave your wishlist items and complains at the issue tracker on the github project piwigopress.

01 September, 2015 12:36AM by Norbert Preining

August 31, 2015

Junichi Uekawa

I've been writing ELF parser for fun using C++ templates to see how much I can simplify.

I've been writing an ELF parser for fun using C++ templates, to see how much I can simplify it. I've been reading the bionic loader code enough these days that I already know what it would look like in C, and I gradually converted that into C++, but it's nevertheless fun to have a pet project that kind of grows.

31 August, 2015 09:36PM by Junichi Uekawa

Matthew Garrett

Working with the kernel keyring

The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It's convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they're locked down there's no way for even root to modify them.

But there's a corner case that can be somewhat confusing here, and it's one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be "possessed" by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes - if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don't want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn't create a new login session - when you're working with sudo, you're still working with key possession that's tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you're trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle this. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0's user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user read/write access to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you'll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0's keyring (because the permissions are 0x3f3f0000), we don't possess it - the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.
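
You can inspect these permission masks for yourself: keyctl's describe subcommand prints the mask alongside the key's owner and description (using the same key ID as in the examples above):
$ sudo keyctl describe 678913344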

But! There's a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we'll be part of another session and won't be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0's user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user's session keyring, because that's readable/writable by the unprivileged user - they'd be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately - UID 0 can read, modify and delete the key, other users can't.

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down - rkt will then refuse to run any images unless they're signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.

31 August, 2015 05:18PM

Martín Ferrari

Romania

It's been over 2 years since I decided to start a new, nomadic life. I had the idea of blogging about this experience as it happened, but not only am I incredibly lazy when it comes to writing, most of the time I have been too busy just enjoying this lifestyle!

The TL;DR version of these last 2 years:

  • Lounged a bit in Ireland after leaving work, went on a great road trip along the West coast.
  • Lived in Nice 3 months, back in the same house I lived between 2009 and 2010.
    • During that time, my dad visited and I took him on a trip around Northern Italy, the Côte d'Azur and Paris; then I travelled to DebConf in Switzerland, visited Gregor in Innsbruck, and travelled back to Nice by train, crossing the Alps and a big chunk of Italy.
  • Then went to Buenos Aires for 3 months (mom was very happy).
  • Back in Europe, attended Fosdem, and moved to Barcelona for 3 months; so far, the best city I ever lived in!
  • Went back to Dublin for a while, ended up staying over 8 months, including getting a temporary job at a normal office (booo!).
    • Although one of these months I spent travelling in the States (meh).
    • And of course, many more short trips, including a couple of visits to Barcelona, Lille, Nice, and of course Brussels for Fosdem.
  • Again went to Buenos Aires, only 2 months this time.
  • Another month in Dublin, then holidays visiting my friends in Lisbon, wedding in Oviedo, and a road trip around Asturias and Galicia.
  • From Spain I flew to Prague and stayed for 2 months (definitely not enough).
  • Quick trip to Dublin, then CCC camp and DebConf in Germany.

And now, I am in Cluj-Napoca, Romania.

View from my window

31 August, 2015 06:59AM

IkiWiki

I haven't posted in a very long time. Not only because I suck at this, but also because IkiWiki decided to stop working with OpenID, so I can't use the web interface to post any more. Very annoying.

I have already spent a good deal of time trying to find a solution, without any success. I really don't want to migrate to yet another piece of software, but this is becoming a showstopper for me.

31 August, 2015 04:55AM

Russ Allbery

Review: Through Struggle, The Stars

Review: Through Struggle, The Stars, by John J. Lumpkin

Series: Human Reach #1
Publisher: John J. Lumpkin
Copyright: July 2011
ISBN: 1-4611-9544-6
Format: Kindle
Pages: 429

Never let it be said that I don't read military SF. However, it can be said that I read books and then get hellishly busy and don't review them for months. So we'll see if I can remember this well enough to review it properly.

In Lumpkin's future world, mankind has spread to the stars using gate technology, colonizing multiple worlds. However, unlike most science fiction futures of this type, it's not all about the United States, or even the United States and Russia. The great human powers are China and Japan, with the United States relegated to a distant third. The US mostly maintains its independence from either, and joins the other lesser nations and UN coalitions to try to pick up a few colonies of its own. That's the context in which Neil and Rand join the armed services: the former as a pilot in training, and the latter as an army grunt.

This is military SF, so of course a war breaks out. But it's a war between Japan and China: improved starship technology and the most sophisticated manufacturing in the world against a much larger economy with more resources and a larger starting military. For reasons that are not immediately clear, and become a plot point later on, the United States president immediately takes an aggressive tone towards China and pushes the country towards joining the war on the side of Japan.

Most of this book is told from Neil's perspective, following his military career during the outbreak of war. His plans to become a pilot get derailed as he gets entangled with US intelligence agents (and a bad supervisor). The US forces are not in a good place against China, struggling when they get into ship-to-ship combat, and Neil's ship goes on a covert mission to attempt to complicate the war with political factors. Meanwhile, Rand tries to help fight off a Chinese invasion of one of the rare US colony worlds.

Through Struggle, The Stars does not move quickly. It's over 400 pages, and it's a bit surprising how little happens. What it does instead is focus on how the military world and the war feels to Neil: the psychological experience of wanting to serve your country but questioning its decisions, the struggle of working with people who aren't always competent but who you can't just make go away, the complexities of choosing a career path when all the choices are fraught with politics that you didn't expect to be involved in, and the sheer amount of luck and random events involved in the progression of one's career. I found this surprisingly engrossing despite the slow pace, mostly because of how true to life it feels. War is not a never-ending set of battles. Life in a military ship has moments when everything is happening far too quickly, but other moments when not much happens for a long time. Lumpkin does a great job of reflecting that.

Unfortunately, I thought there were two significant flaws, one of which means I probably won't seek out further books in this series.

First, one middle portion of the book switches away from Neil to follow Rand instead. The first part of that involves the details of fighting orbiting ships with ground-based lasers, which was moderately interesting. (All the technological details of space combat are interesting and thoughtfully handled, although I'm not the sort of reader who will notice more subtle flaws. But David Weber this isn't, thankfully.) But then it turns into a fairly generic armed resistance story, which I found rather boring.

It also ties into the second and more serious flaw: the villain. The overall story is constructed such that it doesn't need a personal villain. It's about the intersection of the military and politics, and a war that may be ill-conceived but that is being fought anyway. That's plenty of conflict for the story, at least in my opinion. But Lumpkin chose to introduce a specific, named Chinese character in the villain role, and the characterization is... well.

After he's humiliated early in the story by the protagonists, Li Xiao develops an obsession with killing them, for his honor, and then pursues them throughout the book in ways that are sometimes destructive to the Chinese war efforts. It's badly unrealistic compared to the tone of realism taken by the rest of the story. Even if someone did become this deranged, it's bizarre that a professional military (and China's forces are otherwise portrayed as fairly professional) would allow this. Li reads like pure caricature, and despite being moderately competent apart from his inexplicable (but constantly repeated) obsession, is cast in a mustache-twirling role of personal villainy. This is weirdly out of place in the novel, and started irritating me enough that it took me out of the story.

Through Struggle, The Stars is the first book of a series, and does not resolve much by the end of the novel. That plus its length makes the story somewhat unsatisfying. I liked Neil's development, and liked him as a character, and those who like the details of combat mixed with front-lines speculation about politics may enjoy this. But a badly-simplified mustache-twirling villain and some extended, uninteresting bits mar the book enough that I probably won't seek out the sequels.

Followed by The Desert of Stars.

Rating: 6 out of 10

31 August, 2015 03:54AM

August 30, 2015

Andrew Cater

Rescuing a Windows 10 failed install using GParted Live on CD

Windows 10 is here, for better or worse. As the family sysadmin, I've been tasked to update the Windows machines: ultimately, failure modes are not well documented and I needed Free software and several hours to recover a vital machine.

The "free upgrade for users of prior Windows versions" is a limited time offer for a year from launch. Microsoft do not offer licence keys for the upgrade: once a machine has updated to Windows 10 and authenticated on the 'Net, then a machine can be re-installed and will be regarded by Microsoft as pre-authorised. Users don't get the key at any point.

Although Microsoft have pushed the fact that this can be done through Windows Update, there is the option to use Microsoft's Media Creation tool to do the upgrade directly on the Windows machine concerned. This would be necessary to get the machine to upgrade and register before a full clean install of Windows 10 from media.

This Media Creation Tool failed for me on three machines with "Unable to access System reserved partition".

All the machines have SSDs from various makers: a colleague suggested that resizing the partition might enable the upgrade to continue. Of the machines that failed, all were running Windows 7 - two were booting using BIOS, one using UEFI boot on a machine with no Legacy/CSM mode.

Using the GParted Live .iso - itself based on Debian Live - allowed me to resize the System partition from 100MiB to 200MiB by moving the Windows partition, but Windows then became unbootable.

In two cases, I was able to boot from Windows installation DVD media and make Windows bootable again, at which point the Microsoft Media Creation Tool could be used to install Windows 10.

The UEFI boot machine proved more problematic: I had to create a Windows 7 System Repair disk and repair Windows booting before Windows 10 could proceed.

My Windows-using colleague had used only Windows-based recovery disks: using Linux tools allowed me to repair Windows installations I couldn't otherwise boot.

30 August, 2015 08:09PM by Andrew Cater (noreply@blogger.com)

Antonio Terceiro

DebConf15, testing debian packages, and packaging the free software web

This is my August update, and by far the coolest thing in it is Debconf.

Debconf15

I don’t get tired of saying it is the best conference I ever attended. First it’s a mix of meeting both new people and old friends, having the chance to chat with people whose work you admire but never had a chance to meet before. Second, it’s always quality time: an informal environment, interesting and constructive presentations and discussions.

This year the venue was again very nice. Another thing that was very nice was having so many kids and families. This was no coincidence, since this was the first DebConf in which there was organized childcare. As the community gets older, this is a very good way of keeping those who start having kids from being alienated from the community. Of course, not being a parent yet, I have no idea how hard it actually is to bring small kids to a conference like DebConf. ;-)

I presented two talks:

  • Tutorial: Functional Testing of Debian packages, where I introduced the basic concepts of DEP-8/autopkgtest, and went over several examples from my packages giving tips and tricks on how to write functional tests for Debian packages (see the minimal sketch right after this list).
  • Packaging the Free Software Web for the end user, where I presented the motivation for, and the current state of shak, a project I am working on to make it trivial for end users to install server side applications in Debian. I spent quite some hacking time during DebConf finishing a prototype of the shak web interface, which was demonstrated live in the talk (of course, as usual with live demos, not everything worked :-)).
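
As a rough illustration of the DEP-8 layout (a generic sketch, not taken from the tutorial; the package name and command are invented), a test suite consists of a debian/tests/control file declaring the tests, plus one executable per test:

# debian/tests/control
Tests: smoke
Depends: @

# debian/tests/smoke -- exiting non-zero (or writing to stderr) fails the test
#!/bin/sh
set -e
mypackage --version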

There was also the now traditional Ruby BoF, where we discussed the state and future of the Ruby ecosystem in Debian; and an impromptu Ruby packaging workshop where we introduced the basics of packaging in general, and Ruby packaging specifically.

Besides shak, I was able to hack on a few cool things during DebConf:

  • debci has been updated with a first version of the code to produce britney hints files that block packages that fail their tests from migrating to testing. There are some issues to be sorted out together with the release team to make sure we don’t block packages unnecessarily, e.g. we don’t want to block packages that never passed their test suite — most likely the test suite, and not the package, is broken.
  • while hacking I ended up updating jquery to the newest version in the 1.x series, and in fact adopting it, I guess. This allowed me to drop the embedded jquery copy I used to have in the shak repository, and since then I was able to improve the build to produce an output that is identical, except for a build timestamp inside a comment and a few empty lines, to the one produced by upstream, without using grunt.

Miscellaneous updates

30 August, 2015 07:12PM

hackergotchi for DebConf team

DebConf team

DebConf15: Farewell, and thanks for all the Fisch (Posted by DebConf Team)

A week ago, we concluded our biggest DebConf ever! It was a huge success.

Handwritten feedback note

We are overwhelmed by the positive feedback, for which we’re very grateful. We want to thank you all for participating in the talks; speakers and audience alike, in person or live over the global Internet — it wouldn’t be the fantastic DebConf experience without you!

Many of our events were recorded and streamed live, and are now available for viewing, as are the slides and photos.

To share a sense of the scale of what all of us accomplished together, we’ve compiled a few statistics:

  • 555 attendees from 52 countries (including 28 kids)
  • 216 scheduled events (183 talks and workshops), of which 119 were streamed and recorded
  • 62 sponsors and partners
  • 169 people sponsored for food & accommodation
  • 79 professional and 35 corporate registrations

Our very own designer Valessio Brito made a lovely video of impressions and images of the conference.

We’re collecting impressions from attendees as well as links to press articles, including Linux Weekly News coverage of specific sessions of DebConf. If you find something not yet included, please help us by adding links to the wiki.

We tried a few new ideas this year, including a larger number of invited and featured speakers than ever before.

On the Open Weekend, some of our sponsors presented their career opportunities at our job fair, which was very well attended.

And a diverse selection of entertainment options provided the necessary breaks and ample opportunity for socialising.

On the last Friday, the Oscar-winning documentary “Citizenfour” was screened, with some introductory remarks by Jacob Appelbaum and a remote address by its director, Laura Poitras, and followed by a long Q&A session by Jacob.

DebConf15 was also the first DebConf with organised childcare (including a Teckids workshop for kids of age 8-16), which our DPL Neil McGovern standardised for the future: “it’s a thing now,” he said.

The participants used the week before the conference for intensive work, sprints and workshops, and throughout the main conference, significant progress was made on Debian and Free Software. Possibly the most visible was the endeavour to provide reproducible builds, but the planning of the next stable release “stretch” received no less attention. Groups like the Perl team, the diversity outreach programme and even DebConf organisation spent much time together discussing next steps and goals, and hundreds of commits were made to the archive, as well as bugs closed.

“DebConf15 was an amazing conference, it brought together hundreds of people, some oldtimers as well as plenty of new contributors, and we all had a great time, learning and collaborating with each other,” says Margarita Manterola of the organiser team, and continues: “The whole team worked really hard, and we are all very satisfied with the outcome.” Another organiser, Martin Krafft, adds: “We mainly provided the infrastructure and space. A lot of what happened during the two weeks was thanks to our attendees. And that’s what makes DebConf be DebConf.”

Photo of hostel staff wearing DebConf15 staff t-shirts (by Martin Krafft)

Our organisation was greatly supported by the staff of the conference venue, the Jugendherberge Heidelberg International, who didn’t take very long to identify with our diverse group, and who left no wishes untried. The venue itself was wonderfully spacious and never seemed too full as people spread naturally across the various conference rooms, the many open areas, the beergarden, the outside hacklabs and the lawn.

The network installed specifically for our conference in collaboration with the nearby university, the neighbouring zoo, and the youth hostel provided us with a 1 Gbps upstream link, which we managed to almost saturate. The connection will stay in place, leaving the youth hostel as one with possibly the fastest Internet connection in the state.

And the kitchen catered high-quality food to all attendees and their special requirements. Regional beer and wine, as well as local specialities, were provided at the bistro.

DebConf exists to bring people together, which includes paying for travel, food and accommodation for people who could not otherwise attend. We would never have been able to achieve what we did without the support of our generous sponsors, especially our Platinum Sponsor Hewlett-Packard. Thank you very much.

See you next year in Cape Town, South Africa!

The DebConf16 logo with white background

30 August, 2015 06:24PM by DebConf Organizers

hackergotchi for Philipp Kern

Philipp Kern

Automating the 3270 part of a Debian System z install

If you try to install Debian on System z within z/VM you might be annoyed at the various prompts it shows before it lets you access the network console via SSH. We can do better. From within CMS copy the default EXEC and default PARMFILE:

COPYFILE DEBIAN EXEC A DEBAUTO EXEC A
COPYFILE PARMFILE DEBIAN A PARMFILE DEBAUTO A

Now edit DEBAUTO EXEC A and replace the DEBIAN in 'PUNCH PARMFILE DEBIAN * (NOHEADER' with DEBAUTO. This will load the alternate kernel parameters file into the card reader, while still loading the original kernel and initrd files.
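
After the edit, the relevant line should read:

'PUNCH PARMFILE DEBAUTO * (NOHEADER'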

Replace PARMFILE DEBAUTO A's content with this (note the 80 character column limit):

ro locale=C                                                              
s390-netdevice/choose_networktype=qeth s390-netdevice/qeth/layer2=true   
s390-netdevice/qeth/choose=0.0.fffc-0.0.fffd-0.0.fffe                    
netcfg/get_ipaddress=<IPADDR> netcfg/get_netmask=255.255.255.0       
netcfg/get_gateway=<GW> netcfg/get_nameservers=<FIRST-DNS>    
netcfg/confirm_static=true netcfg/get_hostname=debian                    
netcfg/get_domain=                                                       
network-console/authorized_keys_url=http://www.kern.lc/id_rsa.pub        
preseed/url=http://www.kern.lc/preseed-s390x.cfg                       

Replace <IPADDR>, <GW>, and <FIRST-DNS> to suit your local network config. You might also need to change the netmask, which I left in for clarity about the format. Adjust the device address of your OSA network card. If it's in layer 3 mode (very likely) you should set layer2=false. Note that mixed case matters, hence you will want to SET CASE MIXED in xedit.

Then there are the two URLs that need to be changed. The authorized_keys_url file contains your SSH public key and is fetched unencrypted and unauthenticated, so be careful what networks you traverse with your request (HTTPS is not supported by debian-installer in Debian).

preseed/url is needed for installation parameters that do not fit into the parameters file - there is an upper character limit that's about two lines longer than my example. This is why this example only contains the bare minimum for the network part; everything else goes into the preseeding file. The file can optionally be protected with an MD5 checksum in preseed/url/checksum.
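
Generating that checksum is straightforward (a quick sketch; the file name matches the example above, and the checksum shown is just a placeholder):

$ md5sum preseed-s390x.cfg
d41d8cd98f00b204e9800998ecf8427e  preseed-s390x.cfg

Then append preseed/url/checksum=d41d8cd98f00b204e9800998ecf8427e to the parameters file, space permitting.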

Both URLs need to be very short. I thought that there was a way to specify a line continuation, but in my tests I was unable to produce one. Hence each needs to fit on one line, including the key. You might want to use an IPv4 address as the hostname.

To skip the initial boilerplate prompts and to skip straight to the user and disk setup you can use this as preseed.cfg:

d-i debian-installer/locale string en_US
d-i debian-installer/country string US
d-i debian-installer/language string en
d-i time/zone US/Eastern
d-i mirror/country manual
d-i mirror/http/mirror string httpredir.debian.org
d-i mirror/http/directory string /debian
d-i mirror/http/proxy string

I'm relatively certain that the DASD disk setup part cannot be automated yet. But the other bits of the installation should be preseedable just like on non-mainframe hardware.

30 August, 2015 05:36PM by Philipp Kern (noreply@blogger.com)

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppGSL 0.3.0

A new version of RcppGSL just arrived on CRAN. The RcppGSL package provides an interface from R to the GNU GSL using our Rcpp package.

Following on the heels of an update last month we updated the package (and its vignette) further. One of the key additions concerns memory management: given that our proxy classes around the GSL vector and matrix types are real C++ objects, we can monitor their scope and automagically call free() on them rather than insisting on the user doing it. This renders code much simpler, as illustrated below. Dan Dillon added const correctness over a series of pull requests, which allows us to write more standard (and simply nicer) function interfaces. Lastly, a few new typedef declarations further simplify the use of the (most common) double and int vectors and matrices.

Maybe a code example will help. RcppGSL contains a full and complete example package illustrating how to write a package using the RcppGSL facilities. It contains an example of computing a column norm -- which we blogged about before when announcing a much earlier version. In its full glory, it looks like this:

#include <RcppGSL.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>

extern "C" SEXP colNorm(SEXP sM) {

  try {

        RcppGSL::matrix<double> M = sM;     // create gsl data structures from SEXP
        int k = M.ncol();
        Rcpp::NumericVector n(k);           // to store results

        for (int j = 0; j < k; j++) {
            RcppGSL::vector_view<double> colview = gsl_matrix_column (M, j);
            n[j] = gsl_blas_dnrm2(colview);
        }
        M.free() ;
        return n;                           // return vector

  } catch( std::exception &ex ) {
        forward_exception_to_r( ex );

  } catch(...) {
        ::Rf_error( "c++ exception (unknown reason)" );
  }
  return R_NilValue; // -Wall
}

We manually translate the SEXP coming from R, manually cover the try and catch exception handling, manually free the memory etc pp.

Well in the current version, the example is written as follows:

#include <RcppGSL.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>

// [[Rcpp::export]]
Rcpp::NumericVector colNorm(const RcppGSL::Matrix & G) {
    int k = G.ncol();
    Rcpp::NumericVector n(k);           // to store results
    for (int j = 0; j < k; j++) {
        RcppGSL::VectorView colview = gsl_matrix_const_column (G, j);
        n[j] = gsl_blas_dnrm2(colview);
    }
    return n;                           // return vector
}

This takes full advantage of Rcpp Attributes automagically creating the interface and exception handler (as per the previous release), adds a const & interface, does away with the tedious and error-prone free(), and uses the shorter typedef forms RcppGSL::Matrix and RcppGSL::VectorView for double variables. Now the function is short and concise and hence easier to read and maintain. The package vignette has more details on using RcppGSL.

The NEWS file entries follows below:

Changes in version 0.3.0 (2015-08-30)

  • The RcppGSL matrix and vector class now keep track of object allocation and can therefore automatically free allocated object in the destructor. Explicit x.free() use is still supported.

  • The matrix and vector classes now support const reference semantics in the interfaces (thanks to PR #7 by Dan Dillon)

  • The matrix_view and vector_view classes are reorganized to better support const arguments (thanks to PR #8 and #9 by Dan Dillon)

  • Shorthand forms such as RcppGSL::Matrix have been added for double and int vectors and matrices including views.

  • Examples such as fastLm can now be written in a much cleaner and shorter way as GSL objects can appear in the function signature and without requiring explicit .free() calls at the end.

  • The included examples, as well as the introductory vignette, have been updated accordingly.

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 August, 2015 03:05PM

Sven Hoexter

1960 SubjectAlternativeNames on one certificate

tl;dr; You can add 1960+ SubjectAlternativeNames on one certificate and at least Firefox and Chrome are working fine with that. Internet Explorer failed but I did not investigate why.

So why would you want to have close to 2K SANs on one certificate? While we're working on adopting a more dynamic development workflow at my workplace we're currently bound to a central development system. From there we serve a classic virtual hosting setup with "projectname.username.devel.ourdomain.example" mapped on "/web/username/projectname/". That is 100% dynamic with wildcard DNS entries and you can just add a new project to your folder and use it directly. All of that is served from just a single VirtualHost.

Now our developers started to go through all our active projects to make them fit for serving via HTTPS. While we can verify the proper usage of https on our staging system where we've validating certificates, that's not the way you'd like to work. So someone approached me to look into a solution for our development system. Obvious choices like wildcard certificates do not work here because we've two dynamic components in the FQDN. So we would've to buy a wildcard certificate for every developer and we would've to create a VirtualHost entry for every new developer. That's expensive and we don't want all that additional work. So I started to search for documented limits on the number of SANs you can have on a certificate. The good news: there are none. The RFC does not define a limit. So much about the theory. ;)

Following Ivan's excellent documentation I set up an internal CA, and an ugly "find ... |sed ...|tr ..." one-liner later I had a properly formatted openssl config file to generate a CSR with all 1960 "projectname.username..." SAN combinations found on the development system. Two openssl invocations (CSR generation and signing) later I had a signed certificate with 1960 SANs on it. I imported the internal CA I created into Firefox and Chrome, and to my surprise it worked.

Noteworthy: To sign with "openssl ca" without interactive prompts you've to use the "-batch" option.
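
For the curious, a rough reconstruction of the approach (a sketch only: file and key names are invented here, and the actual one-liner differed). The pipeline emits the comma-separated SAN list to paste into the subjectAltName line of the openssl config, and the two invocations follow:

find /web -mindepth 2 -maxdepth 2 -type d -printf '%P\n' \
  | sed 's#^\([^/]*\)/\(.*\)#DNS:\2.\1.devel.ourdomain.example#' \
  | tr '\n' ',' | sed 's/,$//'
openssl req -new -key devel.key -config san.cnf -out devel.csr
openssl ca -batch -config ca.cnf -in devel.csr -out devel.crt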

I'm thinking about regenerating the certificate every morning so our developers just have to create a new project directory and within 24h serving via HTTPS would be enabled. The only thing I'm currently pondering about is how to properly run the CA in a corporate Windows world. We could of course ask the Windows guys to include it for everyone but then we would've to really invest time in properly running the CA. I'd like to avoid that hassle. So I'd guess we just stick to providing the CA for those developers who need it. This all or nothing model is a constant PITA, and you really do not want to get owned via your own badly managed CA. :(

Regarding Internet Explorer, it jumped in my face with a strange error message that recommended enabling TLS 1.0, 1.1 and 1.2 in the options menu. Of course that's already enabled. I'll try to take a look at the handshake next week, but I bet we'll have to accept for the moment that IE will not work with so many SANs. It would be interesting to try out Windows 10 with Spartan, but well, I'm not interested enough in Windows to invest more time on that front. Other TLS implementations, like Java, would also be interesting to test.

30 August, 2015 01:58PM

hackergotchi for Ben Hutchings

Ben Hutchings

Securing www.decadent.org.uk

Sven Hoexter replied to my previous entry to say that WoSign also provides free DV TLS certificates. What's more, they allow up to 10 alternate names, unlike StartSSL. So I've gone ahead with a new certificate for www.decadent.org.uk and other virtual servers including git.decadent.org.uk.

WoSign sensibly mandates a key length of 2048 bits, and together with the default TLS configuration for Apache in Debian 'jessie' this resulted in an A- rating from the Qualys SSL Server Test.

I then disabled non-PFS and otherwise weak cipher suites in /etc/apache2/mods-enabled/ssl.conf:

SSLCipherSuite HIGH:!aNULL:!kRSA:!3DES

This resulted in an A rating. Finally, I added redirection of all plaintext HTTP connections to HTTP-S (which is easier than working out how to make the virtual server work with and without TLS, anyway). I enabled HSTS for each VirtualHost:

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

This resulted in an A+ rating. These web sites will now be inaccessible to Java 6 and IE on Windows XP, but that's no great loss (1 in 1500 hits over the past few weeks).
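
For reference, the plaintext-to-HTTPS redirection needs nothing more than something like this per virtual host (a sketch using mod_alias, with the hostname from above):

<VirtualHost *:80>
    ServerName www.decadent.org.uk
    Redirect permanent / https://www.decadent.org.uk/
</VirtualHost>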

30 August, 2015 11:59AM

Russ Allbery

Review: Pound Foolish

Review: Pound Foolish, by Helaine Olen

Publisher: Penguin
Copyright: 2012, 2013
Printing: 2013
ISBN: 1-59184-679-X
Format: Trade paperback
Pages: 241

For at least the last six years, it's not been surprising news that the relationship between the average person and the US financial system is tense at best, and downright exploitative at worst. The collapse of the housing bubble revealed a morass of predatory lending practices, spectacularly poor (or spectacularly cynical) investment decisions, and out-of-control personal debt coupled with erosion of bankruptcy law. Despite this, there's always a second story in all discussions of the finances of the US population: the problem isn't with financial structures and products, but with us. We're too stupid, or naive, or emotional, or uninformed, or greedy, or unprincipled, or impatient. Finances are complicated, yes, but that just means we have to be more thoughtful. All of these complex financial products could have been used properly.

Helaine Olen's Pound Foolish is a systematic, biting, well-researched, and pointed counter to that second story. The short summary of this book is that it's not us. We're being set up for failure, and there is a large and lucrative industry profiting off of that failure. And many (although not all) people in that industry know exactly what they're doing.

Pound Foolish is one of my favorite forms of non-fiction: long-form journalism. This is an investigative essay into the personal finance and investment industry, developed to book length. Readers of Michael Lewis will feel right at home. Olen doesn't have Lewis's skill with characterization and biography, but she makes up for it in big-picture clarity. She takes a wealth of individual facts about who is involved in personal finance, how they make money, what they recommend, and who profits, and develops it into a clear and coherent overview.

If you have paid any attention to US financial issues, you'll know some of this already. Frontline has done a great job of covering administrative fees in mutual funds. Lots of people have warned about annuities. The spectacular collapse of the home mortgage is old news now. But Olen does a great job of finding the connections between these elements and adding some less familiar ones, including an insightful and damning analysis of financial literacy campaigns and the widespread belief that these problems are caused by lack of consumer understanding. I've read and watched a lot of related material, including several full-book treatments of the mortgage crisis, so I think it's telling that I never got bored in the middle of Olen's treatment.

I find the deep US belief in the power of personal improvement fascinating. It feels like one of the defining characteristics of US culture, for both good and for ill. We're very good at writing compelling narratives of personal improvement, and sometimes act on them. We believe that everyone can and should improve themselves. But that comes coupled to a dislike and distrust of expertise, even when it is legitimate and earned (Hofstadter's Anti-Intellectualism in American Life develops this idea at length). And I believe we significantly overestimate the ability of individuals to act despite systems that are stacked against us, and significantly overestimate our responsibility for the inevitable results.

This was the main message I took from Pound Foolish: we desperately want to believe in the myth of personal control. We want to believe that our financial troubles are something we can fix through personal education, more will power, better decisions, or just the right investment. And so, we turn to gurus like Suze Orman and buy their mix of muddled financial advice and "tough love" that completely ignores broader social factors. We're easy marks for psychologically-trained investment sellers who mix fear, pressure, and a fantasy of inside knowledge and personal control. We're fooled by a narrative of empowerment and stand by while a working retirement system (guaranteed benefit pensions) is undermined and destroyed in favor of much riskier investment schemes... riskier for us, at least, but loaded with guaranteed profits for the people who "advise" us. And we cling to financial literacy campaigns that are funded by exactly the same credit card companies who fight tooth and nail against regulations that would require they provide simple, comprehensible descriptions of loan terms. One wonders if they support them precisely because they know they don't work.

Olen mentions, in passing, the Stanford marshmallow experiment, which is often used as a foundation for arguments about personal responsibility for financial outcomes, but she doesn't do a detailed critique. I wish she had, since I think it's such a good example of the theme of this book.

The Stanford marshmallow experiment was a psychological experiment from the late 1960s and early 1970s in delayed gratification. Children were put in a room in front of some treat (marshmallows, cookies, pretzels) and told that they could eat it if they wished. But if they didn't eat the treat before the monitor came back, they would get two of the treat instead. Long-term follow-up studies found that the children who refrained from eating the treat and got the reward had better life outcomes along multiple metrics: SAT scores, educational attainment, and others.

On the surface, this seems to support everything US culture loves to believe about the power of self-control, self-improvement, and the Protestant work ethic. People who can delay gratification and save for a future reward do better in the world. (The darker interpretation, also common since the experiment was performed on children, is that the ability to delay gratification has a genetic component, and some people are just doomed to make poor decisions due to their inability to exercise self-control.)

However, I can call the traditional interpretation into question with one simple question that the experimenters appeared not to ask: under what circumstances would taking the treat immediately be the rational and best choice?

One answer, of course, is when one does not trust the adult to produce the promised reward. If the adult might come back, take the treat away, and not give any treat, it's to the child's advantage to take the treat immediately. Even if the adult left the treat but wouldn't actually double it, it's to the child's advantage to take the treat immediately. The traditional interpretation assumes the child trusts the adults performing the experiment — a logical assumption for those of us whose childhood experience was that adults could generally be trusted and that promised rewards would materialize. If the child instead came from a chaotic family where adults weren't reliable, or just one where frequent unexpected money problems meant that promised treats often didn't materialize, the most logical assumption may be much different. One has to ask if such a background may have more to do with the measured long-term life outcomes than the child's unwillingness to trust in a future reward.

And this is one of the major themes of this book. Problems the personal finance industry attributes to our personal shortcomings (which they're happy to take our money to remedy) are often systematic, or at least largely outside of our control. We may already be making the most logical choices given our personal situations. We're in worse financial shape because we're making less money. Our retirements are in danger because our retirement systems were dismantled and replaced with risky and expensive alternatives. And where problems are attributed to our poor choices, one can find entire industries that focus on undermining our ability to make good choices: scaring us, confusing us, hiding vital information, and exploiting known weaknesses of human psychology to route our money to them.

These are not problems that can be solved by watching Suze Orman yell at us to stop buying things. These are systematic social problems that demand a sweeping discussion about regulation, automatic savings systems, and social insurance programs to spread risk and minimize the weaknesses of human psychology. Exactly the kind of discussion that the personal finance industry doesn't want us to have.

Those who are well-read in these topics probably won't find a lot new here. Those who aren't in the US will shake their heads at some of the ways that the US fails its citizens, although many of Olen's points apply even to countries with stronger social safety nets. But if you're interested in solid long-form journalism on this topic, backed by lots of data on just how badly a focus on personal accountability is working for us, I recommend Pound Foolish.

Rating: 8 out of 10

30 August, 2015 01:04AM

August 29, 2015

Tassia Camoes Araujo

Report from the MicroDebconf Brasília 2015

This was an event organized due to a coincidental meeting of a few DD’s in the city of Brasilia on May 31st 2015. What a good thing when we can mix vacations, friends and Debian ;-)


Group photo

We called it Micro due to its short duration and planning phase, to be fair with other Mini DebConfs that take a lot more organization. We also ended up having a translation sprint inside the event that attracted contributors from other cities.

Our main goal was to boost the local community and bring new contributors to Debian. And we definitely made it!

The meeting happened at the University of Brasilia (UnB Gama). It started with a short presentation where each DD and Debian contributor presented their involvement with Debian and plans for the hacking session. This was an invitation for new contributors to choose the activities they were willing to engage in, taking advantage of being guided by more experienced people.

Then we moved to smaller rooms where participants were split in different groups to work on each track: packaging, translation and community/contribution. We all came together later for the keysigning party.

Some of the highlights of the day:

  • ~40 participants, of whom ~10 were already engaged in the Debian community
  • hands-on packaging tutorial
  • 4 new packages uploaded
  • of the 6 Brazilian names announced as new contributors in the DPN just after the meeting, 4 were among us in Brasília
  • hands-on translation tutorial
  • newbie translators paired with more experienced ones, numerous translations committed
  • discussion about the challenges of migrating debianArt to Noosfero
  • initial setup of Collab.Debian (with Noosfero), aiming to facilitate contributions of users to the Debian project (this platform was officially released at the DC15 lightning talks (46:00))
  • first keysigning party for many of the participants
  • first time some long-term Brazilian contributors had the chance to meet in person

For more details of what happened, you can read our full report.

The MicroDebconf wouldn’t have been possible without the support of prof. Paulo Meirelles from UnB Gama and all the LAPPIS team for the local organization and student mobilization. We also need to thank the Debian donors, who covered the travel costs of one of our contributors.

Last but not least, thanks to our participants and the large Brazilian community, who are giving a good example of teamwork. A similar meeting happened in July during the Free Software International Forum (FISL), and another one is already planned for October as part of LatinoWare.

I hope I can join those folks again in the near future!

29 August, 2015 11:22PM by tassia

hackergotchi for Francois Marier

Francois Marier

Letting someone ssh into your laptop using Pagekite

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop and set up a pagekite frontend on my Linode server and a pagekite backend on my laptop.

Frontend setup

Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward.

First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:

-A INPUT -p tcp --dport 10022 -j ACCEPT

Then I created a new CNAME for my server in DNS:

pagekite.fmarier.org.   3600    IN  CNAME   fmarier.org.

With that in place, I started the pagekite frontend using this command:

pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1

Backend setup

After installing the pagekite and openssh-server packages on my laptop and creating a new user account:

adduser roc

I used this command to connect my laptop to the pagekite frontend:

pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1

Client setup

Finally, my colleague needed to add the following entry to ~/.ssh/config:

Host pagekite.fmarier.org
  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p

and install the netcat-openbsd package since other versions of netcat don't work.

On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work.

He was then able to ssh into my laptop via ssh roc@pagekite.fmarier.org.

Making settings permanent

I was quite happy settings things up temporarily on the command-line, but it's also possible to persist these settings and to make both the pagekite frontend and backend start up automatically at boot. See the documentation for how to do this on Debian and Fedora.

29 August, 2015 09:20PM

Zlatan Todorić

Interviews with FLOSS developers: Elena Grandi

One of the fresh additions to the Debian family, and thus the wider FLOSS family, is Elena Grandi. She comes from the realms of Valhalla and is setting her footprint in the community. A hacker mindset, a Free Software lover and a 3D-printing maker. Elena is deeply dedicated to making the world a free and better place for all. She pushes limits on a personal level with much care and love, and the FLOSS community will benefit from her work and way of life in the future. So what does the Viking lady have to say about FLOSS? Meet Elena "of Valhalla" Grandi.

Read more… (12 min remaining to read)

29 August, 2015 02:23PM by Zlatan Todoric

hackergotchi for Norbert Preining

Norbert Preining

Kobo Glo and GloHD firmware 3.17.3 mega update (KSM, nickel patch, ssh, fonts)

I have updated my mega-update for Kobo to the latest firmware 3.17.3. Additionally, I have now built (and tested) updates for both Mark4 hardware (Glo) and Mark6 hardware (GloHD). Please see the previous post for details on what is included.

The only important difference is the update to KSM (Kobo Start Menu) version 8, which is still in its testing phase (thus a warning: the layout and setup of KSM8 might change until release). This is an important update, as all versions up to V7 could create database corruption (which I have seen several times!) when used with Calibre and the Kepub driver.

Kobo Logo

Other things that are included are, as usual: Metazoa firmware patches – for the Glo (non HD) version I have activated the compact layout patch; koreader, pbchess, coolreader, the ssh part of kobohack, custom dictionaries support, and some side-loaded fonts. Again, for details please see the previous post.

You can check for database corruption by selecting tools - nickel diverse.msh - db chk integrity.sh in the Kobo Start Menu. If it returns ok, then all is fine. Otherwise you might see problems.

I solved the corruption of my database by first dumping the database to an SQL file, and reloading it into a new database. Assuming that you have the file KoboReader.sqlite, what I did is:

$ sqlite3  KoboReader.sqlite 
SQLite version 3.8.11.1 2015-07-29 20:00:57
Enter ".help" for usage hints.
sqlite> PRAGMA integrity_check;
*** in database main ***
Page 5237: btreeInitPage() returns error code 11
On tree page 889 cell 1: 2nd reference to page 5237
Page 4913 is never used
Page 5009 is never used
Error: database disk image is malformed
sqlite> .output foo.sql
sqlite> .dump
sqlite> .quit
$ sqlite3 KoboReader.sqlite-NEW
SQLite version 3.8.11.1 2015-07-29 20:00:57
Enter ".help" for usage hints.
sqlite> .read foo.sql
sqlite> .quit

The first part shows that the database is corrupted. Fortunately dumping succeeded and then reloading it into a new database, too. Finally I replaced (after backup) the sqlite on the device with the new database.
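
That last step is just file shuffling (a sketch; the hidden .kobo directory is where the device keeps the database, though the exact path may differ between models and firmware versions):

$ cp KoboReader.sqlite KoboReader.sqlite.bak
$ cp KoboReader.sqlite-NEW KoboReader.sqlite

and then copy KoboReader.sqlite back into the .kobo directory over USB.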

Download

Mark6 – Kobo GloHD

firmware: Kobo 3.17.3 for GloHD

Mega update: Kobo-3.17.3-combined/Mark6/KoboRoot.tgz

Mark4 – Kobo Glo, Auro HD

firmware: Kobo 3.17.3 for Glo and AuroHD

Mega update: Kobo-3.17.3-combined/Mark4/KoboRoot.tgz

Enjoy.

29 August, 2015 12:04AM by Norbert Preining

August 28, 2015

hackergotchi for Gunnar Wolf

Gunnar Wolf

180

180 degrees — people say their life has changed by 180° whenever something alters their priorities, their viewpoints, their targets in life.

In our case, it's been 180 days. 183 by today, really. The six most amazing months in my life.

We are still the same people, with similar viewpoints and targets. Our priorities have clearly shifted.

But our understanding of the world, and our sources of enjoyment, and our outlook for the future... are worlds apart. Not 180°, think more of a quantum transposition.

28 August, 2015 04:09PM by gwolf

Zlatan Todorić

The big life adventure called DebConf15

With the help of sponsorship I managed again to attend the conference where the Debian family gathers. This is going to be a mix, without any particular order, of everything, anything and nothing else ;)

attendance pic

I arrived at Heidelberg Main Train Station around 9am on 15th August and almost right away found Debian people, which made my trip to the hostel easier. After arrival I checked in but needed to wait 3 hours to get the key (it seems that SA will not have that problem at all, which is already an improvement). Although the wait was 3 hours long, it wasn't actually difficult at all, as I started hugging and saying hi to many old (the super old super friend of mine - moray, or as I call him, "doc") and new friends. I just must say - if you know or don't know Rhonda, try to get somehow into her hugs. With her hug I acknowledged that I really did arrive at the reunion.

Read more… (14 min remaining to read)

28 August, 2015 10:38AM by Zlatan Todoric

Dimitri John Ledkov

Go enjoy Python3

Given a string, get a truncated string of length up to 12.

The task is ambiguous, as it doesn't say anything about whether or not 12 should include the terminating null character. Nonetheless, let's see how one would achieve this in various languages.
Let's start with python3

import sys
print(sys.argv[1][:12])

Simple enough: in essence, given the first argument, print it up to length 12. As an added bonus this also deals with Unicode correctly, that is, if the passed arg is 車賈滑豈更串句龜龜契金喇車賈滑豈更串句龜龜契金喇, it will correctly print 車賈滑豈更串句龜龜契金喇. (Note: these are just random Unicode strings to me, no idea what they stand for.)

In C things are slightly more verbose; in essence, I am going to use the strncpy function:

#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
	char res[13];
	/* strncpy() does not null-terminate when the source is longer
	   than the given length, so terminate explicitly */
	strncpy(res, argv[1], 12);
	res[12] = '\0';
	printf("%s\n", res);
	return 0;
}
This treats things as a byte array instead of Unicode, thus for the Unicode test it will end up printing just 車賈滑豈. But it is still simple enough.
Finally we have Go
package main

import (
	"fmt"
	"math"
	"os"
)

func main() {
	fmt.Printf("%s\n", os.Args[1][:int(math.Min(12, float64(len(os.Args[1]))))])
}
This similarly treats the argument as a byte array, and one needs to convert the argument to a rune slice to get Unicode string handling. But there are quite a few caveats. One cannot take out-of-bounds slices. Thus a naïve os.Args[1][:12] can result in a runtime panic that slice bounds are out of range. Or, if the string is known at compile time, a compile-time error. Hence one needs to calculate the length and do a min comparison. And there lies the next caveat: math.Min() is only defined for the float64 type, and slice indexes can only be integers, and thus we end up writing ]))))])...

12 points for python3, 8 points for C, and Go receives nul points Eurovision style.

EDIT: Andreas Røssland and James Hunt are full of win. Both suggesting fmt.Printf("%.12s\n", os.Args[1]) for go. I like that a lot, as it gives simplicity & readability without compromising the default safety against out of bounds access. Hence the scores are now: 14 points for Go, 12 points for python3 and 8 points for C.

EDIT2: I was pointed to a much better C implementation by Keith Thompson - http://pastebin.com/5i7rFmMQ - in essence it uses strncat(), which has much better null-termination semantics. And Ben posted a C implementation which handles wide characters http://www.decadent.org.uk/ben/blog/truncating-a-string-in-c.html. I regret to inform you that this blog post got syndicated onto Hacker News and has now become the top viewed post on my blog of all time, overnight. In retrospect, I regret awarding points at the end of the blog post, as that was merely an expression of opinion and is a highly subjective measure. But this problem statement did originate from me reviewing Go code that did an "if/then/else" comparison and got it wrong to truncate a string, and I thought surely one can just do [:12], which has led me down the rabbit hole of discovering a lot about Go: its compile and runtime out-of-bounds access safeguards, lack of a universal Min() function, runes vs strings handling and so on. I'm only a beginner Go programmer and I am very sorry for wasting everyone's time on this. I guess people didn't have much to do on a Throwback Thursday.

The postings on this site are my own and don't necessarily represent Intel’s positions, strategies, or opinions.

28 August, 2015 09:48AM by Dimitri John Ledkov (noreply@blogger.com)

hackergotchi for Lucas Nussbaum

Lucas Nussbaum

DebConf’15

I attended DebConf’15 last week. After being on semi-vacation from Debian for the last few months, recovering after the end of my second DPL term, it was great to be active again, talk to many people, and go back to doing technical work. Unfortunately, I caught the debbug quite early in the week, so I was not able to make it as intense as I wanted, but it was great nevertheless.

I still managed to do quite a lot:

  • I rewrote a core part of UDD, which will make it easier to monitor data importer scripts and reduce the cron-spam
  • with DSA members, I worked on finding a suitable workaround for the storage performance issues that have been plaguing UDD for the last few months. fsync()s will no longer hang for 15 minutes, yay!
  • I added a DUCK importer to UDD, and added that information to the Debian Maintainer Dashboard
  • I worked a bit on cleaning up the status of my packages, including digging into a strange texlive issue (that showed up in developers-reference), which is now fixed in unstable
  • I worked a bit on improving git-buildpackage documentation (more to come in that area)
  • Last but not least, I played Mao for the first time in years, and it was a lot of fun. (even if my brain is still slowly recovering)

DC15 was a great DebConf, probably one of the two best I’ve attended so far. I’m now looking forward to DC16 in Cape Town!

28 August, 2015 08:47AM by lucas

hackergotchi for Ben Hutchings

Ben Hutchings

Securing my own blog

Yeah I know, a bit ironic that this isn't available over HTTP-S. I could reuse the mail server certificate to make https://decadent.org.uk/ work...

28 August, 2015 01:03AM

Securing debcheckout of git repositories

Some source packages have Vcs-Git URLs using the git: scheme, which is plain-text and unauthenticated. It's probably harder to MITM than HTTP, but still we can do better than this even for anonymous checkouts. git is now nearly as efficient at cloning/pulling over HTTP-S, so why not make that the default?

Adding the following lines to ~/.gitconfig will make git consistently use HTTP-S to access Alioth. It's not quite HTTPS-Everywhere, but it's a step in that direction:

[url "https://anonscm.debian.org/git/"]
	insteadOf = git://anonscm.debian.org/
	insteadOf = git://git.debian.org/

Additionally you can automatically fix up the push URL in case you have or are later given commit access to the repository on Alioth:

[url "git+ssh://git.debian.org/git/"]
	pushInsteadOf = git://anonscm.debian.org/
	pushInsteadOf = git://git.debian.org/

Similar for git.kernel.org:

[url "https://git.kernel.org/pub/scm/"]
	insteadOf = git://git.kernel.org/pub/scm/
[url "git+ssh://ra.kernel.org/pub/scm/"]
	pushInsteadOf = git://git.kernel.org/pub/scm/

RTFM for more information on these configuration variables.

28 August, 2015 01:01AM

Securing git imap-send in Debian

I usually send patches from git via git imap-send, which gives me a chance to edit and save them through my regular mail client. Obviously I want to make a secure connection to the IMAP server. The upstream code now supports doing this with OpenSSL, but git is under GPL and it seems that not all relevant contributors have given the extra permission to link with OpenSSL. So in Debian you still need to use an external program to provide a TLS tunnel.

The commonly used TLS tunnelling programs, openssl s_client and stunnel, do not validate server certificates in a useful way - at least by default.

Here's how I've configured git imap-send and stunnel to properly validate the server certificate. If you use the PLAIN or LOGIN authentication method with the server, you will still see the warning:

*** IMAP Warning *** Password is being sent in the clear

The server does see the clear-text password, but it is encrypted on the wire and git imap-send just doesn't know that.

~/.gitconfig

[imap]
	user = ben
	folder = "drafts"
	tunnel = "stunnel ~/.git-imap-send/stunnel.conf"

~/.git-imap-send/stunnel.conf

debug = 3
foreground = yes
client = yes
connect = mail.decadent.org.uk:993
sslVersion = TLSv1.2
renegotiation = no
verify = 2
; Current CA for the IMAP server.
; If you don't want to pin to a specific CA certificate, use
; /etc/ssl/certs/ca-certificates.crt instead.
CAfile = /etc/ssl/certs/StartCom_Certification_Authority.pem
checkHost = mail.decadent.org.uk

If stunnel chokes on the checkHost variable, it doesn't support certificate name validation. Unfortunately no Debian stable release has this feature - only testing/unstable. I'm wondering whether it would be worthwhile to backport it or even to make a stable update to add this important security feature.

28 August, 2015 12:26AM

August 27, 2015

hackergotchi for Norbert Preining

Norbert Preining

Kobo Japanese Dictionary Enhancer 1.1

Lots of releases in quick succession – the new Kobo Japanese Dictionary Enhancer brings multi-dictionary support and merged translation support. Using the Wadoku project’s edict2 database we can now also add German translations.

kobo-japanese-dictionary-enhancer

Looking at the numbers, we now have 326064 translated entries when using the English edict2, and 368943 translated entries when using the German Wadoku edict version. More than that, as an extra feature it is now also possible to have merged translations, that is, to have both German and English translations added.

kobo-dict-de-en

Please head over to the main page of the project for details and download instructions. If you need my help in creating the updated dictionary, please feel free to contact me.

Enjoy.

27 August, 2015 10:39PM by Norbert Preining

hackergotchi for Ben Hutchings

Ben Hutchings

Truncating a string in C

This version uses the proper APIs to work with the locale's multibyte encoding (with single-byte encodings being a trivial case of multibyte). It will fail if it encounters an invalid byte sequence (e.g. byte > 127 in the "C" locale), though it could be changed to treat each rejected byte as a single character.

#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>

int main(int argc, char **argv)
{
    size_t n = 12, totlen = 0, maxlen, chlen;

    setlocale(LC_ALL, "");

    if (argc != 2)
	return EXIT_FAILURE;

    maxlen = strlen(argv[1]);

    /* stop early if the string has fewer than 12 characters; mbrlen()
       must not be asked to look at zero remaining bytes */
    while (n-- && totlen < maxlen) {
	chlen = mbrlen(argv[1] + totlen, maxlen - totlen, NULL);
	if (chlen > MB_CUR_MAX)
	    return EXIT_FAILURE;
	totlen += chlen;
    }

    printf("%.*s\n", (int)totlen, argv[1]);
    return 0;
}

27 August, 2015 08:10PM

hackergotchi for Alexander Wirt

Alexander Wirt

Basic support for SSO Client certificates on paste.debian.net

Sometimes waiting for a delayed flight helps to implement things. I added some basic support for the new Debian SSO Client Certificate feature to paste.debian.net.

If you are using such a certificate, most anti-spam restrictions, code limitations and so on won’t apply to you anymore.

27 August, 2015 07:07PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Laptop Mode Tools - 1.68

I am pleased to announce the release of Laptop Mode Tools, version 1.68.

This release is mainly focused on integration with the newer init system, systemd. Without the help from the awesome Debian systemd maintainers, this would not have been possible. Thank you folks.

While the focus now is on systemd, LMT will still support the older SysV Init.

With this new release, there are some new files: laptop-mode.service, laptop-mode.timer and lmt-poll.service. All the files should be documented well enough for users. lmt-poll.service is the equivalent of the module battery-level-polling, should you need it.
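
On a systemd machine, enabling should then be along these lines (a sketch; I leave it to the documentation inside the files to say which units you actually need):

# the main service
systemctl enable laptop-mode.service
# only if you want the battery-level-polling equivalent
systemctl enable lmt-poll.service laptop-mode.timer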

Filtered git log:

1.68 - Thu Aug 27 22:36:43 IST 2015

    * Fix all instances for BATTERY_LEVEL_POLLING

    * Group kill the polling daemon so that its child process get the same signal

    * Release the descriptor explicitly

    * Add identifier about who's our parent

    * Narrow down our power_supply subsystem event check condition

    * Fine tune the .service file

    * On my ultrabook, AC as reported as ACAD

    * Enhance lmt-udev to better work with systemd

    * Add a timer based polling for LMT. It is the equivalent of battery-polling-daemon, using systemd

    * Disable battery level polling by default, because most systems will have systemd running

    * Add documentation reference in systemd files
The md5 checksum for the tarball is 15edf643990e08deaebebf66b128b270

27 August, 2015 05:39PM by Ritesh Raj Sarraf

hackergotchi for Thorsten Glaser

Thorsten Glaser

Go enjoy shell

Dimitri, I personally enjoy shell…

tglase@tglase:~ $ x=車賈滑豈更串句龜龜契金喇車賈滑豈更串句龜龜契金喇
tglase@tglase:~ $ echo ${x::12}
車賈滑豈更串句龜龜契金喇
tglase@tglase:~ $ printf '%s\n' 'import sys' 'print(sys.argv[1][:12])' >x.py
tglase@tglase:~ $ python x.py $x
車賈滑豈
 

… much more than Python, actually. (Python is the language in which you do not want to write code dealing with strings, due to UnicodeDecodeError and all; even py3k is not much better.)

I would have commented on your post if it allowed doing so without getting a proprietary Google+ account.

27 August, 2015 02:12PM by MirOS Developer tg (tg@mirbsd.org)

hackergotchi for Joey Hess

Joey Hess

then and now

It's 2004 and I'm in Oldenburg DE, working on the Debian Installer. Colin and I pair program on partman, its new partitioner, to get it into shape. We've somewhat reluctantly decided to use it. Partman is in some ways a beautiful piece of work, a mass of semi-object-oriented, super extensible shell code that sprang fully formed from the brow of Anton. And in many ways, it's mad, full of sector alignment twiddling math implemented in tens of thousands of lines of shell script scattered among hundreds of tiny files that are impossible to keep straight. In the tiny Oldenburg Developers Meeting, full of obscure hardware and crazy intensity of ideas like porting Debian to VAXen, we hack late into the night, night after night, and crash on the floor.

sepia toned hackers round a table

It's 2015 and I'm at a Chinese bakery, then at the Berkeley pier, then in a SF food truck lot, catching half an hour here and there in my vacation to add some features to Propellor. Mostly writing down data types for things like filesystem formats, partition layouts, and then some small amount of haskell code to use them in generic ways. Putting these pieces together and reusing stuff already in Propellor (like chroot creation).

Before long I have this, which is only 2 undefined functions away from (probably) working:

let chroot d = Chroot.debootstrapped (System (Debian Unstable) "amd64") mempty d
        & Apt.installed ["openssh-server"]
        & ...
    partitions = fitChrootSize MSDOS
        [ (Just "/boot", mkPartition EXT2)
        , (Just "/", mkPartition EXT4)
        , (Nothing, const (mkPartition LinuxSwap (MegaBytes 256)))
        ]
 in Diskimage.built chroot partitions (grubBooted PC)

This is at least a replication of vmdebootstrap, generating a bootable disk image from that config and 400 lines of code, with enormous customizability of the disk image contents, using all the abilities of Propellor. But it is also, effectively, a replication of everything partman is used for (aside from UI and RAID/LVM).

sailboat on the SF bay

What a difference a decade and better choices of architecture make! In many ways, this is the loosely coupled, extensible, highly configurable system partman aspired to be. Plus elegance. And I'm writing it on a lark, because I have some spare half hours in my vacation.

Past Debian Installer team lead Tollef stops by for lunch, I show him the code, and we have the conversation old d-i developers always have about partman.

I can't say that partman was a failure, because it's been used by millions to install Debian and Ubuntu and etc for a decade. Anything that deletes that many Windows partitions is a success. But it's been an unhappy success. Nobody has ever had a good time writing partman recipes; the code has grown duplication and unmaintainability.

I can't say that these extensions to Propellor will be a success; there's no plan here to replace Debian Installer (although with a few hundred more lines of code, propellor is d-i 2.0); indeed I'm just adding generic useful stuff and building further stuff out of it without any particular end goal. Perhaps that's the real difference.

27 August, 2015 12:01AM

August 26, 2015

Carl Chenet

Retweet 0.2 : bump to Python 3

Follow me on Identi.ca  or Twitter  or Diaspora*diaspora-banner

Don’t know Retweet? My last post about it introduced this small Twitter bot, which just retweets (for now) every tweet from one Twitter account to another.

Retweet

Retweet was created in order to improve the Journal du hacker Twitter account. The Journal du hacker is a Hacker News-like French-speaking website.

logo-journal-du-hacker

Especially useful to broadcast news through a network of Twitter accounts, Retweet was improved to bump the Python version to 3.4 and to improve PEP 8 compliance (work in progress).

The project is also well documented and should be quite simple to install, configure and use.

After my first blog post about Retweet, new users gave me feedback about it and I now have great ideas for future features for the next release.

Twitter_logo_blue

What about you? If you try it, please tell me what you think about it by opening a bug report or asking for new features. Or just write your comment here ;)


26 August, 2015 09:01PM by Carl Chenet

hackergotchi for Holger Levsen

Holger Levsen

20150826-jenkins-fourth-state

jenkins has a fourth state

So, at the jenkins.debian.org BOF (very short summary: j.d.o will be coming soonish, long summary thanks to the awesome video team) I shared a trick I discovered almost a year ago but had never really announced anywhere yet, which enables one to programmatically use a fourth state in addition to the three existing jenkins job states ("success", "unstable" and "failed"): "aborted".

Common knowledge is that it's only possible to abort jobs manually, but it's also possible to do that like this:

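# fetch the Jenkins CLI jar and use it to mark the running build as "aborted"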
TMPFILE=$(mktemp)
curl https://jenkins.debian.net/jnlpJars/jenkins-cli.jar -o $TMPFILE
java -jar $TMPFILE -s http://localhost:8080/ set-build-result aborted
rm $TMPFILE
exit

The nice thing about aborted job runs is that these don't cause any notifications (neither mail nor IRC), so I intend to use this for several cases:

  • to abort jobs which encounter network problems (see the sketch after this list)
  • to abort jobs where a known bug will prevent the job from succeeding. This will require a small database to map bugs to jobs and some way to edit that database, so I will probably go with a .yaml file in some git repo.
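
For the first case, a build step could detect the network problem itself and flip the result, along these lines (a sketch only; the URL is just an example, and abort_build() stands for the four jenkins-cli lines shown above wrapped in a shell function):

# try the download; on network failure mark the build "aborted" and stop
if ! curl --fail --silent --show-error -O http://httpredir.debian.org/debian/dists/sid/Release ; then
    echo "Network problem detected, aborting the build instead of failing it."
    abort_build
fi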

I've no idea when I'll get around to actually implementing that, so help doing this is very much welcome and I'd also be glad to help hooking this into the existing jenkins.debian.net.git codebase.

In related news, I'm back home since Monday and am thankful for having shared a very nice and productive DebConf15 with many old and new friends in Heidelberg. Many thanks to everyone involved in making this happen!

26 August, 2015 10:41AM

NOKUBI Takatsugu

1Gbps FTTH

This month, I changed my FTTH Internet connection from 100Mbps to 1Gbps. The cost is almost the same as for the old line.

To change the line, I needed to be present during the construction work, so I couldn't find time to attend DebConf 2015.

According to Speedtest.net, I can get about 300 Mbps upstream bandwidth.

26 August, 2015 09:22AM by knok

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, July 2015

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 79.50 work hours have been dispatched among 7 paid contributors. Their reports are available:

Evolution of the situation

August has seen a small decrease in terms of sponsored hours (71.50 hours per month) because two sponsors did not pay their renewal invoices on time. That said, they reconfirmed their willingness to support us and things should be fixed after the summer. And we should be able to reach our first milestone of funding the equivalent of a half-time position, in particular since a new platinum sponsor might join the project.

DebConf 15 happened this month and Debian LTS was featured in a talk and in a work session. Have a look at the video recordings:

In terms of security updates waiting to be handled, the situation is better than last month: the dla-needed.txt file lists 20 packages awaiting an update (4 less than last month), the list of open vulnerabilities in Squeeze shows about 22 affected packages in total (11 less than last month). The new LTS frontdesk ensures regular triage of CVE reports and the difference between both counts dropped significantly. That’s good!

Thanks to our sponsors

Thanks to Sig-I/O, a new bronze sponsor, which joins our 35 other sponsors.

26 August, 2015 09:14AM by Raphaël Hertzog

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.3

A new maintenance release 0.4.3 of RProtoBuf is now on CRAN. RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

This release comes upon the request of CRAN and adds additional import statements to the NAMESPACE file. While we were at it, a few more things got cleaned up and edited---but no new code was added. Full details are below.

Changes in RProtoBuf version 0.4.3 (2015-08-25)

  • Declare additional imports from methods in NAMESPACE.

  • Travis CI tests now run faster as all CRAN dependencies are installed as binaries.

  • The tools/winlibs.R script now tests for R (< 3.3.0) before calling the (soon-to-be phased out) setInternet2() function.

  • Several small edits were made to DESCRIPTION to clarify library dependencies, provide additional references and conform to now-current R packaging standards.

CRANberries also provides a diff to the previous release. The RProtoBuf page has a package vignette, a 'quick' overview vignette, and a unit test summary vignette. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 August, 2015 03:01AM

August 25, 2015

hackergotchi for Norbert Preining

Norbert Preining

Plex Home Theater 1.4.1 updated for Debian/sid

Debian/sid is going through a big restructuring with the switch to a new gcc and libstdc++. Furthermore, libcec3 is now the default. So I have updated my PHT builds for Debian/sid to build and install against the current state of things, both for amd64 and i386.

Add the following lines to your sources.list:

deb http://www.preining.info/debian/ sid pht
deb-src http://www.preining.info/debian/ sid pht

You can also grab the binaries directly here for amd64 and i386, or get the source package with

dget http://www.preining.info/debian/pool/pht/p/plexhometheater/plexhometheater_1.4.1-2.dsc

The release file and changes file are signed with my official Debian key 0x860CDC13.

For Debian/testing I am waiting until the transition has settled. Please wait a bit more.

Now get ready to enjoy the next movie!

25 August, 2015 11:21PM by Norbert Preining

Richard Hartmann

Tor-enabled Debian mirror, part 2

Well, that was quite some feedback to my last post; via blog, email, irc, and in person. I actually think this may be the most feedback I ever got to any single blog post. If you are still waiting for a reply after this new post, I will get back to you.

To handle common question/information at once:

  • It was the first download from an official Tor-enabled mirror; I know people downloaded updates via Tor before
  • Yes, having this in the Debian installer as an option would be very nice
  • Yes, there are ways to load balance Tor hidden services these days and the pre-requisites are being worked on already
    • Yes, that load balanced setup will support hardware key tokens
  • A natively hidden service is more secure than accessing a non-hidden service via Tor because there is no way for a third-party exit node to mess with your traffic
  • apt-get etc will leak information about your architecture, release, suites, desired packages, and package versions. That can't be avoided, but otherwise it will not leak anything to the server. And even if it did… see above
  • Using Tor is also more secure than normal ftp/http/https because the server never sees a direct IP connection from you, and thus cannot reach back to the client other than through the single connection the client itself built up
  • noodles Tor-enabled his partial debmirror as well: http://earthqfvaeuv5bla.onion/
    • It took him 14322255 tries to get a private key which produced that address
    • He gave up to find one starting with earthli after 9474114341 attempts
  • I have been swamped with queries asking if I had tried apt-transport-tor instead of torify
    • I had forgotten about it, re-reading the blog post reminded me about apt transports
    • Tim even said in his post that Tor hidden mirror services would be nice
    • Try it yourself before you ask ;)
    • Yes, it works!

So this whole thing is a lot easier now:

# apt-get install torsocks apt-transport-tor
# mv /etc/apt/sources.list /etc/apt/sources.list--backup2
# cat > /etc/apt/sources.list << EOF
deb tor+http://vwakviie2ienjx6t.onion/debian/ unstable main contrib non-free
deb tor+http://earthqfvaeuv5bla.onion/debian/ unstable main contrib non-free
EOF
# apt-get update
# apt-get install vcsh

25 August, 2015 11:11PM by Richard 'RichiH' Hartmann

hackergotchi for Lunar

Lunar

Reproducible builds: week 17 in Stretch cycle

A good amount of the Debian reproducible builds team had the chance to enjoy face-to-face interactions during DebConf15.

Names in red and blue were all present at DebConf15
Picture of the “reproducible builds” talk during DebConf15

Hugging people with whom one has been working tirelessly for months gives a lot of warm-fuzzy feelings. Several recorded and hallway discussions paved the way to solving the remaining issues needed to make “reproducible builds” part of Debian proper. Both talks from the Debian Project Leader and the release team mentioned the effort as important for the future of Debian.

A forty-five minute talk presented the state of the “reproducible builds” effort. It was then followed by an hour-long “roundtable” to discuss current blockers regarding dpkg, .buildinfo and their integration in the archive.

Picture of the “reproducible builds” roundtable during DebConf15

Toolchain fixes

  • Kenneth J. Pronovici uploaded epydoc/3.0.1+dfsg-12 which makes class and modules ordering predictable (#795835) and fixes __repr__ so memory addresses don't appear in docs (#795826). Patches by Val Lorentz.
  • Sergei Golovan uploaded erlang/1:18.0-dfsg-2 which adds support for SOURCE_DATE_EPOCH to erlc. Patch by Chris West (Faux) and Chris Lamb.
  • Dmitry Shachnev uploaded sphinx/1.3.1-5 which makes grammar, inventory, and JavaScript locale generation deterministic. Original patch by Val Lorentz.
  • Stéphane Glondu uploaded ocaml/4.02.3-2 to experimental, making startup files and native packed libraries deterministic. The patch adds deterministic .file to the assembler output.
  • Enrico Tassi uploaded lua-ldoc/1.4.3-3 which now passes the -d option to txt2man and adds a --date option to override the current date.

Reiner Herrmann submitted a patch to make rdfind sort the processed files before doing any operation. Chris Lamb proposed a new patch for wheel implementing support for SOURCE_DATE_EPOCH instead of the custom WHEEL_FORCE_TIMESTAMP. akira sent one making man2html SOURCE_DATE_EPOCH aware.

Stéphane Glondu reported that dpkg-source would not respect tarball permissions when unpacking under a umask of 002.

After hours of iterative testing during the DebConf workshop, Sandro Knauß created a test case showing how pdflatex output can be non-deterministic with some PNG files.

Packages fixed

The following 65 packages became reproducible due to changes in their build dependencies: alacarte, arbtt, bullet, ccfits, commons-daemon, crack-attack, d-conf, ejabberd-contrib, erlang-bear, erlang-cherly, erlang-cowlib, erlang-folsom, erlang-goldrush, erlang-ibrowse, erlang-jiffy, erlang-lager, erlang-lhttpc, erlang-meck, erlang-p1-cache-tab, erlang-p1-iconv, erlang-p1-logger, erlang-p1-mysql, erlang-p1-pam, erlang-p1-pgsql, erlang-p1-sip, erlang-p1-stringprep, erlang-p1-stun, erlang-p1-tls, erlang-p1-utils, erlang-p1-xml, erlang-p1-yaml, erlang-p1-zlib, erlang-ranch, erlang-redis-client, erlang-uuid, freecontact, givaro, glade, gnome-shell, gupnp, gvfs, htseq, jags, jana, knot, libconfig, libkolab, libmatio, libvsqlitepp, mpmath, octave-zenity, openigtlink, paman, pisa, pynifti, qof, ruby-blankslate, ruby-xml-simple, timingframework, trace-cmd, tsung, wings3d, xdg-user-dirs, xz-utils, zpspell.

The following packages became reproducible after getting fixed:

Uploads that might have fixed reproducibility issues:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #795861 on fakeroot by Val Lorentz: set the mtime of all files to the time of the last debian/changelog entry.
  • #795870 on fatresize by Chris Lamb: set build date to the time of the latest debian/changelog entry.
  • #795945 on projectl by Reiner Herrmann: sort with LC_ALL set to C.
  • #795977 on dahdi-tools by Dhole: set the timezone to UTC before calling asciidoc.
  • #795981 on x11proto-input by Dhole: set the timezone to UTC before calling asciidoc.
  • #795983 on dbusada by Dhole: set the timezone to UTC before calling asciidoc.
  • #795984 on postgresql-plproxy by Dhole: set the timezone to UTC before calling asciidoc.
  • #795985 on xorg by Dhole: set the timezone to UTC before calling asciidoc.
  • #795987 on pngcheck by Dhole: set the date in the man pages to the latest debian/changelog entry.
  • #795997 on python-babel by Val Lorentz: make build timestamp independent from the timezone and remove the name of the build system locale from the documentation.
  • #796092 on a7xpg by Reiner Herrmann: sort with LC_ALL set to C.
  • #796212 on bittornado by Chris Lamb: remove umask-varying permissions.
  • #796251 on liblucy-perl by Niko Tyni: generate lib/Lucy.xs in a deterministic order.
  • #796271 on tcsh by Reiner Herrmann: sort with LC_ALL set to C.
  • #796275 on hspell by Reiner Herrmann: remove timestamp from aff files generated by mk_he_affix.
  • #796324 on fftw3 by Reiner Herrmann: remove date from documentation files.
  • #796335 on nasm by Val Lorentz: remove extra timestamps from the build system.
  • #796360 on libical by Chris Lamb: removes randomness caused by Perl in the generated icalderivedvalue.c.
  • #796375 on wcd by Dhole: set the date in the man pages to the latest debian/changelog entry.
  • #796376 on mapivi by Dhole: set the date in the man pages to the latest debian/changelog entry.
  • #796527 on vserver-debiantools by Dhole: set the date in the man pages to the latest debian/changelog entry.

Stéphane Glondu reported two issues regarding embedded build date in omake and cduce.

Aurélien Jarno submitted a fix for the breakage of make-dfsg test suite. As binutils now creates deterministic libraries by default, Aurélien's patch makes use of a wrapper to give the U flag to ar.
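
The wrapper trick boils down to something like this (my sketch, not necessarily Aurélien's actual patch):

#!/bin/sh
# ar wrapper: append the U modifier to the operation argument so that
# archive members keep their real timestamps, undoing binutils' new
# deterministic-by-default behaviour (sketch only)
op="$1"
shift
exec /usr/bin/ar "${op}U" "$@"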

Reiner Herrmann reported an issue with pound which embeds random dhparams in its code during the build. Better solutions are yet to be found.

reproducible.debian.net

Package pages on reproducible.debian.net now have a new layout improving readability designed by Mattia Rizzolo, h01ger, and Ulrike. The navigation is now on the left as vertical space is more valuable nowadays.

armhf is now enabled on all pages except the dashboard. Actual tests on armhf are expected to start shortly. (Mattia Rizzolo, h01ger)

The limit on how many packages people can schedule using the reschedule script on Alioth has been bumped to 200. (h01ger)

mod_rewrite is now used instead of JavaScript for the form in the dashboard. (h01ger)

Following the rename of the software, “debbindiff” has mostly been replaced by either “diffoscope” or “differences” in generated HTML and IRC notification output.

Connections to UDD have been made more robust. (Mattia Rizzolo)

diffoscope development

diffoscope version 31 was released on August 21st. This version improves fuzzy-matching by using the tlsh algorithm instead of ssdeep.

New command line options are available: --max-diff-input-lines and --max-diff-block-lines to override limits on diff input and output (Reiner Herrmann), --debugger to dump the user into pdb in case of crashes (Mattia Rizzolo).
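
An illustrative invocation combining the new options (the package file names are made up):

# compare two builds with raised diff limits, dropping into pdb on a crash
diffoscope --max-diff-input-lines=400000 --max-diff-block-lines=100 \
    --debugger foo_1.0-1_amd64.deb foo_1.0-1_amd64.rebuild.deb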

jar archives should now be detected properly (Reiner Herrmann). Several general code cleanups were also done by Chris Lamb.

strip-nondeterminism development

Andrew Ayer released strip-nondeterminism version 0.010-1. Java properties files in jar archives should now be detected more accurately. A missing dependency spotted by Stéphane Glondu has been added.

Testing directory ordering issues: disorderfs

During the “reproducible builds” workshop at DebConf, participants identified that we were still short of a good way to test variations on filesystem behaviors (e.g. file ordering or disk usage). Andrew Ayer took a couple of hours to create disorderfs. Based on FUSE, disorderfs is an overlay filesystem that will mount the content of a directory at another location. For this first version, it will make the order in which files appear in a directory random.
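
Usage follows the familiar FUSE pattern, so testing a package against random file ordering could look like this (a sketch, assuming the usual rootdir/mountpoint convention and this version's defaults):

mkdir ~/disordered
disorderfs ~/package-source ~/disordered
# directory listings of ~/disordered now come back in random order
(cd ~/disordered && dpkg-buildpackage -us -uc)
fusermount -u ~/disordered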

Documentation update

Dhole documented how to implement support for SOURCE_DATE_EPOCH in Python, bash, Makefiles, CMake, and C.

Chris Lamb started to convert the wiki page describing SOURCE_DATE_EPOCH into a Freedesktop-like specification in the hope that it will convince more upstreams to adopt it.
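
The core pattern is small enough to sketch in shell (assuming GNU date; see Dhole's write-up and the specification for the authoritative versions):

# honour SOURCE_DATE_EPOCH when the build environment exports it,
# fall back to the current time otherwise
if [ -n "$SOURCE_DATE_EPOCH" ]; then
    BUILD_DATE=$(date -u -d "@$SOURCE_DATE_EPOCH" '+%B %d, %Y')
else
    BUILD_DATE=$(date -u '+%B %d, %Y')
fi
echo "Generated on $BUILD_DATE"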

Package reviews

44 reviews have been removed, 192 added and 77 updated this week.

New issues identified this week: locale_dependent_order_in_devlibs_depends, randomness_in_ocaml_startup_files, randomness_in_ocaml_packed_libraries, randomness_in_ocaml_custom_executables, undeterministic_symlinking_by_rdfind, random_build_path_by_golang_compiler, and images_in_pdf_generated_by_latex.

117 new FTBFS bugs have been reported by Chris Lamb, Chris West (Faux), and Niko Tyni.

Misc.

Some reproducibility issues might only surface very late. Chris Lamb noticed that the test suite for python-pykmip was now failing because its test certificates have expired. Let's hope no packages are hiding a certificate valid for 10 years somewhere in their source!

Pictures courtesy and copyright of Debian's own paparazzi: Aigars Mahinovs.

25 August, 2015 04:11PM

Richard Hartmann

Tor-enabled Debian mirror

During Jacob Applebaum's talk at DebConf15, he noted that Debian should TLS-enable all services, especially the mirrors.

His reasoning was that when a high-value target downloads a security update for package foo, an adversary knows that they are still using a vulnerable version of foo and can try to attack before the security update has been installed.

In this specific case, TLS is not of much use though. If the target downloads 4.7 MiB right after a 4.7 MiB security update has been released, or downloads from security.debian.org, it's still obvious what's happening. Even padding won't help much as a 5 MiB download will also be suspicious. The mere act of downloading anything from the mirrors after an update has been released is reason enough to try an attack.

The solution is, of course, Tor.

weasel was nice enough to set up a hidden service on Debian's infrastructure; initially we agreed that he would just give me a VM and I would do the actual work, but he went the full way on his own. Thanks :) This service is not redundant, it uses a key which is stored on the local drive, the .onion will change, and things are expected to break.

But at least this service exists now and can be used, tested, and put under some load:

http://vwakviie2ienjx6t.onion/

I couldn't get apt-get to be content with a .onion in /etc/apt/sources.list and Acquire::socks::proxy "socks://127.0.0.1:9050"; in /etc/apt/apt.conf, but the torify wrapper worked like a charm. What follows is, to the best of my knowledge, the first ever download from Debian's "official" Tor-enabled mirror:

~ # apt-get install torsocks
~ # mv /etc/apt/sources.list /etc/apt/sources.list.backup
~ # echo 'deb http://vwakviie2ienjx6t.onion/debian/ unstable main non-free contrib' > /etc/apt/sources.list
~ # torify apt-get update
Get:1 http://vwakviie2ienjx6t.onion unstable InRelease [215 kB]
Get:2 http://vwakviie2ienjx6t.onion unstable/main amd64 Packages [7548 kB]
Get:3 http://vwakviie2ienjx6t.onion unstable/non-free amd64 Packages [91.9 kB]
Get:4 http://vwakviie2ienjx6t.onion unstable/contrib amd64 Packages [58.5 kB]
Get:5 http://vwakviie2ienjx6t.onion unstable/main i386 Packages [7541 kB]
Get:6 http://vwakviie2ienjx6t.onion unstable/non-free i386 Packages [85.4 kB]
Get:7 http://vwakviie2ienjx6t.onion unstable/contrib i386 Packages [58.1 kB]
Get:8 http://vwakviie2ienjx6t.onion unstable/contrib Translation-en [45.7 kB]
Get:9 http://vwakviie2ienjx6t.onion unstable/main Translation-en [5060 kB]
Get:10 http://vwakviie2ienjx6t.onion unstable/non-free Translation-en [80.8 kB]
Fetched 20.8 MB in 2min 0s (172 kB/s)
Reading package lists... Done
~ # torify apt-get install vim
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  vim-common vim-nox vim-runtime vim-tiny
Suggested packages:
  ctags vim-doc vim-scripts cscope indent
The following packages will be upgraded:
  vim vim-common vim-nox vim-runtime vim-tiny
5 upgraded, 0 newly installed, 0 to remove and 661 not upgraded.
Need to get 0 B/7719 kB of archives.
After this operation, 2048 B disk space will be freed.
Do you want to continue? [Y/n] 
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Reading changelogs... Done
(Reading database ... 316427 files and directories currently installed.)
Preparing to unpack .../vim-nox_2%3a7.4.826-1_amd64.deb ...
Unpacking vim-nox (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim_2%3a7.4.826-1_amd64.deb ...
Unpacking vim (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim-tiny_2%3a7.4.826-1_amd64.deb ...
Unpacking vim-tiny (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim-runtime_2%3a7.4.826-1_all.deb ...
Unpacking vim-runtime (2:7.4.826-1) over (2:7.4.712-3) ...
Preparing to unpack .../vim-common_2%3a7.4.826-1_amd64.deb ...
Unpacking vim-common (2:7.4.826-1) over (2:7.4.712-3) ...
Processing triggers for man-db (2.7.0.2-5) ...
Processing triggers for mime-support (3.58) ...
Processing triggers for desktop-file-utils (0.22-1) ...
Processing triggers for hicolor-icon-theme (0.13-1) ...
Setting up vim-common (2:7.4.826-1) ...
Setting up vim-runtime (2:7.4.826-1) ...
Processing /usr/share/vim/addons/doc
Setting up vim-nox (2:7.4.826-1) ...
Setting up vim (2:7.4.826-1) ...
Setting up vim-tiny (2:7.4.826-1) ...
~ # 

More services will follow. noodles, weasel, and I agreed that the project as a whole should aim to Tor-enable the complete package lifecycle, package information, and the website.

Maybe a more secure install option on the official images which, amongst others, sets up apt, apt-listbugs, dput, reportbug, et al to use Tor without further configuration could even be a realistic stretch goal.

25 August, 2015 07:50AM by Richard 'RichiH' Hartmann

Raphael Geissert

Updates to the sources.debian.net editor

Debconf is a great opportunity to meet people in real life, to express and share ideas in a different way, and to work on all sorts of stuff.

I therefore spent some time finishing a couple of features in the editor for sources.debian.net. Here are some of the changes:

  • Compare the source file with that of another version of the package
  • And in order to present that: tabs! editor tabs!
  • At the same time: generated diffs are now presented in a new editor tab, from where you can download or email them


Get it for chromium and iceweasel.

If your browser performs automatic updates of the extensions (the default), you should soon be upgraded to version 0.1.0 or later, bringing all those changes to your browser.

Want to see more? multi-file editing? in-browser storage of the editing session? that and more can be done, so feel free to join me and contribute to the Debian sources online editor!

25 August, 2015 07:00AM by Raphael Geissert (noreply@blogger.com)

August 24, 2015

Richard Hartmann

DebConf15

Even though the week of DebCamp took its toll and the stress level will not go down any time soon...

...DebConf15 has finally started! :)

24 August, 2015 10:48PM by Richard 'RichiH' Hartmann

Iustin Pop

Finally, systemd!

Even though Debian has moved to systemd as default a long while ago now, I've stayed with sysv as I have somewhat custom setups (self-built trimmed down kernels, separate /usr not pre-mounted by initrd, etc.).

After installing a new system with Jessie and playing a bit with systemd on it a couple of months ago, I said it's finally time to upgrade. Easier said than actually done ☹.

The first system I upgraded was a recent (~1 year old) install. It was a trimmed-down system with Debian's kernel, so everything went smoothly. So smoothly that I soon forgot I made the change, and didn't do any more switches for a while.

Systemd was therefore out of my mind until this past Friday, when I got a bug report about mt's rcS init script and shipping a proper systemd unit. The first step should be to actually start using systemd, so I said: let's convert some more things!

During the weekend I upgraded one system, still a reasonably small install, but older - probably 6-7 years. First reboot into systemd flagged the fact that I had some forced-load modules which no longer exist, a fact that was too easy to ignore with sysv. Nice! The only downside was that there seems to be some race condition between and ntp, as it fails to start on boot (port listen conflict). I'll see if it repeats. Another small issue is that systemd doesn't like duplicate fstab entries (i.e. two devices which both refer to the same mount point), while this works fine for mount itself (when specifying the block device).
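
To illustrate the duplicate-entry point, a fragment like the following (made up for illustration, not my actual fstab) is fine for mount when you name the block device explicitly, but makes systemd's fstab generator complain:

# two fstab entries sharing one mount point
/dev/sdb1  /data  ext4  defaults,noauto  0  2
/dev/sdc1  /data  ext4  defaults,noauto  0  2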

I said that after that system, I'll wait a while until upgrading the next. But so it happened that today another system had an issue and I had to reboot it (damn lost uptimes!). The kernel was old so I booted into a newer one (this time compiled with the required systemd options), so I had a thought - what if I take the opportunity and also switch to systemd on this system?

Caution said to wait, since this was the oldest system - installed sometime during or before 2004. Plus it doesn't use an initrd (long story), and it has a split /usr. Caution… excitement… caution lost ☺ and I proceeded.

It turns out that systemd does warn about split /usr but itself has no problems. I learned that I also had very old sysfs entries that no longer exist, and which I didn't know about as sysv doesn't make it obvious. I also had a crypttab entry which was obsolete, and I forgot about it, until I met the nice red moving ASCII bar which—fortunately—had a timeout.

To be honest, I believed I'd have to rescue-boot and fix things on this "always-unstable" machine, on which I install and run random things, and which has a hackish /etc/fstab setup. I'm quite surprised it just worked. On unstable.

So thanks a lot to the Debian systemd team. It was much simpler than I thought, and now, on to exploring systemd!

P.S.: the sad part is that usually I'm a strong proponent of declarative configuration, but for some reason I was reluctant to migrate to systemd also on account of losing the "power" of shell scripts. Humans…

24 August, 2015 09:40PM

hackergotchi for David Moreno

David Moreno

Thanks Debian

I sent this email to debian-private a few days ago, on the 10th anniversary of my Debian account creation:

Date: Fri, 14 Aug 2015 19:37:20 +0200
From: David Moreno 
To: debian-private@lists.debian.org
Subject: Retiring from Debian
User-Agent: Mutt/1.5.23 (2014-03-12)

[-- PGP output follows (current time: Sun 23 Aug 2015 06:18:36 PM CEST) --]
gpg: Signature made Fri 14 Aug 2015 07:37:20 PM CEST using RSA key ID 4DADEC2F
gpg: Good signature from "David Moreno "
gpg:                 aka "David Moreno "
gpg:                 aka "David Moreno (1984-08-08) "
[-- End of PGP output --]

[-- The following data is signed --]

Hi,

Ten years ago today (2005-08-14) my account was created:

https://nm.debian.org/public/person/damog

Today, I don't feel like Debian represents me and neither do I represent the
project anymore.

I had tried over the last couple of years to retake my involvement but lack of
motivation and time always got on the way, so the right thing to do for me is
to officially retire and gtfo.

I certainly learned a bunch from dozens of Debian people over these many years,
and I'm nothing but grateful with all of them; I will for sure carry the project
close to my heart — as I carry it with the Debian swirl I still have tattooed
on my back ;)

http://damog.net/blog/2005/06/29/debian-tattoo/

I have three packages left that have not been updated in forever and you can
consider orphaned now: gcolor2, libperl6-say-perl and libxml-treepp-perl.

With all best wishes,
David Moreno.
http://damog.net/


[-- End of signed data --]

I received a couple of questions about my decision here. I basically don’t feel like Debian represents my interests and neither do I represent the project – this doesn’t mean I don’t believe in free software, to the contrary. I think some of the best software advancements we’ve made as a society are thanks to it. I don’t necessarily believe in how the project has evolved, whether that has been the right way to regain relevancy and dominance, and whether it has remained primarily a way to feed dogmatism versus pragmatism. This is the perfect example of a tragic consequence. I was very happy to learn that the current Debian Conference being held in Germany got the highest attendance ever; hopefully that can be utilized in a significant and useful way.

Regardless, my contributions to Debian were never noteworthy so it’s also not that big of a deal. I just need to close cycles myself and move forward, and the ten year anniversary looked like a significant mark for that.

Poke me in case you wanna discuss some more. I’ll always be happy to. Especially over beer :)

Peace.

24 August, 2015 07:43PM