July 29, 2016

Russ Allbery

remctl 3.12

This release adds a new, experimental server implementation: remctl-shell. As its name implies, this is designed to be run as a shell of a dedicated user rather than as a server. It does not use the remctl protocol, instead relying on ssh to pass in the command and user information (via special authorized_keys configuration). But it supports the same configuration as the normal remctl server. It can be useful for allowing remctl-style simple RPC in environments that only use ssh public key authentication.

Also in this release is a new configuration option, sudo, which is like the existing user option to run a command as another user but uses sudo instead of calling setuid() directly. This allows the server to switch users when running as a non-root user, which will be the normal case for remctl-shell.
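To make the sudo option concrete, here is a minimal sketch of what a remctl.conf command entry using it might look like. The command name, paths, target user, and ACL principal are made up for illustration, and the exact option syntax should be checked against the remctl 3.12 documentation:

# Hypothetical /etc/remctl.conf entry: run the "restart" subcommand of the
# "web" command as the www-data user via sudo rather than via setuid(),
# so it also works when the server itself runs unprivileged (remctl-shell).
web restart /usr/local/sbin/restart-web sudo=www-data host/admin.example.org@EXAMPLE.ORG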

The remctl-shell implementation in this release should be considered a first draft and is likely to improve in the future. (I already have a list of things that probably should be improved.)

You can get the latest release from the remctl distribution page.

29 July, 2016 08:37PM

Debian Sysadmin Team

Peter Palfrader: Onion Services

Abstract

I just set up a lot of Onion Services for many of Debian's static websites.

You can find the entire list of services on onion.debian.org.

More might come in the future.

-- Peter Palfrader

29 July, 2016 08:16PM

Bits from Debian

Looking for the artwork for the next Debian release

Each release of Debian has a shiny new theme, which is visible on the boot screen, the login screen and, most prominently, on the desktop wallpaper.

Debian plans to release Stretch next year. As ever, we need your help in creating its theme! You have the opportunity to design a theme that will inspire thousands of people while working on their Debian systems.

They might be people working in exciting NASA missions:

Debian Squeeze Space Fun Spotted during the Juno Orbital Insertion live stream

Or DIY users who decided to make a matching keyboard:

Keyboard matching Debian Lenny Theme

If you're interested, please take a look at https://wiki.debian.org/DebianDesktop/Artwork/Stretch

29 July, 2016 05:15PM by Ana Guerrero Lopez

Luciano Prestes Cavalcanti

Contributing with Debian Recommendation System

Hi, my name is Luciano Prestes. I am participating in the Google Summer of Code (GSoC) program; my mentor is Antonio Terceiro and my co-mentor is Tassia Camoes, both Debian Developers. The project I am contributing to is AppRecommender, a package recommender for Debian systems. My goal is to add a new recommendation strategy to AppRecommender so that it recommends packages after the user installs a new package with 'apt'.
 
Initially, AppRecommender had three recommendation strategies: content-based, collaborative, and hybrid. For my GSoC work, this text explains two of these strategies, content-based and collaborative. The content-based strategy takes the user's packages and analyzes their descriptions to find other Debian packages similar to them; in other words, AppRecommender uses the content of the user's packages to recommend similar packages to the user. The collaborative strategy compares the user's packages with the packages of other users and recommends packages that users with a similar profile have, where a user's profile is their set of packages. In her work, Tassia Camoes used popularity-contest data to compare user profiles in the collaborative strategy; popularity-contest is an application that collects a user's packages into a submission, sends it to the popularity-contest server, and generates statistical data by analyzing the users' packages.
 
I have been working with a classmate on our bachelor thesis since August 2015. In that work we created new strategies for AppRecommender, one using machine learning and another using a deterministic method to generate the recommendation; another feature we implemented improves the user profile by building it from recently used packages. During that work we studied and analyzed the collaborative strategy and removed it from AppRecommender, because its implementation needed to fetch the popularity-contest submissions on the user's PC, which is against the privacy policy of popularity-contest.
 
My work on Google Summer of Code is to create a new strategy in AppRecommender, as described above: this strategy should take a reference package, or a list of reference packages, and then analyze the user's packages to make a recommendation using the reference packages as a base. For example, if the user runs "$ sudo apt install vim", AppRecommender uses "vim" as the reference package and should recommend packages related both to "vim" and to the other packages the user has installed. This new strategy can be implemented as either a content-based or a collaborative strategy.
 
The first month of Google Summer of Code is meant for students to get to know the project's community, so I talked with the Debian community about my project to get feedback and ideas. I talked with the Debian community on IRC channels, and from those conversations came the idea of using popularity-contest data to improve the recommendations. My mentors approved the idea of using popularity-contest data, so we started a discussion about how to use it in AppRecommender without breaking the privacy policy of popularity-contest.
 
Now my work on Google Summer of Code is to create the new strategy for AppRecommender that can make recommendations using a list of packages as a reference, so that, as explained above, when the user installs packages with "sudo apt install vim vagrant", AppRecommender recommends packages related to both "vim" and "vagrant", and this recommendation is also related to the user's profile. The other part of the work is to use the popularity-contest data to improve AppRecommender's recommendations using a new model of collaborative strategy.

29 July, 2016 01:59PM

hackergotchi for Norbert Preining

Norbert Preining

TUG 2016 – Day 4 – Books, ooh Books (and Boats)

Talks have been finished, and as a special present to the participants, Pavneet has organized an excursion that probably was one of the best I ever had. First we visited the Toronto Reference Library where we were treated to a delicious collection of rare books (not to mention all the other books and architecture), and then a trip through the Ismaili Centre Toronto and the Aga Khan Museum.

Page from "A Dream of John Ball", Kelmscott Press Edition, 1892.

(Kelmscott Press edition from 1892 of William Morris' A Dream of John Ball.) All these places were great pieces of architecture with excellent samples of the art of writing and printing. And, not to be forgotten, after all that came the conference dinner evening cruise!

Our first stop was the Toronto Reference Library. Designed by Raymond Moriyama, it features a large open atrium with skylights, and it gives the library an open and welcoming feeling. We were told that it resembles a tea cup that needs to be filled – with knowledge.

The Toronto Reference Library's atrium

The library also features running water in several places – the architect had the idea that natural ambient noise is more fitting for a library than the enforced silence that never really happens anyway. Originally there was lots of greenery hanging down into the atrium, resembling the Hanging Gardens, but it has been scrapped for financial reasons. But there are still green oases, like this beautiful green wall in a corner of the library.

Wall of Green in the middle of the library

We were guided first to the fifth floor where the special collection is housed. And what a special collection! The librarian in charge had laid out about 20 exquisite books, ranging from early illuminated manuscripts over incunabula to high pieces of printing art from the 18th and 19th centuries. Here we have an illuminated manuscript in Carolingian minuscule.

Illuminated script in Carolingian minuscule

What was really surprising for all of us about this special collection was that all these books were simply laid out in front of us, that the librarian touched and used them without gloves, and above all, that he told us it is common practice, if one wants, to check out these books for study sessions and enjoy them on the spot in the reading room. I don't know any other library that allows you to actually handle such rare and beautiful specimens!

The library not only features lots of great books, it also has some art installations, like these light rods.

Art Light installation in the Toronto Reference Library

In one of the books I found by chance a map of my hometown of Vienna. On this very old map, the place where I grew up is still uninhabited, somewhere in the far upper right corner. Times have changed.

Map of Vienna found in the Toronto Reference Library

After we left this open and welcoming treasure house of beautiful books, we moved to the Aga Khan Museum and the Ismaili Centre Toronto, which stand face to face, separated by water ponds, in the Aga Khan Park a bit outside of central Toronto. Here we see the Ismaili Centre from the Aga Khan Museum entrance. The big glass dome is the central prayer room, and it is illuminated at night. Just one detail – one can see in the outer wall one part that looks like glass, too. This is the prayer alcove in the back of the prayer hall, and it is made from huge slabs of onyx that are also lit up at night.

View onto the Ismaili Centre's Prayer Hall formed by a glass dome

The Ismaili Centre, designed by Charles Correa, combines a modern, functional, and simple style with the wonderful ornamental art of the Islamic heritage. The inside of the Ismaili Centre features many pieces of exquisite art – calligraphy, murals, stone work, etc. – here is a medallion made from precious stone and set onto a hand-carved wall.

Medallion made of precious stones in a hand-carved wall, Ismaili Centre Toronto

A calligraphy on the wall in the Ismaili Centre

Wall Calligraphy in the Ismaili Centre Toronto

Following the Ismaili Centre we turned to the Aga Khan Museum, which documents Islamic art, science, and history with an extensive collection. We didn't have much time, and in addition I had to do some fire-fighting over the phone, but the short trip through the permanent collection, with samples of excellent calligraphy, was amazing.

Koran Calligraphy, Aga Khan Museum Toronto

After returning from this lovely excursion and a short break, we set off for the last stop of the night, the dinner cruise. After a short bus ride we could board our ship, and off we went. Although the beer selection was not on par with what we are used to from craft breweries, the perfectly sized boat with two decks and lots of places to hang around invited us to many discussions and chitchats. And finally I could also enjoy the skyline of Toronto.

View onto Toronto from the boat

After the dinner we had some sweets, one of which was a specially made cake with the TUG 2016 logo on it. I have to say, it was not only this cake but the whole excellent and overflowing food we had during all these days that will make me go on a diet when I am back in Japan. Pavneet organized three different styles of cuisine for the lunch breaks (Thai, Indian, Italian), then the excursions to local brewers, and, and, and… If it weren't for TeX, I would call it a "Mastkur" (a fattening cure).

TUG 2016 cake

During the cruise we also had a little ceremony thanking Jim for his work as president of the TUG, but above all Pavneet for this incredibly well organized conference. I think everyone agreed that this was the best TUG conference in a long time.

Sunset near Toronto

During the ceremony, Pavneet also announced the winners of the TUG 2016 fountain pen auction. These pens have a long history/travel behind them, see details on the linked page, and were presented to the special guests of the conference. Two remaining pens were auctioned, with the funds going to the TUG. The first one was handed over to Steve Grathwohl, and – to my utter surprise – the second one to myself. So now I am a happy owner of a TUG 2016 fountain pen. What a special treat!

Just one more detail about these pens: they are of traditional style, so without ink cartridges; one needs to fill in the ink with a syringe. I guess I need to stock up a bit at home, and, more importantly, train my really ugly handwriting, otherwise it would be a shame to use this exquisite tool.

We returned to the harbor around 10pm and went back to the hotel, where there was much greeting and thanking, as many people would be returning home the following day.

Heading back to Toronto

I will also leave on Friday morning to meet with friends, thus I will not be participating in (and not reporting on) the last excursion of TUG 2016. I will leave Toronto and TUG 2016 with (nearly) exclusively good memories of excellent talks, great presentations, wonderful excursions, and lots of things I have learned. I hope to see all of the participants at next year's TUG meeting – and I hope I will be able to attend it.

Thanks a lot to Pavneet, you have done an incredible job. And last but not least, thanks to your lovely wife for letting you do all this, I know how much time we stole from her.

A few more photos can be found at the album Day 4 – Books, ooh books.

29 July, 2016 10:17AM by Norbert Preining

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppGetconf 0.0.1

A new package, RcppGetconf, to read system configuration is now available --- not unlike getconf from the C library. Now R can read what the system calls sysconf, pathconf and confstr have to say. The package is still pretty green, and is now on CRAN in a very first version, corresponding to a very first (and single!) commit.

Right now, the CRAN version has just one function, getAll(), similar to getconf -a. A first example shows how it provides all values that can be retrieved -- currently 320 on my systems.

R> res <- getAll()
R> head(res)
               key value type
1         LINK_MAX 65000 path
2  _POSIX_LINK_MAX 65000 path
3        MAX_CANON   255 path
4 _POSIX_MAX_CANON   255 path
5        MAX_INPUT   255 path
6 _POSIX_MAX_INPUT   255 path
R> tail(res)
                      key  value type
315    LEVEL4_CACHE_ASSOC      0  sys
316 LEVEL4_CACHE_LINESIZE      0  sys
317                  IPV6 200809  sys
318           RAW_SOCKETS 200809  sys
319           _POSIX_IPV6 200809  sys
320    _POSIX_RAW_SOCKETS 200809  sys
R> 

Earlier this evening I added a second function to the GitHub repo which can access individual values.

But right now, the biggest need is really for someone with some systems skills---and an OS X machine---to look at the code, and maybe the getconf.c from the C library, in order to make this build on OS X. If you can help, please get in touch.

More about the package is at the local RcppGetconf page and the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

29 July, 2016 03:57AM

July 28, 2016

hackergotchi for Thorsten Glaser

Thorsten Glaser

Please save GMane!

GMane has been down for a day or two, and flakey for a day before that. MidnightBSD’s laffer1 just linked the reason, which made me cry out loud.

GMane is really great, and I rely on the NNTP interface a lot, both posting and especially reading — it gives me the ability to download messages from mailing lists I don't receive in order to be able to compose replies with (mostly) correct References and In-Reply-To headers. Its web interface, especially the article permalinks, is also extremely helpful.

This is a request for a petition to save GMane. Please, someone, do something! Thanks in advance!

28 July, 2016 11:00PM by MirOS Developer tg (tg@mirbsd.org)

hackergotchi for Gunnar Wolf

Gunnar Wolf

Subtitling DebConf talks — Come and join!

As I have said here a couple of times already, I am teaching a diploma course on embedded Linux at UNAM, and one of the modules I'm teaching (with Sandino Araico) is the boot process. We focus on ARM for obvious reasons, and while I have done my reading on the topic, I am very far from considering myself an expert.

So, after attending Martin Michlmayr's «Debian on ARM devices» talk, I decided to do its subtitles as part of my teaching job. This talk gives a great panorama of what actually has to happen in order to get an ARM machine to boot, and how support for new ARM devices comes to Linux in general and to Debian in particular — perfect for our topic! But my students are not always very fluent in English, so giving a hand is always most welcome.

In case any of you dear readers didn't know, we have a DebConf subtitling team. Yes, our work takes much longer to reach the public, and we have no hope whatsoever of getting it all completed, but every person lending a hand and subtitling a talk that they thought was interesting helps a lot to improve our talks' usability. Even if you don't have enough time to do a whole talk (we are talking about some 6 hours per 45-minute session), adding a bit of work is very very very welcome. So...

Enjoy — And thanks in advance for your work!

28 July, 2016 06:54PM by gwolf

Elena 'valhalla' Grandi

kvm virtualization on a liberated X200, part 1


As the libreboot website warns (https://libreboot.org/docs/hcl/x200.html), there are issues with virtualization on the X200 without updated microcode.

Virtualization is something that I use, and I have a number of VMs on that laptop, managed with libvirt; since it has microcode version 1067a, I decided to try and see whether I was lucky and virtualization was working anyway.

The result is that the machines no longer start: the kernel loads, and then it crashes and reboots. I don't remember why, but I then tried to start a Debian installer CD (ISO) I had around, and that one worked.

So, I decided to investigate a bit more: apparently a new installation done from that ISO (debian-8.3.0-amd64-i386-netinst.iso) boots and works with no problem, while my (older, I suspect) installations don't. I tried to boot one of the older VMs with that image in recovery mode, tried to chroot into the original root, and got: failed to run command '/bin/bash': Exec format error.

Since that shell was lacking even the file command, I then tried to start a live image and chose the lightweight debian-live-8.0.0-amd64-standard.iso: that one failed to start in the same way as the existing images.

Another try with debian-live-8.5.0-i386-lxde-desktop.iso confirmed that apparently Debian ≥ 8.3 works and Debian 8.0 doesn't (I don't have ISOs for versions 8.1 and 8.2 to properly bisect the issue).

I've skimmed the release notes for 8.3 (https://www.debian.org/News/2016/20160123) and noticed that there was an update to the intel-microcode (https://packages.debian.org/jessie/intel-microcode) package, but AFAIK the installer doesn't have anything from non-free, and I'm sure that non-free wasn't enabled on the VMs.

My next attempt (thanks tosky on #debian-it for suggesting this obvious solution that I was missing :) ) was to run one of the VMs with plain qemu instead of kvm and bring it up to date: the upgrade was successful and included the packages in this screenshot, but on reboot it still fails to start, as before.

Image/photo: http://social.gl-como.it/photos/valhalla/image/ecb0e193b16fbd507d0148636177961b
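For reference, a rough sketch of what running such an image with plain qemu (pure software emulation, no KVM) can look like; the image name, memory size, and domain name below are just examples, not my actual setup:

# Boot an existing disk image without hardware virtualization (no -enable-kvm,
# so qemu falls back to software emulation); names and sizes are examples.
qemu-system-x86_64 -m 1024 -drive file=old-vm.qcow2,format=qcow2
# With libvirt, the equivalent is changing the domain type from "kvm" to
# "qemu" in the XML, e.g. with: virsh edit old-vm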

Right now, I think I will just recreate from scratch the images I need, but when I have time I'd like to investigate the issue a bit more, so hopefully there will be a part 2 to this article.

28 July, 2016 04:08PM by Elena ``of Valhalla''

hackergotchi for Norbert Preining

Norbert Preining

TUG 2016 – Day 3 – Stories and Histories

The last day of TUG 2016, or at least the last day of talks, brought four one-hour talks from special guests, and several others, where many talks told us personal stories and various histories.

A great finish of a great conference.

Jennifer Claudio – The case for justified text

Due to a strange timezone bug in my calendar program, I completely overslept a morning meeting and breakfast, as well as the first talk, so unfortunately I don't have anything to report about this surely interesting talk comparing justification in various word processors and TeX.

Leyla Akhmadeeva, Rinat Gizatullin, Boris Veytsman – Are justification and hyphenation good or bad for the reader?

Still half dizzy and without coffee, I couldn't really follow this talk, and only woke up toward the end, when there was a lot of interesting discussion about speed reading and its non-existence (because it is simply skimming over text), and about improvements in reading comprehension.

Charles Bigelow – Looking for legibility

Another special guest, Charles Bigelow, presented a huge pool of research and work on readability, and on how attitudes toward and usage of fonts change over time. A very engaging and well laid out talk, full of interesting background images and personal opinions and thoughts. Charles also touched on topics of readability on modern devices like e-readers and mobiles. He compared the recent developments in font design for mobile devices with their work on Lucida 20+ years ago, and concluded that both arrived at the same solutions.

A very educating and amusing talk packed full with information on readability. I surely will revisit the recording in a study session.

David Walden – Some notes on the history of digital typography

David touched on many topics in the history of digital typography which he has experienced himself over the years: first the development of newspaper production and printing, then the evolution of editors from simple text editors through word processors to full-fledged DTP programs. Finally he touched on various algorithmic problems that appear in the publishing business.

Tim Inkster – The beginning of my career

Tim, our fantastic guide through his print shop, the Porcupine's Quill, on the second excursion day, talked about his personal ups and downs in the printing business, all filled with an endless flow of funny stories and surprising anecdotes. Without slides, with nothing but his voice and his stories, he kept us hanging on his every word without a break. I recommend watching the recording of his talk, because this simple blog post cannot convey the funny comments and great stories he shared with us.

Joe Clark – Type and tiles on the TTC

Joe unveiled the history of the rise and fall of the underground type and tiles in Toronto. It is surprising to me that a metro network as small as Toronto's can have such a long history of changes in design, layout, and presentation. Some of the photos completely baffled me – how can anyone put up signs like that? I was thinking. To quote Joe (hopefully I remember correctly):

You see what happens without adult supervision.

Abdelouahad Bayar – Towards an operational (La)TeX package supporting optical scaling of dynamic mathematical symbols

A technical talk about an attempt at providing optical scaling of mathematical symbols. As far as I understand, it tries to improve on the TeX way of doing extensible math symbols by gluing things together. It seems to be a highly involved and technically interesting project, but I couldn't completely grasp the aim of it.

Michael Cohen, Blanca Mancilla, John Plaice – Zebrackets: A score of years and delimiters

John introduced us to Zebrackets, striped parentheses and brackets, to help us keep track of the pairing of those beasts. But as we know, zebras are very elusive animals… and so we saw lots of striped brackets around. The idea of better markup of matching parentheses is definitely worth developing.

Charles Bigelow – Probably approximately not quite correct: Revise, repeat

The second talk by Charles, this time on the history of the Lucida fonts, from the early beginnings drawn on graph paper to recent developments using FontLab to produce OpenType fonts. A truly unique crash course through the development of one of the very big families of fonts, and one of the first outside Computer Modern that also had support for proper math typesetting in TeX.

Aggressively legible!

One of the key phrases that popped up again and again was aggressively legible, mostly with negative connotations, as in far too fat symbols or far too big Arabic letters. But for me this font family is still close to my heart. I purchased it back then from Y&Y for my PhD thesis, have since upgraded to the TUG version including the OpenType fonts, and I use them for most of my presentations. Maybe I like the aggressive legibility!

Chuck slipped in lots of nice comments about Kris Holmes, the development practice in their cooperation, stories of business contacts, and much more, making this a very lively, amusing, and at the same time very educational talk.


This concluded the TUG conference talks, and we thanked Pavneet for his excellent organization. But since we still had up to two days of excursions ahead, many people dispersed quickly, only to meet again for an optional Type and Tile Tour – three to five subway stops with discussion of the typography.

This guided tour through the underground of Toronto, led by Joe Clark, who had spoken in the morning about this topic, was attended by far too many participants. I think there were around 25 when we left. I thought that this would not work out properly, and so decided to leave the group and wander around alone.

The last program point for today was dinner with a blues music concert in the nearby Jazz Bistro.

Excellent live music in a somewhat chic, sophisticated atmosphere was a good finish to this excellent day. With Herb from MacTeX and his wife we killed two bottles of red wine, before slowly strolling back to the hotel.

A great finishing day of talks.

28 July, 2016 11:28AM by Norbert Preining

hackergotchi for Michael Prokop

Michael Prokop

systemd backport of v230 available for Debian/jessie

At DebConf 16 I was working on a systemd backport for Debian/jessie. Results are officially available via the Debian archive now.

In Debian jessie we have systemd v215 (which originally dates back to 2014-07-03 upstream-wise, plus changes + fixes from the pkg-systemd folks of course). Now via Debian backports you have the option to update systemd to a very recent version: v230. If you have jessie-backports enabled it's just an `apt install systemd -t jessie-backports` away. For the upstream changes between v215 and v230, see upstream's NEWS file.
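For completeness, enabling jessie-backports and pulling in the backported systemd looks roughly like this (the mirror URL is just an example; any Debian mirror carrying jessie-backports works):

# /etc/apt/sources.list.d/backports.list
deb http://httpredir.debian.org/debian jessie-backports main

# then update and install systemd from backports
apt update
apt install -t jessie-backports systemd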

(Actually the systemd backport has been available since 2016-07-19 for amd64, arm64 + armhf, though for mips, mipsel, powerpc, ppc64el + s390x we had to fight against GCC ICEs when compiling on/for Debian/jessie, and for the i386 architecture the systemd test suite identified broken O_TMPFILE permission handling.)

Thanks to Alexander Wirt from the backports team for accepting my backport, thanks to intrigeri for the related apparmor backport, Guus Sliepen for the related ifupdown backport and Didier Raboud for the related usb-modeswitch/usb-modeswitch-data backports. Thanks to everyone testing my systemd backport and reporting feedback. Thanks a lot to Felipe Sateler and Martin Pitt for reviews, feedback and cooperation. And special thanks to Michael Biebl for all his feedback, reviews and help with the systemd backport from its very beginnings until the latest upload.

PS: I cannot stress enough how fantastic Debian's pkg-systemd team is. Responsive, friendly, helpful, dedicated and skilled folks, thanks folks!

28 July, 2016 09:10AM by mika

July 27, 2016

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Nageru in Debian

Uploading to ftp-master (via ftp to ftp.upload.debian.org):
Uploading nageru_1.3.3-1.dsc: done.
Uploading nageru_1.3.3.orig.tar.gz: done.
Uploading nageru_1.3.3-1.debian.tar.xz: done.
Uploading nageru-dbgsym_1.3.3-1_amd64.deb: done.
Uploading nageru_1.3.3-1_amd64.deb: done.
Uploading nageru_1.3.3-1_amd64.changes: done.

So now it's in the NEW queue, along with its dependency bmusb. Let's see if I made any fatal mistakes in release preparation :-)

Edit: Whoa, that was fast—ACCEPTED into unstable.

27 July, 2016 11:11PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

DebConf16 low resolution videos

By popular request...

If you go to the Debian video archive, you will notice the appearance of an "lq" directory in the debconf16 subdirectory of the archive. This directory contains low-resolution re-encodings of the same videos that are available in the toplevel.

The quality of these videos is obviously lower than that of the ones made available during debconf, but their file sizes should be up to about 1/4th of the file sizes of the full-quality versions. This may make them more attractive as a quick download, as a version for a small screen, as a download over a mobile network, or something of the sort.

Note that the audio quality has not been reduced. If you're only interested in the audio of the talks, these files may be a better option.
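If you want to produce a similar low-resolution version of a talk yourself, an ffmpeg invocation along these lines would do it while leaving the audio stream untouched. Note this is only an illustrative sketch, not necessarily how the archive's re-encodings were generated, and the codec and bitrate here are assumptions:

# Scale the video down to 360p and re-encode it at a low bitrate,
# copying the original audio stream unchanged.
ffmpeg -i talk.webm -vf scale=-2:360 -c:v libvpx -b:v 300k -c:a copy talk-lq.webm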

27 July, 2016 08:13PM

hackergotchi for Norbert Preining

Norbert Preining

TUG 2016 – Day 2 – Figures to Fonts

The second day of TUG 2016 was again full of interesting talks, spanning from user experiences to highly technical details about astrological chart drawing, from graphical user interfaces for TikZ to the invited talk by Robert Bringhurst on the Palatino family of fonts.


With all these interesting things there is only one thing to complain about – I cannot get out of the dark basement and enjoy the city…

After an evening full of sake and a good night's sleep we were ready to dive into the second day of TUG.

Kaveh Bazargan – A graphical user interface for TikZ

The opening speaker of Day 2 was Kaveh. He first gave us a quick run-down on what he does for business and what challenges publishers are facing in these times. After that he introduced us to his new development of a command-line graphical user interface for TikZ. I wrote command line on purpose, because the editing operations are short commands issued on a kind of command line, which gives immediate graphical feedback. The basis of the technique is a simplified TikZ-like meta-language that is not only easy to write, but also easy to parse.

While the amount of supported commands and features of TikZ is still quite small, I think the basic idea is a good one, and there is good potential in it.

Matthew Skala – Astrological charts with horoscop and starfont

Next up was Matthew, who introduced us to the involved task of typesetting astrological charts. He included comparisons with various commercial and open source solutions, where Matthew of course, but I too, felt that his charts came off quite well!

As an extra bonus we got some charts of famous singers, as well as the TUG 2016 horoscope.

David Tulett – Development of an e-textbook using LaTeX and PStricks

David reported on his project to develop an e-textbook on decision modeling (lots of math!) using LaTeX and PStricks. His e-book is of course a PDF. There was a lot of very welcome feedback – free (CC-BY-NC-ND) textbooks for the sciences are rare and we need more of them.

Christian Gagné – An Emacs-based writing workflow inspired by TeX and WEB, targeting the Web

Christian's talk revolved around editing and publishing using org-mode in Emacs and the various levels of macros one can use in this setup. He finished with a largely incomprehensible vision of a future equational-logic-based notation mode. I have used equational logic in my day-to-day job, and I am not completely convinced that this is a good approach for typesetting and publishing – but who knows, I am looking forward to a more logic-based approach!

Barbara Beeton, Frank Mittelbach – In memoriam: Sebastian Rahtz (1955-2016)

Frank recalled Sebastian's many contributions to a huge variety of fields, and remembered our much-missed colleague with many photos and anecdotes.

Jim Hefferon – A LaTeX reference manual

Jim reported on the current state of a LaTeX reference manual, which tries to provide documentation orthogonal to the many introductions and user guides available, by providing a straight, down-to-earth reference manual with all the technical bells and whistles necessary.

As I have had to write a reference manual for a computer language myself, it was very interesting to see how they dealt with many of the same problems I am facing.

Arthur Reutenauer, Mojca Miklavec – Hyphenation past and future: hyph-utf8 and patgen

Arthur reported on the current status of the hyphenation pattern project, and in particular the license and usage hell they recently got into, with large corporations simply grabbing the patterns without proper attribution.

In a second part he gave a rough sketch of his shot at a reimplementation of patgen. Unfortunately he wrote in rather unreadable handwriting on a flip chart, which meant only the first row of the audience could actually see what he was writing.

Federico Garcia-De Castro – TeXcel?

As an artist organizing large festivals, Federico has to fight with financial planning and reports. He seemed not content with the abilities of the usual suspects, so he developed a way to do Excel-like bookkeeping in TeX. Nice idea, I hope I can use that system for the next conference I have to organize!

Jennifer Claudio – A brief reflection on TeX and end-user needs

The last speaker in the morning session was Jennifer, who gave us a fresh end-user's view of the TeX environment and its needs. These kinds of talks are a very welcome contrast to technical talks, and hopefully all of us developers take home some of her suggestions.

Sungmin Kim, Jaeyoung Choi, Geunho Jeong – MFCONFIG: Metafont plug-in module for the Freetype rasterizer

Jaeyoung reported on an impressive project to make Metafont fonts available to fontconfig and thus to windowing systems. He also explained their development of a new font format, Stemfont, which is a Metafont-like system that can also work for CJK fonts, and which they envisage being built into all kinds of mobile devices.

Michael Sharpe – New font offerings — Cochineal, Nimbus15 and LibertinusT1Math

Michael reported on his latest font projects, the first two being extensions of the half-made, half-butchered re-released URW fonts, as well as his first (?) math font project.

I talked to him over lunch one day and asked him how many man-days he needed for these fonts, and his answer spoke volumes: for the really messed-up new URW fonts, like Cochineal, he guessed about 5 man-months of work, while other fonts needed only a few days. I think we can all be deeply thankful for all the work he is investing in these font projects.

Robert Bringhurst – The evolution of the Palatino tribe

The second invited talk was by Robert Bringhurst, famous for his wide contributions to typography, book culture in general, as well as poetry. He gave a quick historic overview of the development of the Palatino tribe of fonts, with lots of beautiful photos.

I was really looking forward to Robert's talk, and my expectations were extremely high. And unfortunately I must say I was quite disappointed. Maybe it is his style of presentation, but the feeling he conveyed to me (the audience?) was that he was going through a necessary medical check, not much enjoying the presentation. Also, the content itself was not really full of his own ideas or thoughts, but rather a superficial listing of historical facts.

Of course, a person like Robert Bringhurst is so full of anecdotes and background knowledge that it was still a great pleasure to listen, with lots of things to learn; I had only hoped for a bit more enthusiasm.

TUG Annual General Meeting

The afternoon session finished with the TUG Annual General Meeting, reports will be sent out soon to all TUG members.

Herbert Schulz – Optional workshop: TeXShop tips & tricks

After the AGM, Herbert from MacTeX and TeXShop gave an on-the-spot workshop on TeXShop. Since I am not a Mac user, I skipped that.


Another late-afternoon program item consisted of an excursion to Eliot's bookshop, where many of us stocked up on great books. This time, again, I skipped it and took a nap.

In the evening we had a rather interesting informal dinner in the food court of some building, where only two shops were open; all of us lined up in front of the Japanese curry shop and then gulped our food down from plastic boxes. Hmm, not my style, I have to say, not even for an informal dinner. But at least I could meet up with a colleague from Debian and get some GPG key signing done. And of course, talk to all kinds of people around.

The last stop for me was the pub opposite the hotel, with beer and whiskey/scotch selected by specialists in the field.

27 July, 2016 02:55PM by Norbert Preining

July 26, 2016

Iustin Pop

More virtual cycling

Last weekend I had to stay at home, so I did some more virtual training (slowly, in order to not overwork myself again). This time, after all the Zwift, I wanted to test something else: Tacx Trainer Software. Still virtual, but of a different kind.

The difference from Zwift, which offers video-game-like worlds, is that TTS, in the configuration that I used, plays a real-life video which scrolls faster or slower based on your speed. This speed adjustment is so-so, but the appeal was that I could ride roads that I actually know and have driven before. Modern technology++!

And this was the interesting part: I chose for the first ride the road up to Cap de Formentor, which is one of my favourite places in Mallorca. The road itself is also nice, through some very pleasant woods and with some very good viewpoints, ending at the lighthouse, from where you have wonderful views of the sea.

Now, I've driven two times on this road, so I kind of remembered it, but driving a road and cycling the same road, especially when it goes up and down and up, are very different things. I remembered well the first uphill (after the flat area around Port de Pollença), but after that my recollection of how much uphill the road goes was slightly off, and I actually didn't remember that there's that much downhill, which was a very pleasant surprise. I did remember the view points (since I took quite a few pictures along the road), but otherwise I was completely off about the height profile of the road. Interesting how the brain works ☺

Overall, this is considered a "short" ride in Tacx's film library; it was 21Km, 835m uphill, and I did it in 1h11m, which for me, after two weeks of no sports, was good enough. Also Tacx has bike selection, and I did this on a simulated mountain bike, with the result that downhill speeds were quite slow (max. 57Km/h, at a -12% grade), so not complaining at all.

Next I'll have to see how the road to Sa Calobra is in the virtual world. And next time I go to Mallorca (when/if), I'll have to actually ride these in the real world.

In the meantime, some pictures from an actual trip there. I definitely recommend visiting this, preferably early in the morning (otherwise it's very crowded):

Infinite blue

Sea, boats and mountains

Mountains, vegetation and a bit of sea

View towards El Colomer

A few more pictures and larger sizes here.

26 July, 2016 10:24PM

hackergotchi for Rhonda D'Vine

Rhonda D'Vine

Debian LGBTIQA+

I have a long overdue blog entry about what happened in recent times. People who follow my tweets did catch some things. Most noteworthy, there was the Trans*Inter*Congress in Munich at the start of May. It was an absolute blast. I met so many nice and great people, talked about and experienced so many great things there that I still get a great motivational push from it every time I think back. It was also the time when I realized that I in fact do have body dysphoria even though I thought I was fine with my body in general: being tall is a huge issue for me. Realizing that I have a huge issue (yes, pun intended) with my height was quite relieving, even though it doesn't make it go away. It's something that makes passing and transitioning harder for me. I'm well aware that there are tall women, and that there are dedicated shops for tall women, but that's not the only thing that I have trouble with. What bothers me most is what people read into tall people: that they are always someone others can lean on for comfort, that tall people are always considered to be self-confident and standing up for themselves (another pun, I know ... my bad).

And while I'm fine with people coming to me to lean on, I rarely get the chance to do so myself. And people don't even consider it. When I was there in Munich, talking with another great (... pun?) trans woman who was as tall as me, I finally had the possibility to just rest my head on her shoulder and finally feel the comfort I need just as much as everyone else out there, too. Probably that's also the reason why I'm so touchy and go Free Hugging as often as possible. But being tall also means that you are usually only the big spoon when cuddling up. Having a small mental breakdown upon realizing that didn't change the feeling directly, but it definitely helped with looking for what I could change to fix that for myself.

Then, at the end of May, the movie FtWTF - female to what the fuck came to the cinema. It's a documentary about six people who were assigned female at birth. And it's absolutely charming, and has great food for thought in it. If you ever get the chance to watch it you definitely should.

And then came DebConf16 in Cape Town. The flight there was canceled and we had to get rebooked. The first offer was to go through Dubai, and thankfully a colleague pointed out to the person behind the desk that that wouldn't be safe for myself and was thus out of scope. In the end we managed to get to Cape Town quite nicely, and even though it was winter, when the sun was shining it was quite nice. Besides the cold nights, that is. Or being stuck on the way up Table Mountain because a colleague had cramps in his legs and we had to call mountain rescue. Thankfully the night was clear, and when the mountain rescue finally got us to the top, already at night, we had one of the nicest views from up there that most people will probably never experience.

And then ... I got invited to a trans meetup in Cape Town. I was both excited and nervous about it, and about what to expect there. But it was simply great. The group there was simply outstandingly great. The host gave updated information on the progress of clinical support within South Africa; what I took with me is that there is only one clinic there for SRS, which manages only two people a year, which is simply ... yuck. Guess you can imagine how many years (yes, decades) the waiting line is ... I was blown away, though, by the diversity of the group, on so many levels, most notably on the age spectrum. It was a charm to meet you all there! If you ever stop by in Cape Town and you are part of the LGBTIQ community, make sure you get in contact with the Triangle Project.

But, about the real reason to write this entry: I was approached at DebConf by at least two people who asked me what I thought about creating an LGBTIQA+ group within Debian, and whether I'd like to push for that. Actually I think it would be a good idea to have some sort of exchange between people on the queer spectrum (and I hope I don't offend anyone by just saying queer for LGBTIQA+ people). Given that I'm quite outspoken, people approach me every now and then, so I'm aware that there is a fair amount of people who would fall into that category. On the other hand some of them wouldn't want to have it publicly known, because it shouldn't matter and isn't really the business of others.

So I'm uncertain. If we follow that path, I guess something that is closed, or at least offers the possibility of closed communication, would be needed so as not to out someone just by their joining the discussion. It was easier with Debian Women, where it was (somewhat) clear that male participants are allies supporting the cause and not considered to be women themselves, but often enough (mostly cis hetero male) people are afraid to join a dedicated LGBTIQA+ group because they fear having their identity judged. These things should be considered before creating such a place, so that people can feel comfortable when joining and know what to expect beforehand.

For the time being I created #debian-diversity on irc.debian.org to discuss how to move forward. Please bear in mind that even the channel name is up for discussion. Acronyms might not be the way to go, in my opinion; just read back the discussion that led to the Diversity Statement of Debian, where the original approach was to start listing groups for inclusiveness, but it quickly became clear that such a list can get outdated too easily.

I am willing to be part of that effort, but right now I have some personal things to deal with which eat up a fair amount of my time. My kid starts school in September (yes, it's been that long already, time flies ...). And it looks like I'll have to move a second time in the near future: I'll have to leave my current flat by the end of the year, and the Que[e]rbau I'm moving into won't be ready in time to host me yet ... F*ck. :(


26 July, 2016 11:49AM by Rhonda

hackergotchi for Norbert Preining

Norbert Preining

TUG 2016 – Day 1 – Routers and Reading

The first day of the real conference started with an excellent overview of what one can do with TeX, spanning from traditional scientific journal styles to generating router configuration for cruising ships.

All this was crowned with an invited talk by Kevin Larson from Microsoft's typography department on how to support reading comprehension.

Pavneet Aurora – Opening: Passport to the TeX canvas

Pavneet, our never-sleeping host and master of organization, opened the conference with a very philosophical introduction, touching upon a wide range of topics, ranging from Microsoft and Twitter to the beauty of books, pages, and type. I think at some point he even mentioned TeX, but I can't remember for sure. His words set up a very nice and all-inclusive stage: a community that is open to all kinds of influences, without any disregard or prejudice. Let us hope that it reflects reality. Thanks Pavneet.

Geoffrey Poore – Advances in PythonTeX

Our first regular talk was by Geoffrey, reporting on recent advances in PythonTeX, a package that allows including Python code in your TeX document. Starting with an introduction to PythonTeX, Geoffrey reported on an improved verbatim environment, fvextra, which patches fancyvrb, and on improved interaction between TikZ and PythonTeX.

As I am a heavy user of listings for my teaching on algebraic specification languages, I will surely take a look at this package and see how it compares to listings.

Stefan Kottwitz – TeX in industry I: Programming Cisco network switches using TeX

Next was Stefan from Lufthansa Industry Solutions, who first reported on his working environment, cruise ships with a very demanding IT infrastructure that he has to design and implement. Then he introduced us to his way of generating IP configurations for all the devices using TeX. The reason he chose this method is that it allows him to generate proper documentation at the same time.

It was surprising for me to hear that by using TeX he could produce well designed and easily accessible documentation far more efficiently and quickly, which both helped the company and made the clients happy!

Stefan Kottwitz – TeX in industry II: Designing converged network solutions

After a coffee break, Stefan continued his exploration of industrial usage of TeX, this time about using TikZ to generate graphics representing the network topology on the ships.

Boris Veytsman – Making ACM LaTeX styles

Next up was Boris, who brought us back to traditional realms of TeX when he guided us into the abyss of the ACM LaTeX styles, which he tried to maintain for some time until he plunged into a complete rewrite of the styles.

Frank Mittelbach – Alice goes floating — global optimized pagination including picture placements

The last talk before lunch (probably a strategic placement, otherwise Frank would continue for hours and hours) was Frank on global optimization of page breaks. Frank showed us what can and cannot be done with current LaTeX, and how to play around with global optimization of pagination, using Alice in Wonderland as the running example. We can only hope that his package is soon available in an easily consumable version to play around with.

Thai lunch

Pavneet has organized three different lunch styles for the three days of the conference; today's was Thai, with spring rolls, fried noodles, one kind of very interesting orange noodles, and chicken something.

Michael Doob – baseball rules summary

After lunch Michael gave us an accessible explanation of the most arcane rules a game can have – the baseball rules – by using pseudocode. I think the total number of lines of code needed to explain the overall rules would fill more pages than the New York phone book, so I am deeply impressed by all those who can understand these rules. Some of us even wandered off in the late afternoon to see a match, with live explanations from Michael.

Amartyo Banerjee, S.K. Venkatesan – A Telegram bot for printing LaTeX files

Next up was Amartyo, who showed a Telegram (as in the messenger application) bot running on a Raspberry Pi that receives (La)TeX files and sends back compiled PDF files. While it is not ready for consumption (if you sneeze, the bot will crash!), it looks like a promising application. Furthermore, it is nice to see how open APIs (like Telegram's) can spur the development of useful tools, while closed APIs (including ones threatening users, like WhatsApp's) hinder it.

Norbert Preining – Security improvements in the TeX Live Manager and installer

Next up was my own talk about beefing up the security of TeX Live by providing integrity and authenticity checks via GnuPG, a feature that has been introduced with the recent release of TeX Live 2016.

The following discussion gave me several good ideas on how to further improve security and usability.

Arthur Reutenauer – The TeX Live M sub-project (and open discussion)

Arthur presented the TeX Live M (where the M stands supposedly for Mojca, who couldn’t attend unfortunately) project: Their aim is to provide a curated and quality verified sub-part of TeX Live that is sufficiently complete for many applications, and easier for distributors and packagers.

We had a lively discussion after Arthur's short presentation, mostly about why TeX Live does not have an "on-the-fly" installation like MikTeX. I insisted that this is already possible, using the "tex-on-the-fly" package which uses the mktextex infrastructure, but also cautioned against using it by default due to the delays induced by repeatedly reading the TeX Live database. I think this is a worthwhile project for someone interested in learning the internals of TeX Live, but I am not sure whether I want to invest time into this feature.

Another discussion point was about a testing infrastructure, which I am currently working on. This is in fact high on my list, to have some automatic minimal functionality testing – a LaTeX package should at least load!

Kevin Larson – Reading between the lines: Improving comprehension for students

Having a guest from Microsoft is rather rare in our quite Unix-centered environment, so big thanks to Pavneet again for setting up this contact, and big thanks to Kevin for coming.

Kevin gave us a profound introduction to reading disabilities and how to improve reading comprehension. Starting with an excursion into what makes a font readable and how Microsoft develops optimally readable fonts, he then turned to reading disabilities like dyslexia, and how markup of text can increase students' comprehension rate. He also toppled my long-held belief that dyslexia is connected to the similar shapes of letters which are somehow visually misprocessed – this was the scientific consensus from the 1920s till the 70s, but since then researchers have abandoned this interpretation, and dyslexia is now linked to problems linking shapes to phonemes.

Kevin did an excellent job with a slightly difficult audience – some people nitpicking about grammar differences between British and US English and permanently derailing the discussion, and even more so the high percentage of participants with rather particular typographic tastes.

After the talk I had a lengthy discussion with Kevin about whether and how this research can be carried over to non-Roman writing systems, in particular Kanji/Hanzi-based writing systems, where dyslexia probably shows itself in a different context. Kevin also mentioned that they want to add inter-word space to Chinese to help learners of Chinese (children, foreigners) parse it better, and studies showed that this helps a lot with comprehension.

On a meta level, this talk bracketed nicely with the morning introduction by Pavneet, describing an open environment with stimulus back and forth in all directions. I am very happy that Kevin took the pains to come despite his tight schedule, and I hope that the future will bring better cooperation – in the end we are all working somehow on the same front, only the tools differ.


After the closing of the session, one part of our group went off to the baseball match, while another group dived into a Japanese-style izakaya, where we managed to kill huge amounts of sake and quite an amount of food. The photo shows me after the first bottle of sake, just sipping on an intermediate small amount of genshu (a kind of strong, undiluted sake) before continuing on to the next bottle.

An interesting and stimulating first day of TUG, and I am sure that everyone was looking forward to day 2.

26 July, 2016 11:20AM by Norbert Preining

July 25, 2016

Simon Désaulniers

[GSOC] Week 8&9 Report

Week 8

This particular week has been tiresome, as I caught a cold ;). I came back from Cape Town, where DebConf was taking place. My arrival in Montreal was in the middle of the week, so this week is not full of news…

What I’ve done

I have synced up with my coworker Nicolas Reynaud, who is working on building the indexing system over the DHT. We have worked together on critical algorithms: concurrent maintenance of data in the trie (PHT).

Week 9

What I’ve done

Since my mentor, who is also the main author of OpenDHT, was away presenting Ring at the RMLL, I was assigned tasks that needed quick attention. I have been working on making OpenDHT run properly when compiled with a failing version of Apple's LLVM. I've had the pleasure of debugging obscure runtime errors that appear or not depending on the compiler you use — and I mean very obscure.

I have released OpenDHT 0.6.2! This release was meant to fix a critical functionality bug that would arise if one of the two routing tables (IPv4, IPv6) was empty. It was really critical for Ring to have the 0.6.2 version, because it is not rare for a user to connect to a router that does not provide an IPv6 address.

Finally, I have fixed some minor bugs in my work on the queries.

25 July, 2016 11:41PM

Reproducible builds folks

Reprotest 0.2 released, with virtualization support

Author: ceridwen

reprotest 0.2 is available on PyPI and should hit Debian soon. I have tested null (no container, build on the host system), schroot, and qemu, but it's likely that chroot, Linux containers (lxc/lxd), and quite possibly ssh are also working. I haven't tested the autopkgtest code on a non-Debian system, but again, it probably works. At this point, reprotest is not quite a replacement for the prebuilder script because I haven't implemented all the variations yet, but it offers better virtualization because it supports qemu, and it can build non-Debian software because it doesn't rely on pbuilder.

With HW42's help, I fixed the schroot/disorderfs/autopkgtest permission error and got pip/setuptools to install the autopkgtest code from virt/. The permission error came from autopkgtest automatically running all commands on a testbed as root, if it can run commands as root, which caused schroot to mount disorderfs as root. This caused the build artifact to be owned by root, but unlike qemu, a chroot shares the same file system, so it would then try to copy this root-owned file with a process running with ordinary user permissions. The fix was to mount disorderfs with --multi-user=yes when the container has root permissions, allowing a user process to access it even when it's mounted by root. The fix for setuptools is an ugly hack: including the virt/ scripts as non-code data with include reprotest/virt/* in MANIFEST.in works. (According to the documentation, graft reprotest/virt/, which I also tried, ought to work, but in my testing it doesn't.) This still uses sys.path hacking as a consequence of autopkgtest's design choices.

For the next release, I want to finish all the remaining variations, which at this point are build path, domain, host, group, shell, and user. I also want to ensure that reprotest runs on non-Debian and possibly non-Linux systems.

For now, what I need most is more real-world testing. If you have any interest in reproducible builds, please try reprotest out! There's a README included which describes how to set up appropriate containers and run the tests.

25 July, 2016 06:54PM

hackergotchi for Norbert Preining

Norbert Preining

TUG 2016 – Day 0 – Books and Beers

The second pre-conference day was dedicated to books and beers, with a visit to an exquisite print studio, and a beer tasting session at one of the craft breweries in Canada. In addition we could grab a view into the Canadian lifestyle by visiting Pavneet’s beautiful house in the countryside, as well as enjoying traditional style pastries from a bakery.

Heidelberg printing machine at the Porcupine's Quill

In short, a perfect combination for us typography and beer savvy freaks!

This morning we had a somewhat early start from the hotel. Soon the bus left downtown Toronto and entered countryside Ontario, large landscapes filled with huge (for my Japanese feeling) estates and houses, separated by fields, forests and wild landscape. Very beautiful and inviting to live there. On our way to the printing workshop we stopped at Pavneet’s house for a very short visit of the exterior, which includes mathematics in the bricking. According to Pavneet, his kids hate to see math on the wall – I would be proud to have it.

Pavneet's house is hiding some mathematics

A bit further on we entered Erin, where the Porcupine’s Quill is located. A small building along the street; one could easily overlook that rare jewel! Especially considering that, according to the owners, Google Maps has a bad error which would lead you to a completely different location. This printing workshop, led by Tim and Elke Inkster, produces books in a traditional style using an old Heidelberg offset printing machine.

Entrance to the Porcupine's Quill, a local bookshop doing excellent printing

Elke introduced us to the sewing of folded signatures together with a lovely old sewing machine. It was the first time I actually saw one in action.

Sewn signatures

Tim, the head master of the printing shop, first entertained us with stories about Chinese publishers visiting them in the old cold-war times, before diving into explanations of the actual machines around, like the Heidelberg offset printing machine.

Master Tim is showing us offset technique

In the back of the basement of the little studio there is also a huge folding machine, which cuts up the big signatures of 16 pages and folds them into bundles. An impressive example of tricky engineering.

The folding machine creates 16 pages in proper order from a printed signature.

Due to the small size of the printing studio, we were actually split into two groups, and while the other group got its guided tour, we grabbed coffee and traditional cookies and pastries from the nearby Holtom’s bakery. Loads of nice pastries with various fillings, my favorite being the slightly salty cherry pie, and above all the rhubarb-raspberry pie.

Nearby old-style bakery, selling Viennese-style "Kaisersemmel", calling them "Kaiser buns". There must be an Austrian hiding somewhere around.

To my absolute astonishment I also found there a Viennese “Kaisersemmel“, called “Kaiser bun” here, keeping the shape and the idea (but unfortunately not the crispy, cracky quality of the original in Vienna). Of course I got two of them, together with a nice jam from the region, and enjoyed this “Viennese breakfast” the next morning.

Viennese breakfast from the Bakery near Porcupine's Quill

Leaving the Quill we headed for lunch in a nice pizzeria (I got a Pizza Toscana) which also served excellent local beer – how I would like to have something like this over in Japan! Our last stop on this excursion was the Stone Hammer Brewery, one of the most famous craft breweries in Canada.

One of the top craft breweries in Canada, the Stone Hammer

Although they won’t win a prize for typography (besides one page of a coaster there that carried a nice pun), their beers are exquisite. We got five different beers to taste, plus extensive explanations on brewing methods and differences. Now I finally understand why most of the new craft breweries in Japan are making Ales: Ales don’t need a long process and are ready for sale in a rather short time, compared to e.g. lagers.

Explanations of the secrets of beer brewing

Nothing to add to this poster found in the Stone Hammer Brewery!
Also at the Stone Hammer Brewery I spotted this very nice poster on the wall of the toilet. And I cannot agree more, everything can easily be discussed over a good beer – it calms down aversions, makes even the worst enemies friends, and is healthy for both the mind and body.

Filled with excellent beer, some of us (notably an unnamed US TeXnician and politician) stocked up on beers to carry home. I was very tempted to get a huge batch, but putting cans into an airplane doesn’t seem to be the best idea. Since we are talking cans, I was surprised to hear that many craft beer brewers nowadays prefer cans due to their better protection of the beer from light and oxygen, both killers of good beer.

Before leaving we took a last look at the Periodic Table of Beer Types, which left me in awe about how much I don’t know and probably never will know. In particular, I heard for the first time of a “Vienna style beer” – Vienna is not really famous for beer, better to say, it is infamous. So maybe a different Vienna than my home town is meant here.

Lots to study here, the Periodic Table of Beers

Another two-hour bus ride brought us back to Toronto, where we met with the other participants at the reception in a Mediterranean restaurant, and where I could enjoy a good tahina and hummus for the first time in years.

All around another excellent day. Now I would only like to have two days of holidays; I guess I will need to relax in the lectures starting from tomorrow.


25 July, 2016 06:09PM by Norbert Preining

Sven Hoexter

me vs terminal emulator

I think my demands for a terminal emulator are pretty basic but none the less I run into trouble every now and then. This time it was a new laptop and starting from scratch with an empty $HOME and the current Debian/testing instead of good old Jessie.

For the last four or five years I've been a happy user of gnome-terminal, configured a mono space font, a light grey background with black text color, create new tabs with Ctrl-n, navigate the tabs with Ctrl-Left and Ctrl-Right, show no menubar, select URLs with double click. Suited me well with my similarly configured awesome window manager, where I navigate with Mod4-Left and Mod4-Right between the desktops on the local screen and only activate a handful of the many default tiling modes.

While I could get back most of my settings, somehow all the cited gconf kung-fu to reconfigure the URL selection pattern in gnome-terminal failed, and copy&pasting URLs from the terminal was a pain in the ass. Long story short, I now followed the advice of a coworker to just use the xfce4-terminal.

That still required a few tweaks to get it back to doing what I want it to do. To edit the keybindings you have to know that you have to use the GTK way and edit them within the menu while selecting the menu entry. But you have to allow that first (why oh why?):

echo "gtk-can-change-accels=1" >> ~/.gtkrc-2.0

Fair enough that is documented. Changing the keybinding generates fancy things in ~/.config/xfce4/terminal/accels.scm in case you plan to hand edit a few more of them.

I also edited a few things in ~/.config/xfce4/terminal/terminalrc:

MiscAlwaysShowTabs=TRUE
MiscMenubarDefault=FALSE

So I guess I can remove gnome-terminal for now and stay with another GTK2 application. Doesn't feel that good but well at least it works.

25 July, 2016 04:44PM

Mike Hommey

Announcing git-cinnabar 0.4.0 beta 2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0b1?

  • Some more bug fixes.
  • Updated git to 2.9.2 for cinnabar-helper.
  • Now supports `git push --dry-run`.
  • Added a new `git cinnabar fetch` command to fetch a specific revision that is not necessarily a head.
  • Some improvements to the experimental native wire protocol support.

25 July, 2016 08:38AM by glandium

hackergotchi for Martin Michlmayr

Martin Michlmayr

Debian on Jetson TK1


I became interested in running Debian on NVIDIA's Tegra platform recently. NVIDIA is doing a great job getting support for Tegra upstream (u-boot, kernel, X.org and other projects). As part of ensuring good Debian support for Tegra, I wanted to install Debian on a Jetson TK1, a development board from NVIDIA based on the Tegra K1 chip (Tegra 124), a 32-bit ARM chip.

Ian Campbell enabled u-boot and Linux kernel support and added support in the installer for this device about a year ago. I updated some kernel options since there has been a lot of progress upstream in the meantime, performed a lot of tests and documented the installation process on the Debian wiki. Wookey made substantial improvements to the wiki as well.

If you're interested in a good 32-bit ARM development platform, give Debian on the Jetson TK1 a try.

There's also a 64-bit board. More on that later...

25 July, 2016 01:31AM

July 24, 2016

Tom Marble


J'ai gagné le Tour de Crosstown 2016!

Everyone knows that today the finish line for Le Tour de France was crossed on Les Champs-Élysées in Paris... And if you haven't seen some of the videos I highly recommend checking out the onboard camera views and the landscapes! Quel beau pays

I'm happy to let you know that today I won the Tour de Crosstown 2016 which is the cycling competition at Lifetime Crosstown inspired by and concurrent to Le Tour de France. There were about twenty cyclists competing to see who could earn the most points -- by attending cycling class bien sûr. I earned the maillot jaune with 23 points and my next closest competitor had 16 points (with the peloton far behind). But that's just part of the story.



Tour de Crosstown 2016

For some time I've been coming to Life Time Fitness at Crosstown for yoga (in Josefina's class) and playing racquetball with my friend David. The cycling studio is right next to the racquetball courts and there's been a class on Saturdays at the same time we usually play. I told David that it looked like fun and he said, having tried it, that it is fun (and a big workout). In early June David got busy and then had an injury that has kept him off the court ever since. So one Saturday morning I decided to try cycling.

I borrowed a heart rate monitor (but had no idea what it was for) and tried to bike along in my regular gym shorts, shoes and a t-shirt. Despite being a cycling newbie I was immediately captured by Alison's music and enthusiasm. She's dancing on her bike and you can't help but lock in the beat. Of course that's just after she tells you to dial up the resistance... and the sweat just pours out!

I admit that workout hit me pretty hard, but I had to come back and try the 5:45 am Wednesday EDGE cycle class (gulp). Despite what sounds like a crazy impossible time to get out and on a bike it actually works out super well. This plan requires one to up-level one's organization and after the workout I can assure you that you're fully awake and charged for the day!

Soon I invested in my own heart rate monitor. Then I realized it would work so much better if I had a metabolic assessment to tune my aerobic and anaerobic training zones. While I signed up for the assessment I decided to work with May as my personal trainer. In addition to helping me with my upper body (complementing the cycling) May is a nutritionist and has helped me dial in this critical facet of training. Even though I'm still working to tune my diet around my workouts, I've already learned a lot by using My Fitness Pal and, most importantly, I have a whole new attitude about food.

Pour les curieux, la nutritioniste maison s'est absentée en France pendant le mois de juillet. (For the curious, the house nutritionist is away in France for the month of July.)

Soon I would invest in bike shoes, jerseys and shorts and begin to push myself into the proper zones during workouts and fuel my body properly afterwards. All these changes have led to dramatic weight loss \o/

A few of you know that the past two years have involved a lot of personal hardship. Upon reflection I have come to appreciate that the things in my life that I can actually control are a massive opportunity. I decided that fixing my exercise and nutrition were the opportunities I want to focus on. A note for my Debian friends... I'm sorry to have missed you in Cape Town, but I hope to join you in Montréal next year.

So when the Tour de Crosstown started in July I decided this was the time for me to get serious. I want to thank all the instructors for the great workouts (and for all the calories I've left on the bike): Alison, Kristine, Olivia, Tasha, and Caroline!

The results of my lifestyle changes are hard to describe... I feel an amazing amount of energy every day. The impact of a prior back injury is now almost non-existent. And the range of motion I hadn't recovered since being "washing machined" by a 3 meter wave while body surfing at the beach in Hossegor the previous summer is now fully back.

Now I'm thinking it's time to treat myself to a new bike :) I'm looking at large touring frames and am currently thinking of the Surly Disc Trucker. In terms of bike shops I've had a good experience with One on One and Grand Performance has come highly recommended. If anyone has suggestions for bikes, bike features, or good shops please let me know!

I would encourage everyone here in Minneapolis to join me as a guest for a Wednesday morning 5:45am EDGE cycle class. I'm betting you'll have as much fun as I do... and I guarantee you will sweat! The challenge in waking up will pay off handsomely in making you energized for the whole day.

Let's bike allons-y!

24 July, 2016 11:22PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.7.200.2.0

armadillo image

The second Armadillo release of the 7.* series came out a few weeks ago: version 7.200.2. And RcppArmadillo version 0.7.200.2.0 is now on CRAN and uploaded to Debian. This followed the usual thorough reverse-dependency checking of the by now over 240 packages using it.

For once, I let it simmer a little, preparing only a package update via the GitHub repo without a corresponding CRAN upload, in order to lower the update frequency a little. Seeing that Conrad has started to release 7.300.0 tarballs, the time for a (final) 7.200.2 upload was now right.

Just like the previous version, it now requires a recent enough compiler. As g++ is so common, we explicitly test for version 4.6 or newer. So if you happen to be on an older RHEL or CentOS release, you may need to get yourself a more modern compiler. The g++ toolchain for R on Windows is now at 4.9.3, which is a decent (yet stable) choice; the 4.8 series of g++ will also do. For reference, the current LTS of Ubuntu is at 5.4.0, and we have g++ 6.1 available in Debian testing.

This new upstream release adds new indexing helpers, additional return codes on some matrix transformations, increased speed for compound expressions via vectorise, corrects some LAPACK feature detections (affecting principally complex number use under OS X), and a rewritten sample() function thanks to James Balamuta.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab.

Changes in this release (and the preceding GitHub-only release 0.7.200.1.0) are as follows:

Changes in RcppArmadillo version 0.7.200.2.0 (2016-07-22)

  • Upgraded to Armadillo release 7.200.2

  • The sampling extension was rewritten to use Armadillo vector types instead of Rcpp types (PR #101 by James Balamuta)

Changes in RcppArmadillo version 0.7.200.1.0 (2016-06-06)

  • Upgraded to Armadillo release 7.200.1

    • added .index_min() and .index_max()

    • expanded ind2sub() to handle vectors of indices

    • expanded sub2ind() to handle matrix of subscripts

    • expanded expmat(), logmat() and sqrtmat() to optionally return a bool indicating success

    • faster handling of compound expressions by vectorise()

  • The configure code now (once again) sets the values for the LAPACK feature #define correctly.

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 July, 2016 07:35PM

Elena 'valhalla' Grandi

One Liberated Laptop


Photo: http://social.gl-como.it/photos/valhalla/image/5a480cd2d5842101fc8975d927d030f3

After many days of failed attempts, yesterday @Diego Roversi finally managed to set up SPI on the BeagleBone White¹, and that means that today at our home it was Laptop Liberation Day!

We took the spare X200, opened it, found the point we were on in the tutorial installing libreboot on x200 https://libreboot.org/docs/install/x200_external.html, connected all of the proper cables on the clip³ and did some reading tests of the original bios.

Photo: http://social.gl-como.it/photos/valhalla/image/77e61745d9c43833b7c0a4a919d17222

While the tutorial mentioned a very conservative setting (512 kHz), just for fun we tried to read it at different speeds; all results up to 16384 kHz were equal, with the first failure at 32784 kHz, so we settled on using 8192 kHz.

Then it was time to customize our libreboot image with the right MAC address, and that's when we realized that the sheet of paper where we had written it down the last time had been put in a safe place… somewhere…

Luckily we also had taken a picture, and that was easier to find, so we checked the keyboard map², followed the instructions to customize the image https://libreboot.org/docs/hcl/gm45_remove_me.html#ich9gen, flashed the chip, partially reassembled the laptop, started it up and… a black screen, some fan noise and nothing else.

We tried to reflash the chip (nothing was changed), tried the us keyboard image, in case it was the better tested one (same results) and reflashed the original bios, just to check that the laptop was still working (it was).

It was lunchtime, so we stopped our attempts. As soon as we started eating, however, we realized that this laptop came with 3GB of RAM, and that surely meant "no matching pairs of RAM", so just after lunch we reflashed the first image, removed one dimm, rebooted and finally saw a gnu-hugging penguin!

We then tried booting some random live usb key https://tails.boum.org/ we had around (failed the first time, worked the second and further one with no changes), and then proceeded to install Debian.

Running the installer required some attempts and a bit of duckduckgoing: parsing the isolinux / grub configurations from the libreboot menu didn't work, but in the end it was as easy as going to the command line and running:


linux (usb0)/install.amd/vmlinuz
initrd (usb0)/install.amd/initrd.gz
boot



From there on, it was the usual Debian installation and a well known environment, and there were no surprises. I've noticed that grub-coreboot is not installed (grub-pc is) and I want to investigate a bit, but rebooting worked out of the box with no issue.

Next step will be liberating my own X200 laptop, and then if you are around the @Gruppo Linux Como area and need a 16 pin clip let us know and we may bring everything to one of the LUG meetings⁴

¹ yes, white, and most of the instructions on the interwebz talk about the black, which is extremely similar to the white… except where it isn't

² wait? there are keyboard maps? doesn't everybody just use the us one regardless of what is printed on the keys? Do I *live* with somebody who doesn't? :D

³ the breadboard in the picture is only there for the power supply, the chip on it is a cheap SPI flash used to test SPI on the bone without risking the laptop :)

⁴ disclaimer: it worked for us. it may not work on *your* laptop. it may brick it. it may invoke a tentacled monster, it may bind your firstborn son to a life of servitude to some supernatural being. Whatever happens, it's not our fault.

24 July, 2016 06:35PM by Elena ``of Valhalla''

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2016/01-29

seems I've neglected both my blog & my RC bug fixing activities in the last months. – anyway, since I still keep track of RC bugs I worked on, I thought I might as well publish the list:

  • #798023 – src:cssutils: "cssutils: FTBFS with Python 3.5"
    sponsor NMU by Chris Knadle, upload to DELAYED/2
  • #800303 – src:libipc-signal-perl: "libipc-signal-perl: Please migrate a supported debhelper compat level"
    bump debhelper compat level, upload to DELAYED/5
  • #808331 – src:libpgplot-perl: "libpgplot-perl: needs manual rebuild for Perl 5.22 transition"
    manually build+upload packages (pkg-perl)
  • #809056 – src:xca: "xca: FTBFS due to CPPFLAGS containing spaces"
    sponsor NMU by Chris Knadle, upload to DELAYED/5
  • #810017 – src:psi4: "psi4: FTBFS with perl 5.22"
    propose a patch
  • #810707 – src:libdbd-sqlite3-perl: "libdbd-sqlite3-perl: FTBFS: t/virtual_table/21_perldata_charinfo.t (Wstat: 512 Tests: 4 Failed: 1)"
    investigate a bit (pkg-perl)
  • #810710 – src:libdata-objectdriver-perl: "libdata-objectdriver-perl: FTBFS: t/02-basic.t (Wstat: 256 Tests: 67 Failed: 1)"
    add patch to handle sqlite 3.10 in test suite's version comparison (pkg-perl)
  • #810900 – libanyevent-rabbitmq-perl: "libanyevent-rabbitmq-perl: Can't locate object method "bind_exchange" via package "AnyEvent::RabbitMQ::Channel""
    add info, versioned close (pkg-perl)
  • #810910 – libmath-bigint-gmp-perl: "libmath-bigint-gmp-perl: FTBFS: test failures with newer libmath-bigint-perl"
    upload new upstream release (pkg-perl)
  • #810912 – libx11-xcb-perl: "libx11-xcb-perl: missing dependency on libxs-object-magic-perl"
    update dependencies (pkg-perl)
  • #813420 – src:libnet-server-mail-perl: "libnet-server-mail-perl: FTBFS: error: Can't call method "peerhost" on an undefined value at t/starttls.t line 78."
    close, as the underlying problem is fixed (pkg-perl)
  • #814730 – src:libmath-mpfr-perl: "libmath-mpfr-perl: FTBFS on most architectures"
    upload new upstream release (pkg-perl)
  • #815775 – zeroc-ice: "Build-Depends on unavailable packages mono-gmcs libmono2.0-cil"
    sponsor NMU by Chris Knadle, upload to DELAYED/2
  • #816527 – src:libtest-file-contents-perl: "libtest-file-contents-perl: FTBFS with Text::Diff 1.44"
    upload new upstream release (pkg-perl)
  • #816638 – mhonarc: "mhonarc: fails to run with perl5.22"
    propose a patch in the BTS
  • #817528 – src:libemail-foldertype-perl: "libemail-foldertype-perl: Removal of debhelper compat 4"
    raise debhelper compat level, upload to DELAYED/2
  • #817529 – src:libimage-base-bundle-perl: "libimage-base-bundle-perl: Removal of debhelper compat 4"
    raise debhelper compat level, upload to DELAYED/2
  • #817530 – src:liberror-perl: "liberror-perl: Removal of debhelper compat 4"
    raise debhelper compat level, upload to DELAYED/2
  • #817531 – src:libimage-info-perl: "libimage-info-perl: Removal of debhelper compat 4"
    raise debhelper compat level, upload to DELAYED/2
  • #817647 – src:randomplay: "randomplay: Removal of debhelper compat 4"
    raise debhelper compat level, upload to DELAYED/2
  • #818924 – libjson-webtoken-perl: "libjson-webtoken-perl: missing dependency on libmodule-runtime-perl"
    add missing (build) dependency (pkg-perl)
  • #819787 – libdbix-class-schema-loader-perl: "libdbix-class-schema-loader-perl: FTBFS: t/10_01sqlite_common.t failure"
    close bug, works again with recent sqlite3 (pkg-perl)
  • #821412 – libnet-rblclient-perl: "libnet-rblclient-perl: Net::DNS 1.01 breaks Net::RBLClient"
    add patch (pkg-perl)
  • #823310 – libnanomsg-raw-perl: "libnanomsg-raw-perl: FTBFS: test failures"
    add patch from upstream git repo (pkg-perl)
  • #824046 – src:libtkx-perl: "libtkx-perl: FTBFS: Tcl error 'Foo at /usr/lib/x86_64-linux-gnu/perl5/5.22/Tcl.pm line 585.\n' while invoking scalar result call"
    first investigation (pkg-perl)
  • #824143 – libperinci-sub-normalize-perl: "libperinci-sub-normalize-perl: FTBFS: Can't locate Sah/Schema/Rinci.pm in @INC"
    upload new upstream version (pkg-perl)
  • #825366 – src:libdist-zilla-plugin-ourpkgversion-perl: "libdist-zilla-plugin-ourpkgversion-perl: FTBFS: Can't locate Path/Class.pm in @INC"
    add missing dependency (pkg-perl)
  • #825424 – libdist-zilla-plugin-test-podspelling-perl: "libdist-zilla-plugin-test-podspelling-perl: FTBFS: Can't locate Path/Class.pm in @INC"
    first triaging, forward upstream, import new release (pkg-perl)
  • #825608 – libnet-jifty-perl: "libnet-jifty-perl: FTBFS: t/006-uploads.t failure"
    triaging, mark unreproducible (pkg-perl)
  • #825629 – src:libgd-perl: "libgd-perl: FTBFS: Could not find gdlib-config in the search path. "
    first triaging, forward upstream (pkg-perl)
  • #829064 – libparse-debianchangelog-perl: "libparse-debianchangelog-perl: FTBFS with new tidy version"
    patch TML template (pkg-perl)
  • #829066 – libparse-plainconfig-perl: "FTBFS: Can't modify constant item in scalar assignment"
    new upstream release (pkg-perl)
  • #829176 – src:libapache-htpasswd-perl: "libapache-htpasswd-perl: FTBFS: dh_clean: Please specify the compatibility level in debian/compat"
    add debian/compat, upload to DELAYED/2
  • #829409 – src:libhtml-tidy-perl: "libhtml-tidy-perl: FTBFS: Failed 7/21 test programs. 8/69 subtests failed."
    apply patches from Simon McVittie (pkg-perl)
  • #829668 – libparse-debianchangelog-perl: "libparse-debianchangelog-perl: FTBFS: Failed test 'Output of dpkg_str equal to output of dpkg-parsechangelog'"
    add patch for compatibility with new dpkg (pkg-perl)
  • #829746 – src:license-reconcile: "license-reconcile: FTBFS: Failed 7/30 test programs. 11/180 subtests failed."
    versioned close, already fixed in latest upload (pkg-perl)
  • #830275 – src:libgravatar-url-perl: "libgravatar-url-perl: accesses the internet during build"
    skip test which needs internet access (pkg-perl)
  • #830324 – src:libhttp-async-perl: "libhttp-async-perl: accesses the internet during build"
    add patch to skip tests with DNS queries (pkg-perl)
  • #830354 – src:libhttp-proxy-perl: "libhttp-proxy-perl: accesses the internet during build"
    skip tests which need internet access (pkg-perl)
  • #830355 – src:libanyevent-http-perl: "libanyevent-http-perl: accesses the internet during build"
    skip tests which need internet access (pkg-perl)
  • #830356 – src:libhttp-oai-perl: "libhttp-oai-perl: accesses the internet during build"
    add patch to skip tests with DNS queries (pkg-perl)
  • #830476 – src:libpoe-component-client-http-perl: "libpoe-component-client-http-perl: accesses the internet during build"
    update existing patch (pkg-perl)
  • #831233 – src:libmongodb-perl: "libmongodb-perl: build hangs with sbuild and libeatmydata"
    lower severity (pkg-perl)
  • #832361 – src:libmousex-getopt-perl: "libmousex-getopt-perl: FTBFS: Failed 2/22 test programs. 2/356 subtests failed."
    upload new upstream release (pkg-perl)

24 July, 2016 04:34PM

Russ Allbery

Review: The Run of His Life

Review: The Run of His Life, by Jeffrey Toobin

Publisher: Random House
Copyright: 1996, 1997
Printing: 2015
ISBN: 0-307-82916-2
Format: Kindle
Pages: 498

The O.J. Simpson trial needs little introduction to anyone who lived through it in the United States, but a brief summary for those who didn't.

O.J. Simpson is a Hall of Fame football player and one of the best running backs to ever play the game. He's also black, which is very relevant to much of what later happened. After he retired from professional play, he became a television football commentator and a spokesperson for various companies (particularly Hertz, a car rental business). In 1994, he was arrested for the murder of two people: his ex-wife, Nicole Brown Simpson, and Ron Goldman (a friend of Nicole's). The arrest happened after a bizarre low-speed police chase across Los Angeles in a white Bronco that was broadcast live on network television. The media turned the resulting criminal trial into a reality TV show, with live cable television broadcasts of all of the court proceedings. After nearly a full year of trial (with the jury sequestered for nine months — more on that later), a mostly black jury returned a verdict of not guilty after a mere four hours of deliberation.

Following the criminal trial, in an extremely unusual legal proceeding, Simpson was found civilly liable for Ron Goldman's death in a lawsuit brought by his family. Bizarre events surrounding the case continued long afterwards. A book titled If I Did It (with "if" in very tiny letters on the cover) was published, ghost-written but allegedly with Simpson's input and cooperation, and was widely considered a confession. Another legal judgment let the Goldman family get all the profits from that book's publication. In an unrelated (but also bizarre) incident in Las Vegas, Simpson was later arrested for kidnapping and armed robbery and is currently in prison until at least 2017.

It is almost impossible to have lived through the O.J. Simpson trial in the United States and not have formed some opinion on it. I was in college and without a TV set at the time, and even I watched some of the live trial coverage. Reactions to the trial were extremely racially polarized, as you might have guessed. A lot of black people believed at the time that Simpson was innocent (probably fewer now, given subsequent events). A lot of white people thought he was obviously guilty and was let off by a black jury for racial reasons. My personal opinion, prior to reading this book, was a common "third way" among white liberals: Simpson almost certainly committed the murders, but the racist Los Angeles police department decided to frame him for a crime that he did commit by trying to make the evidence stronger. That's a legitimate reason in the US justice system for finding someone innocent: the state has an obligation to follow correct procedure and treat the defendant fairly in order to get a conviction. I have a strong bias towards trusting juries; frequently, it seems that the media second-guesses the outcome of a case that makes perfect sense as soon as you see all the information the jury had (or didn't have).

Toobin's book changed my mind. Perhaps because I wasn't watching all of the coverage, I was greatly underestimating the level of incompetence and bad decision-making by everyone involved: the prosecution, the defense, the police, the jury, and the judge. This court case was a disaster from start to finish; no one involved comes away looking good. Simpson was clearly guilty given the evidence presented, but the case was so badly mishandled that it gave the jury room to reach the wrong verdict. (It's telling that, in the far better managed subsequent civil case, the jury had no trouble reaching a guilty verdict.)

The Run of His Life is a very detailed examination of the entire Simpson case, from the night of the murder through the end of the trial and (in an epilogue) the civil case. Toobin was himself involved in the media firestorm, breaking some early news of the defense's decision to focus on race in The New Yorker and then involved throughout the trial as a legal analyst, and he makes it clear when he becomes part of the story. But despite that, this book felt objective to me. There are tons of direct quotes, lots of clear description of the evidence, underlying interviews with many of the people involved to source statements in the book, and a comprehensive approach to the facts. I think Toobin is a bit baffled by the black reaction to the case, and that felt like a gap in the comprehensiveness and the one place where he might be accused of falling back on stereotypes and easy judgments. But other than that hole, Toobin applies his criticism even-handedly and devastatingly to all parties.

I won't go into all the details of how Toobin changed my mind. It was a cumulative effect across the whole book, and if you're curious, I do recommend reading it. A lot of it was the detailed discussion of the forensic evidence, which was undermined for the jury at trial but looks very solid outside the hothouse of the case. But there is one critical piece that I would hope would be handled differently today, twenty years later, than it was by the prosecutors in that case: Simpson's history of domestic violence against Nicole. With what we now know about patterns of domestic abuse, the escalation to murder looks entirely unsurprising. And that history of domestic abuse was exceedingly well-documented: multiple external witnesses, police reports, and one actual prior conviction for spousal abuse (for which Simpson did "community service" that was basically a joke). The prosecution did a very poor job of establishing this history and the jury discounted it. That was a huge mistake by both parties.

I'll mention one other critical collection of facts that Toobin explains well and that contradicted my previous impression of the case: the relationship between Simpson and the police.

Today, in the era of Black Lives Matter, the routine abuse of black Americans by the police is more widely known. At the time of the murders, it was less recognized among white Americans, although black Americans certainly knew about it. But even in 1994, the Los Angeles police department was notorious as one of the most corrupt and racist big-city police departments in the United States. This is the police department that beat Rodney King. Mark Fuhrman, one of the police officers involved in the case (although not that significantly, despite his role at the trial), was clearly racist and had no business being a police officer. It was therefore entirely believable that these people would have decided to frame a black man for a murder he actually committed.

What Toobin argues, quite persuasively and with quite a lot of evidence, is that this analysis may make sense given the racial tensions in Los Angeles but ignores another critical characteristic of Los Angeles politics, namely a deference to celebrity. Prior to this trial, O.J. Simpson largely followed the path of many black athletes who become broadly popular in white America: underplaying race and focusing on his personal celebrity and connections. (Toobin records a quote from Simpson earlier in his life that perfectly captures this approach: "I'm not black, I'm O.J.") Simpson spent more time with white businessmen than the black inhabitants of central Los Angeles. And, more to the point, the police treated him as a celebrity, not as a black man.

Toobin takes some time to chronicle the remarkable history of deference and familiarity that the police showed Simpson. He regularly invited police officers to his house for parties. The police had a long history of largely ignoring or downplaying his abuse of his wife, including not arresting him in situations that clearly seemed to call for that, showing a remarkable amount of deference to his side of the story, not pursuing clear violations of the court judgment after his one conviction for spousal abuse, and never showing much inclination to believe or protect Nicole. Even on the night of the murder, they started following a standard playbook for giving a celebrity advance warning of investigations that might involve them before the news media found out about them. It seems clear, given the evidence that Toobin collected, that the racist Los Angeles police didn't focus that animus at Simpson, a wealthy celebrity living in Brentwood. He wasn't a black man in their eyes; he was a rich Hall of Fame football player and a friend.

This obviously raises the question of how the jury could return an innocent verdict. Toobin provides plenty of material to analyze that question from multiple angles in his detailed account of the case, but I can tell you my conclusion: Judge Lance Ito did a horrifically incompetent job of managing the case. He let the lawyers wander all over the case, interacted bizarrely with the media coverage (and was part of letting the media turn it into a daytime drama), was not crisp or clear about his standards of evidence and admissibility, and, perhaps worst of all, let the case meander on at incredible length, with a fully sequestered jury allowed only brief conjugal visits and no media contact (not even bookstore shopping!).

Quite a lot of anger was focused on the jury after the acquittal, and I do think they reached the wrong conclusion and had all the information they would have needed to reach the correct one. But Toobin touches on something that I think would be very hard to comprehend without having lived through it. The jury and alternate pool essentially lived in prison for nine months, with guards and very strict rules about contact with the outside world, in a country where compensation for jury duty is almost nonexistent. There were a lot of other factors behind their decision, including racial tensions and the sheer pressure from judging a celebrity case about which everyone has an opinion, but I think it's nearly impossible to overestimate the psychological tension and stress from being locked up with random other people under armed guard for three quarters of a year. It's hard for jury members to do an exhaustive and careful deliberation in a typical trial that takes a week and doesn't involve sequestration. People want to get back to their lives and families. I can only imagine the state I would be in after nine months of this, or what poor psychological shape I would be in to make a careful and considered decision.

Similarly, for those who condemned the jury for profiting via books and media appearances after the trial, the current compensation for jurors is $15 per day (not hour). I believe at the time it was around $5 per day. There are a few employers who will pay full salary for the entire jury service, but only a few; many cap the length at a few weeks, and some employers treat all jury duty as unpaid leave. The only legal requirement for employers in the United States is that employees that serve on a jury have their job held for them to return to, but compensation is pathetic, not tied to minimum wage, and employers do not have to supplement it. I'm much less inclined to blame the jurors than the system that badly mistreated them.

As you can probably tell from the length of this review, I found The Run of His Life fascinating. If I had followed the whole saga more closely at the time, this may have been old material, but I think my vague impressions and patchwork of assumptions were more typical than not among people who were around for the trial but didn't invest a lot of effort into following it. If you are like me, and you have any interest in the case or the details of how the US criminal justice system works, this is a fascinating case study, and Toobin does a great job with it. Recommended.

Rating: 8 out of 10

24 July, 2016 02:13AM

July 23, 2016

hackergotchi for Steve Kemp

Steve Kemp

A final post about the lua-editor.

I recently mentioned that I'd forked Antirez's editor and added lua to it.

I've been working on it, on and off, for the past week or two now. It's finally reached a point where I'm content:

  • The undo-support is improved.
  • It has buffers, such that you can open multiple files and switch between them.
    • This allows things like "kilua *.txt" to work, for example.
  • The syntax-highlighting is improved.
    • We can now change the size of TAB-characters.
    • We can now enable/disable highlighting of trailing whitespace.
  • The default configuration-file is now embedded in the body of the editor, so you can run it portably.
  • The keyboard input is better, allowing multi-character bindings.
    • The following are possible, for example ^C, M-!, ^X^C, etc.

Most of the obvious things I use in Emacs are present, such as the ability to customize the status-bar (right now it shows the cursor position, the number of characters, the number of words, etc, etc).

Anyway I'll stop talking about it now :)

23 July, 2016 05:30AM

hackergotchi for Francois Marier

Francois Marier

Replacing a failed RAID drive

Here's the complete procedure I followed to replace a failed drive from a RAID array on a Debian machine.

Replace the failed drive

After seeing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb

Armed with this information, I shutdown the computer, pulled the bad drive out and put the new blank one in.

Initialize the new drive

After booting with the new blank drive in, I copied the partition table using parted.

First, I took a look at what the partition table looks like on the good drive:

$ parted /dev/sda
unit s
print

and created a new empty one on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

then I ran mkpart for all 4 partitions and made them all the same size as the matching ones on /dev/sda.

Finally, I ran toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) for all RAID partitions, before verifying using print that the two partition tables were now the same.

Resync/recreate the RAID arrays

To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the status of this sync using:

watch -n 2 cat /proc/mdstat

In order to speed up the sync, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then, I recreated my RAID0 swap partition like this:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (you can't restore a RAID0, you need to re-create it), I had to update two things:

  • replace the UUID for the swap mount in /etc/fstab, with the one returned by mkswap (or running blkid and looking for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan

Ensuring that I can boot with the replacement drive

In order to be able to boot from both drives, I reinstalled the grub boot loader onto the replacement drive:

grub-install /dev/sdb

before rebooting with both drives to first make sure that my new config works.

Then I booted without /dev/sda to make sure that everything would be fine should that drive fail and leave me with just the new one (/dev/sdb).

This test obviously gets the two drives out of sync, so I rebooted with both drives plugged in and then had to re-add /dev/sda to the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once that finished, I rebooted again with both drives plugged in to confirm that everything is fine:

cat /proc/mdstat

Then I ran a full SMART test over the new replacement drive:

smartctl -t long /dev/sdb

23 July, 2016 05:00AM

Mateus Bellomo

Send/receive text messages to buddies

Some weeks ago I implemented the option to send a text message from Empathy through telepathy-resiprocate. At that time I had just implemented it in the apps/telepathy/TextChannel class, which wasn't ideal. Now, with a better understanding of the resip/recon and resip/dum APIs, I was able to move this implementation there.

Besides that, I have also implemented the option to receive a text message. For that I have made some changes to the resip/recon/ConversationManager and resip/recon/UserAgent classes, along with some others.

The complete changes can be seen at [1]. This branch also holds modifications related to sending/receiving presence. This is necessary since, to send a message to a contact, he/she should be online.

There is still work to be done, especially checking the possible error cases, but at least we can see a first prototype working. Here are some images:

Account logged in with Jitsi

Account logged in with Empathy using telepathy-resiprocate

[1] https://github.com/resiprocate/resiprocate/compare/master...MateusBellomo:mateus-presence-text


23 July, 2016 04:00AM by mateusbellomo

July 22, 2016

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

HOPE 11

I’ll be at HOPE 11 this year - if anyone else will be around, feel free to send me an email! I won’t have a phone on me (so texting only works if you use Signal!)

Looking forward for a chance to see everyone soon!

22 July, 2016 12:16PM

Russell Coker

802.1x Authentication on Debian

I recently had to set up some Linux workstations with 802.1x authentication (described as “Ethernet authentication”) to connect to a smart switch. The most useful web site I found was the Ubuntu help site about 802.1x Authentication [1]. But it didn’t describe exactly what I needed, so I’m writing a more concise explanation.

The first thing to note is that the authentication mechanism works the same way as 802.11 wireless authentication, so it’s a good idea to have the wpasupplicant package installed on all laptops just in case you need to connect to such a network.

The first step is to create a wpa_supplicant config file, I named mine /etc/wpa_supplicant_SITE.conf. The file needs contents like the following:

network={
 key_mgmt=IEEE8021X
 eap=PEAP
 identity="USERNAME"
 anonymous_identity="USERNAME"
 password="PASS"
 phase1="auth=MD5"
 phase2="auth=CHAP password=PASS"
 eapol_flags=0
}

The first difference between what I use and the Ubuntu example is that I’m using “eap=PEAP“, that is an issue of the way the network is configured, whoever runs your switch can tell you the correct settings for that. The next difference is that I’m using “auth=CHAP” and the Ubuntu example has “auth=PAP“. The difference between those protocols is that CHAP has a challenge-response and PAP just has the password sent (maybe encrypted) over the network. If whoever runs the network says that they “don’t store unhashed passwords” or makes any similar claim then they are almost certainly using CHAP.

Change USERNAME and PASS to your user name and password.

wpa_supplicant -c /etc/wpa_supplicant_SITE.conf -D wired -i eth0

The above command can be used to test the operation of wpa_supplicant.

Successfully initialized wpa_supplicant
eth0: Associated with 00:01:02:03:04:05
eth0: CTRL-EVENT-EAP-STARTED EAP authentication started
eth0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=25
TLS: Unsupported Phase2 EAP method 'CHAP'
eth0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
EAP-MSCHAPV2: Authentication succeeded
EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
eth0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
eth0: CTRL-EVENT-CONNECTED - Connection to 00:01:02:03:04:05 completed [id=0 id_str=]

Above is the output of a successful test with wpa_supplicant. I replaced the MAC of the switch with 00:01:02:03:04:05. Strangely it doesn’t like “CHAP” but automatically selects “MSCHAPV2” and works; maybe anything other than “PAP” would do.

auto eth0
iface eth0 inet dhcp
  wpa-driver wired
  wpa-conf /etc/wpa_supplicant_SITE.conf

Above is a snippet of /etc/network/interfaces that works with this configuration.

22 July, 2016 06:10AM by etbe

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCCTZ 0.0.5

Version 0.0.5 of RcppCCTZ arrived on CRAN a couple of days ago. It reflects an upstream fix made a few weeks ago. CRAN tests revealed that g++-6 was tripping over one missing #define; this was added upstream and I subsequently synchronized with upstream. At the same time the set of examples was extended (see below).

Somehow useR! 2016 got in the way and while working on the then-incomplete examples during the traveling I forgot to release this until CRAN reminded me that their tests still failed. I promptly prepared the 0.0.5 release but somehow failed to update NEWS files etc. They are correct in the repo but not in the shipped package. Oh well.

CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time: human-readable dates and times, and one for converting between absolute and civil times via time zones. It requires only a proper C++11 compiler and the standard IANA time zone data base which standard Unix, Linux, OS X, ... computers tend to have in /usr/share/zoneinfo. RcppCCTZ connects this library to R by relying on Rcpp.

Two good examples are now included, and shown here. The first one tabulates the time difference between New York and London (at a weekly level for compactness):

R> example(tzDiff)

tzDiffR> # simple call: difference now
tzDiffR> tzDiff("America/New_York", "Europe/London", Sys.time())
[1] 5

tzDiffR> # tabulate difference for every week of the year
tzDiffR> table(sapply(0:52, function(d) tzDiff("America/New_York", "Europe/London",
tzDiff+                                       as.POSIXct(as.Date("2016-01-01") + d*7))))

 4  5 
 3 50 
R> 

Because the two continents happen to spring forward and fall backwards between regular and daylight savings times, there are, respectively, two and one week periods where the difference is one hour less than usual.

A second example shifts the time to a different time zone:

R> example(toTz)

toTzR> toTz(Sys.time(), "America/New_York", "Europe/London")
[1] "2016-07-14 10:28:39.91740 CDT"
R> 

Note that because we return a POSIXct object, it is printed by R with the default (local) TZ attribute (for "America/Chicago" in my case). A more direct example asks what time it is in my time zone when it is midnight in Tokyo:

R> toTz(ISOdatetime(2016,7,15,0,0,0), "Japan", "America/Chicago")
[1] "2016-07-14 15:00:00 CDT"
R>

More changes will come in 0.0.6 as soon as I find time to translate the nice time_tool (command-line) example into an R function.

Changes in this version are summarized here:

Changes in version 0.0.5 (2016-07-09)

  • New utility example functions toTz() and tzDiff

  • Synchronized with small upstream change for additional #ifdef for compiler differentiation

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 July, 2016 03:08AM

hackergotchi for Martin Michlmayr

Martin Michlmayr

Debian on Seagate Personal Cloud and Seagate NAS

The majority of NAS devices supported in Debian are based on Marvell's Kirkwood platform. This platform is quite dated now and can only run Debian's armel port.

Debian now supports the Seagate Personal Cloud and Seagate NAS devices. They are based on Marvell's Armada 370, a platform which can run Debian's armhf port. Unfortunately, even the Armada 370 is a bit dated now, so I would not recommend these devices for new purchases. If you have one already, however, you now have the option to run native Debian.

There are some features I like about the Seagate NAS devices:

  • Network console: you can connect to the boot loader via the network. This is useful to load Debian or to run recovery commands if needed.
  • Mainline support: the devices are supported in the mainline kernel.
  • Good contacts: Seagate engineer Simon Guinot is interested in Debian support and is a joy to work with. There's also a community for LaCie NAS devices (Seagate acquired LaCie).

If you have a Seagate Personal Cloud or Seagate NAS, you can follow the instructions on the Debian wiki.

If Seagate releases more NAS devices on Marvell's Armada platform, I intend to add Debian support.

22 July, 2016 02:50AM

July 21, 2016

Vincent Fourmond

QSoas version 2.0 is out / QSoas paper

I thought it would come out sooner, but I've finally gotten around to releasing version 2.0 of my data analysis program, QSoas!


It provides significant improvements to the fit interface, in particular for multi-buffer fits, with a “Multi” fit engine that performs very well for large multi-buffer fits, a spreadsheet editor for fit parameters, and more usability improvements. It also features the definition of fits with a distribution of values of one of the fit parameters, and new built-in fits. In addition, QSoas version 2.0 features new commands to derive data, to flag buffers and handle large multi-column datasets, and improvements to existing commands. The full list of changes since version 1.0 can be found there.

As before, you can download the source code from our website, and purchase the pre-built binaries following the links from that page too.

In addition, I am glad to announce that QSoas is now described in a recent publication, Fourmond, Anal. Chem., 2016, 88, 5050-5052. Please cite this publication if you used QSoas to process your data.

21 July, 2016 09:04PM by Vincent Fourmond (noreply@blogger.com)

Olivier Grégoire

Eighth week: create an API in the Ring client library (LRC)

At the beginning of the week, I didn’t really use the LRC to communicate with my client:
-The client calls a function in it to call my method, which calls my program
-The daemon sends its signal, connected to a Qt slot in LRC. After that, I just send another signal, connected to a lambda function of the client

I had never programmed an API before, and I began to write some code without checking how to do it. I needed to extract all the information from the map<s,s> sent by the daemon and present it all in my API. After observing the code, I saw that LRC follows the KDE library code policy, so I changed my architecture to follow the same policies. Basically, I needed to create a public and a private header using the D-Pointer pattern. My private header contains the slot that is connected to the daemon and all the private variables. My public header contains a signal, connected to a lambda function of the client, which tells the client when some information changes and it needs to refresh it. This header obviously contains all the getters too.

I now have a functional API.



Next week I will work on the gnome client to use this new API.

21 July, 2016 04:57PM

Reproducible builds folks

Reproducible builds: week 62 in Stretch cycle

What happened in the Reproducible Builds effort between June 26th and July 2nd 2016:

Read on to find out why we're lagging some weeks behind…!

GSoC and Outreachy updates

  • Ceridwen described using autopkgtest code to communicate with containers and how to test the container handling.

  • reprotest 0.1 has been accepted into Debian unstable, and any user reports, bug reports, feature requests, etc. would be appreciated. This is still an alpha release, and nothing is set in stone.

Toolchain fixes

  • Matthias Klose uploaded doxygen/1.8.11-3 to Debian unstable (closing #792201) with the upstream patch improving SOURCE_DATE_EPOCH support by using UTC as timezone when parsing the value. This was the last patch we were carrying in our repository, thus this upload obsoletes the version in our experimental repository.
  • cmake/3.5.2-2 was uploaded by Felix Geyer, which sorts file lists obtained with file(GLOB).
  • Dmitry Shachnev uploaded sphinx/1.4.4-2, which fixes a timezone related issue when SOURCE_DATE_EPOCH is set.

With the doxygen upload we are now down to only 2 modified packages in our repository: dpkg and rdfind.

Weekly reports delay and the future of statistics

To catch up with our backlog of weekly reports we have decided to skip some of the statistics for this week. We might publish them in a future report, or we might switch to a format where we summarize them more (and which we can create (even) more automatically), we'll see.

We are doing these weekly statistics because we believe it's appropriate and useful to credit people's work and make it more visible. What do you think? We would love to hear your thoughts on this matter! Do you read these statistics? Somewhat?

Actually, thanks to the power of notmuch, Holger came up with what you can see below, so what's missing for this week are the uploads fixing irreproducibilities, which we really would like to show for the reasons stated above and because we really, really need these uploads to happen ;-)

But then we also like to confirm the bugs are really gone, which (atm) requires manual checking, and to look for the words "reproducible" and "deterministic" (and spelling variations) in debian/changelogs of all uploads, to spot reproducible work not tracked via the BTS.

And we still need to catch up on the backlog of weekly reports.

Bugs submitted with reproducible usertags

It seems DebCamp in Cape Town was hugely successful and made some people get a lot of work done:

61 bugs have been filed with reproducible builds usertags and 60 of them had patches:

Package reviews

437 new reviews have been added (though most of them were just linking the bug; "only" 56 new issues in packages were found), an unknown number has been updated and 60 have been removed this week, adding to our knowledge about identified issues.

4 new issue types have been found:

Weekly QA work

98 FTBFS bugs have been reported by Chris Lamb and Santiago Vila.

diffoscope development

strip-nondeterminism development

  • Chris Lamb made sure that .zhfst files are treated as ZIP files.

tests.reproducible-builds.org

  • Mattia Rizzolo uploaded pbuilder/0.225.1~bpo8+1 to jessie-backports and it has been installed on all build nodes. As a consequence all armhf and i386 builds will be done with eatmydata; this will hopefully cut down the build time by a noticeable factor.

Misc.

This week's edition was written by Mattia Rizzolo, Reiner Herrmann, Ceridwen and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

21 July, 2016 01:13PM

hackergotchi for Chris Lamb

Chris Lamb

Python quirk: Signatures are evaluated at import time

Every Python programmer knows to avoid mutable default arguments:

def fn(mutable=[]):
    mutable.append('elem')
    print mutable

fn()
fn()
$ python test.py
['elem']
['elem', 'elem']

However, many are not aware that this happens because default arguments are evaluated when the function is defined (typically at import time), rather than the first time the function is called.
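
The usual workaround, for reference, is to default to an immutable sentinel such as None and create the list inside the function body:

def fn(mutable=None):
    if mutable is None:
        mutable = []
    mutable.append('elem')
    print mutable

fn()
fn()
$ python test.py
['elem']
['elem']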

The same definition-time evaluation results in related quirks such as:

def never_called(error=1/0):
    pass
$ python test.py
Traceback (most recent call last):
  File "test.py", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero

... and an—implementation-specific—quirk caused by naive constant folding:

def never_called():
    99999999 ** 9999999
$ python test.py
[hangs]

I suspect that this can be used as a denial-of-service vector.

21 July, 2016 11:07AM

July 20, 2016

hackergotchi for Daniel Pocock

Daniel Pocock

How many mobile phone accounts will be hijacked this summer?

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo.

If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.


Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out, and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal as it will not give them access to any other account or service. Can you and your family members say the same thing?

What can be done?

  • Opt-out of mobile phone authentication schemes.
  • Never give the mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this can also provide a higher level of security as they get to know you.

Previous blogs on SMS messaging, security and two factor authentication, including my earlier blog SMS Logins: an illusion of security.

20 July, 2016 05:48PM by Daniel.Pocock

hackergotchi for Michal Čihař

Michal Čihař

New projects on Hosted Weblate

For almost two months I found very little time to process requests to host free software on Hosted Weblate. Today the queue has been emptied, which means that you can find many new translations there.

To make it short, here is the list of new projects:

PS: If you didn't receive reply for your hosting request today, it was probably lost, so don't hesitate to ask again.

Filed under: Debian English Weblate | 0 comments

20 July, 2016 05:00PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

Debconf 16 and My Experience with Debian

It has been often said that you should continually try new things in life so that

a. Unlike the fish you do not mistake the pond to be the sea.

b. You see other people, other types and ways of living and being which you normally won’t in your day-to-day existence.

With both of those as mantras I decided to take a leap into the unknown. I was unsure both about the visa process and the travel itself, as I was traveling to an unknown place, and although I had done some research about it, I was unsure about the authenticity of whatever was shared on the web.

During the whole journey, both to and fro, I couldn't sleep a wink. The Doha airport is huge. There are 5 concourses, A, B, C, D and E, with around 30+ gates in each concourse. The ambition of the small state is something to be reckoned with. Almost 95% of the blue-collar workers in the entire airport were from the Asian subcontinent. While the Qatari Rial is 19 times stronger than our currency, I suspect the workers are worse off than people doing similar work back home. Add to that sharia law, and even for all the money in the world I wouldn't want to settle there.

Anyway, during the journey a small surprise awaited me: Ritesh Raj Saraff, a DD, was also traveling to DebConf. We bumped into each other while going to see Doha city, courtesy of Al-Hamad International Airport. I will probably share a bit more about Doha and my experiences with the city in upcoming posts.

Cut to Cape Town, South Africa: we landed in the city half an hour after our scheduled time and then sped along to the University of Cape Town (UCT), which was to become our home for the next 13-odd days.

The first few days were a whirlwind, as there were new people to meet, people whom I had known only as an e-mail id or an IRC nickname turned out to be real people, and I had to try to articulate myself in English, which is not my native language. During DebCamp I was fortunate to be able to visit some of the places; the wiki page listed so many that I knew I wouldn't be able to cover them all unless I had 15 days of unlimited time and money to go around, so I didn't even try.

I had gone with few goals in mind :-

a. Do some documentation of the event – In this I failed completely, as just the walk from the venue to where the talks were held was energy-draining for me. Apart from that, you get swept up in meeting new people and talking about one of a million topics in Debian which interest you or the other person, and while those conversations are fulfilling, it was both physically and emotionally draining for me (in a good way). Bernelle (one of the organizers) had warned us of this phenomenon, but you disregard it as you know you have a limited time-frame in which to meet and greet people, and it is all an overwhelming experience.

b. Another goal was to meet my Indian brethren who had left the country around 60~100 years ago, mostly as slaves of the East India Company – In this I was partially successful. I met a couple of beautiful ladies who had either a father or a mother who was Indian, while the other parent was of African heritage. There seemed to be in them a yearning to know the culture, but from what little they had, only Bollywood and Indian cuisine was what they could make of Indian culture. One of the girls, umm... women to be more accurate, shared a somewhat grim tale. She had had both an African boyfriend and an Indian boyfriend in her life, and in both cases she was rejected by the boy's parents because she wasn't pure enough. This was deja vu all over again, as the same thing can be seen happening here with casteism, so there wasn't any advice I could give but just nod in empathy. What was a sort of revelation was that when their parents or grandparents came over, the names and surnames were thrown off and the surname became just the place they came from. From the discussions it also emerged that there were a lot of cases of forced conversion to Christianity during that era, as well as temptations of a better life.

As shared, this goal succeeded only partially, as I was actually interested in talking to their parents or grandparents to learn about the events that shaped the Indian diaspora over there. While the children know only of today, yesteryears could only be known from those who made the unwilling, perilous journey to Africa. I had also wanted to know more about Gandhiji's role in that era, but alas, that part of history will have to wait for another day; I guess both of those goals would only have been met had I visited Durban, and that was not to be.

I had applied for one talk, ‘My Experience with Debian’, and one workshop on installing Debian on systems. ‘My Experience with Debian’ was aimed at newbies, and I had thought of using show-and-tell to share the differences between proprietary operating systems and a FOSS distribution such as Debian. I was going to take simple things such as changelogs, apt-listbugs, real-time knowledge of updates and upgrades, as well as /etc/apt/sources.list, to show both the versatility of the Debian desktop and real improvements over what proprietary operating systems have to offer. But I found myself engaging with Debian Developers (DDs) rather than newbies, so I had to change the orientation and fundamentals of the talk on the fly. I knew, or rather suspected, that the old idea would not work, as it would just be preaching to the choir. With that in the back of my mind, and the idea that perhaps they would not be so aware of the politics and events which happened in India over the last couple of decades, I tried to share what little I was able to recollect about those times. Apart from that, I was also highly conscious that I had been given the before-lunch slot, aka the ‘You are in the way of my lunch’ slot. So I knew I had to speak my piece as quickly and as clearly as possible. Later, I did get feedback that I was fast, and having watched it through a couple of times, I do agree that I could have done a better job. What's done is done, and the only thing I can do to salvage it a bit is to share the presentation, which is below.

my_experience_with_debian

It would be nice if somebody could come up with a lighter template for presentations. For reference, the template I used is shared at https://wiki.debian.org/Presentations . Some pictures from the presentation:

vlcsnap-00004

me_sharing

my_experience_with_debian

You can find the video at http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/My_Experience_with_Debian.webm

This is by no means the end of the DebConf16 experience, but actually the start. I hope to share more of my thoughts and ideas, and to get as much feedback as possible from all the wonderful people I met during DebConf.


Filed under: Miscellenous Tagged: #Debconf16, Doha, My talk, Qatar

20 July, 2016 02:38PM by shirishag75

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Solskogen 2016 videos

I just published the videos from Solskogen 2016 on YouTube; you can find them all in this playlist. They are basically exactly what was sent out on the live stream, frame for frame, except that the audio for the live shader compos has been remastered, and of course a lot of dead time has been cut out (the stream was running over several days, but most of the time it showed only the information loop from the bigscreen).

YouTube doesn't really support the variable 50/60 Hz frame rate we've been using, as far as I can tell, but mostly it seems to do some 60 Hz upconversion, which is okay enough, because the rest of your setup most likely isn't free-framerate anyway.

Solskogen is interesting in that we're trying to do a high-quality stream with essentially zero money allocated to it; where something like Debconf can use €2500 for renting and transporting equipment (granted, for two or three rooms and not our single stream), we're largely dependent on personal equipment as well as borrowing things here and there. (I think we borrowed stuff from more or less ten distinct places.) Furthermore, we're nowhere near the situation of “two cameras, a laptop, perhaps a few microphones”; not only do you expect to run full 1080p60 to the bigscreen and switch between that and information slides for each production, but an Amiga 500 doesn't really have an HDMI port, and Commodore 64 delivers an infamously broken 50.12 Hz signal that you really need to deal with carefully if you want it to not look like crap.

These two factors together lead to a rather eclectic setup; here, visualized beautifully from my ASCII art by ditaa:

Solskogen 2016 A/V setup diagram

Of course, for me, the really interesting part here is near the end of the chain, with Nageru, my live video mixer, doing the stream mixing and encoding. (There's also Cubemap, the video reflector, but honestly, I never worry about that anymore. Serving 150 simultaneous clients is just not something to write home about anymore; the only adjustment I would want to make would probably be some WebSockets support to be able to deal with iOS without having to use a secondary HLS stream.) Of course, to make things even more complicated, the live shader compo needs two different inputs (the two coders' laptops) live on the bigscreen, which was done with two video capture cards, text chroma-keyed on top from Chroma, and OBS, because the guy controlling the bigscreen has different preferences from me. I would take his screen in as a “dirty feed” and then put my own stuff around it, like this:

Solskogen 2016 shader compo screenshot

(Unfortunately, I forgot to take a screenshot of Nageru itself during this run.)

Solskogen was the first time I'd really used Nageru in production, and despite super-extensive testing, there's always something that can go wrong. And indeed there was: First of all, we discovered that the local Internet line was reduced from 30/10 to 5/0.5 (which is, frankly, unusable for streaming video), and after we'd half-way fixed that (we got it to 25/4 or so by prodding the ISP, of which we could reserve about 2 for video—demoscene content is really hard to encode, so I'd prefer a lot more)… Nageru started crashing.

It wasn't even crashes I understood anything of. Generally it seemed like the NVIDIA drivers were returning GL_OUT_OF_MEMORY on things like creating mipmaps; it's logical that they'd be allocating memory, but we had 6 GB of GPU memory and 16 GB of CPU memory, and lots of it was free. (The PC we used for encoding was much, much faster than what you need to run Nageru smoothly, so we had plenty of CPU power left to run x264 in, although you can of course always want more.) It seemed to be mostly related to zoom transitions, so I generally avoided those and ran that night's compos in a more static fashion.

It wasn't until later that night (or morning, if you will) that I actually understood the bug (through the godsend of the NVX_gpu_memory_info extension, which gave me enough information about the GPU memory state that I understood I wasn't leaking GPU memory at all); I had set Nageru to lock all of its memory used in RAM, so that it would never ever get swapped out and lose frames for that reason. I had set the limit for lockable RAM based on my test setup, with 4 GB of RAM, but this setup had much more RAM, a 1080p60 input (which uses more RAM, of course) and a second camera, all of which I hadn't been able to test before, since I simply didn't have the hardware available. So I wasn't hitting the available RAM, but I was hitting the amount of RAM that Linux was willing to lock into memory for me, and at that point, it'd rather return errors on memory allocations (including the allocations the driver needed to make for its texture memory backings) than to violate the “never swap“ contract.

Once I fixed this (by simply increasing the amount of lockable memory in limits.conf), everything was rock-stable, just like it should be, and I could turn my attention to the actual production. Often during compos, I don't really need the mixing power of Nageru (it just shows a single input, albeit scaled using high-quality Lanczos3 scaling on the GPU to get it down from 1080p60 to 720p60), but since entries come in using different sound levels (I wanted the stream to conform to EBU R128, which it generally did) and different platforms expect different audio work (e.g., you wouldn't put a compressor on an MP3 track that was already mastered, but we did that on e.g. SID tracks since they have nearly zero ability to control the overall volume), there was a fair bit of manual audio tweaking during some of the compos.
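(As an aside, the limits.conf fix mentioned above is typically a single line per user; a hypothetical entry, assuming the stream runs as a user called streamer, could look like this, taking effect on the next login:)

streamer    -    memlock    unlimited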

That, and of course, the live 50/60 Hz switches were a lot of fun: If an Amiga entry was coming up, we'd 1. fade to a camera, 2. fade in an overlay saying we were switching to 50 Hz so have patience, 3. set the camera as master clock (because the bigscreen's clock is going to go away soon), 4. change the scaler from 60 Hz to 50 Hz (takes two clicks and a bit of waiting), 5. change the scaler input in Nageru from 1080p60 to 1080p50, 6. steps 3,2,1 in reverse. Next time, I'll try to make that slightly smoother, especially as the lack of audio during the switch (it comes in on the bigscreen SDI feed) tended to confuse viewers.

So, well, that was a lot of fun, and it certainly validated that you can do a pretty complicated real-life stream with Nageru. I have a long list of small tweaks I want to make, though; nothing beats actual experience when it comes to improving processes. :-)

20 July, 2016 10:23AM

Daniel Stender

Theano in Debian: maintenance, BLAS and CUDA

I'm glad to announce that we have the current release of Theano (0.8.2) in Debian unstable now; it's on its way into the testing branch and the Debian derivatives, heading for Debian 9. The Debian package is maintained on behalf of the Debian Science Team.

We have a binary package with the modules in the Python 2.7 import path (python-theano), if you want or need to stick to that branch a little longer (as a matter of fact, in the current popcon stats it's the most installed package), and a package running on the default Python 3 version (python3-theano). The comprehensive documentation is available for offline usage in another binary package (theano-doc).

Although Theano builds its extensions at run time and therefore all binary packages contain the same code, the source package generates arch-specific packages1 so that the exhaustive test suite can run on all the architectures to detect whether there are problems somewhere (#824116).

what's this?

In a nutshell, Theano is a computer algebra system (CAS) and expression compiler, which is implemented in Python as a library. It is named after a Classical Greek female mathematician and it's developed at the LISA lab (located at MILA, the Montreal Institute for Learning Algorithms) at the Université de Montréal.

Theano tightly integrates multi-dimensional arrays (N-dimensional, ND-array) from NumPy (numpy.ndarray), which are broadly used in Scientific Python for the representation of numeric data. It features a declarative Python based language with symbolic operations for the functional definition of mathematical expressions, which allows one to create functions that compute values for them. Internally the expressions are represented as directed graphs with nodes for variables and operations. The internal compiler then optimizes those graphs for stability and speed and generates high-performance native machine code to evaluate resp. compute these mathematical expressions2.
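
As a small illustration of that workflow (a minimal sketch along the lines of the upstream tutorial), you can declare symbolic scalars, combine them into an expression, and compile a callable function from the resulting graph:

import theano
import theano.tensor as T

# declare two symbolic double-precision scalars
x = T.dscalar('x')
y = T.dscalar('y')

# build a symbolic expression, internally a small computation graph
z = x + y

# compile the graph into a callable function and evaluate it
f = theano.function([x, y], z)
print(f(2, 3))  # prints 5.0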

One of the main features of Theano is that it's capable of computing also on GPUs (graphical processor units), like on common graphics cards (e.g. the developers are using a GeForce GTX Titan X for benchmarks). Today's GPUs have become very powerful parallel floating-point devices which can also be employed for scientific computations instead of 3D video games3. The acronym "GPGPU" (general purpose graphical processor unit) refers to special cards like NVIDIA's Tesla4, which can be used in the same way (more on that below). Thus, Theano is a high-performance number cruncher with its own computing engine which can be used for large-scale scientific computations.

If you haven't come across Theano as a Pythonistic professional mathematician, it's also one of the most prevalent frameworks around for implementing deep learning applications (training multi-layered, "deep" artificial neural networks, DNN)5, and has been developed with a focus on machine learning from the ground up. There are several higher-level user interfaces built on top of Theano (for DNN: Keras, Lasagne, Blocks, and others; for Python probabilistic programming: PyMC3). I'll try to get some of them into Debian, too.

helper scripts

Both binary packages ship three convenience scripts, theano-cache, theano-test, and theano-nose. Instead of them being copied into /usr/bin, which would result into a binaries-have-conflict violation, the scripts are to be found in /usr/share/python-theano (python3-theano respectively), so that both module packages of Theano can be installed at the same time.

The scripts can be run directly from these folders, e.g. do $ python /usr/share/python-theano/theano-nose to achieve that. If you're going to use them heavily, you could add the directory of the flavour you prefer (Python 2 or Python 3) to the $PATH environment variable manually, either by typing e.g. $ export PATH=/usr/share/python-theano:$PATH at the prompt, or by saving that line into ~/.bashrc.

Manpages aren't available for these little helper scripts6, but you could always get info on what they do and which arguments they accept by invoking them with the -h (for theano-nose) resp. help flag (for theano-cache).

running the tests

On some occasions you might want to run the test suite of the installed library, for example to check whether everything runs fine on your GPU hardware. There are two different ways to run the tests (either way you need python{,3}-nose installed). One is to launch the test suite by doing $ python -c 'import theano; theano.test()' (or the same with python3 to test the other flavour); that's the same as what the helper script theano-test does. However, done that way, some particular tests might fail by raising errors also for the group of known failures.

Known failures are excluded from being errors if you run the tests with theano-nose, which is a wrapper around nosetests, so this might always be the better choice. You can run this convenience script with the option --theano on the installed library, or from the source package root, which you can pull with $ sudo apt-get source theano (there you also have the option to use bin/theano-nose). The script accepts options for nosetests, so you might run it with -v to increase verbosity.

For the tests, the configuration switch config.device must be set to cpu. This will also include the GPU tests when a properly accessible device is detected, so the name is a little misleading in the sense that it doesn't mean "run everything on the CPU". You're on the safe side if you always run it like this: $ THEANO_FLAGS=device=cpu theano-nose, in case you've set config.device to gpu in your ~/.theanorc.

Depending on the available hardware and the BLAS implementation used (see below), it can take quite a long time to run the whole test suite through; on the Core-i5 in my laptop it takes around an hour, even excluding the GPU related tests (which perform pretty fast, though). Theano features a couple of switches to manipulate the default configuration for optimization and compilation. There is a trade-off between optimization and compilation costs on the one hand and the performance of the test suite on the other, and it turned out the test suite runs quicker with less graph optimization. There are two different switches available to control config.optimizer: fast_run toggles maximal optimization, while fast_compile runs only a minimal set of graph optimization features. These settings are used by the general mode switches for config.mode, which is either FAST_RUN by default, or FAST_COMPILE. The default mode FAST_RUN (optimizer=fast_run, linker=cvm) needs around 72 minutes on my lower mid-level machine (on un-optimized BLAS). Setting mode=FAST_COMPILE (optimizer=fast_compile, linker=py) gives the test suite a performance boost, because it runs the whole suite in 46 minutes. The downside is that C code compilation is disabled in this mode by using the linker py, and the GPU related tests are not included either. I've played around with using the optimizer fast_compile with some of the other linkers (c|py and cvm, and their versions without garbage collection) as an alternative to FAST_COMPILE, with minimal optimization but also machine code compilation incl. GPU testing. But in my experience, fast_compile with any linker other than py results in some new errors and failures of some tests on amd64, and this might be the case on other architectures, too.

By the way, another useful feature is DebugMode for config.mode, which verifies the correctness of all optimizations and compares the C to the Python results. If you want detailed info on the configuration settings of Theano, do $ python -c 'import theano; print theano.config' | less, and check out the chapter on config in the library documentation.

cache maintenance

Theano isn't a JIT (just-in-time) compiler like Numba, which generates native machine code in memory and executes it immediately; instead it saves the generated native machine code into compiledirs. The reason for doing it that way is quite practical, as the docs explain: the persistent cache on disk makes it possible to avoid generating code for the same operation, and to avoid compiling again when different operations generate the same code. The compiledirs are by default located within $(HOME)/.theano/.

After some time the folder becomes quite large, and might look something like this:

$ ls ~/.theano
compiledir_Linux-4.5--amd64-x86_64-with-debian-stretch-sid--2.7.11+-64
compiledir_Linux-4.5--amd64-x86_64-with-debian-stretch-sid--2.7.12-64
compiledir_Linux-4.5--amd64-x86_64-with-debian-stretch-sid--2.7.12rc1-64
compiledir_Linux-4.5--amd64-x86_64-with-debian-stretch-sid--3.5.1+-64
compiledir_Linux-4.5--amd64-x86_64-with-debian-stretch-sid--3.5.2-64
compiledir_Linux-4.5--amd64-x86_64-with-debian-stretch-sid--3.5.2rc1-64

If the Python version in use has changed, like in this example, you might want to purge the obsolete cache. For working with the cache resp. the compiledirs, the helper theano-cache comes in handy. If you invoke it without any arguments, the current cache location is printed, like ~/.theano/compiledir_Linux-4.5--amd64-x86_64-with-debian-stretch-sid--2.7.12-64 (the script is run from /usr/share/python-theano). So the compiledirs for the old Python versions in this example (11+ and 12rc1) can be removed to free the space they occupy.

All compiledirs resp. cache directories, meaning the whole cache, can be erased with $ theano-cache basecompiledir purge; the effect is the same as performing $ rm -rf ~/.theano. You might want to do that e.g. if you're using different hardware, like when you got yourself another graphics card. Or habitually, from time to time, when the compiledirs fill up so much that processing slows down with the hard disk being busy all the time, if you don't have an SSD drive available. For example, the disk space of build chroots carrying (mainly) the tests completely compiled through on default Python 2 and Python 3 consumes around 1.3 GB (see here).

BLAS implementations

Theano needs a level 3 implementation of BLAS (Basic Linear Algebra Subprograms) for operations between vectors (one-dimensional mathematical objects) and matrices (two-dimensional objects) carried out on the CPU. NumPy is already built on BLAS and pulls in the standard implementation (libblas3, source package: lapack), but Theano links directly to it instead of using NumPy as an intermediate layer, to reduce the computational overhead. For this, Theano needs the development headers, and the binary packages pull in libblas-dev by default, unless a development package of another BLAS implementation (like OpenBLAS or ATLAS) is already installed or pulled in with them (providing the virtual package libblas.so). The linker flags can be manipulated directly through the configuration switch config.blas.ldflags, which is by default set to -L/usr/lib -lblas -lblas. By the way, if you set it to an empty value, Theano falls back to using BLAS through NumPy, if you want that for some reason.

On Debian, there is a very convenient way to switch between BLAS implementations by the alternatives mechanism. If you have several alternative implementations installed at the same time, you can switch from one to another easily by just doing:

$ sudo update-alternatives --config libblas.so
There are 3 choices for the alternative libblas.so (providing /usr/lib/libblas.so).

  Selection    Path                                  Priority   Status
------------------------------------------------------------
* 0            /usr/lib/openblas-base/libblas.so      40        auto mode
  1            /usr/lib/atlas-base/atlas/libblas.so   35        manual mode
  2            /usr/lib/libblas/libblas.so            10        manual mode
  3            /usr/lib/openblas-base/libblas.so      40        manual mode

Press <enter> to keep the current choice[*], or type selection number:

The implementations perform differently on different hardware, so you might want to take the time to compare which one does best on your processor (the other packages are libatlas-base-dev and libopenblas-dev), and choose that to optimize your system. If you want to squeeze out everything there is for carrying out Theano's computations on the CPU, another option is to compile an optimized version of a BLAS library especially for your processor. I'm going to write another blog posting on this issue.

The binary packages of Theano ship the script check_blas.py to check how well a BLAS implementation performs with it, and whether everything works right. That script is located in the misc subfolder of the library; you can locate it by doing $ dpkg -L python-theano | grep check_blas (or for the package python3-theano accordingly), and run it with the Python interpreter. By default the script puts out a lot of info, like a huge performance comparison reference table, the current setting of blas.ldflags, the compiledir, the setting of floatX, OS information, the GCC version, the current NumPy config towards BLAS, the NumPy location and version, whether Theano linked directly to BLAS or used the NumPy binding, and finally and most importantly, the execution time. If just the execution time is needed for quick performance comparisons, the script can be invoked with -q.

Theano on CUDA

The function compiler of Theano works with alternative backends to carry out the computations, like the ones for graphics cards. Currently, there are two different backends for GPU processing available, one docks onto NVIDIA's CUDA (Compute Unified Device Architecture) technology7, and another one for libgpuarray, which is also developed by the Theano developers in parallel.

The libgpuarray library is an interesting alternative for Theano; it's a GPU tensor (multi-dimensional mathematical object) array written in C with Python bindings based on Cython, which has the advantage of also running on OpenCL8. OpenCL, unlike CUDA9, is fully free software, vendor neutral and overcomes the limitation of the CUDA toolkit being only available for amd64 and the ppc64el port (see here). I've opened an ITP on libgpuarray and we'll see if and how this works out. Another reason why it would be great to have it available is that CUDA currently seems to run into problems with GCC 610. More on that, soon.

Here's a little checklist for setting up your CUDA device, so that you don't have to experience something like this:

$ THEANO_FLAGS=device=gpu,floatX=float32 python ./cat_dog_classifier.py 
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available (error: Unable to get the number of gpus available: no CUDA-capable device is detected)

hardware check

For running Theano on CUDA you need an NVIDIA graphics card which is capable of doing that. You can recheck whether your device is supported by CUDA here. Unless the hardware is too old (CUDA support started with the GeForce 8 and Quadro X series) or too exotic, I think it fails to work only in exceptional cases. You can check your model, and whether the device is present in the system at the bare hardware level, by doing this:

$ lspci | grep -i nvidia
04:00.0 3D controller: NVIDIA Corporation GM108M [GeForce 940M] (rev a2)

If a line like this doesn't get returned, your device most probably is broken, or not properly connected (ouch). If rev ff appears at the end of the line, that means the device is off, meaning powered down. This might happen if you have a laptop with Optimus graphics hardware and the related drivers have switched off the unoccupied device to save energy11.

kernel module

Running CUDA applications requires the proprietary NVIDIA driver kernel module to be loaded into the kernel and working.

If you haven't already installed it for another purpose: the NVIDIA driver and the CUDA toolkit are both in the non-free section of the Debian archive, which is not enabled by default. To get non-free packages you have to add non-free (and, better, contrib as well) to your package sources in /etc/apt/sources.list, which might then look like this:

deb http://httpredir.debian.org/debian/ testing main contrib non-free

After doing that, run $ apt-get update to update the package lists, and there you go with the non-free packages.

The headers of the running kernel are needed to compile modules, you can get them together with the NVIDIA kernel module package by running:

$ sudo apt-get install linux-headers-$(uname -r) nvidia-kernel-dkms build-essential

DKMS will then build the NVIDIA module for the kernel and do some other things on the system. When the installation has finished, it's generally advised to reboot the system completely.

troubleshooting

If you have problems with the CUDA device, it's advised to verify if the following things concerning the NVIDIA driver resp. kernel module are in order:

blacklist nouveau

Check if the default Nouveau kernel module driver (which blocks the NVIDIA module) for some reason still gets loaded, by doing $ lsmod | grep nouveau. If nothing gets returned, that's right. If it's still in the kernel, just add blacklist nouveau to /etc/modprobe.d/blacklist.conf, and update the boot ramdisk with $ sudo update-initramfs -u afterwards. Then reboot once more; after that the module shouldn't be loaded anymore.

rebuild kernel module

If the module hasn't been properly compiled for some reason, you can trigger a rebuild of the NVIDIA kernel module with $ sudo dpkg-reconfigure nvidia-kernel-dkms. When you're about to send your hardware in for repair because everything looks all right but the device just isn't working, that really could help (own experience).

After the rebuild of the module or modules (if you have a few kernel packages installed) has completed, you could recheck if the module really is available by running:

$ sudo modinfo nvidia-current
filename:       /lib/modules/4.4.0-1-amd64/updates/dkms/nvidia-current.ko
alias:          char-major-195-*
version:        352.79
supported:      external
license:        NVIDIA
alias:          pci:v000010DEd00000E00sv*sd*bc04sc80i00*
alias:          pci:v000010DEd*sv*sd*bc03sc02i00*
alias:          pci:v000010DEd*sv*sd*bc03sc00i00*
depends:        drm
vermagic:       4.4.0-1-amd64 SMP mod_unload modversions 
parm:           NVreg_Mobile:int

It should look something similar to this when everything is all right.

reload kernel module

When there are problems with the GPU, maybe the kernel module isn't properly loaded. You could recheck if the module has been properly loaded by doing

$ lsmod | grep nvidia
nvidia_uvm             73728  0
nvidia               8540160  1 nvidia_uvm
drm                   356352  7 i915,drm_kms_helper,nvidia

The kernel module could be loaded resp. reloaded with $ sudo nvidia-modprobe (that tool is from the package nvidia-modprobe).

unsupported graphics card

Be sure that your graphics card is supported by the current driver kernel module. If you have bought new hardware, it's quite possible for this to turn out to be a problem. You can get the version of the current NVIDIA driver with:

$ cat /proc/driver/nvidia/version 
NVRM version: NVIDIA UNIX x86_64 Kernel Module 352.79  Wed Jan 13 16:17:53 PST 2016
GCC version:  gcc version 5.3.1 20160528 (Debian 5.3.1-21)

Then google the version number, like nvidia 352.79; this should get you to an official driver download page like this one. There, check what's listed under "Supported Products".

If you're stuck there, you have two options: wait until the driver in Debian gets updated, or replace it with the latest driver package from NVIDIA. That's possible to do, but more something for experienced users.

occupied graphics card

The CUDA device cannot work while it is busy with the graphical interface, e.g. processing the graphical display of your X.Org server. Which kernel driver is actually used to render the desktop can be examined with this command:12

$ grep '(II).*([0-9]):' /var/log/Xorg.0.log
[    37.700] (II) intel(0): Using Kernel Mode Setting driver: i915, version 1.6.0 20150522
[    37.700] (II) intel(0): SNA compiled: xserver-xorg-video-intel 2:2.99.917-2 (Vincent Cheng <vcheng@debian.org>)
{...}
[    39.808] (II) intel(0): switch to mode 1920x1080@60.0 on eDP1 using pipe 0, position (0, 0), rotation normal, reflection none
[    39.810] (II) intel(0): Setting screen physical size to 508 x 285
[    67.576] (II) intel(0): EDID vendor "CMN", prod id 5941
[    67.576] (II) intel(0): Printing DDC gathered Modelines:
[    67.576] (II) intel(0): Modeline "1920x1080"x0.0  152.84  1920 1968 2000 2250  1080 1083 1088 1132 -hsync -vsync (67.9 kHz eP)

This example shows that the rendering of the desktop is performed by the graphical device of the Intel CPU, which is just what's needed for running CUDA applications on your NVIDIA graphics card, if you don't have another one.

nvidia-cuda-toolkit

With the Debian package of the CUDA toolkit, everything pretty much runs out of the box for Theano. Just install it with apt-get and you're ready to go; the CUDA backend is the default one. PyCUDA is also a suggested dependency of the binary packages; it can be pulled in together with the CUDA toolkit.

The up-to-date CUDA release 7.5 is of course available; with that you have Maxwell architecture support, so you can run Theano on e.g. a GeForce GTX Titan X with 6.2 TFLOPS in single precision13 at an affordable price. CUDA 814 is around the corner with support for the new Pascal architecture15: the GeForce GTX 1080 high-end gaming graphics card, for example, already delivers 8.23 TFLOPS16. When it comes to professional GPGPU hardware like the Tesla P100 there is much more computational power available, scalable by multiplying cores resp. cards up to genuine little supercomputers which fit on a desk, like the DGX-117. Theano can use multiple GPUs for calculations to work with such highly scaled hardware; I'll write another blog post on this issue.

Theano on the GPU

It's not difficult to run Theano on the GPU.

Only single precision floating point numbers (float32) are supported on the GPU, but that is sufficient for deep learning applications. Theano uses double precision floats (float64) by default, so you have to set the configuration variable config.floatX to float32, as described above, either with the THEANO_FLAGS environment variable or, better, in your .theanorc file if you're going to use the GPU a lot.

Switching to the GPU actually happens with the config.device configuration variable, which must be set to either gpu or gpu0, gpu1 etc., to choose a particular one if multiple devices are available.
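
A minimal ~/.theanorc combining both settings (a sketch assuming a single CUDA device and the standard INI-style format from the Theano documentation) could look like this:

[global]
floatX = float32
device = gpu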

Here's is a little test script, it's taken from the docs but slightly altered:

from __future__ import print_function
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
from six.moves import range

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print("Used the cpu")
else:
    print("Used the gpu")
exit(0)

You can run that script either with python or python3 (there was a single test failure on the Python 3 package, so the Python 2 library might be a little more stable currently). For comparison, here's an example of how it performs on my hardware, once on the CPU and once on the GPU:

$ THEANO_FLAGS=floatX=float32 python ./check1.py 
[Elemwise{exp,no_inplace}(<TensorType(float32, vector)>)]
Looping 1000 times took 4.481719 seconds
Result is [ 1.23178029  1.61879337  1.52278066 ...,  2.20771813  2.29967761
  1.62323284]
Used the cpu

$ THEANO_FLAGS=floatX=float32,device=gpu python ./check1.py 
Using gpu device 0: GeForce 940M (CNMeM is disabled, cuDNN not available)
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 1.164906 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
  1.62323296]
Used the gpu

If you get a result like this, you're ready to go with Theano on Debian, training computer vision classifiers or whatever else you want to do with it. I'll write more about what Theano can be used for, soon.


  1. Some ports are disabled because they are currently not supported by Theano. There are NotImplementedErrors and other errors in the tests on the numpy.ndarray object being not aligned. The developers commented on that, see here. And on some ports the build flags -m32 resp. -m64 of Theano aren't supported by g++, the build flags can't be manipulated easily. 

  2. Theano Development Team: "Theano: a Python framework for fast computation of mathematical expressions" 

  3. Marc Couture: "Today's high-powered GPUs: strong for graphics and for maths". In: RTC magazine June 2015, pp. 22–25 

  4. Ogier Maitre: "Understanding NVIDIA GPGPU hardware". In: Tsutsui/Collet (eds.): Massively parallel evolutionary computation on GPGPUs. Berlin, Heidelberg: Springer 2013, pp. 15-34 

  5. Geoffrey French: "Deep learning tutorial: advanced techniques". PyData London 2016 presentation 

  6. Like the description of the Lintian tag binary-without-manpage says, that's not needed for them being in /usr/share

  7. Tom. R. Halfhill: "Parallel processing with CUDA: Nvidia's high-performance computing platform uses massive multithreading". In: Microprocessor Report January 28, 2008 

  8. Faber et.al: "Parallelwelten: GPU-Programmierung mit OpenCL". In: C't 26/2014, pp. 160-165 

  9. For comparison, see: Valentine Sinitsyn: "Feel the taste of GPU programming". In: Linux Voice February 2015, pp. 106-109 

  10. https://lists.debian.org/debian-devel/2016/07/msg00004.html 

  11. If Optimus (hybrid) graphics hardware is present (like commonly today on PC laptops), Debian launches the X-server on the graphics processing unit of the CPU, which is ideal for CUDA. The problem with Optimus actually is the graphics processing on the dedicated GPU. If you are using Bumblebee, the Python interpreter which you want to run Theano on has be to be started with the launcher primusrun, because Bumblebee powers the GPU down with the tool bbswitch every time it isn't used, and I think also the kernel module of the driver is dynamically loaded. 

  12. Thorsten Leemhuis: "Treiberreviere. Probleme mit Grafiktreibern für Linux lösen": In: C't Nr.2/2013, pp. 156-161 

  13. Martin Fischer: "4K-Rakete: Die schnellste Single-GPU-Grafikkarte der Welt". In C't 13/2015, pp. 60-61 

  14. http://www.heise.de/developer/meldung/Nvidia-CUDA-8-bringt-Optimierungen-fuer-die-Pascal-Architektur-3164254.html 

  15. Martin Fischer: "All In: Nvidia enthüllt die GPU-Architektur 'Pascal'". In: C't 9/2016, pp. 30-31 

  16. Martin Fischer: "Turbo-Pascal: High-End-Grafikkarte für Spieler: GeForce GTX 1080". In: C't 13/2016, pp. 100-103 

  17. http://www.golem.de/news/dgx-1-nvidias-supercomputerchen-mit-8x-tesla-p100-1604-120155.html 

20 July, 2016 05:13AM by Daniel Stender

July 19, 2016

hackergotchi for Michael Prokop

Michael Prokop

DebConf16 in Capetown/South Africa: Lessons learnt

DebConf 16 in Capetown/South Africa was fantastic for many reasons.

My Capetown/South Africa/Culture/Flight related lessons:

  • Avoid flying on Sundays (especially in/from Austria where plenty of hotlines are closed on Sundays or at least not open when you need them)
  • Actually turn back your seat on the flight when trying to sleep and not forget that this option exists *cough*
  • While UCT claims to take energy saving quite seriously (e.g. “turn off the lights” mentioned at many places around the campus), several toilets flush all their water, even when trying to do just small™ business and also two big lights in front of a main building seem to be shining all day long for no apparent reason
  • There doesn’t seem to be a standard for the side of hot vs. cold water-taps
  • Soap pieces and towels on several toilets
  • For pedestrians there’s just a very short time of green at the traffic lights (~2-3 seconds), then red blinking lights show that you can continue walking across the street (but *should* not start walking) until it’s fully red again (but not many people seem to care about the rules anyway :))
  • Warning lights of cars are used for saying thanks (compared to hand waving in e.g. Austria)
  • The 40km/h speed limit signs on the roads seem to be showing the recommended minimum speed :-)
  • There are many speed bumps on the roads
  • Geese quacking past 11:00 p.m. close to a sleeping room are something I’m also not used to :-)
  • Announced downtimes for the Internet connection are something I’m not used to
  • WLAN in the dorms of UCT as well as in any other place I went to at UCT worked excellent (measured ~22-26 Mbs downstream in my room, around 26Mbs in the hacklab) (kudos!)
  • WLAN is available even on top of the Table Mountain (WLAN working and being free without any registration)
  • Number26 credit card is great to withdraw money from ATMs without any extra fees from common credit card companies (except for the fee the ATM itself charges but displays ahead on-site anyway)
  • Splitwise is a nice way to share expenses on the road, especially with its mobile app and the money beaming using the Number26 mobile app

My technical lessons from DebConf16:

  • ran into way too many yak-shaving situations, some of them might warrant separate blog posts…
  • finally got my hands on gbp-pq (manage quilt patches on patch queue branches in git): very nice to be able to work with plain git and then get patches for your changes, also having upstream patches (like cherry-picks) inside debian/patches/ and the debian specific changes inside debian/patches/debian/ is a lovely idea, this can be easily achieved via “Gbp-Pq: Topic debian” with gbp’s pq and is used e.g. in pkg-systemd, thanks to Michael Biebl for the hint and helping hand
  • David Bremner’s gitpkg/git-debcherry is something to also be aware of (thanks for the reminder, gregoa)
  • autorevision: extracts revision metadata from your VCS repository (thanks to pabs)
  • blhc: build log hardening check
  • Guido’s gbp skills exchange session reminded me once again that I should use `gbp import-dsc –download $URL_TO_DSC` more often
  • sources.debian.net features specific copyright + patches sections (thanks, Matthieu Caneill)
  • dpkg-mergechangelogs(1) for 3-way merge of debian/changelog files (thanks, buxy)
  • meta-git from pkg-perl is always worth a closer look
  • ifupdown2 (its current version is also available in jessie-backports!) has some nice features, like `ifquery –running $interface` to get the life configuration of a network interface, json support (`ifquery –format=json …`) and makotemplates support to generate configuration for plenty of interfaces

BTW, thanks to the video team the recordings from the sessions are available online.

19 July, 2016 08:48PM by mika

hackergotchi for Joey Hess

Joey Hess

Re: Debugging over email

Lars wrote about the remote debugging problem.

I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.

I want, nay, need to improve this.

This is also something I've thought about on and off, that affects me most every day.

I've found that building the test suite into the program, such that users can run it at any time, is a great way to smoke out problems. If a user thinks they have problem A but the test suite explodes, or also turns up problems B C D, then I have much more than the user's problem report to go on. git annex test is a good example of this.

Asking users to provide a recipe to reproduce the bug is very helpful; I do it in the git-annex bug report template, and while not all users do, and users often provide a reproduction recipe that doesn't quite work, it's great in triage to be able to try a set of steps without thinking much and see if you can reproduce the bug. So I tend to look at such bug reports first, and solve them more quickly, which tends towards a virtuous cycle.

I've noticed that reams of debugging output, logs, test suite failures, etc can be useful once I'm well into tracking a problem down. But during triage, they make it harder to understand what the problem actually is. Information overload. Being able to reproduce the problem myself is far more valuable than this stuff.

I've noticed that once I am in a position to run some commands in the environment that has the problem, it seems to be much easier to solve it than when I'm trying to get the user to debug it remotely. This must be partly psychological?

Partly, I think that the feeling of being at a remove from the system, makes it harder to think of what to do. And then there are the times where the user pastes some output of running some commands and I mentally skip right over an important part of it. Because I didn't think to run one of the commands myself.

I wonder if it would be helpful to have a kind of ssh equivalent, where all commands get vetted by the remote user before being run on their system. (And the user can also see command output before it gets sent back, to NACK sending of personal information.) So, it looks and feels a lot like you're in a mosh session to the user's computer (which need not have a public IP or have an open ssh port at all), although one with a lot of lag and where rm -rf / doesn't go through.
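
Purely as a thought experiment, and not describing any existing tool, a toy sketch of the user-side approval loop might look something like this (send_back is a hypothetical placeholder for whatever channel would relay the output to the developer):

import subprocess

def send_back(output):
    # hypothetical transport: a real tool would relay this to the developer
    pass

def vetted_run(proposed_commands):
    """Toy approval loop: every command proposed by the developer must be
    approved locally, and the user can veto sending the output back."""
    for cmd in proposed_commands:
        if input("Developer wants to run %r -- allow? [y/N] " % cmd).lower() != "y":
            print("Skipped.")
            continue
        try:
            output = subprocess.check_output(cmd, shell=True,
                                             stderr=subprocess.STDOUT,
                                             universal_newlines=True)
        except subprocess.CalledProcessError as e:
            output = e.output
        print(output)
        if input("Send this output back? [y/N] ").lower() == "y":
            send_back(output)

vetted_run(["uname -a", "df -h"])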

19 July, 2016 04:57PM

hackergotchi for Lars Wirzenius

Lars Wirzenius

Debugging over email

I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.

I want, nay, need to improve this. I've been thinking about this for a while, and talking with friends about it, and here are my current ideas.

First idea: have a script that gathers as much information as possible, which the user can run. For example, log files, full configuration, full environment, etc. The user would then mail the output to me. The information will need to be anonymised suitably so that no actual secrets are leaked. This would be similar to Debian's package specific reportbug scripts.
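
A minimal sketch of such a gathering script (the file locations and the anonymisation rule here are invented; a real one would know its own program's config and log paths):

import json
import os
import platform
import re
import sys

def anonymise(text):
    # Example rule only: hide home directory names.
    return re.sub(r"/home/[^/\s]+", "/home/USER", text)

report = {
    "platform": platform.platform(),
    "python": sys.version,
    "environment": {k: anonymise(v) for k, v in os.environ.items()},
}
for path in ("~/.config/myprog/config.ini", "~/.cache/myprog/myprog.log"):
    full = os.path.expanduser(path)
    if os.path.exists(full):
        with open(full, errors="replace") as f:
            report[path] = anonymise(f.read())
print(json.dumps(report, indent=2))

The user runs it, skims the output for anything they consider private, and mails the rest.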

Second idea: make it less likely that the user needs help solving their issue, with better error messages. This would require error messages to have sufficient explanation that a user can solve their problem. That doesn't necessarily mean a lot of text, but also code that analyses the situation when the error happens to include things that are relevant for the problem resolving process, and giving error messages that are as specific as possible. Example: don't just fail saying "write error", but make the code find out why writing caused an error.
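
As a sketch of what "find out why" can mean in code (not taken from any particular program), a failed write can be inspected before the error is reported, so the message names the likely cause:

import errno
import os
import shutil

def explain_write_error(path, exc):
    # Turn a bare OSError from a failed write into a specific message.
    directory = os.path.dirname(path) or "."
    if exc.errno == errno.ENOSPC:
        free = shutil.disk_usage(directory).free
        return "cannot write %s: disk full (%d bytes free on %s)" % (path, free, directory)
    if exc.errno == errno.EACCES:
        return "cannot write %s: permission denied (check ownership of %s)" % (path, directory)
    if exc.errno == errno.ENOENT:
        return "cannot write %s: directory %s does not exist" % (path, directory)
    return "cannot write %s: %s" % (path, exc.strerror)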

Third idea: in addition to better error messages, the software might provide diagnostic tools as well.

A friend suggested having a script that sets up a known good set of operations and verifies they work. This would establish a known-working baseline, or smoke test, so that we can rule out things like "software isn't completely installed".
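
A baseline check of that kind could be as simple as the following sketch (the individual checks and the "myprog" name are placeholders; a real one would exercise the program's own basic operations):

import shutil
import subprocess
import sys
import tempfile

def smoke_test():
    checks = [
        ("binary on PATH", lambda: shutil.which("myprog") is not None),
        ("--version runs", lambda: subprocess.run(["myprog", "--version"],
                                                  capture_output=True).returncode == 0),
        ("temp dir writable", lambda: bool(tempfile.mkdtemp())),
    ]
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            ok = False
        print("%-20s %s" % (name, "ok" if ok else "FAILED"))
        if not ok:
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(smoke_test())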

Do you have ideas? Mail me (liw@liw.fi) or tell me on identi.ca (@liw) or Twitter (@larswirzenius).

19 July, 2016 03:08PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 0.12.6: Rolling on

The sixth update in the 0.12.* series of Rcpp has arrived on the CRAN network for GNU R a few hours ago, and was just pushed to Debian. This 0.12.6 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, and the 0.12.5 release in May --- making it the tenth release at the steady bi-monthly release frequency. Just like the previous release, this one is once again more of a refining maintenance release which addresses small bugs, nuisances or documentation issues without adding any major new features. That said, some nice features (such as caching support for sourceCpp() and friends) were added.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 703 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by about forty packages from the last release in May!

Similar to the previous releases, we have contributions from first-time committers. Artem Klevtsov made na_omit run faster on vectors without NA values. Otherwise, we had many contributions from "regulars" like Kirill Mueller, James "coatless" Balamuta and Dan Dillon as well as from fellow Rcpp Core contributors. Some noteworthy highlights are encoding and string fixes, generally more robust builds, a new iterator-based approach for vectorized programming, the aforementioned caching for sourceCpp(), and several documentation enhancements. More details are below.

Changes in Rcpp version 0.12.6 (2016-07-18)

  • Changes in Rcpp API:

    • The long long data type is used only if it is available, to avoid compiler warnings (Kirill Müller in #488).

    • The compiler is made aware that stop() never returns, to improve code path analysis (Kirill Müller in #487 addressing issue #486).

    • String replacement was corrected (Qiang in #479 following mailing list bug report by Masaki Tsuda)

    • Allow for UTF-8 encoding in error messages via RCPP_USING_UTF8_ERROR_STRING macro (Qin Wenfeng in #493)

    • The R function Rf_warningcall is now provided as well (as usual without leading Rf_) (#497 fixing #495)

  • Changes in Rcpp Sugar:

    • Const-ness of min and max functions has been corrected. (Dan Dillon in PR #478 fixing issue #477).

    • Ambiguities for matrix/vector and scalar operations have been fixed (Dan Dillon in PR #476 fixing issue #475).

    • New algorithm header using iterator-based approach for vectorized functions (Dan in PR #481 revisiting PR #428 and addressing issue #426, with further work by Kirill in PR #488 and Nathan in #503 fixing issue #502).

    • The na_omit() function is now faster for vectors without NA values (Artem Klevtsov in PR #492)

  • Changes in Rcpp Attributes:

    • Add cacheDir argument to sourceCpp() to enable caching of shared libraries across R sessions (JJ in #504).

    • Code generation now deals correctly with packages containing a dot in their name (Qiang in #501 fixing #500).

  • Changes in Rcpp Documentation:

    • A section on default parameters was added to the Rcpp FAQ vignette (James Balamuta in #505 fixing #418).

    • The Rcpp-attributes vignette is now mentioned more prominently in question one of the Rcpp FAQ vignette.

    • The Rcpp Quick Reference vignette received a facelift with new sections on Rcpp attributes and plugins being added. (James Balamuta in #509 fixing #484).

    • The bib file was updated with respect to the recent JSS publication for RProtoBuf.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 July, 2016 12:49PM

hackergotchi for Chris Lamb

Chris Lamb

Python quirk: os.stat's return type

import os
import stat

st = os.stat('/etc/fstab')

# __getitem__
x = st[stat.ST_MTIME]
print((x, type(x)))

# __getattr__
x = st.st_mtime
print((x, type(x)))
(1441565864, <class 'int'>)
(1441565864.3485234, <class 'float'>)

19 July, 2016 10:20AM

July 18, 2016

John Goerzen

Building a home firewall: review of pfsense

For some time now, I’ve been running OpenWRT on an RT-N66U device. I initially set that up because I had previously been using my Debian-based file/VM server as a firewall, and this had some downsides: every time I wanted to reboot that, Internet for the whole house was down; shorewall took a fair bit of care and feeding; etc.

I’ve been having indications that all is not well with OpenWRT or the N66U in the last few days, and some long-term annoyances prompted me to search out a different solution. I figured I could buy an embedded x86 device, slap Debian on it, and be set.

The device I wound up purchasing happened to have pfsense preinstalled, so I thought I’d give it a try.

As expected, with hardware like that to work with, it was a lot more capable than OpenWRT and had more features. However, I encountered a number of surprising issues.

The biggest annoyance was that the system wouldn’t allow me to set up a static DHCP entry with the same IP for multiple MAC addresses. This is a very simple configuration in the underlying DHCP server, and OpenWRT permitted it without issue. It is quite useful so my laptop has the same IP whether connected by wifi or Ethernet, and I have used it for years with no issue. Googling it a bit turned up some rather arrogant pfsense people saying that this is “broken” and poor design, and that your wired and wireless networks should be on different VLANs anyhow. They also said “just give it the same hostname for the different IPs” — but it rejects this too. Sigh. I discovered, however, that downloading the pfsense backup XML file, editing the IP within, and re-uploading it gets me what I want with no ill effects!

So then I went to set up DNS. I tried to enable the “DNS Forwarder”, but it wouldn’t let me do that while the “DNS Resolver” was still active. Digging in just a bit, it appears that the DNS Forwarder and DNS Resolver both provide forwarding and resolution features; they just have different underlying implementations. This is not clear at all in the interface.

Next stop: traffic shaping. Since I use VOIP for work, this is vitally important for me. I dove in, and found a list of XML filenames for wizards: one for “Dedicated Links” and another for “Multiple Lan/Wan”. Hmmm. Some Googling again turned up that everyone suggests using the “Multiple Lan/Wan” wizard. Fine. I set it up, and notice that when I start an upload, my download performance absolutely tanks. Some investigation shows that outbound ACKs aren’t being handled properly. The wizard had created a qACK queue, but neglected to create a packet match rule for it, so ACKs were not being dealt with appropriately. Fixed that with a rule of my own design, and now downloads are working better again. I also needed to boost the bandwidth allocated to qACK (setting it to 25% seemed to do the trick).

Then there were the firewall rules. The “interface” section is first-match-wins, whereas the “floating” section is last-match-wins. This is rather non-obvious.

Getting past all the interface glitches, however, the system looks powerful, solid, and well-engineered under the hood, and fairly easy to manage.

18 July, 2016 09:34PM by John Goerzen

Reproducible builds folks

Preparing for the second release of reprotest

Author: ceridwen

I now have working test environments set up for null (no container, build on the host system), schroot, and qemu. After fixing some bugs, null and qemu now pass all their tests!

schroot still has a permission error related to disorderfs. Since the same code works for null and qemu and for schroot when disorderfs is disabled, it's something specific to disorderfs and/or its combination with schroot. The following is debug output that shows ls for the build directory on the testbed before and after the mock build, and stat for both the build directory and the mock build artifact itself. The first control run, without disorderfs, succeeds:

test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['sh', '-ec', 'cd /tmp/autopkgtest.5oMipL/control/ ;\n python3 mock_build.py ;\n'], kind short, sout raw, serr pipe, env ['LANG=en_US.UTF-8', 'HOME=/nonexistent/first-build', 'VIRTUAL_ENV=~/code/reprotest/.tox/py35', 'PATH=~/code/reprotest/.tox/py35/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'PYTHONHASHSEED=559200286', 'TZ=GMT+12']
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rw-r--r-- 1 root root    0 Jul 18 15:06 artifact
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/control/'
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: 56h/86d Inode: 1351634     Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1000/    user)   Gid: ( 1000/    user)
Access: 2016-07-18 15:06:31.105915342 -0400
Modify: 2016-07-18 15:06:31.089915352 -0400
Change: 2016-07-18 15:06:31.089915352 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/control/artifact'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/control/artifact'
  Size: 0           Blocks: 0          IO Block: 4096   regular empty file
Device: fc01h/64513d    Inode: 40767795    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2016-07-18 15:06:31.089915352 -0400
Modify: 2016-07-18 15:06:31.089915352 -0400
Change: 2016-07-18 15:06:31.089915352 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: sending command to testbed: copyup /tmp/autopkgtest.5oMipL/control/artifact /tmp/tmpw_mwks82/control_artifact
schroot: DBG: executing copyup /tmp/autopkgtest.5oMipL/control/artifact /tmp/tmpw_mwks82/control_artifact
schroot: DBG: copyup_shareddir: tb /tmp/autopkgtest.5oMipL/control/artifact host /tmp/tmpw_mwks82/control_artifact is_dir False downtmp_host /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52//tmp/autopkgtest.5oMipL
schroot: DBG: copyup_shareddir: tb(host) /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/control/artifact is not already at destination /tmp/tmpw_mwks82/control_artifact, copying
test.py: DBG: got reply from testbed: ok

That last bit indicates that the copy command for the build artifact from the testbed to a temporary directory on the host succeeded. This is the debug output from the second run, with disorderfs enabled:

test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['sh', '-ec', 'cd /tmp/autopkgtest.5oMipL/disorderfs/ ;\n umask 0002 ;\n linux64 --uname-2.6 python3 mock_build.py ;\n'], kind short, sout raw, serr pipe, env ['LC_ALL=fr_CH.UTF-8', 'CAPTURE_ENVIRONMENT=i_capture_the_environment', 'HOME=/nonexistent/second-build', 'VIRTUAL_ENV=~/code/reprotest/.tox/py35', 'PATH=~/code/reprotest/.tox/py35/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/i_capture_the_path', 'LANG=fr_CH.UTF-8', 'PYTHONHASHSEED=559200286', 'TZ=GMT-14']
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rw-r--r-- 1 root root    0 Jul 18 15:06 artifact
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/disorderfs/'
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: 58h/88d Inode: 1           Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1000/    user)   Gid: ( 1000/    user)
Access: 2016-07-18 15:06:31.201915291 -0400
Modify: 2016-07-18 15:06:31.185915299 -0400
Change: 2016-07-18 15:06:31.185915299 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/disorderfs/artifact'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/disorderfs/artifact'
  Size: 0           Blocks: 0          IO Block: 4096   regular empty file
Device: 58h/88d Inode: 7           Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2016-07-18 15:06:31.185915299 -0400
Modify: 2016-07-18 15:06:31.185915299 -0400
Change: 2016-07-18 15:06:31.185915299 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: sending command to testbed: copyup /tmp/autopkgtest.5oMipL/disorderfs/artifact /tmp/tmpw_mwks82/experiment_artifact
schroot: DBG: executing copyup /tmp/autopkgtest.5oMipL/disorderfs/artifact /tmp/tmpw_mwks82/experiment_artifact
schroot: DBG: copyup_shareddir: tb /tmp/autopkgtest.5oMipL/disorderfs/artifact host /tmp/tmpw_mwks82/experiment_artifact is_dir False downtmp_host /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52//tmp/autopkgtest.5oMipL
schroot: DBG: copyup_shareddir: tb(host) /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/disorderfs/artifact is not already at destination /tmp/tmpw_mwks82/experiment_artifact, copying
schroot: DBG: cleanup...
schroot: DBG: execute-timeout: schroot --run-session --quiet --directory=/ --chroot jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52 --user=root -- rm -rf -- /tmp/autopkgtest.5oMipL
rm: cannot remove '/tmp/autopkgtest.5oMipL/disorderfs': Device or resource busy
schroot: DBG: execute-timeout: schroot --quiet --end-session --chroot jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52
Unexpected error:
Traceback (most recent call last):
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 708, in mainloop
    command()
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 646, in command
    r = f(c, ce)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 584, in cmd_copyup
    copyupdown(c, ce, True)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 469, in copyupdown
    copyupdown_internal(ce[0], c[1:], upp)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 494, in copyupdown_internal
    copyup_shareddir(sd[0], sd[1], dirsp, downtmp_host)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 408, in copyup_shareddir
    shutil.copy(tb, host)
  File "/usr/lib/python3.5/shutil.py", line 235, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/lib/python3.5/shutil.py", line 114, in copyfile
    with open(src, 'rb') as fsrc:
PermissionError: [Errno 13] Permission denied: '/var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/disorderfs/artifact'

ls shows that the artifact is created in the right place. However, when reprotest tries to copy it from the testbed to the host, it gets a permission error. The traceback is coming from virt/schroot, and it's a Python open() call that's failing. Note that the permissions are wrong for the second run, but that's expected because my schroot is stable, so the umask bug isn't fixed yet; and that the rm error comes from disorderfs not being unmounted early enough (see below). I expect to see the umask test fail, though, not a crash in every test where the build succeeds.

After a great deal of effort, I traced the bug that was causing the process to hang not to my code or autopkgtest's code, but to CPython and contextlib. It's supposed to be fixed in CPython 3.5.3, but for now I've worked around the problem by monkey-patching the patch provided in the latter issue onto contextlib.
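
For the record, the monkey-patching technique itself is only a few lines; this is just its general shape, not the actual fix from the CPython issue (the patched body here simply delegates to the original):

import contextlib

_original_exit = contextlib.ExitStack.__exit__

def _patched_exit(self, *exc_details):
    # A corrected implementation backported from the upstream patch would
    # go here; this sketch only delegates to the original behaviour.
    return _original_exit(self, *exc_details)

contextlib.ExitStack.__exit__ = _patched_exit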

Here is my current to-do list:

  • Fix PyPi not installing the virt/ scripts correctly.

  • Move the disorderfs unmount into the shell script. (When the virt/ scripts encounter an error, they try to delete a temporary directory, which fails if disorderfs is mounted, so the script needs to unmount it before that happens.)

  • Find and fix the schroot/disorderfs permission error bug.

  • Convert my notes on setting up for the tests into something useful for users.

  • Write scripts to synch version numbers and documentation.

  • Fix the headers in the autopkgtest code to conform to reprotest style.

  • Add copyright information for the contextlib monkey-patch and the autopkgtest files I've changed.

  • Close #829113 as wontfix.

And here are the questions I'd like to resolve before the second release:

  • Is there any other documentation that's essential? Finishing the documentation will come later.

  • Should I release before finishing the rest of the variation? This will slow down the release of the first version with something resembling full functionality.

  • Do I need to write a chroot test now? Given the duplication with schroot, I'm unconvinced this is worthwhile.

18 July, 2016 03:51PM

July 17, 2016

hackergotchi for Lisandro Damián Nicanor Pérez Meyer

Lisandro Damián Nicanor Pérez Meyer

KDEPIM ready to be more broadly tested

As was posted a couple of weeks ago, the latest version of KDEPIM has been uploaded to unstable.

All packages are now uploaded and built and we believe this version is ready to be more broadly tested.

If you run unstable but have refrained from installing the kdepim packages up to now, we would appreciate it if you go ahead and install them now, reporting any issues that you may find.

Given that this is a big update that includes quite a number of plugins and libraries, it's strongly recommended that you restart your KDE session after updating the packages.

Happy hacking,

The Debian Qt/KDE Team.

Note Mon Jul 18 08:58:53 ART 2016: Link fixed and s/KDE/KDEPIM/.

17 July, 2016 11:15PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)

Iustin Pop

Energy bar restored!

So, I've been sick. Quite sick, as for the past ~2 weeks I wasn't able to bike, run, work or do much besides watch movies, look at photos and play some light games (ARPGs rule in this case, all you need to do is keep the left mouse button pressed).

It was supposed to be only a light viral infection, but it took longer to clear out than I expected, probably due to it happening right after my dental procedure (and possibly me wanting to restart exercise too soon, too fast). Not fun, it felt like the thing that refills your energy/mana bar in games broke. I simply didn't feel restored, despite sleeping a lot; 2-3 naps per day sound good as long as they are restorative; if they're not, sleeping is just a chore.

The funny thing is that recovery happened so slowly that when I finally had energy it took me by surprise. It was like “oh, wait, I can actually stand and walk without feeling dizzy! Wohoo!” As such, yesterday was a glorious Saturday ☺

I was therefore able to walk a bit outside the house this weekend and feel like having a normal cold, not like being under a “cursed: -4 vitality” spell. I expect the final symptoms to clear out soon, and that I can very slowly start doing some light exercise again. Not tomorrow, though…

In the meantime, I'm sharing a picture from earlier this year that I found while looking through my stash. Was walking in the forest in Pontresina on a beautiful sunny day, when a sudden gust of wind caused a lot of the snow on the trees to fly around and make it look a bit magical (photo is unprocessed besides conversion from raw to jpeg, this is how it was straight out of the camera):

Winter in the forest

Why a winter photo? Because that's exactly how cold I felt the previous weekend: 30°C outside, but I was going to the doctor in jeans and hoodie and cap, shivering…

17 July, 2016 09:32PM

Michael Stapelberg

mergebot: easily merging contributions

Recently, I was wondering why I was pushing off accepting contributions in Debian for longer than in other projects. It occurred to me that the effort to accept a contribution in Debian is way higher than in other FOSS projects. My remaining FOSS projects are on GitHub, where I can just click the “Merge” button after deciding a contribution looks good. In Debian, merging is actually a lot of work: I need to clone the repository, configure it, merge the patch, update the changelog, build and upload.

I wondered how close we can bring Debian to a model where accepting a contribution is just a single click as well. In principle, I think it can be done.

To demonstrate the feasibility and collect some feedback, I wrote a program called mergebot. The first stage is done: mergebot can be used on your local machine as a command-line tool. You provide it with the source package and bug number which contains the patch in question, and it will do the rest:

midna ~ $ mergebot -source_package=wit -bug=#831331
2016/07/17 12:06:06 will work on package "wit", bug "831331"
2016/07/17 12:06:07 Skipping MIME part with invalid Content-Disposition header (mime: no media type)
2016/07/17 12:06:07 gbp clone --pristine-tar git+ssh://git.debian.org/git/collab-maint/wit.git /tmp/mergebot-743062986/repo
2016/07/17 12:06:09 git config push.default matching
2016/07/17 12:06:09 git config --add remote.origin.push +refs/heads/*:refs/heads/*
2016/07/17 12:06:09 git config --add remote.origin.push +refs/tags/*:refs/tags/*
2016/07/17 12:06:09 git config user.email stapelberg AT debian DOT org
2016/07/17 12:06:09 patch -p1 -i ../latest.patch
2016/07/17 12:06:09 git add .
2016/07/17 12:06:09 git commit -a --author Chris Lamb <lamby AT debian DOT org> --message Fix for “wit: please make the build reproducible” (Closes: #831331)
2016/07/17 12:06:09 gbp dch --release --git-author --commit
2016/07/17 12:06:09 gbp buildpackage --git-tag --git-export-dir=../export --git-builder=sbuild -v -As --dist=unstable
2016/07/17 12:07:16 Merge and build successful!
2016/07/17 12:07:16 Please introspect the resulting Debian package and git repository, then push and upload:
2016/07/17 12:07:16 cd "/tmp/mergebot-743062986"
2016/07/17 12:07:16 (cd repo && git push)
2016/07/17 12:07:16 (cd export && debsign *.changes && dput *.changes)

midna ~ $ cd /tmp/mergebot-743062986/repo
midna /tmp/mergebot-743062986/repo $ git log HEAD~2..
commit d983d242ee546b2249a866afe664bac002a06859
Author: Michael Stapelberg <stapelberg AT debian DOT org>
Date:   Sun Jul 17 13:32:41 2016 +0200

    Update changelog for 2.31a-3 release

commit 5a327f5d66e924afc656ad71d3bfb242a9bd6ddc
Author: Chris Lamb <lamby AT debian DOT org>
Date:   Sun Jul 17 13:32:41 2016 +0200

    Fix for “wit: please make the build reproducible” (Closes: #831331)
midna /tmp/mergebot-743062986/repo $ git push
Counting objects: 11, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (11/11), 1.59 KiB | 0 bytes/s, done.
Total 11 (delta 6), reused 0 (delta 0)
remote: Sending notification emails to: dispatch+wit_vcs@tracker.debian.org
remote: Sending notification emails to: dispatch+wit_vcs@tracker.debian.org
To git+ssh://git.debian.org/git/collab-maint/wit.git
   650ee05..d983d24  master -> master
 * [new tag]         debian/2.31a-3 -> debian/2.31a-3
midna /tmp/mergebot-743062986/repo $ cd ../export
midna /tmp/mergebot-743062986/export $ debsign *.changes && dput *.changes
[…]
Uploading wit_2.31a-3.dsc
Uploading wit_2.31a-3.debian.tar.xz
Uploading wit_2.31a-3_amd64.deb
Uploading wit_2.31a-3_amd64.changes

Of course, this is not quite as convenient as clicking a “Merge” button yet. I have some ideas on how to make that happen, but I need to know whether people are interested before I spend more time on this.

Please see github.com/Debian/mergebot for more details, and please get in touch if you think this is worthwhile or would even like to help. Feedback is accepted in the GitHub issue tracker for mergebot or the project mailing list mergebot-discuss. Thanks!

17 July, 2016 12:00PM

hackergotchi for Vasudev Kamath

Vasudev Kamath

Switching from approx to apt-cacher-ng

After a long ~5-year journey with approx (from 2011), I finally wanted to switch to something new like apt-cacher-ng. And after a few changes I managed to get apt-cacher-ng into my work flow.

Bit of History

I should first give you a brief on how I started using approx. It all started at MiniDebconf 2011, which I organized at my alma mater. I met Jonas Smedegaard there and from him I learned about approx. Jonas has a bunch of machines at his home, and he was an active user of approx; he showed it to me while explaining the Boxer project. I was quite impressed with approx. Back then I was using a slow 230kbps INTERNET connection and I was also maintaining a couple of packages in Debian. Updating the pbuilder chroots was a time-consuming task for me, as I had to download packages multiple times over the slow net. Approx largely solved this problem and I started using it.

Fast forward 5 years: I now have quite fast INTERNET with a good FUP (about 50GB a month), but I still tend to use approx, which makes building packages quite a bit faster. I also use a couple of containers on my laptop which all use my laptop as an approx cache.

Why switch?

So why change to apt-cacher-ng? Approx is a simple tool: it runs mainly with inetd and sits between apt and the repository on the INTERNET. Apt-cacher-ng, on the other hand, provides a lot of features. Below are some, listed from the apt-cacher-ng manual.

  • use of TLS/SSL repositories (may be possible with approx but I'm not sure how to do it)
  • Access control of who can access caching server
  • Integration with debdelta (I've not tried, approx also supports debdelta)
  • Avoiding use of apt-cacher-ng for some hosts
  • Avoiding caching of some file types
  • Partial mirroring for offline usage.
  • Selection of ipv4 or ipv6 for connections.

The biggest change I see is the speed difference between approx and apt-cacher-ng. I think this is mainly because apt-cacher-ng is threaded, whereas approx runs using inetd.

I do not want all features of apt-cacher-ng at the moment, but who knows in future I might need some features and hence I decided to switch to apt-cacher-ng over approx.

Transition

Transition from approx to apt-cacher-ng was smoother than I expected. There are 2 approaches you can use: one is explicit routing, the other is transparent routing. I prefer transparent routing and I only had to change my /etc/apt/sources.list to use the actual repository URL.

deb http://deb.debian.org/debian unstable main contrib non-free
deb-src http://deb.debian.org/debian unstable main

deb http://deb.debian.org/debian experimental main contrib non-free
deb-src http://deb.debian.org/debian experimental main

After the above change I had to add a 01proxy configuration file to /etc/apt/apt.conf.d/ with the following content.

Acquire::http::Proxy "http://localhost:3142/";

I use explicit routing only when using apt-cacher-ng with pbuilder and debootstrap. Following snippet shows explicit routing through /etc/apt/sources.list.

deb http://localhost:3142/deb.debian.org/debian unstable main

Usage with pbuilder and friends

To use apt-cacher-ng with pbuilder you need to modify /etc/pbuilderrc to contain the following line

MIRRORSITE=http://localhost:3142/deb.debian.org/debian

Usage with debootstrap

To use apt-cacher-ng with debootstrap, pass the MIRROR argument of debootstrap as http://localhost:3142/deb.debian.org/debian.

Conclusion

I've now completed the full transition of my work flow to apt-cacher-ng and purged approx and its cache.

Though it works fine, I feel that 2 caches will be created when you use both transparent and explicit routing via the localhost:3142 URL. I'm sure it is possible to configure this to avoid duplication, but I've not yet figured out how. If you know how to fix this, do let me know.

Update

Jonas told me that it's not 2 caches but 2 routing paths, one for transparent routing and another for explicit routing. So I guess there is nothing here to fix :-).

17 July, 2016 11:00AM by copyninja

hackergotchi for Neil Williams

Neil Williams

Deprecating dpkg-cross

Deprecating the dpkg-cross binary

After a discussion in the cross-toolchain BoF at DebConf16, the gross hack which is packaged as the dpkg-cross binary package, and its supporting perl module, have finally been deprecated, long after multiarch was actually delivered. Various reasons have complicated the final steps for dpkg-cross, and there remains one use for some of the files within the package, although not for the dpkg-cross binary itself.

2.6.14 has now been uploaded to unstable and introduces a new binary package, cross-config, so it will spend some time in NEW. The changes are summarised in the NEWS entry for 2.6.14.

The cross architecture configuration files have moved to the new cross-config package and the older dpkg-cross binary with supporting perl module are now deprecated. Future uploads will only include the cross-config package.

Use cross-config to retain support for autotools and CMake cross-building configuration.

If you use the deprecated dpkg-cross binary, now is the time to migrate away from these path changes. The dpkg-cross binary and the supporting perl module should NOT be expected to be part of Debian by the time of the Stretch release.

2.6.14 also marks the end of my involvement with dpkg-cross. The Uploaders list has been shortened but I'm still listed to be able to get 2.6.14 into NEW. A future release will drop the perl module and the dpkg-cross binary, retaining just the new cross-config package.

17 July, 2016 10:30AM by Neil Williams

Valerie Young

Work after DebConf

First week after DebCamp and DebConf! Both were incredible — the Debian project and its contributors never fail to impress and delight me. Nonetheless it felt great to have a few quiet, peaceful days of uninterrupted programming.

Notes about last week:

1. Finished Mattia’s final suggestions for the conversion of the package set pages script to python.

Hopefully it will be deployed soon, awaiting final approval 🙂

2. Replaced the bash code that produced the left navigation on the home page (and most other pages) with the mustache template the Python scripts use.

Previously, HTML was constructed and spat out from both a Python script and a shell script — now we have a single, DRY mustache template. (At the top of the bash function that produced the navigation HTML, you will find the comment: “this is really quite incomprehensible and should be killed, the solution is to write all html pages with python…”. Turns out the intermediate solution is to use templates 😉 )

3. Thought hard about navigation of the test website, and redesigned (by rearranging) links in the left hand navigation.

After code review, you will see these changes as well! Things to look forward to include:
– A link to the Debian dashboard on the top left of every page (except the package specific pages).
– The title of each page (except the package pages) stretches across the whole page (instead of being squashed into the top left).
– Hover text has been added to most links in the left navigation.
– Links in left navigation have been reordered, and headers added.

Once you see the changes, please let me know if you think anything is unintuitive or confusing; everything can be easily changed!

4. Cross suite and architecture navigation enabled for most pages.

For most pages, you will be one click away from seeing the same statistics for a different suite or architecture! Whoo!

Notes about next week:

Last week I got carried away imagining minor improvements that can be made to the test website’s UI, and I now have a backlog of ideas I’d like to implement. I’ve begun editing the script that makes most of the pages with statistics or package lists (for example, all packages with notes, or all recently tested packages) to use templates and contain a bit more descriptive text. I’d also like to do some revamping of the package set pages I converted.

These additional UI changes will be my first tasks for the coming week — since they are fresh on my mind and I’m quite excited about them. The following week I’d like to get back to the extensibility and database issues mentioned previously!

17 July, 2016 01:42AM by spectranaut

July 16, 2016

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

The Open Source License API

Around a year ago, I started hacking together a machine readable version of the OSI approved licenses list, and casually picking parts up until it was ready to launch. A few weeks ago, we officially announced the osi license api, which is now live at api.opensource.org.

I also took a whack at writing a few API bindings, in Python, Ruby, and using the models from the API implementation itself in Go. In the following few weeks, Clint wrote one in Haskell, Eriol wrote one in Rust, and Oliver wrote one in R.

The data is sourced from a repo on GitHub, the licenses repo under OpenSourceOrg. Pull Requests against that repo are wildly encouraged! Additional data ideas, cleanup or more hand collected data would be wonderful!

In the meantime, use-cases for using this API range from language package managers pulling OSI approval of a licence programmatically to using a license identifier as defined in one dataset (SPDX, for example), and using that to find the identifier as it exists in another system (DEP5, Wikipedia, TL;DR Legal).
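
As a quick sketch of the latter use case (the /license/<id> endpoint path and the response fields here are assumptions based on this announcement rather than a published spec; the language bindings above hide these details for you):

import requests

resp = requests.get("https://api.opensource.org/license/MIT")
resp.raise_for_status()
license_data = resp.json()
print(license_data.get("name"))
for ident in license_data.get("identifiers", []):
    # e.g. map an SPDX identifier to how other schemes know the same license
    print(ident.get("scheme"), ident.get("identifier"))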

Patches are hugely welcome, as are bug reports or ideas! I'd also love more API wrappers for other languages!

16 July, 2016 07:30PM by Paul Tagliamonte

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, June 2016

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 158.25 work hours have been dispatched among 11 paid contributors. Their reports are available:

DebConf 16 Presentation

If you want to know more about how the LTS project is organized, you can watch the presentation I gave during DebConf 16 in Cape Town.

Evolution of the situation

The number of sponsored hours increased a little bit to 135 hours per month thanks to 3 new sponsors (Laboratoire LEGI – UMR 5519 / CNRS, Quarantainenet BV, GNI MEDIA). Our funding goal is getting closer but it’s not there yet.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file lists 38 packages awaiting an update.

Thanks to our sponsors

New sponsors are in bold.

16 July, 2016 06:31AM by Raphaël Hertzog

July 15, 2016

hackergotchi for Lars Wirzenius

Lars Wirzenius

Two-factor auth for local logins in Debian using U2F keys

Warning: This blog post includes instructions for a procedure that can lead you to lock yourself out of your computer. Even if everything goes well, you'll be hunted by dragons. Keep backups, have a rescue system on a USB stick, and wear flameproof clothing. Also, have fun, and tell your loved ones you love them.

I've recently gotten two U2F keys. U2F is an open standard for authentication using hardware tokens. It's probably mostly meant for website logins, but I wanted to have it for local logins on my laptop running Debian. (I also offer a line of stylish aluminium foil hats.)

Having two-factor authentication (2FA) for local logins improves security if you need to log in (or unlock a screen lock) in a public or potentially hostile place, such as a cafe, a train, or a meeting room at a client. If they have video cameras, they can film you typing your password, and get the password that way.

If you set up 2FA using a hardware token, your enemies will also need to lure you into a cave, where a dragon will use a precision flame to incinerate you in a way that leaves the U2F key intact, after which your enemies steal the key, log into your laptop and leak your cat GIF collection.

Looking up information for how to set this up, I found a blog post by Sean Brewer, for Ubuntu 14.04. That got me started. Here's what I understand:

  • PAM is the technology in Debian for handling authentication for logins and similar things. It has a plugin architecture.

  • Yubico (maker of Yubikeys) have written a PAM plugin for U2F. It is packaged in Debian as libpam-u2f. The package includes documentation in /usr/share/doc/libpam-u2f/README.gz.

  • By configuring PAM to use libpam-u2f, you can require both password and the hardware token for logging into your machine.

Here are the detailed steps for Debian stretch, with minute differences from those for Ubuntu 14.04. If you follow these, and lock yourself out of your system, it wasn't my fault, you can't blame me, and look, squirrels! Also not my fault if you don't wear sufficient protection against dragons.

  1. Install pamu2fcfg and libpam-u2f.
  2. As your normal user, mkdir ~/.config/Yubico. The list of allowed U2F keys will be put there.
  3. Insert your U2F key and run pamu2fcfg -u$USER > ~/.config/Yubico/u2f_keys, and press the button on your U2F key when the key is blinking.
  4. Edit /etc/pam.d/common-auth and append the line auth required pam_u2f.so cue.
  5. Reboot (or at least log out and back in again).
  6. Log in, type in your password, and when prompted and the U2F key is blinking, press its button to complete the login.

pamu2fcfg reads the hardware token and writes out its identifying data in a form that the PAM module understands; see the pam-u2f documentation for details. The data can be stored in the user's home directory (my preference) or in /etc/u2f_mappings.

Once this is set up, anything that uses PAM for local authentication (console login, GUI login, sudo, desktop screen lock) will need to use the U2F key as well. ssh logins won't.

Next, add a second key to your u2f_keys. This is important, because if you lose your first key, or it's damaged, you'll otherwise have no way to log in.

  1. Insert your second U2F key and run pamu2fcfg -n > second, and press the second key's button when prompted.
  2. Edit ~/.config/Yubico/u2f_keys and append the contents of the file second to the line with your username.
  3. Verify that you can log in using your second key as well as the first key. Note that you should have only one of the keys plugged in at the same time when logging in: the PAM module wants the first key it finds so you can't test both keys plugged in at once.

This is not too difficult, but rather fiddly, and it'd be nice if someone wrote at least a tool to manage the list of U2F keys in a nicer way.

15 July, 2016 11:19AM