August 02, 2015

Carl Chenet

My Free activities in July 2015

Follow me on Twitter or Diaspora*.

Here are the details of my Free activities in July 2015.

Carl Chenet’s projects:


  • – pull request to add Carl Chenet’s blog – #5
    • Carl Chenet’s blog is now on an aggregator for French-speaking sysadmins. You should give it a try!

Debian bug reports:

  • – Manual page for the docker-compose executable is missing – #792518
  • – New Docker version 1.7.1 available – #793483
  • Backupchecker – now available in Debian Stretch (Testing) – Backupchecker migration log report

Other bug reports:

  • Docker-compose – A manual page for docker-compose – #1727
  • feedDiasp – Failed to login: ‘NoneType’ object has no attribute ‘group’ – #6
  • rss-bot-diasp – feedDiasp.feedDiasp.Diasp.LoginException: ‘NoneType’ object has no attribute ‘group’ – #1

Feature requests:

  • Weboob – SFR mobile phone Invoices in sfr module – #2045

02 August, 2015 10:00PM by Carl Chenet

John Goerzen

The Time Machine of Durango

“The airplane may be the closest thing we have to a time machine.”

– Brian J. Terwilliger


There is something about that moment. Hiking in the mountains near Durango, Colorado, with Laura and the boys, we found a beautiful spot with a view of the valley. We paused to admire, and then –

The sound of a steam locomotive whistle from down below, sounding loud all the way up there, then echoing back and forth through the valley. Then the quieter, seemingly more distant sound of the steam engine heading across the valley, chugging and clacking as it goes. More whistles, the sight of smoke and then of the train full of people, looking like a beautiful model train from our vantage point.


I’ve heard that sound on a few rare recordings, but never experienced it. I’ve been on steam trains a few times, but never spent time in a town where they still run all day, every day. It is a different sort of feeling to spend a week in a place where Jacob and Oliver would jump up several times a day and rush to the nearest window in an attempt to catch sight of the train.


Airplanes really can be a time machine in a sense — what a wondrous time to be alive, when things so ancient are within the reach of so many. I have been transported to Lübeck and felt the uneven 700-year-old stones of the Marienkirche underneath my feet, feeling a connection to the people that walked those floors for centuries. I felt the same in Prague, in St. George’s Basilica, built in 1142, and at the Acropolis of Lindos, with its ancient Greek temple ruins. In Kansas, I feel that when in the middle of the Flint Hills — rolling green hills underneath the pure blue sky with billowing white clouds, the sounds of crickets, frogs, and cicadas in my ears; the sights and sounds are pretty much as they’ve been for tens of thousands of years. And, of course, in Durango, arriving on a plane but seeing the steam train a few minutes later.


It was fitting that we were in Durango with Laura’s parents to celebrate their 50th anniversary. As we looked forward to riding the train, we heard their stories of visits to Durango years ago, of their memories of days when steam trains were common. We enjoyed thinking about what our lives would be like should we live long enough to celebrate 50 years of marriage. Perhaps we would still be in good enough health to be able to ride a steam train in Durango, telling about that time when we rode the train, which by then will have been pretty much the same for 183 years. Or perhaps we would take them to our creek, enjoying a meal at the campfire like I’ve done since I was a child.

Each time has its unique character. I am grateful for the cameras and airplanes and air conditioning we have today. But I am also thankful for those things that connect us with each other through time, those rocks that are the same every year, those places that remind us how close we really are to those that came before.

02 August, 2015 08:15PM by John Goerzen

Benjamin Mako Hill

Understanding Hydroplane Races for the New Seattleite

It’s Seafair weekend in Seattle. As always, the centerpiece is the H1 Unlimited hydroplane races on Lake Washington.

In my social circle, I’m nearly the only person I know who grew up in the area. None of the newcomers I know had heard of hydroplane racing before moving to Seattle. Even after I explain it to them — i.e., boats with 3,000+ horsepower airplane engines that fly just above the water at more than 320kph (200mph) leaving 10m+ (30ft) wakes behind them! — most people seem more puzzled than interested.

I grew up near the shore of Lake Washington and could see (and hear!) the races from my house. I don’t follow hydroplane racing throughout the year but I do enjoy watching the races at Seafair. Here’s my attempt to explain and make the case for the races to new Seattleites.

Before Microsoft, Amazon, Starbucks, etc., there were basically three major Seattle industries: (1) logging and lumber based industries like paper manufacturing; (2) maritime industries like fishing, shipbuilding, shipping, and the navy; (3) aerospace (i.e., Boeing). Vintage hydroplane racing represented the Seattle trifecta: Wooden boats with airplane engines!

The wooden U-60 Miss Thriftway, pictured below circa 1955 (Thriftway is a Washington-based supermarket that nobody outside the state has heard of), is old-Seattle awesomeness. Modern hydroplanes are now made of fiberglass, but two out of three isn’t bad.

Although the boats are racing this year in events in Indiana, San Diego, and Detroit in addition to the two races in Washington, hydroplane racing retains deep ties to the region. Most of the drivers are from the Seattle area. Many or most of the teams and boats are based in Washington throughout the year. Many of the sponsors are unknown outside of the state. This parochialness itself cultivates a certain kind of appeal among locals.

In addition to the old-Seattle/new-Seattle cultural divide, there’s a class divide that I think is also worth challenging. Although the demographics of hydro-racing fans are surprisingly broad, it can seem like Formula One or NASCAR on the water. It seems safe to suggest that many of the demographic groups moving to Seattle for jobs in the tech industry are not big into motorsports. Although I’m no follower of motorsports in general, I’ve written before about cultivated disinterest in professional sports, and it remains something that I believe is worth taking on.

It’s not all great. In particular, the close relationship between Seafair and the military makes me very uneasy. That said, even with the military-heavy airshow, I enjoy the way that Seafair weekend provides a little pocket of old-Seattle that remains effectively unchanged from when I was a kid. I’d encourage others to enjoy it as well!

02 August, 2015 02:45AM by Benjamin Mako Hill

August 01, 2015

Steve McIntyre

Tracking broken UEFI implementations

There can be issues with shipping installer images that include UEFI support, but they're mainly due to the crappy UEFI implementations that vendors have shipped. It's fairly well-known that Apple have shipped some really shoddy firmware over the years, and to allow people to install Debian on older Apple x86 machines we've now added the workaround of a non-UEFI 32-bit installer image too. But Apple aren't the only folks shipping systems with horrendously buggy UEFI, and a lot of Linux folks have had to deal with this over the last few years.

I've been talking to a number of other UEFI developers lately, and we've agreed to start a cross-distro resource to help here - a list of known-broken UEFI implementations so that we can share our experiences. The place for this is in the OSDev wiki. We're going to be adding new information there as we find it. If you've got a particular UEFI horror story on your own broken system, then please either add details there or let me know and I'll try to do it for you.

01 August, 2015 11:40PM

New UEFI team in Debian

We've just started a new team in Debian for maintaining our UEFI packages together, with git repositories in a shared project on alioth etc. We're just working out the exact details of how we're going to manage things, but for now we've moved the following packages under the team's umbrella:

  • efibootmgr
  • efivar
  • fwupd
  • fwupdate
  • pesign

and in the future we'll clearly end up adding more. We've also started a new IRC channel (#debian-efi). New members are always welcome to help with the work here!

01 August, 2015 11:40PM

Justifying 32-bit UEFI on 64-bit Intel hardware, and tracking broken UEFI implementations

You might have seen some of the posts I've written in the last few months about adding support in Debian for so-called Mixed-EFI systems like the Intel Bay Trail: a 64-bit processor shipped with a 32-bit EFI implementation.

I've finally seen a public justification from Intel evangelist Brian Richardson as to why these systems are crippled^Wconfigured this way, and it's nice to see our guesses confirmed. The reason is simply cost - like most consumer PCs shipped today, they come with Windows. In terms of system design, it's cheaper to just include the limited memory and storage needed for 32-bit Windows. 64-bit Windows takes a lot more storage in particular. And on modern systems 32-bit Windows can only boot using 32-bit UEFI. Fair enough...

However, Brian goes on to state some more things that are simply out of date, saying that "Linux support for UEFI IA32 is still an unanswered question". Ummm, Brian: we've got working 32-bit x86 UEFI support in our standard Jessie (and newer) installation images already, and they work just fine on CD/DVD or USB stick. We've even gone one stage further than anybody else (thus far!) in adding easy support for running a full 64-bit Linux system on top of those 32-bit UEFI implementations.

I say "thus far" here because all the work here is Free Software. Other folks added the support in Linux for making a 64-bit kernel work with a 32-bit UEFI; I added code in Linux to expose some of the details to userspace, and code in Grub to work with it. My changes have gone upstream already, so I'd expect to see other distros like Fedora or Ubuntu also using them soon.

01 August, 2015 11:40PM

Francois Marier

Setting the wifi regulatory domain on Linux and OpenWRT

The list of available wifi channels is slightly different from country to country. To ensure access to the right channels and transmit power settings, one needs to set the right regulatory domain in the wifi stack.


For most Linux-based computers, you can look at and change the current regulatory domain using these commands:

iw reg get
iw reg set CA

where CA is the two-letter code of the country where the device is located.

On Debian and Ubuntu, you can make this setting permanent by putting the country code in /etc/default/crda.
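For reference, that file is a simple shell-style variable assignment; a minimal sketch (assuming the Canadian domain used above):

```
# /etc/default/crda
# Set REGDOMAIN to a two-letter ISO 3166-1 country code
REGDOMAIN=CA
```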

Finally, to see the list of channels that are available in the current config, use:

iwlist wlan0 frequency


On OpenWRT-based routers (including derivatives like Gargoyle), looking at and setting the regulatory domain temporarily works the same way (i.e. the iw commands above).

In order to persist your changes though, you need to use the uci command:

uci set wireless.radio0.country=CA
uci set wireless.radio1.country=CA
uci commit wireless

where wireless.radio0 and wireless.radio1 are the wireless devices specific to your router. You can look them up using:

uci show wireless

To test that it worked, simply reboot the router and then look at the selected regulatory domain:

iw reg get

Scanning the local wifi environment

Once your devices are set to the right country, you should scan the local environment to pick the least congested wifi channel. You can use the Kismet spectools (free software) if you have the hardware, otherwise WifiAnalyzer (proprietary) is a good choice on Android (remember to manually set the available channels in the settings).

01 August, 2015 08:20PM

Lars Wirzenius

Obnam 1.13 released (backup software)

I have just released version 1.13 of Obnam, my backup program. See the website for details on what it does. The new version is available from git and as Debian packages, and has been uploaded to Debian; it will soon be in unstable.

The NEWS file extract below gives the highlights of what's new in this version.

Version 1.13, released 2015-08-01

Bug fixes:

  • Lukáš Poláček found and fixed a repository corruption problem: if obnam forget was interrupted at the wrong moment, it might remove a chunk, but not the reference to it. This would cause a future run of obnam forget to crash due to a missing chunk (error code R43272X). obnam forget will now ignore such a missing chunk, since it would've deleted it anyway.

    Lars Wirzenius then changed things so that chunk files are only removed once references to the chunks have been committed.


  • obnam forget now commits changes after each generation it has removed. This means that if the operation is interrupted, less work is lost. Suggested by Lukáš Poláček, re-implemented by Lars Wirzenius.

01 August, 2015 05:07PM

Russ Allbery

Review: The Pyramid Waltz

Review: The Pyramid Waltz, by Barbara Ann Wright

Series: Katya and Starbride #1
Publisher: Bold Strokes
Copyright: September 2012
ISBN: 1-60282-792-3
Format: Kindle
Pages: 264

Princess Katya Nar Umbriel is publicly a bored, womanizing, and difficult daughter to the rulers of Farraday. It's all an act, though, with the full knowledge of her parents. As the second child, she's the leader of the Order of Vestra: the equivalent of the Secret Service, devoted to protecting the royal family and, by extension, the kingdom, particularly against magical attacks.

Starbride is new to court and entirely out of place. From a northern neighboring country, and far more comfortable in practical clothing than the frilled court dresses that her mother wants her to wear, she has been sent to court to make contacts. Her people are getting the bad side of various trade contracts and desperately need some political maneuvering space of their own. Starbride's best hope for this is to study law in the palace library when she can manage to avoid the other courtiers. But then she and Katya stumble across each other, outside of the roles they're playing, and might have an opportunity for a deeper connection. One that neither of them want to entangle in their personal worries.

This is the last of a set of books I picked up while looking for lesbian romance with fantasy or science fiction elements. On the romance front, it's one of the better entries in that set. Both Katya and Starbride are likeable, in large part due to their mutual exasperation with the trappings of the court. (Making the protagonists more serious, thoughtful, and intelligent than the surrounding characters is an old trick, but it works.) Wright has a good ear for banter, particularly the kind when two people of good will are carefully feeling each other out. And despite Katya's need to keep a deep secret from Starbride for some of the book, The Pyramid Waltz mostly avoids irritating communication failures as a plot driver.

The fantasy portion and the plot drivers, alas, are weaker. The world building is not exactly bad, but it's just not that interesting. There are a couple of moderately good ideas, in the form of pyramid magic and secret (and dangerous) magical powers that run in the royal family, but they're not well-developed. Pyramid magic turns out to look much like any other generic fantasy magic system, with training scenes that could have come from a Valdemar or Wheel of Time novel (and without as much dramatic tension). And the royal family's secret, while better-developed and integral to the plot, still felt rather generic and one-sided.

Maybe that's something Wright develops better in future novels in this series, but that was another problem: the ending of The Pyramid Waltz was rather weak. Partly, I think, this is because the cast is too large and not well-developed. I cared about Katya and Starbride, and to a lesser extent their servants and one of the Order members. (Wright has a moderately interesting bit of worldbuilding about how servants work in Starbride's culture, which I wish we'd seen more of.) But there are a bunch of other Order of Vestra members, Katya's family, and various other bits of history and hinted world views, none of which seemed to get much depth. The ending climax involved a lot of revelations and twists that primarily concerned characters I didn't care about. It lost something in the process.

This book is clearly set up for a sequel. There is an ending, but it's not entirely satisfying. Unfortunately, despite liking Katya and Starbride a lot, the rest of the story wasn't compelling enough to make me want to buy it, particularly since the series apparently goes through another three books before reaching a real ending.

I enjoyed parts of this book, particularly Katya and Starbride feeling each other out and discovering similarities in their outlook. Katya teasing Starbride, and Starbride teasing herself, over her mother's choice of her clothing was probably the best part. It's not bad for what it's trying to do, but I think it's a bit too generic and not satisfying enough to really recommend.

Followed by For Want of a Fiend.

Rating: 6 out of 10

01 August, 2015 06:01AM

July 31, 2015

Scott Kitterman

Plasma 5 (KDE) In Testing

A few days ago, fellow Qt/KDE team member Lisandro gave an update on the situation with the migration to Plasma 5 in Debian Testing (AKA Stretch).  It’s changed again.  All of Plasma 5 is now in Testing.  The upgrade probably won’t be entirely smooth (we’ll work on that after the gcc5 transition is done), but it will be much better than the half KDE4 SC, half Kf5/Plasma 5 situation we’ve had for the last several days.

The issues with starting kwin should be resolved once users upgrade to Plasma 5.  To use the current kwin with KDE SC 4, you will need to add a symlink from /usr/bin/kwin to /usr/bin/kwin_x11.  That will be included in the next upload after gcc5.
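The symlink workaround above is a one-liner; here it is sketched in a throwaway directory so it can be tried without touching the system (on a real install the link would be created as root under /usr/bin):

```shell
# Demonstrate the kwin -> kwin_x11 symlink in a scratch directory;
# on a real system (as root): ln -s /usr/bin/kwin_x11 /usr/bin/kwin
dir=$(mktemp -d)
touch "$dir/kwin_x11"              # stand-in for the real kwin_x11 binary
ln -s "$dir/kwin_x11" "$dir/kwin"  # "kwin" now resolves to kwin_x11
readlink "$dir/kwin"               # prints the link target
```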

Systemsettings and plasma-nm now work.

In my initial testing, I didn’t see anything major that was broken.  One user reported an issue with sddm starting automatically, but it worked fine for me.  During the upgrade you should get a debconf prompt asking if you want to use kdm or sddm.  Pick sddm.

When I tried to dist-upgrade, apt wanted to remove task-kde-desktop.  I let it remove it and some other packages and then in a second step did apt-get install task-kde-desktop.  That pulled it back in successfully along with adding and removing a reasonably large stack of packages.  Obviously we need to make that work better before Stretch is released, but as long as you don’t restart KDE in between those two steps it should be fine.  Lastly, I used apt-get autoremove to clear out a lot of no longer needed KDE4 things (when it asks if you want to stop the running kdm, say no).

Here are a few notes on terminology and what I understand of the future plans:

What used to be called KDE is now three different things (in part because KDE is now the community of people, not the software):

KDE Frameworks 5 (Kf5): This is a group of several dozen small libraries that as a group, roughly equate to what used to be kdelibs.

Plasma (Workspaces) 5: This is the desktop that we’ve just transitioned to.

Applications: These are a mix of kdelibs and Kf5 based applications.  Currently in Testing there are some of both and this will evolve over time based on upstream development.  As an example, the Kf5 based version of konsole is in Unstable and should transition to Testing shortly.

Finally, thanks to Maximiliano Curia (maxy on IRC) for doing virtually all of the packaging of Kf5, Plasma 5, and applications.  He did the heavy lifting, the rest of us just nibbled around the edges to keep it moving towards testing.

31 July, 2015 08:15PM by skitterman

Steve McIntyre

Linaro VLANd v0.3

VLANd is a python program intended to make it easy to manage port-based VLAN setups across multiple switches in a network. It is designed to be vendor-agnostic, with a clean pluggable driver API to allow for a wide range of different switches to be controlled together.

There's more information in the README file. I've just released v0.3, with a lot of changes included since the last release:

  • Massive numbers of bugfixes and code cleanups
  • Added two new switch drivers:
    • TP-Link TL-SG2XXX family (TPLinkTLSG2XXX)
    • Netgear XSM family (NetgearXSM)
  • Added "debug" option to all the switch drivers to log all interactions
  • Added internal caching of port modes within the driver core for a large speed-up in normal use
  • Bug fix to handling of trunk ports in the CiscoCatalyst driver, improving VLAN interop with other switches
  • Huge changes to the test lab, now using 5 switches and 10 hosts
  • Big improvements to the test suite:
    • Match the new test lab layout
    • Move more of the core test code into the test-common utility library
    • Massively improved the check-networks test runner for the test hosts
    • Added parsing of the UP/DOWN results in test-common to give a simple PASS/FAIL result for each test
    • Added more tests
  • All logging now in UTC

VLANd is Free Software, released under the GPL version 2 (or any later version). For now, grab it from git; tarballs will be coming shortly.

31 July, 2015 04:04PM

Raphaël Hertzog

My Free Software Activities in July 2015

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 15 hours on Debian LTS. In that time I did the following:

  • Finished the work on Distro Tracker to make it display detailed security status on each supported release (example).
  • Prepared and released DLA-261-2 fixing a regression in the aptdaemon security update (happening only when you have python 2.5 installed).
  • Prepared and released DLA-272-1 fixing 3 CVE in python-django.
  • Prepared and released DLA-286-1 fixing 1 CVE in squid3. The patch was rather hard to backport. Thankfully upstream was very helpful; he reviewed and tested my patch.
  • Did one week of “LTS Frontdesk” with CVE triaging. I pushed 19 commits to the security tracker.

Kali Linux / Debian Stretch work

Kali Linux wants to experiment with something close to Debian Constantly Usable Testing: we have a kali-rolling release that is based on Debian Testing and we want to take a new snapshot every 4 months (in order to have 3 releases per year).

More specifically, we have a kali-dev repository which is exactly Debian Stretch + our own Kali packages (the Kali packages take precedence), updated 4 times a day, just like testing is. And we have a britney2 setup that generates kali-rolling out of kali-dev (without any requirement in terms of delay/RC bugs; it just ensures that dependencies are not broken), also 4 times a day.

We have Jenkins jobs that ensure that our metapackages are installable in kali-dev (and kali-rolling) and that we can build our ISO images. When things break, I have to fix them, and I try to fix them on the Debian side first. So here are some examples of stuff I did in response to various failures:

  • Reported #791588 on texinfo. It was missing a versioned dependency on tex-common and migrated too early. The package was uninstallable in testing for a few days.
  • Reported #791591 on pinba-engine-mysql-5.5: package was uninstallable (had to be rebuilt). It appeared on output files of our britney instance.
  • I made a non-maintainer upload (NMU) of chkrootkit to fix two RC bugs so that the package can go back to testing. The package is installed by our metapackages.
  • Reported #791647: debtags no longer supports “debtags update –local” (a feature that went away but that is used by Kali).
  • I made a NMU of debtags to fix a release critical bug (#791561 debtags: Missing dependency on python3-apt and python3-debian). kali-debtags was uninstallable because it calls debtags in its postinst.
  • Reported #791874 on python-guess-language: Please add a python 2 library package. We have that package in Kali and when I tried to sync it from Debian I broke something else in Kali which depends on the Python 2 version of the package.
  • I made a NMU of tcpick to fix a build failure with GCC5 so that the package could go back to testing (it’s part of our metapackages).
  • I requested a bin-NMU of jemalloc and a give-back of hiredis on powerpc in #792246 to fix #788591 (hiredis build failure on powerpc). I also downgraded the severity of #784768 to important so that the package could go back to testing. Hiredis is a dependency of OpenVAS and we need the package in testing.

If you analyze this list, you will see that a large part of the issues we had come down to packages getting removed from testing due to RC bugs. We should be able to anticipate those issues and monitor the packages that have an impact on Kali. We will probably add a new Jenkins job that installs all the metapackages and then runs how-can-i-help -s testing-autorm --old… I just submitted #794238 as a wishlist against how-can-i-help.

At the same time, there are bugs that make it into testing and that I fix / work around on the Kali side. But those fixes / work around might be more useful if they were pushed to testing via testing-proposed-updates. I tried to see whether other derivatives had similar needs to see if derivatives could join their efforts at this level but it does not look like so for now.

Last but not least, bugs reported on the Kali side also resulted in Debian improvements:

  • I reported #793360 on apt: APT::Never-MarkAuto-Sections not working as advertised. And I submitted a patch.
  • I orphaned dnswalk and made a QA upload to fix its only bug.
  • We wanted a newer version of the nvidia drivers. I filed #793079 requesting the new upstream release and the maintainer quickly uploaded it to experimental. I imported it on the Kali side but discovered that it was not working on i386 so I submitted #793160 with a patch.
  • I noticed that Kali build daemons tend to accumulate many /dev/shm mounts and tracked this down to schroot. I reported it as #793081.

Other Debian work

Sponsorship. I sponsored multiple packages for Daniel Stender, who is packaging prospector, a piece of software that I requested earlier (through an RFP bug). So I reviewed and uploaded python-requirements-detector, python-setoptconf, pylint-celery and pylint-common. During a review I also discovered a nice bug in dh-python (#793609: a comment in the middle of a Build-Depends could break a package). I also sponsored an upload of notmuch-addrlookup (a new package requested by a Freexian customer).

Packaging. I uploaded python-django 1.7.9 in unstable and 1.8.3 in experimental to fix security issues. I uploaded a new upstream release of ditaa through a non-maintainer upload (again at the request of a Freexian customer).

Distro Tracker. Besides the work to integrate the detailed security status, I fixed the code to be compatible with Django 1.8 and modified the tox configuration to ensure that the test suite is regularly run against Django 1.8. I also merged multiple patches from Christophe Siraut (cf #784151 and #754413).


See you next month for a new summary of my activities.


31 July, 2015 02:45PM by Raphaël Hertzog

Simon Kainz

DUCK challenge: week 4

The DUCK challenge is making quite stable progress: over the last 4 weeks, an average of 12.25 packages per week were fixed and uploaded. In the current week the following packages were fixed and uploaded into unstable:

So we had 14 packages fixed and uploaded by 10 different uploaders. A big "Thank You" to you!!

Since the start of this challenge, a total of 49 packages, uploaded by 31 different persons, were fixed.

Here is a quick overview:

            Week 1  Week 2  Week 3  Week 4  Week 5  Week 6  Week 7
# Packages      10      15      10      14       -       -       -
Total           10      25      35      49       -       -       -
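The per-week average quoted above is just the four weekly counts from the table divided by four:

```python
# Weekly upload counts for weeks 1-4 of the DUCK challenge
weekly = [10, 15, 10, 14]

total = sum(weekly)            # 49 packages fixed so far
average = total / len(weekly)  # 12.25 packages per week
print(total, average)
```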

The list of the fixed and updated packages is available here. I will try to update this ~daily. If I missed one of your uploads, please drop me a line.

DebConf15 is approaching quite fast, so please get involved: the DUCK Challenge is running until the end of DebConf15!

Previous articles are here: Week 1, Week 2, Week 3.

31 July, 2015 07:15AM by Simon Kainz

July 30, 2015

Daniel Pocock

Free Real-time Communications (RTC) at DebConf15, Heidelberg

The DebConf team have just published the first list of events scheduled for DebConf15 in Heidelberg, Germany, from 15 - 22 August 2015.

There are two specific events related to free real-time communications and a wide range of other events related to more general topics of encryption and privacy.

15 August, 17:00, Free Communications with Free Software (as part of the DebConf open weekend)

The first weekend of DebConf15 is an open weekend, it is aimed at a wider audience than the traditional DebConf agenda. The open weekend includes some keynote speakers, a job fair and various other events on the first Saturday and Sunday.

The RTC talk will look at what solutions exist for free and autonomous voice and video communications using free software and open standards such as SIP, XMPP and WebRTC as well as some of the alternative peer-to-peer communications technologies that are emerging. The talk will also look at the pervasive nature of communications software and why success in free RTC is so vital to the health of the free software ecosystem at large.

17 August, 17:00, Challenges and Opportunities for free real-time communications

This will be a more interactive session; people are invited to come and talk about their experiences and the problems they have faced deploying RTC solutions for professional or personal use. We will try to look at some RTC/VoIP troubleshooting techniques as well as more high-level strategies for improving the situation.

Try the Debian and Fedora RTC portals

Have you registered on the Debian RTC portal? It can successfully make federated SIP calls with users of other domains, including Fedora community members trying the Fedora portal.

You can use it for regular SIP (with clients like Empathy, Jitsi or Lumicall) or WebRTC.

Can't get to DebConf15?

If you can't get to Heidelberg, you can watch the events on the live streaming service and ask questions over IRC.

To find out more about deploying RTC, please see the RTC Quick Start Guide.

Did you know?

Don't confuse Heidelberg, Germany with Heidelberg in Melbourne, Australia. Heidelberg down under was the site of the athletes' village for the 1956 Olympic Games.

30 July, 2015 09:23AM by Daniel.Pocock

Steve Kemp

The differences in Finland start at home.

So we're in Finland, and the differences start out immediately.

We're renting a flat, in building ten, on a street. You'd think "10 Streetname" was a single building, but no. It is a pair of buildings: 10A, and 10B.

Both of the buildings have 12 flats in them, with 10A having 1-12, and 10B having 13-24.

There's a keypad at the main entrance, which I assumed was to let you press a button and talk to the people inside "Hello I'm the postmaster", but no. There is no intercom system, instead you type in a magic number and the door opens.

The magic number? Sounds like you want to keep that secret, since it lets people into the common-area? No. Everybody has it. The postman, the cleaners, the DHL delivery man, and all the ex-tenants. We invited somebody over recently and gave it out in advance so that they could knock on our flat-door.

Talking of cleaners: In the UK I lived in a flat and once a fortnight somebody would come and sweep the stair-well, since we didn't ever agree to do it ourselves. Here somebody turns up every day, be it to cut the grass, polish the hand-rail, clean the glass on the front-door, or mop the floors of the common area. Sounds awesome. But they cut the grass, right outside our window, at 7:30AM. On the dot. (Or use a leaf-blower, or something equally noisy.)

All this communal-care is paid for by the building-association, of which all flat-owners own shares. Sounds like something we see in England, or even like America's idea of a Home-Owners-Association. (In Scotland you own your own flat, you don't own shares of an entity which owns the complete building. I guess there are pros and cons to both approaches.)

Moving onwards, other things are often the same, but the differences, when you spot them, are odd. I'm struggling to think of them right now; somebody woke me up by cutting our grass for the second time this week (!)

Anyway, I'm registered now with the Finnish government and have a citizen-number, which will be useful. I've got an appointment booked to register with the police - which is something I had to do as a foreigner within the first three months - and today I've got an appointment with a local bank so that I can have a euro bank account.

Happily, I did find a gym to join. The owner came over one Sunday to give me a tiny tour, and then gave me a list of other gyms to try if his wasn't good enough - which was a nice touch. I joined a couple of days later; his gym is awesome.

(I'm getting paid in UK pounds, to a UK bank, so right now I'm getting local money by transferring to my wife's account here, but I want to transfer to my own account instead, and open a shared account for paying rent, electricity, internet, water, etc.)

My flat back home is still not rented, because the nice property management company lost my keys. Yeah, you can't make that up, can you? With a bit of luck the second set of keys I mailed them will arrive soon and the damn thing can be occupied; while I'm not relying on that income, I do wish to have it.

30 July, 2015 06:15AM

July 28, 2015

hackergotchi for Jonathan Dowland

Jonathan Dowland

Sound effect pitch-shifting in Doom

My previous blog posts about deterministic Doom proved very popular.

The reason I was messing around with Doom's RNG was that I was studying how early versions of Doom performed random pitch-shifting of sound effects, a feature that was removed early on in Doom's history. By fixing the random number table and replacing the game's sound effects with a sine wave, one second long and tuned to middle C, I was able to determine the upper and lower bounds of the pitch shift.

Once I knew that, I was able to write some patches to re-implement pitch shifting in Chocolate Doom, which I'm pleased to say have been accepted. The patches have also made their way into the related projects Crispy Doom and Doom Retro.

I'm pleased with the final result. It's the most significant bit of C code I've ever released publicly, as well as my biggest Doom hack and the first time I've ever done any audio manipulation in code. There were loads of other notes and bits of code that I produced in the process. I've put them together on a page here: More than you ever wanted to know about pitch-shifting.
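
Pitch-shifting of this kind can be approximated by resampling: reading the source samples at a non-unit step raises or lowers the perceived pitch. Here is a minimal sketch in Python; the function names and the 0.8-1.2 band are illustrative assumptions for this post, not the engine's actual constants or code.

```python
import random

def pitch_shift(samples, ratio):
    """Resample by linear interpolation: ratio > 1.0 raises the pitch
    (and shortens the sound), ratio < 1.0 lowers it."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Blend the two neighbouring samples around the read position.
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

def random_pitch(samples, low=0.8, high=1.2, rng=random.random):
    # Pick a shift ratio uniformly from the allowed band; the
    # 0.8-1.2 default is an illustrative guess, not Doom's bounds.
    ratio = low + (high - low) * rng()
    return pitch_shift(samples, ratio)
```

A ratio of 1.0 reproduces the input unchanged (minus the last sample, which has no right neighbour to interpolate with), which makes the fixed-RNG experiment above easy to sanity-check.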

28 July, 2015 05:02PM

hackergotchi for Lisandro Damián Nicanor Pérez Meyer

Lisandro Damián Nicanor Pérez Meyer

Plasma/KF5 : Testing situation

Dear Debian/KDE users,

We are aware that the current situation in testing is very unfortunate, with two main issues:

  1. systemsettings transitioned to testing before the corresponding KDE Control Modules. The result is that systemsettings displays an empty screen. This is tracked in the following bug
  2. plasmoids such as plasma-nm transitioned to testing before plasma-desktop 5. The result is that the plasmoids are no longer displayed in the system tray.

We are working on getting plasma-desktop to transition to testing as soon as possible (hopefully in 2 days time), which will resolve both those issues. We appreciate that the transition to KF5 is much rougher than we would have liked, and apologize to all those impacted.

On behalf of the Qt/KDE team,

28 July, 2015 03:19PM by Lisandro Damián Nicanor Pérez Meyer

hackergotchi for Norbert Preining

Norbert Preining

ePub editor Sigil landed in Debian

A long, long time ago I wanted Sigil, an ePub editor, to appear in Debian. There was a packaging wishlist bug from back in 2010 with intermittent activity. But thanks to a concerted effort, especially by Mattia Rizzolo and Don Armstrong, packaging progressed to a state where I could sponsor the upload to experimental about 4 months ago. And yesterday, after a long wait, Sigil finally passed the watchful eyes of the Debian ftp-masters and has entered Debian/experimental.


I have already updated the packaging for the latest version 0.8.7, which will be included in Debian/sid rather soon. Thanks again especially to Mattia for his great work.


28 July, 2015 12:00AM by Norbert Preining

July 27, 2015

hackergotchi for Kees Cook

Kees Cook

3D printing Poe

I helped print this statue of Edgar Allan Poe, through “We the Builders”, who coordinate large-scale crowd-sourced 3D print jobs:

Poe's Face

You can see one of my parts here on top, with “-Kees” on the piece with the funky hair strand:

Poe's Hair

The MakerWare I run on Ubuntu works well. I wish they were correctly signing their repositories. Even if I use non-SSL to fetch their key, as their Ubuntu/Debian instructions recommend, it still doesn’t match the packages:

W: GPG error: trusty Release: The following signatures were invalid: BADSIG 3D019B838FB1487F MakerBot Industries dev team <>

And it’s not just my APT configuration:

$ wget
$ wget
$ gpg --verify Release.gpg Release
gpg: Signature made Wed 11 Mar 2015 12:43:07 PM PDT using RSA key ID 8FB1487F
gpg: requesting key 8FB1487F from hkp server
gpg: key 8FB1487F: public key "MakerBot Industries LLC (Software development team) <>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg: BAD signature from "MakerBot Industries LLC (Software development team) <>"
$ grep ^Date Release
Date: Tue, 09 Jun 2015 19:41:02 UTC

Looks like they’re updating their Release file without updating the signature file. (The signature is from March, but the Release file is from June. Oops!)

© 2015, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

27 July, 2015 11:08PM by kees

Andrew Cater

Bye SPARC - for now

So it looks as if it's the end for the Debian SPARC port, which is primarily 32-bit - for now, at least. Too little available modern hardware, too few porters, and an upstream hardware provider emotionally tied to significant licensing and support agreements.

If 64 bit SPARC hardware were more available, I'd be interested again. SPARC has given me two of my favourite moments in Debian. I helped a colleague to duplicate existing software and move architecture from Intel to SPARC mainly by copying across the list of packages. 

It also allowed me in ?? 1999 / 2000 ?? to take a SPARC 20 to London Olympia to a Linux Expo where one of the principal sponsors was Sun. They laughed on their stand when I set up older hardware with minimal memory, but were not so amused when I demonstrated Debian, a full X Window environment, and KDE running successfully.

27 July, 2015 09:48PM by Andrew Cater

Michael Stapelberg

dh-make-golang: creating Debian packages from Go packages

Recently, the pkg-go team has been quite busy, uploading dozens of Go library packages in order to be able to package gcsfuse (a user-space file system for interacting with Google Cloud Storage) and InfluxDB (an open-source distributed time series database).

Packaging Go library packages (!) is a fairly repetitive process, so before starting my work on the dependencies for gcsfuse, I started writing a tool called dh-make-golang. Just like dh-make itself, the goal is to automatically create (almost) an entire Debian package.

As I worked my way through the dependencies of gcsfuse, I refined how the tool works, and now I believe it’s good enough for a first release.

To demonstrate how the tool works, let’s assume we want to package the Go library

midna /tmp $ dh-make-golang
2015/07/25 18:25:39 Downloading ""
2015/07/25 18:25:53 Determining upstream version number
2015/07/25 18:25:53 Package version is "0.0~git20150723.0.2ca5e0c"
2015/07/25 18:25:53 Determining dependencies
2015/07/25 18:25:55 
2015/07/25 18:25:55 Packaging successfully created in /tmp/golang-github-jacobsa-ratelimit
2015/07/25 18:25:55 
2015/07/25 18:25:55 Resolve all TODOs in itp-golang-github-jacobsa-ratelimit.txt, then email it out:
2015/07/25 18:25:55     sendmail -t -f < itp-golang-github-jacobsa-ratelimit.txt
2015/07/25 18:25:55 
2015/07/25 18:25:55 Resolve all the TODOs in debian/, find them using:
2015/07/25 18:25:55     grep -r TODO debian
2015/07/25 18:25:55 
2015/07/25 18:25:55 To build the package, commit the packaging and use gbp buildpackage:
2015/07/25 18:25:55     git add debian && git commit -a -m 'Initial packaging'
2015/07/25 18:25:55     gbp buildpackage --git-pbuilder
2015/07/25 18:25:55 
2015/07/25 18:25:55 To create the packaging git repository on alioth, use:
2015/07/25 18:25:55     ssh "/git/pkg-go/setup-repository golang-github-jacobsa-ratelimit 'Packaging for golang-github-jacobsa-ratelimit'"
2015/07/25 18:25:55 
2015/07/25 18:25:55 Once you are happy with your packaging, push it to alioth using:
2015/07/25 18:25:55     git push git+ssh:// --tags master pristine-tar upstream

The ITP is often the most labor-intensive part of the packaging process, because any number of auto-detected values might be wrong: the repository owner might not be the “Upstream Author”, the repository might not have a short description, the long description might need some adjustments or the license might not be auto-detected.

midna /tmp $ cat itp-golang-github-jacobsa-ratelimit.txt
From: "Michael Stapelberg" <stapelberg AT>
Subject: ITP: golang-github-jacobsa-ratelimit -- Go package for rate limiting
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Package: wnpp
Severity: wishlist
Owner: Michael Stapelberg <stapelberg AT>

* Package name    : golang-github-jacobsa-ratelimit
  Version         : 0.0~git20150723.0.2ca5e0c-1
  Upstream Author : Aaron Jacobs
* URL             :
* License         : Apache-2.0
  Programming Lang: Go
  Description     : Go package for rate limiting

 GoDoc (
 This package contains code for dealing with rate limiting. See the
 reference ( for more info.

TODO: perhaps reasoning
midna /tmp $

After filling in all the TODOs in the file, let’s mail it out and get a sense of what else still needs to be done:

midna /tmp $ sendmail -t -f < itp-golang-github-jacobsa-ratelimit.txt
midna /tmp $ cd golang-github-jacobsa-ratelimit
midna /tmp/golang-github-jacobsa-ratelimit master $ grep -r TODO debian
debian/changelog:  * Initial release (Closes: TODO) 
midna /tmp/golang-github-jacobsa-ratelimit master $

After filling in these TODOs as well, let’s have a final look at what we’re about to build:

midna /tmp/golang-github-jacobsa-ratelimit master $ head -100 debian/**/*
==> debian/changelog <==                            
golang-github-jacobsa-ratelimit (0.0~git20150723.0.2ca5e0c-1) unstable; urgency=medium

  * Initial release (Closes: #793646)

 -- Michael Stapelberg <>  Sat, 25 Jul 2015 23:26:34 +0200

==> debian/compat <==

==> debian/control <==
Source: golang-github-jacobsa-ratelimit
Section: devel
Priority: extra
Maintainer: pkg-go <>
Uploaders: Michael Stapelberg <>
Build-Depends: debhelper (>= 9),
Standards-Version: 3.9.6
Vcs-Git: git://

Package: golang-github-jacobsa-ratelimit-dev
Architecture: all
Depends: ${shlibs:Depends},
Built-Using: ${misc:Built-Using}
Description: Go package for rate limiting
 This package contains code for dealing with rate limiting. See the
 reference ( for more info.

==> debian/copyright <==
Upstream-Name: ratelimit

Files: *
Copyright: 2015 Aaron Jacobs
License: Apache-2.0

Files: debian/*
Copyright: 2015 Michael Stapelberg <>
License: Apache-2.0
Comment: Debian packaging is licensed under the same terms as upstream

License: Apache-2.0
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 See the License for the specific language governing permissions and
 limitations under the License.
 On Debian systems, the complete text of the Apache version 2.0 license
 can be found in "/usr/share/common-licenses/Apache-2.0".

==> debian/gbp.conf <==
pristine-tar = True

==> debian/rules <==
#!/usr/bin/make -f

export DH_GOPKG :=

%:
	dh $@ --buildsystem=golang --with=golang

==> debian/source <==
head: error reading ‘debian/source’: Is a directory

==> debian/source/format <==
3.0 (quilt)
midna /tmp/golang-github-jacobsa-ratelimit master $

Okay, then. Let’s give it a shot and see if it builds:

midna /tmp/golang-github-jacobsa-ratelimit master $ git add debian && git commit -a -m 'Initial packaging'
[master 48f4c25] Initial packaging                                                      
 7 files changed, 75 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/compat
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100644 debian/gbp.conf
 create mode 100755 debian/rules
 create mode 100644 debian/source/format
midna /tmp/golang-github-jacobsa-ratelimit master $ gbp buildpackage --git-pbuilder
midna /tmp/golang-github-jacobsa-ratelimit master $ lintian ../golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes
I: golang-github-jacobsa-ratelimit source: debian-watch-file-is-missing
P: golang-github-jacobsa-ratelimit-dev: no-upstream-changelog
I: golang-github-jacobsa-ratelimit-dev: extended-description-is-probably-too-short
midna /tmp/golang-github-jacobsa-ratelimit master $

This package just built (as it should!), but occasionally one might need to disable a test and file an upstream bug about it. So, let’s push this package to pkg-go and upload it:

midna /tmp/golang-github-jacobsa-ratelimit master $ ssh "/git/pkg-go/setup-repository golang-github-jacobsa-ratelimit 'Packaging for golang-github-jacobsa-ratelimit'"
Initialized empty shared Git repository in /srv/
HEAD is now at ea6b1c5 add mrconfig for dh-make-golang
[master c5be5a1] add mrconfig for golang-github-jacobsa-ratelimit
 1 file changed, 3 insertions(+)
To /git/pkg-go/meta.git
   ea6b1c5..c5be5a1  master -> master
midna /tmp/golang-github-jacobsa-ratelimit master $ git push git+ssh:// --tags master pristine-tar upstream
Counting objects: 31, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (25/25), done.
Writing objects: 100% (31/31), 18.38 KiB | 0 bytes/s, done.
Total 31 (delta 2), reused 0 (delta 0)
To git+ssh://
 * [new branch]      master -> master
 * [new branch]      pristine-tar -> pristine-tar
 * [new branch]      upstream -> upstream
 * [new tag]         upstream/0.0_git20150723.0.2ca5e0c -> upstream/0.0_git20150723.0.2ca5e0c
midna /tmp/golang-github-jacobsa-ratelimit master $ cd ..
midna /tmp $ debsign golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes
midna /tmp $ dput golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes   
Uploading golang-github-jacobsa-ratelimit using ftp to ftp-master (host:; directory: /pub/UploadQueue/)
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1.dsc
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c.orig.tar.bz2
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1.debian.tar.xz
Uploading golang-github-jacobsa-ratelimit-dev_0.0~git20150723.0.2ca5e0c-1_all.deb
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1_amd64.changes
midna /tmp $ cd golang-github-jacobsa-ratelimit 
midna /tmp/golang-github-jacobsa-ratelimit master $ git tag debian/0.0_git20150723.0.2ca5e0c-1
midna /tmp/golang-github-jacobsa-ratelimit master $ git push git+ssh:// --tags master pristine-tar upstream
Total 0 (delta 0), reused 0 (delta 0)
To git+ssh://
 * [new tag]         debian/0.0_git20150723.0.2ca5e0c-1 -> debian/0.0_git20150723.0.2ca5e0c-1
midna /tmp/golang-github-jacobsa-ratelimit master $

Thanks for reading this far, and I hope dh-make-golang makes your life a tiny bit easier. As dh-make-golang just entered Debian unstable, you can install it using apt-get install dh-make-golang. If you have any feedback, I’m eager to hear it.

27 July, 2015 06:50AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Evading the "Hadley tax": Faster Travis tests for R

Hadley is a popular figure, and rightly so, as he successfully introduced many newcomers to the wonders offered by R. His approach strikes some of us old greybeards as wrong---I particularly take exception to some of his writing, which frequently portrays a particular approach as both the best and the only one. Real programming, I think, is often a little more nuanced and aware of tradeoffs which need to be balanced. As a book on another language once popularized: "There's more than one way to do it." But let us leave this discussion for another time.

As the reach of the Hadleyverse keeps spreading, we sometimes find ourselves at the receiving end of a cost/benefit tradeoff. That is what this post is about, and it uses a very concrete case I encountered yesterday.

As blogged earlier, the RcppZiggurat package was updated. I had not touched it in a year, but Brian Ripley had sent a brief and detailed note concerning something flagged by the Solaris compiler (correctly suggesting I replace fabs() with abs() on integer types). (Allow me to stray from the main story line here for a second to stress just how insane a work load he is carrying, essentially for all of us. R and the R community are just so indebted to him for all his work---which makes the usual social media banter about him so unfortunate. But that too shall be left for another time.) Upon making the simple fix and submitting to GitHub, the usual Travis CI run was triggered. And here is what I saw:

first travis build in a year
All happy, all green. Previous build a year ago, most recent build yesterday, both passed. But hold on: test time went from 2:54 minutes to 7:47 minutes for an increase of almost five minutes! And I knew that I had not added any new dependencies, or altered any build options. What did happen was that among the dependencies of my package, one had decided to now also depend on ggplot2. Which leads to a chain of sixteen additional packages being loaded besides the four I depend upon---when it used to be just one. And that took five minutes as all those packages are installed from source, and some are big and take a long time to compile.

There is however an easy alternative, and for that we have to praise Michael Rutter, who looks after a number of things for R on Ubuntu. Among these are the R builds for Ubuntu but also the rrutter PPA as well as the c2d4u PPA. If you have not heard this alphabet soup before, a PPA is a package repository for Ubuntu where anyone (who wants to sign up) can upload (properly set up) source files which are then turned into Ubuntu binaries. With full dependency resolution and all other goodies we have come to expect from the Debian / Ubuntu universe. And Michael uses this facility with great skill and calm to provide us all with Ubuntu binaries for R itself (rebuilding what yours truly uploads into Debian), as well as a number of key packages available via the CRAN mirrors. Less known, however, is this "c2d4u", which stands for CRAN to Debian for Ubuntu. And this builds on something Charles Blundell once built under my mentorship in a Google Summer of Code. And Michael does a tremendous job covering well over a thousand CRAN source packages---and providing binaries for all. Which we can use for Travis!

What all that means is that I could now replace the line

 - ./ install_r RcppGSL rbenchmark microbenchmark highlight

which implies source builds of the four listed packages and all their dependencies with the following line implying binary installations of already built packages:

 - ./ install_aptget libgsl0-dev r-cran-rcppgsl r-cran-rbenchmark r-cran-microbenchmark r-cran-highlight

In this particular case I also needed to build a binary package of my RcppGSL package as this one is not (yet) handled by Michael. I happen to have (re-)discovered the beauty of PPAs for Travis earlier this year and revitalized an older and largely dormant launchpad account I had for this PPA of mine. How to build a simple .deb package will also have to be left for a future post to keep this more concise.

This can be used with the existing r-travis setup---but one needs to use the older, initial variant in order to have the ability to install .deb packages. So in the .travis.yml of RcppZiggurat I just use

## PPA for Rcpp and some other packages
- sudo add-apt-repository -y ppa:edd/misc
## r-travis by Craig Citro et al
- curl -OL
- chmod 755 ./
- ./ bootstrap

to add my own PPA and all is good. If you do not have a PPA, or do not want to create your own packages you can still benefit from the PPAs by Michael and "mix and match" by installing from binary what is available, and from source what is not.

Here we were able to use an all-binary approach, so let's see the resulting performance:

latest travis build
Now we are at 1:03 to 1:15 minutes---much better.

So to conclude, while the ever-expanding universe of R packages is fantastic for us as users, it can be seen to be placing a burden on us as developers when installing and testing. Fortunately, the packaging infrastructure built on top of Debian / Ubuntu packages can help and dramatically reduce build (and hence test) times. Learning about PPAs can be a helpful complement to learning about Travis and continuous integration. So maybe now I need a new reason to blame Hadley? Well, there is always snake case ...

Follow-up: The post got some pretty immediate feedback shortly after I posted it. Craig Citro pointed out (quite correctly) that I could use r_binary_install which would also install the Ubuntu binaries based on their R package names. Having built R/CRAN packages for Debian for so long, I am simply more used to the r-cran-* notations, and I think I was also the one contributing install_aptget to r-travis ... Yihui Xie spoke up for the "new" Travis approach deploying containers, caching of packages and explicit whitelists. It was in that very (GH-based) discussion that I started to really lose faith in the new Travis approach as they want us to whitelist each and every package. With 6900 packages and counting at CRAN, I fear this simply does not scale. But different approaches are certainly welcome. I posted my 1:03 to 1:15 minutes result. If the "New School" can do it faster, I'd be all ears.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

27 July, 2015 01:35AM

July 26, 2015

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2015/30

this week, besides other activities, I again managed to NMU a few packages as part of the GCC 5 transition. & again I could build on patches submitted by various HP engineers & other helpful souls.

  • #757525 – hardinfo: "hardinfo: FTBFS with clang instead of gcc"
    patch to build with -std=gnu89, upload to DELAYED/5
  • #758723 – nagios-plugins-rabbitmq: "should depend on libjson-perl"
    add missing dependency, upload to DELAYED/5
  • #777766 – " ftbfs with GCC-5"
    send updated patch to BTS
  • #777837 – src:ebview: "ebview: ftbfs with GCC-5"
    add patch from, upload to DELAYED/5
  • #777882 – src:gnokii: "gnokii: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5
  • #777907 – src:hunt: "hunt: ftbfs with GCC-5"
    apply patch from Nicholas Luedtke, upload to DELAYED/5
  • #777920 – src:isdnutils: "isdnutils: ftbfs with GCC-5"
    add patch to build with -fgnu89-inline; upload to DELAYED/5
  • #778019 – src:multimon: "multimon: ftbfs with GCC-5"
    build with -fgnu89-inline; upload to DELAYED/5
  • #778068 – src:pork: "pork: ftbfs with GCC-5"
    build with -fgnu89-inline, QA upload
  • #778098 – src:quarry: "quarry: ftbfs with GCC-5"
    build with -std=gnu89, upload to DELAYED/5, then rescheduled to 0-day with maintainer's permission
  • #778099 – src:ratbox-services: "ratbox-services: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5, later cancelled because package is about to be removed (#793408)
  • #778109 – src:s51dude: "s51dude: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5
  • #778116 – src:shell-fm: "shell-fm: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778119 – src:simulavr: "simulavr: ftbfs with GCC-5"
    apply patch from Brett Johnson, QA upload
  • #778120 – src:sipsak: "sipsak: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778122 – src:skyeye: "skyeye: ftbfs with GCC-5"
    build with -fgnu89-inline, QA upload
  • #778140 – src:tcpcopy: "tcpcopy: ftbfs with GCC-5"
    add patch backported from upstream git, upload to DELAYED/5
  • #778145 – src:thewidgetfactory: "thewidgetfactory: ftbfs with GCC-5"
    add missing #include, upload to DELAYED/5
  • #778164 – src:vtun: "vtun: ftbfs with GCC-5"
    add patch from Tim Potter, upload to DELAYED/5
  • #790464 – flow-tools: "Please drop conditional build-depend on libmysqlclient15-dev"
    drop obsolete dependency, NMU
  • #793336 – src:libdevel-profile-perl: "libdevel-profile-perl: FTBFS with perl 5.22 in experimental (MakeMaker changes)"
    finish and upload package modernized by XTaran (pkg-perl)
  • #793580 – libb-hooks-parser-perl: "libb-hooks-parser-perl: B::Hooks::Parser::Install::Files missing"
    investigate and forward upstream, upload new upstream release later (pkg-perl)

26 July, 2015 09:18PM

hackergotchi for Lunar

Lunar

Reproducible builds: week 13 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

  • Emmanuel Bourg uploaded maven-archiver/2.6-3 which fixed parsing DEB_CHANGELOG_DATETIME with non English locales.
  • Emmanuel Bourg uploaded maven-repo-helper/1.8.12 which always uses the same system-independent encoding when transforming the pom files.
  • Piotr Ożarowski uploaded dh-python/2.20150719 which makes the order of the generated maintainer scripts deterministic. Original patch by Chris Lamb.

akira uploaded a new version of doxygen in the experimental “reproducible” repository, incorporating an upstream patch for SOURCE_DATE_EPOCH and now producing timezone-independent timestamps.

Dhole updated Peter De Wachter's patch on ghostscript to use SOURCE_DATE_EPOCH and use UTC as the timezone. A modified package is now being experimented with.
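
The SOURCE_DATE_EPOCH convention behind these patches is simple: if that environment variable is set, the build tool uses its value (seconds since the Unix epoch, rendered in UTC) instead of the current time for any embedded timestamps, so repeated builds produce identical, timezone-independent output. A minimal sketch of the idea in Python (the function names are illustrative, not part of any of the patched tools):

```python
import os
import time

def build_time():
    """Return the time to embed in build output.

    If SOURCE_DATE_EPOCH is set, use it (seconds since the Unix
    epoch, interpreted in UTC) so repeated builds embed the same
    timestamp; otherwise fall back to the current time.
    """
    sde = os.environ.get("SOURCE_DATE_EPOCH")
    if sde is not None:
        return time.gmtime(int(sde))
    return time.gmtime()

def formatted_build_date():
    # A fixed, locale-independent rendering of the chosen time.
    return time.strftime("%Y-%m-%d %H:%M:%S UTC", build_time())
```

With SOURCE_DATE_EPOCH exported in the build environment (dpkg can derive it from the latest debian/changelog entry), every rebuild renders the same date string regardless of the builder's clock or timezone.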

Packages fixed

The following 14 packages became reproducible due to changes in their build dependencies: bino, cfengine2, fwknop, gnome-software, jnr-constants, libextractor, libgtop2, maven-compiler-plugin, mk-configure, nanoc, octave-splines, octave-symbolic, riece, vdr-plugin-infosatepg.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #792943 on argus-client by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792945 on authbind by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792947 on cvs-mailcommit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792949 on chimera2 by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792950 on ccze by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792951 on dbview by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792952 on dhcpdump by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792953 on dhcping by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792955 on dput by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792958 on dtaus by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792959 on elida by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792961 on enemies-of-carlotta by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792963 on erc by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792965 on fastforward by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792967 on fgetty by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792969 on flowscan by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792971 on junior-doc by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792972 on libjama by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792973 on liblip by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792974 on liblockfile by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792975 on libmsv by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792976 on logapp by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792977 on luakit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792978 on nec by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792979 on runit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792980 on tworld by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792981 on wmweather by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792982 on ftpcopy by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792983 on gerstensaft by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792984 on integrit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792985 on ipsvd by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792986 on uruk by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792987 on jargon by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792988 on xbs by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792989 on freecdb by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792990 on skalibs by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792991 on gpsmanshp by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792993 on cgoban by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792994 on angband-doc by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792995 on abook by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792996 on bcron by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792998 on chiark-utils by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792999 on console-cyrillic by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793000 on beav by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793001 on blosxom by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793002 on cgilib by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793003 on daemontools by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793004 on debdelta by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793005 on checkpw by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793006 on dropbear by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793126 on torbutton by Dhole: set TZ=UTC when calling zip.
  • #793127 on pdf.js by Dhole: set TZ=UTC when calling zip.
  • #793300 on deejayd by Dhole: set TZ=UTC when calling zip.
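The change requested in each of the mtime bugs above amounts to stamping every file touched during the build with one fixed time. A minimal sketch (the timestamp is an arbitrary stand-in for the latest debian/changelog entry, and this is an illustration, not the patches' actual code):

```python
import os
import tempfile

def normalize_mtimes(root, reference_time):
    """Set the mtime of every file under root to reference_time,
    mimicking what the bugs above request for files modified during builds."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            os.utime(path, (reference_time, reference_time))

# Demo with a throwaway tree and a hypothetical changelog date.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "generated.txt")
    with open(path, "w") as f:
        f.write("built artifact\n")
    normalize_mtimes(tmp, 1437868800)
    print(int(os.stat(path).st_mtime))  # → 1437868800
```

With every generated file carrying the same timestamp, rebuilds of the same source produce bit-identical archives regardless of when they ran.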

Packages identified as failing to build from source with no bugs filed and older than 10 days are scheduled more often now (except in experimental). (h01ger)

Package reviews

178 obsolete reviews have been removed, 59 added and 122 updated this week.

New issue identified this week: random_order_in_ruby_rdoc_indices.

18 new bugs for packages failing to build from sources have been reported by Chris West (Faux), and h01ger.

26 July, 2015 04:03PM

Reproducible builds: week 12 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

Eric Dorland uploaded automake-1.15/1:1.15-2 which makes the output of mdate-sh deterministic. Original patch by Reiner Herrmann.

Kenneth J. Pronovici uploaded epydoc/3.0.1+dfsg-8 which now honors SOURCE_DATE_EPOCH. Original patch by Reiner Herrmann.

Chris Lamb submitted a patch to dh-python to make the order of the generated maintainer scripts deterministic. Chris also offered a fix for a source of non-determinism in dpkg-shlibdeps when packages have alternative dependencies.

Dhole provided a patch to add support for SOURCE_DATE_EPOCH to gettext.
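The convention these patches implement is simple: a tool that would embed "now" in its output should prefer the SOURCE_DATE_EPOCH environment variable when it is set. A sketch of the idea:

```python
import os
import time

def build_timestamp():
    """Timestamp to embed in generated output, honoring the
    SOURCE_DATE_EPOCH convention used by the reproducible-builds effort."""
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    if epoch is not None:
        # Clamp to the given epoch so two builds of the same source agree.
        return int(epoch)
    return int(time.time())  # fall back to the wall clock

os.environ["SOURCE_DATE_EPOCH"] = "1437868800"
print(time.strftime("%Y-%m-%d", time.gmtime(build_timestamp())))  # → 2015-07-26
```

The package build tooling exports the variable (derived from debian/changelog), and every tool in the chain that honors it stops injecting the current time into the output.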

Packages fixed

The following 78 packages became reproducible in our setup due to changes in their build dependencies: chemical-mime-data, clojure-contrib, cobertura-maven-plugin, cpm, davical, debian-security-support, dfc, diction, dvdwizard, galternatives, gentlyweb-utils, gifticlib, gmtkbabel, gnuplot-mode, gplanarity, gpodder, gtg-trace, gyoto, highlight.js, htp, ibus-table, impressive, jags, jansi-native, jnr-constants, jthread, jwm, khronos-api, latex-coffee-stains, latex-make, latex2rtf, latexdiff, libcrcutil, libdc0, libdc1394-22, libidn2-0, libint, libjava-jdbc-clojure, libkryo-java, libphone-ui-shr, libpicocontainer-java, libraw1394, librostlab-blast, librostlab, libshevek, libstxxl, libtools-logging-clojure, libtools-macro-clojure, litl, londonlaw, ltsp, macsyfinder, mapnik, maven-compiler-plugin, mc, microdc2, miniupnpd, monajat, navit, pdmenu, pirl, plm, scikit-learn, snp-sites, sra-sdk, sunpinyin, tilda, vdr-plugin-dvd, vdr-plugin-epgsearch, vdr-plugin-remote, vdr-plugin-spider, vdr-plugin-streamdev, vdr-plugin-sudoku, vdr-plugin-xineliboutput, veromix, voxbo, xaos, xbae.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

The statistics on the main page of are now updated every five minutes. A random unreviewed package is suggested in the “look at a package” form on every build. (h01ger)

A new package set based on the new Core Internet Infrastructure census has been added. (h01ger)

Testing of FreeBSD has started, though there are no results yet. More details have been posted to the freebsd-hackers mailing list. The build is run on a new virtual machine running FreeBSD 10.1 with 3 cores and 6 GB of RAM, also sponsored by ProfitBricks.

strip-nondeterminism development

Andrew Ayer released version 0.009 of strip-nondeterminism. The new version will strip locales from Javadoc, include the name of files causing errors, and ignore unhandled (but rare) zip64 archives.
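strip-nondeterminism itself is Perl, but the idea behind its zip handling is easy to sketch in Python (a simplified illustration of the approach, not the tool's actual behavior):

```python
import io
import zipfile

def normalize_zip(data, fixed_date=(1980, 1, 1, 0, 0, 0)):
    """Rewrite a zip archive so every member carries the same fixed
    timestamp, removing one common source of nondeterminism."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(data)) as src, \
         zipfile.ZipFile(out, "w") as dst:
        for info in src.infolist():
            cleaned = zipfile.ZipInfo(info.filename, date_time=fixed_date)
            dst.writestr(cleaned, src.read(info.filename))
    return out.getvalue()

def make(date_time):
    """Build a one-member archive stamped with the given time."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr(zipfile.ZipInfo("hello.txt", date_time=date_time), "hi")
    return buf.getvalue()

# Two archives built at different times differ, but normalize identically.
a = make((2015, 7, 26, 10, 0, 0))
b = make((2015, 7, 26, 11, 30, 0))
print(a == b, normalize_zip(a) == normalize_zip(b))
```

The real tool applies the same "strip what varies between otherwise-identical builds" treatment to many more formats (jar, gzip, PNG, Javadoc, and so on).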

debbindiff development

Lunar continued his major refactoring to enhance code reuse and pave the way to fuzzy-matching and parallel processing. Most file comparators have now been converted to the new class hierarchy.

In order to add support for more archive formats, work has started on packaging Python bindings for libarchive. While getting support for more archive formats through a common interface is very nice, libarchive is a stream-oriented library and might perform badly with how debbindiff currently works. Time will tell if better solutions need to be found.

Documentation update

Lunar started a Reproducible builds HOWTO intended to explain the different aspects of making software build reproducibly to the different audiences that might have to get involved like software authors, producers of binary packages, and distributors.

Package reviews

17 obsolete reviews have been removed, 212 added and 46 updated this week.

15 new bugs for packages failing to build from sources have been reported by Chris West (Faux), and Mattia Rizzolo.


Lunar presented Debian efforts and some recipes on making software build reproducibly at Libre Software Meeting 2015. Slides and a video recording are available.


h01ger, dkg, and Lunar attended a Core Infrastructure Initiative meeting. The progress made and the tools developed for the Debian efforts were shown. Several discussions also helped in getting a better understanding of the needs of other free software projects regarding reproducible builds. The idea of a global append-only log, similar to the logs used for Certificate Transparency, came up on multiple occasions. Using such append-only logs for keeping records of sources and build results has been given the name “Binary Transparency Logs”. They would at least help identify a compromised software signing key. Whether the benefits of using such logs justify the costs needs more research.
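The tamper-evidence property of such a log is easy to illustrate with a toy hash chain (real transparency logs use Merkle trees for efficient proofs, but the core idea is the same; the record format here is hypothetical):

```python
import hashlib

class AppendOnlyLog:
    """A toy hash chain: each entry commits to everything before it, so a
    record (e.g. a source hash plus its build result) cannot be silently
    rewritten later without invalidating every subsequent head."""

    def __init__(self):
        self.entries = []
        self.heads = [hashlib.sha256(b"genesis").hexdigest()]

    def append(self, record):
        prev = self.heads[-1]
        head = hashlib.sha256((prev + record).encode()).hexdigest()
        self.entries.append(record)
        self.heads.append(head)
        return head

    def verify(self):
        """Recompute the chain and compare against the recorded heads."""
        prev = hashlib.sha256(b"genesis").hexdigest()
        for record, head in zip(self.entries, self.heads[1:]):
            prev = hashlib.sha256((prev + record).encode()).hexdigest()
            if prev != head:
                return False
        return True

log = AppendOnlyLog()
log.append("source:abc123 build:def456")   # hypothetical record format
log.append("source:abc124 build:def457")
print(log.verify())                         # → True
log.entries[0] = "source:evil build:def456"  # tampering is detected
print(log.verify())                         # → False
```

A production Binary Transparency Log would additionally need signed tree heads and efficient inclusion proofs, which is where Merkle trees come in.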

26 July, 2015 03:41PM


Dirk Eddelbuettel

RcppZiggurat 0.1.3: Faster Random Normal Draws


After a slight hiatus since the last release in early 2014, we are delighted to announce a new release of RcppZiggurat which is now on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator which provides very fast draws from a Normal distribution.

The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl---all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

This release contains a few internal cleanups relative to the last release. It was triggered by a very helpful email from Brian Ripley, who noticed compiler warnings on the Solaris platform due to my incorrect use of fabs() on integer variables.

The NEWS file entry below lists all changes.

Changes in version 0.1.3 (2015-07-25)

  • Use the SHR3 generator for the default implementation just like Leong et al do, making our default implementation identical to theirs (but 32- and 64-bit compatible)

  • Switched generators from float to double ensuring that results are identical on 32- and 64-bit platforms

  • Simplified builds with respect to GSL use via the RcppGSL package; added a seed setter for the GSL variant

  • Corrected use of fabs() to abs() on integer variables, with a grateful nod to Brian Ripley for the hint (based on CRAN checks on the beloved Slowlaris machines)

  • Accelerated Travis CI tests by relying exclusively on r-cran-* packages from the PPAs by Michael Rutter and myself

  • Updated DESCRIPTION and NAMESPACE according to current best practices, and R-devel CMD check --as-cran checks

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 July, 2015 01:09PM


Steinar H. Gunderson

DIY web video streaming

I've recently taken a new(ish) look at streaming video for the web, in terms of what formats are out there. (When I say streaming, I mean live video, not static files where you can seek etc.) There's a bewildering array; most people would probably use a ready-made service such as Twitch, Ustream or YouTube, but those do have certain aspects that are less than ideal; for instance, you might need to pay (or have your viewers endure ads), you might be shut down at any time if they don't like your content (e.g. sending non-gaming content on Twitch, or using copyrighted music on YouTube), or the video quality might be less than ideal.

So what I'm going to talk about is mainly what format to choose; there are solutions that allow you to stream to many, but a) the CPU amount you need is largely proportional to the number of different codecs you want to encode to, and b) I've never really seen any of these actually work well in practice; witness the Mistserver fiasco at FOSDEM last year, for instance (full disclosure: I was involved in the 2014 FOSDEM streaming, but not in 2015). So the goal is to find the minimum number of formats to maximize quality and client support.

So, let's have a look at the candidates:

We'll start in a corner with HLS. The reason is that mobile is becoming increasingly important, and Mobile Safari (iOS) basically only supports HLS, so if you want iOS support, this has to be high on your list. HLS is basically H.264+AAC in a MPEG-TS mux, split over many files (segments), with a .m3u8 file that is refreshed to inform about new segments. This can be served by anything that serves HTTP (including your favorite CDN), and if your encoder is up to it, you can get adaptive bandwidth control (which works so-so, but better than nothing), but unfortunately it also has high latency, and MPEG-TS is a pretty high-overhead mux (6–7%, IIRC).
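The playlist side of HLS is simple enough to illustrate; a sketch that emits a minimal live .m3u8 (segment names hypothetical):

```python
def live_playlist(first_seq, segments, target_duration=10):
    """Build a minimal HLS live playlist. Note there is no
    #EXT-X-ENDLIST tag: players keep re-fetching the playlist to
    discover newly appended segments, which is where the latency comes from."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:%d" % target_duration,
        "#EXT-X-MEDIA-SEQUENCE:%d" % first_seq,
    ]
    for name, duration in segments:
        lines.append("#EXTINF:%.3f," % duration)
        lines.append(name)
    return "\n".join(lines) + "\n"

print(live_playlist(42, [("seg42.ts", 9.96), ("seg43.ts", 10.0)]))
```

Since a client only learns about a segment once the refreshed playlist lists it, the end-to-end delay is at least a couple of segment durations, which is why HLS latency is inherently high.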

Unfortunately, basically nothing but Safari (iOS/OS X) supports HLS. (OK, that's not true; the Android browser does from Android 4.0, but supposedly 4.0 is really buggy and you really want something newer.) So unless you're in la-la land where nothing but Apple counts, you'll not only need HLS, but also something else. (Well, there's a library that claims to make Chrome/Firefox support HLS, but it's basically a bunch of JavaScript that remuxes each segment from MPEG-TS to MP4 on the fly, and hangs the entire streaming process while doing so.) Thankfully FFmpeg can remux from some other format into HLS, so it's not that painful.

MPEG-DASH is supposedly the new hotness, but like anything container-wise from MPEG, it's huge, tries to do way too many things and is generally poorly supported. Basically it's HLS (with the same delay problems) except that you can support a bazillion different codecs and multiple containers, and actual support out there is poor. The only real way to get it into a browser (assuming you can find anything stable that encodes an MPEG-DASH stream) is to load a 285kB JavaScript library into your browser, which tries to do all the metadata parsing in JavaScript, download the pieces with XHR and then piece them into the <video> tag with the Media Source Extensions API. And to pile on the problems, you can't really take an MPEG-DASH stream and feed it into something that's not a web browser, e.g. current versions of MPlayer/VLC/XBMC. (This matters if you have e.g. a separate HTPC that's remote-controlled. Admittedly, it might be a small segment depending on your audience.) Perhaps it will get better over time, but for the time being, I cannot really recommend it unless you're a huge corporation and have the resources to essentially make your own video player in JavaScript (YouTube or Twitch can, but the rest of us really can't).

Of course, a tried-and-tested solution is Flash, with its FLV and RTMP offerings. RTMP (in this context) is basically FLV over a different transport from HTTP, and I've found it to be basically pain from one end to the other; the solutions you get are either expensive (Adobe's stuff, Wowza), scale poorly (Wowza), or are buggy and non-interoperable in strange ways (nginx-rtmp). But H.264+AAC in FLV over HTTP (e.g. with VLC plus my own Cubemap reflector) works well against e.g. JW Player, and has good support… on desktop. (There's one snag, though, in that if you stream from HTTP, JW Player will believe that you're streaming a static file, and basically force you to zero client-side buffer. Thus, it ends up being continuously low on buffer, and you need some server-side trickery to give it some more leeway against network bumps and not show its dreaded buffering spinner.) With mobile becoming more important, and people increasingly calling for the death of Flash, I don't think this is the solution for tomorrow, although it might be okay for today.

Then there's WebM (in practice VP8+Vorbis in a Matroska mux; VP9 is too slow for good quality in realtime yet, AFAIK). If worries about format patents are high on your list, this is probably a good choice. Also, you can stick it straight into <video> (e.g. with VLC plus my own Cubemap reflector), and modulo some buffering issues, you can go without Flash. Unfortunately, VP8 trails pretty far behind H.264 on picture quality, libvpx has strange bugs and my experience is that bitrate control is rather lacking, which can lead to your streams getting subtle, hard-to-debug issues with getting through to the actual user. Furthermore, support is lackluster; no support for IE, no support for iOS, no hardware acceleration on most (all?) phones so you burn your battery.

Finally there's MP4, which is formally MPEG-4 Part 14, which in turn is based on MPEG-4 Part 12. Or something. In any case, it's the QuickTime mux given a blessing as official, and it's a relatively common format for holding H.264+AAC. MP4 is one of those formats that support a zillion different ways of doing everything; the classic case is when someone's made a file in QuickTime and it has the “moov” box at the end, so you can't play any of your 2 GB file until you have the very last bytes, too. But after I filed a VLC bug and Martin Storsjö picked it up, the ffmpeg mux has gotten a bunch of fixes to produce MP4 files that are properly streamable.
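The box structure at issue is easy to illustrate. A sketch (ignoring 64-bit box sizes) that lists the top-level boxes of a synthetic file shows why a trailing "moov" blocks streaming, since a player needs the index before the media:

```python
import struct

def top_level_boxes(data):
    """List (type, size) of top-level MP4 boxes. A player needs 'moov'
    (the index) before 'mdat' (the media) to start playback while the
    file is still arriving."""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, kind = struct.unpack_from(">I4s", data, offset)
        if size < 8:
            break  # 0/1 mean "to EOF"/64-bit size; not handled in this sketch
        boxes.append((kind.decode("ascii"), size))
        offset += size
    return boxes

def box(kind, payload=b""):
    """Assemble a length-prefixed box from a 4-byte type and payload."""
    return struct.pack(">I4s", 8 + len(payload), kind) + payload

# A QuickTime-style file with the index at the end is unstreamable; a
# "faststart" or fragmented layout puts the index first. Payloads are dummies.
unstreamable = box(b"ftyp", b"isom") + box(b"mdat", b"\x00" * 16) + box(b"moov", b"\x00" * 8)
streamable = box(b"ftyp", b"isom") + box(b"moov", b"\x00" * 8) + box(b"mdat", b"\x00" * 16)
print([k for k, _ in top_level_boxes(unstreamable)])  # → ['ftyp', 'mdat', 'moov']
print([k for k, _ in top_level_boxes(streamable)])    # → ['ftyp', 'moov', 'mdat']
```

The fragmented-MP4 flags in the VLC incantation further below (empty_moov, frag_keyframe) take this a step further: the index is emitted up front and the media follows in self-contained fragments, so the mux never needs to seek back.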

And browsers have improved as well; recent versions of Chrome (both desktop and Android) stream MP4 pretty well, IE11 reportedly does well (although I've had reports of regressions, where the user has to switch tabs once before the display actually starts updating), Firefox on Windows plays these fine now, and I've reported a bug against GStreamer to get these working on Firefox on Linux (unfortunately it will be a long time until this works out of the box for most people).

So that's my preferred solution right now; you need a pretty recent ffmpeg for this to work, and if you want to use MP4 in Cubemap, you need this VLC bugfix (unfortunately not in 2.2.0, which is the version in Debian stable), but combined with HLS as an iOS fallback, it will give you great quality on all platforms, good browser coverage, reasonably low latency (for non-HLS clients) and good playability in non-web clients. It won't give you adaptive bitrate selection, though, and you can't hand it to your favorite CDN because they'll probably only want to serve static files (and I don't think there's a market for a Cubemap CDN :-) ). The magic VLC incantation is:

--sout '#transcode{vcodec=h264,vb=3000,acodec=mp4a,ab=256,channels=2,fps=50}:std{access=http{mime=video/mp4},mux=ffmpeg{mux=mp4},dst=:9094}' --sout-avformat-options '{movflags=empty_moov+frag_keyframe+default_base_moof}'

26 July, 2015 11:00AM


Norbert Preining

Challenging riddle from The Talos Principle

Having recently complained that Portal 2 was too easy, I have to say that The Talos Principle is challenging. For a solution that, once known, takes only a few seconds, I often have to wring my brain over the logistics for a long, long time. Here is a nice screenshot from one of the easier riddles, but with great effect.


A great game, very challenging. A lengthier review will come when I have finished the game.


26 July, 2015 08:53AM by Norbert Preining

July 25, 2015


Dirk Eddelbuettel

Rcpp 0.12.0: Now with more Big Data!


A new release 0.12.0 of Rcpp arrived on the CRAN network for GNU R this morning, and I also pushed a Debian package upload.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 423 packages on CRAN depend on Rcpp for making analyses go faster and further. Note that this is 60 more packages since the last release in May! Also, BioConductor adds another 57 packages, and casual searches on GitHub suggest many more.

And according to Andrie De Vries, Rcpp now has a page rank of one on CRAN as well!

And with this release, Rcpp also becomes ready for Big Data, or, as they call it in Texas, Data.

Thanks to a lot of work and several pull requests by Qiang Kou, support for R_xlen_t has been added.

That means we can now do stunts like

R> library(Rcpp)
R> big <- 2^31-1
R> bigM <- rep(NA, big)
R> bigM2 <- c(bigM, bigM)
R> cppFunction("double getSz(LogicalVector x) { return x.length(); }")
R> getSz(bigM)
[1] 2147483647
R> getSz(bigM2)
[1] 4294967294

where prior versions of Rcpp would just have said

> getSz(bigM2)
Error in getSz(bigM2) :
  long vectors not supported yet: ../../src/include/Rinlinedfuns.h:137

which is clearly not Texas-style. Another welcome change, also thanks to Qiang Kou, adds encoding support for strings.

A lot of other things got polished. We are still improving exception handling as we still get the odd curveball in corner cases. Matt Dziubinski corrected the var() computation to use the proper two-pass method and added better support for lambda functions in Sugar expressions using sapply(), Qiang Kou added more pull requests mostly for string initialization, Romain added a pull request which made data frame creation a little more robust, and JJ was his usual self in tirelessly looking after all aspects of Rcpp Attributes.
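The advantage of the two-pass variance method is easy to demonstrate: the textbook one-pass formula E[x²] − E[x]² suffers catastrophic cancellation when the mean is large relative to the spread (a sketch of the general numerical point, not Rcpp's C++ code):

```python
def var_one_pass(xs):
    """Naive E[x^2] - E[x]^2 formula: cancellation destroys precision
    when the mean is large relative to the spread."""
    n = len(xs)
    s = sum(xs)
    sq = sum(x * x for x in xs)
    return (sq - s * s / n) / (n - 1)

def var_two_pass(xs):
    """Two-pass method: compute the mean first, then sum squared deviations."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

# Small spread around a huge offset: the true sample variance is exactly 1.0.
data = [1e9 + x for x in (0.0, 1.0, 2.0)]
print(var_two_pass(data))  # → 1.0
print(var_one_pass(data))  # wildly wrong: the squares overflow double precision
```

The two-pass result is exact here, while the one-pass version subtracts two numbers near 3×10¹⁸ whose last ~9 bits are already rounding noise.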

As always, you can follow the development via the GitHub repo and particularly the Issue tickets and Pull Requests. And any discussions, questions, ... regarding Rcpp are always welcome at the rcpp-devel mailing list.

Last but not least, we are also extremely pleased to announce that Qiang Kou has joined us in the Rcpp-Core team. We are looking forward to a lot more awesome!

See below for a detailed list of changes extracted from the NEWS file.

Changes in Rcpp version 0.12.0 (2015-07-24)

  • Changes in Rcpp API:

    • Rcpp_eval() no longer uses R_ToplevelExec when evaluating R expressions; this should resolve errors where calling handlers (e.g. through suppressMessages()) were not properly respected.

    • All internal length variables have been changed from R_len_t to R_xlen_t to support vectors longer than 2^31-1 elements (via pull request 303 by Qiang Kou).

    • The sugar function sapply now supports lambda functions (addressing issue 213 thanks to Matt Dziubinski)

    • The var sugar function now uses a more robust two-pass method, supports complex numbers, with new unit tests added (via pull request 320 by Matt Dziubinski)

    • String constructors now allow encodings (via pull request 310 by Qiang Kou)

    • String objects are preserving the underlying SEXP objects better, and are more careful about initializations (via pull requests 322 and 329 by Qiang Kou)

    • DataFrame constructors are now a little more careful (via pull request 301 by Romain Francois)

    • For R 3.2.0 or newer, Rf_installChar() is used instead of Rf_install(CHAR()) (via pull request 332).

  • Changes in Rcpp Attributes:

    • Use more robust method of ensuring unique paths for generated shared libraries.

    • The evalCpp function now also supports the plugins argument.

    • Correctly handle signature termination characters ('{' or ';') contained in quotes.

  • Changes in Rcpp Documentation:

    • The Rcpp-FAQ vignette was once again updated with respect to OS X issues and Fortran libraries needed for e.g. RcppArmadillo.

    • The included Rcpp.bib bibtex file (which is also used by other Rcpp* packages) was updated with respect to its CRAN references.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 July, 2015 07:01PM


Steinar H. Gunderson

Stream audio level monitoring with ebumeter

When monitoring stream sound levels, seemingly VLC isn't quite there; at least the VU meter on mine shows unusably low levels (and I think it might even stick a compressor in there, completely negating the point). So I wanted to write my own, but while searching for the right libraries, I found ebumeter.

So I spent a fair amount of time just getting it to run in the first place; it uses JACK, which I've never ever had working before. But I guess there's a first time for everything? I wrote up a quick guide for others who are completely unfamiliar with it:

First, install the JACK daemon and qjackctl (Debian packages jackd2 and qjackctl), in addition to ebumeter itself. I've been using mplayer to play the streams, but you can use whatever with JACK output.

Then, start JACK:

jack_control start

and start ebumeter plus give the stream some input:

ebumeter &
mplayer -ao jack http://whatever…

You'll notice that ebumeter isn't showing anything yet, because the default routing for MPlayer is to go to the system output. Open qjackctl and go to the Connect dialog. You should see the running MPlayer and ebumeter, and you should see that MPlayer is connected to “system” (not ebumeter as we'd like).

So disconnect all (ignore the warning). Then expand the MPlayer and ebumeter clients, select out_0, then in.L and choose Connect. Do the same with the other channel, and tada! There should be a meter showing EBU R128 levels, including peak (unfortunately it doesn't seem to show number of clipped samples, but I can live with that).

Unfortunately the connections are not persistent. To make them persistent, you need to go to Patchbay, create a new patchbay, accept when it asks you if you want to start from the current connections, then save, and finally activate. As long as the qjackctl dialog is open (?), new MPlayer JACK sessions will now be autoconnected to ebumeter, no matter what the pid is. If you want to distinguish between different MPlayers, you can always give them a different name as an argument to the -ao jack parameter.

25 July, 2015 05:08PM

Sandro Tosi

How to change your Google services location

Several services in Google depend on your location, in particular Google Play (things like apps, devices, and content can be restricted to some countries). But what do you do if you relocate and want to update your information to access those exclusive services? There are lots of stories out there about making a payment on the Play Store with updated credit card info and so on; the actual procedure is a bit different, but not by much.

There are 3 places where you need to update your location information, all of them on Google Payments:

  1. in Payment Methods, change the billing address of all your payment methods;
  2. in Address Book, change the default shipping address;
  3. in Settings, change your home address.
Once that's done, wait a few minutes, and you might also want to log out and back in to your Google account (even though Google support will tell you it's not necessary, it didn't work for me otherwise), and you should be ready to go.

25 July, 2015 01:06PM by Sandro Tosi

DICOM viewer and converter in Debian

DICOM is a standard for your RX/CT/MRI scans and the format in which your results will most often be given to you, along with Win/MacOS viewers. But what about Debian? The best I could find is Ginkgo CADx (package ginkgocadx).

If you want to convert those DICOM files into images you can use convert (I don't know why I was surprised to find out ImageMagick can handle it).

PS: here a description of the format.

25 July, 2015 12:53PM by Sandro Tosi


Norbert Preining

PiwigoPress release 2.30

I just pushed a new release of PiwigoPress (main page, WordPress plugin dir) to the WordPress servers. This release incorporates some new features, mostly contributed by Anton Lavrov (big thanks!).


The new features are:

  • Shortcode: multiple ids can be specified, including ranges (not supported in the shortcode generator)
  • Display of image name/title: in addition to the description, the name/title can also be displayed. Three possible settings can be chosen: 0 – never show titles (the default, as before), 1 – always show titles, and ‘auto’ – show only titles that do not look like auto-generated titles. (supported in the shortcode generator)
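As a hypothetical illustration of the ‘auto’ mode (the plugin is PHP, and this regex and mode encoding are my assumptions, not its actual code), the heuristic might simply hide titles that look like camera-generated filenames:

```python
import re

# Hypothetical patterns for camera-generated names (e.g. IMG_1234, DSC_0042).
AUTO_TITLE = re.compile(r"^(?:IMG|DSC|DSCN|P\d{7}|\d{8}_\d{6})[_-]?\d*$",
                        re.IGNORECASE)

def show_title(title, mode):
    """Decide whether to display an image title under the three settings
    described above: 0 = never, 1 = always, 'auto' = only human-looking titles."""
    if mode == 0:
        return False
    if mode == 1:
        return True
    # 'auto': show only titles that do not look machine-generated
    return not AUTO_TITLE.match(title.strip())

print(show_title("IMG_1234", "auto"), show_title("Sunset over Kyoto", "auto"))
# → False True
```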

I also checked that the plugin runs with the soon to be released WordPress 4.3, and fixed a small problem with the setting of ‘albumpicture’ not being saved.

That’s all. Enjoy, and leave your wishlist items and complaints at the issue tracker on the GitHub project piwigopress.


25 July, 2015 06:11AM by Norbert Preining


Michal Čihař

Migrating phpMyAdmin from

Some time ago we decided to move phpMyAdmin out of services. This was mostly motivated by issues with bundling crapware with installers (though we were not affected), but we also missed some features that we would like to have and that were not possible there.

The project relied on several services there, the biggest ones being website and downloads hosting, issue tracking and mailing lists. We chose a different approach for each of these.

First, we moved the website and downloads away. Thanks to a generous hosting offer, everything went quite smoothly and we now have an HTTPS-secured website and downloads; see our announcement. Oh, and on the way we started to PGP-sign the releases as well, so you can verify your download.

Shortly after this, the old host was hit by major problems with its infrastructure. Unfortunately we were not yet completely ready with the rest of the migration, but this definitely pushed us to make progress faster.

During the outage, we opened up an issue tracker on GitHub to be able to receive bug reports from our users. In the background I worked on the issue migration. The good news is that as of now almost all issues are migrated. There are a few missing ones, but these will hopefully be handled in the upcoming days as well.

Last but not least, we had mailing lists there as well. We briefly discussed the available options and decided to run our own mail server for these. It will allow us greater flexibility while still using well-known software in the background. Initial attempts with Mailman 3 failed, so we went back to Mailman 2, which is stable and easy to configure. See also our news posts for the official announcement.

Thanks for everything; it has been a great home for us, but now we have better places to live.

Filed under: English phpMyAdmin | 0 comments

25 July, 2015 04:00AM by Michal Čihař


Steve Kemp

We're in Finland now.

So we've recently spent our first week together in Helsinki, Finland.

Mostly this has been stress-free, but there are always oddities about living in new places, and moving to Europe didn't minimize them.

For the moment I'll gloss over the differences and instead document the computer problem I had. Our previous shared-desktop system had a pair of drives configured using software RAID. I pulled one of the drives to use in a smaller-cased system (smaller so it was easier to ship).

Only one drive of a pair being present makes mdadm scream, via email, once per day, with reports of failure.

The output of cat /proc/mdstat looked like this:

md2 : active raid1 sdb6[0] [LVM-storage-area]
      1903576896 blocks super 1.2 2 near-copies [2/1] [_U]
md1 : active raid10 sdb5[1] [/root]
      48794112 blocks super 1.2 2 near-copies [2/1] [_U]
md0 : active raid1 sdb1[0]  [/boot]
      975296 blocks super 1.2 2 near-copies [2/1] [_U]

See the "_" there? That's the missing drive. I couldn't remove the drive as it wasn't present on-disk, so this failed:

mdadm --fail   /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1
# repeat for md1, md2.

Similarly removing all "detached" drives failed, so the only thing to do was to mess around re-creating the arrays with a single drive:

lvchange -a n shelob-vol
mdadm --stop /dev/md2
mdadm --create /dev/md2 --level=1 --raid-devices=1 /dev/sdb6 --force

I did that on the LVM-storage area, and the /boot partition, but "/" is still to be updated. I'll use knoppix/similar to do it next week. That'll give me a "RAID" system which won't alert every day.
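The condition mdadm mails about can be spotted mechanically. A sketch that scans /proc/mdstat-style output for the degraded marker (the sample text is a simplified excerpt, not my exact output):

```python
import re

def degraded_arrays(mdstat_text):
    """Return the names of md arrays whose status field (e.g. [_U])
    shows a missing member, the condition mdadm alerts about daily."""
    degraded, current = [], None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
            continue
        # Status fields like [UU] or [_U] appear at the end of detail lines.
        m2 = re.search(r"\[([U_]+)\]\s*$", line)
        if current and m2 and "_" in m2.group(1):
            degraded.append(current)
    return degraded

sample = """\
md2 : active raid1 sdb6[0]
      1903576896 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sda1[0] sdb1[1]
      975296 blocks [2/2] [UU]
"""
print(degraded_arrays(sample))  # → ['md2']
```

After the re-creation trick above, the status fields read [U] instead of [_U], so neither this check nor mdadm's own monitor has anything left to complain about.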

Thanks to the joys of re-creation the UUIDs of the devices changed, so /etc/mdadm/mdadm.conf needed updating. I realized that too late, when grub failed to show the menu, because it didn't find its own UUID. A handy recipe for the future:

set prefix=(md/0)/grub/
insmod linux
linux (md/0)/vmlinuz-3.16.0-0.bpo.4-amd64 root=/dev/md1
initrd (md/0)//boot/initrd.img-3.16.0-0.bpo.4-amd64

25 July, 2015 02:00AM

July 24, 2015

Vincent Sanders

NetSurf developers and the Order of the Phoenix

Once more the NetSurf developers gathered to battle the forces of darkness, or as they are more commonly known web specifications.

Michael Drake, Vincent Sanders, John-Mark Bell and Daniel Silverstone at the Codethink Manchester offices

The fifth developer weekend was an opportunity for us to gather in a pleasant setting and work together in person. We were graciously hosted, once again, by Codethink in their Manchester offices.

Four developers managed to attend in person from around the UK: Michael Drake, John-Mark Bell, Daniel Silverstone and Vincent Sanders.

The main focus of the weekend's activities was to address two areas that have become overwhelmingly important: JavaScript and layout.

Although the browser obviously already has both these features, they are somewhat incomplete and incapable of supporting the features of the modern web.


The discussion started with JavaScript and its implementation. We had previously looked at the feasibility of changing our JavaScript engine from Spidermonkey to Duktape. We had decided this was a change we wanted to make when Duktape was mature enough to support the necessary features.

The main reasons for the change are that Spidermonkey is a poor fit for NetSurf, as it is relatively large and does not provide a stable API guarantee. The lack of a stable API requires extensive engineering to update to new releases. Additionally, support for compiling on minority platforms is very challenging, meaning that most platforms are stuck on version 1.7 or 1.85 (the current release is version 31, with 38 due).

We started the move to Duktape by creating a development branch, integrating the Duktape library and open-coding a minimal implementation of the core classes as a proof of concept. This work was mostly undertaken by Daniel with input from myself and John-Mark. It resulted in a build that was substantially smaller than one using Spidermonkey, with all the existing functionality our tests cover.

The next phase of this work is to take the prototype implementation and turn it into something that can be reliably used and covers the entire JavaScript DOM interface. This is no small job as there are at least 200 classes and 1500 methods and properties to implement.


The layout library design discussion was extensive and very involved. The layout engine is a browser's most important component. It takes all the information processed by the CSS and DOM libraries, applies a vast number of involved rules and produces a list of operations that can be rendered.

This reimplementation of our rendering engine has been in planning for many years. The existing engine stems from the browser's earliest days more than a decade ago and has many shortcomings in architecture and implementation that we hope to address.

The work has finally started on libnslayout with Michael taking the lead and defining the initial API and starting the laborious work of building the test harness, a feature the previous implementation lacked!

The second war begins

For a war you need people, and it is a little unfortunate that this was our lowest-ever turnout for the event. The same is true of the project overall, with declining numbers of commits and interest outside our core group. If anyone is interested, we are always happy to have new contributors, and there are opportunities to contribute in many areas, from image assets, through translations, to C programming.

We discussed some ways to encourage new developers and to try to get committed developers, especially for the minority platform frontends. The RISC OS frontend, for example, has needed a maintainer since the previous one stepped down. There was some initial response from its community when we announced the port was under threat of not being releasable in future, culminating in a total of two patches. Unfortunately nothing further came of this, and it appears our oldest frontend may soon become part of our history.

We also covered some issues from the bug tracker mostly to see if there were any patterns that we needed to address before the forthcoming 3.4 release.

There was discussion about recent improvements to the CI system which generate distribution packages from the development branch and how this could be extended to benefit more users. This also included authorisation to acquire storage and other miscellaneous items necessary to keep the project infrastructure running.

We managed over 20 hours of work in the two days and addressed our current major shortcomings. Now it just requires a great deal of programming to complete the projects started here.

24 July, 2015 12:53PM by Vincent Sanders

hackergotchi for Martin Michlmayr

Martin Michlmayr

Congratulations to Stefano Zacchiroli

Stefano Zacchiroli receiving the O'Reilly Open Source Award I attended OSCON's closing sessions today and was delighted to see my friend Stefano Zacchiroli (Zack) receive an O'Reilly Open Source Award. Zack acted as Debian Project Leader for three years, is working on important activities at the Open Source Initiative and the Free Software Foundation, and is generally an amazing advocate for free software.

Thanks for all your contributions, Zack, and congratulations!

24 July, 2015 11:21AM

Elena 'valhalla' Grandi

Old and new: furoshiki and electronics.


Yesterday at the local LUG (@Gruppo Linux Como) somebody commented on the mix of old and new in my cloth-wrapped emergency electronics kit (you know, the kind of thing you carry around with a microcontroller board and a few components in case you suddenly have an idea for a project :-) ).


This is the kind of thing it has right now: the components tend to change over time.


And yes, I admit I can only count up to 2, for higher numbers I carry a reference card :-)


Anyway, there was a bit of conversation about how this looked like a grandmother-ish thing, especially since it was in the same bag as a knitted WIP sock, and I mentioned the Japanese #furoshiki revival and how I believe that good old things are good, and good new things are good, so why not use them both?

Somebody else, who may or may not be @Davide De Prisco, asked me to let him have the links I mentioned, which include:

* Wikipedia page: Furoshiki
* Guide from the Japanese Ministry of the Environment on how to use a furoshiki (and the article
* A website with many other wrapping techniques

24 July, 2015 11:10AM by Elena ``of Valhalla''

hackergotchi for Simon Kainz

Simon Kainz

DUCK challenge: week 3

One more update on the DUCK challenge: in the current week, the following packages were fixed and uploaded to unstable:

So we had 10 packages fixed and uploaded by 8 different uploaders. A big "Thank You" to you!!

Since the start of this challenge, a total of 35 packages, uploaded by 25 different persons were fixed.

Here is a quick overview:

           Week 1  Week 2  Week 3  Week 4  Week 5  Week 6  Week 7
# Packages   10      15      10      -       -       -       -
Total        10      25      35      -       -       -       -

The list of the fixed and updated packages is available here. I will try to update this ~daily. If I missed one of your uploads, please drop me a line.

There is still lots of time till the end of DebConf15 and the end of the DUCK Challenge, so please get involved.

Previous articles are here: Week 1, Week 2.

24 July, 2015 06:30AM by Simon Kainz

July 23, 2015

Antoine Beaupré

Is it safe to use open wireless access points?

I sometimes get questions when people use my wireless access point, which, for as long as I can remember, has been open to everyone; that is without any form of password protection or encryption. I arguably don't use the access point much myself, as I prefer the wired connection for the higher bandwidth, security and reliability it provides.

Apart from convenience for myself and visitors, the main reason why I leave my wireless access open is that I believe in a free (both as in beer and freedom) internet, built with principles of solidarity rather than exploitation and profitability. In these days of ubiquitous surveillance, freedom often goes hand in hand with anonymity, which implies providing free internet access to everyone.

I also believe that, as more and more services get perniciously transferred to the global internet, access to the network is becoming a basic human right. This is therefore my small contribution to the struggle, now also part of the Réseau Libre project.

So here were my friend's questions, in essence:

My credit card info was stolen when I used a wifi hotspot in an airport... Should I use open wifi networks?

Is it safe to use my credit card for shopping online?

Here is a modified version of an answer I sent to a friend recently which I thought could be useful to the larger internet community. The short answer is "sorry about that", "it depends, you generally can, but be careful" and "your credit card company is supposed to protect you".


First off, sorry to hear that your credit card was stolen in an airport! That has to be annoying... Did the credit card company reimburse you? Normally, the whole point of credit cards is that they protect you in case of theft like this, and they are supposed to reimburse you if your credit card gets stolen or abused...

The complexity and unreliability of passwords

Now of course, securing every bit of your internet infrastructure helps in protecting against such attacks. However: there is a trade-off! First off, it does make it more complicated for people to join the network. You need to make up some silly password (which has its own security problems: passwords can be surprisingly easy to guess!) that you will post on the fridge or, worse, forget all the time!

And if it's on the fridge, anyone with a view of that darn fridge, be it a one-time visitor or a sneaky neighbor, can find the password and steal your internet access (although, granted, that won't allow them to directly spy on your internet connection).

In any case, if you choose to use a password, you should use the tricks I wrote in the koumbit wiki to generate the password and avoid writing it on the fridge.
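As an illustration (this is my own sketch, not the recipe from the koumbit wiki), Python's standard secrets module can generate a password that is genuinely hard to guess:

```python
import secrets
import string

# 16 characters drawn from a 62-symbol alphabet gives roughly 95 bits
# of entropy, far beyond anything you would make up yourself.
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(16))
print(password)  # a fresh random password on every run
```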

The false sense of security of wireless encryption

Second, it can also give a false sense of security: just because a wifi access point appears "secure" (ie. that the communication between your computer and the wifi access point is encrypted) doesn't mean the whole connection is secure.

In fact, one attack that can be performed against access points is exactly that: masquerading as an existing access point, with no security at all. That way, instead of connecting to the real, secure and trusted access point, you connect to an evil one which spies on your connection. Most computers will happily connect to such a hotspot, even with degraded security, without warning.

It may be what happened at the airport, in fact. Of course this particular attack is less likely to happen if you live in the middle of the woods than in an airport, but it's an important distinction to keep in mind, because the same attack can be performed beyond the wireless access point, for example by your countryside internet access provider or someone attacking it.

Your best protection for your banking details is to rely on good passwords (for your bank account) but also, and more importantly, on what we call end-to-end encryption. That is usually implemented using HTTPS, with a padlock icon in your address bar. This ensures that the communication between your computer and the bank or credit card company is secure; that is, no wifi access point or attacker between your computer and them can intercept your credit card number.
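As a sketch of the client side (Python here, but every modern browser does the equivalent), the standard ssl module enables exactly this verification by default:

```python
import ssl

# A default context refuses any connection whose certificate chain does
# not verify, or whose certificate does not match the hostname: this is
# what stops an intermediary from impersonating the bank.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```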

The flaws of internet security

Now unfortunately, even the HTTPS protocol doesn't bring complete security. For example, one attack, similar to the previous one, is to masquerade as a legitimate bank site, but either strip out the encryption or even fake the encryption.

So you also need to look at the address of the website you are visiting. Attackers are often pretty clever and will use many tricks to hide the real address of the website in the address bar. To work around this, I always explicitly type my bank website address ( in my case) directly myself instead of clicking on links, bookmarks or using a search engine to find my bank site.

In the case of credit cards, it is much trickier because when you buy stuff online, you end up putting that credit card number on different sites which you do not necessarily trust. There's no good solution but complaining to your credit card company if you believe a website you used has stolen your credit card details. You can also use services like Paypal, Dwolla or Bitcoin that hide your credit card details from the seller, if they support the service.

I usually try to avoid putting my credit card details on sites I do not trust, and limit myself to known parties (e.g. Via Rail, Air Canada, etc). Also, in general, I try to assume the network connection between me and the website I visit is compromised. This forced me to get familiar with online security and the use of encryption. It is more accessible to me than trying to secure the infrastructure I am using, because I often do not control it at all (e.g. internet cafes...).

Internet security is unfortunately a hard problem, and things are not getting easier as more things move online. The burden is on us programmers and system administrators to create systems that are more secure and intuitive for our users so, as I said earlier, sorry the internet sucks so much, we didn't think so many people would join the acid trip of the 70s. ;)

23 July, 2015 09:35PM

Elena 'valhalla' Grandi

A Makefile for OpenSCAD projects

A Makefile for OpenSCAD projects

When working with OpenSCAD to generate models for 3D printing, I find it convenient to be able to build .stl and .gcode files from the command line, especially in batch, so I've started writing a Makefile, improving it and making it more generic in subsequent iterations; I've added a page on my website to host my current version.

Most of my projects use the following directory structure.

  • my_project/conf/basic.ini…
    slic3r configuration files

  • my_project/src/object1.scad, my_project/src/object2.scad…
    models that will be exported

  • my_projects/src/lib/library1.scad, my_projects/src/lib/library2.scad…
    OpenSCAD files that don't correspond to a single object, included / used in the files above.

  • my_project/Makefile
    the Makefile shown below.

Running make will generate stl files for all of the models; make gcode adds .gcode files using slic3r; make build/object1.stl and make build/object1.gcode also work, when just one model is needed.

# Copyright 2015 Elena Grandi
# This work is free. You can redistribute it and/or modify it under the
# terms of the Do What The Fuck You Want To Public License, Version 2,
# as published by Sam Hocevar. See for more details.

BUILDDIR = build
CONFDIR = conf
SRCDIR = src

SLIC3R = slic3r


STL_TARGETS = $(patsubst $(SRCDIR)/%.scad,$(BUILDDIR)/%.stl,$(wildcard $(SRCDIR)/*.scad))
GCODE_TARGETS = $(patsubst $(SRCDIR)/%.scad,$(BUILDDIR)/%.gcode,$(wildcard $(SRCDIR)/*.scad))

.PHONY: all gcode clean


all: $(STL_TARGETS)

gcode: $(GCODE_TARGETS)

$(BUILDDIR)/%.stl: $(SRCDIR)/%.scad $(SRCDIR)/lib/*
	mkdir -p ${BUILDDIR}
	openscad -o $@ $<

$(BUILDDIR)/%.gcode: $(BUILDDIR)/%.stl ${CONFDIR}/basic.ini
	${SLIC3R} --load ${CONFDIR}/basic.ini $<

clean:
	rm -f ${BUILDDIR}/*.stl ${BUILDDIR}/*.gcode

This Makefile is released under the WTFPL:

Version 2, December 2004

Copyright (C) 2004 Sam Hocevar <>

Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.



23 July, 2015 05:03PM by Elena ``of Valhalla''

hackergotchi for Daniel Pocock

Daniel Pocock

Unpaid work training Google's spam filters

This week, there has been increased discussion about the pain of spam filtering by large companies, especially Google.

It started with Google's announcement that they are offering a service for email senders to know if their messages are wrongly classified as spam. Two particular things caught my attention: the statement that less than 0.05% of genuine email goes to the spam folder by mistake and the statement that this new tool to understand misclassification is only available to "help qualified high-volume senders".

From there, discussion has proceeded with Linus Torvalds blogging about his own experience of Google misclassifying patches from Linux contributors as spam and that has been widely reported in places like Slashdot and The Register.

Personally, I've observed much the same thing from the other perspective. While Torvalds complains that he isn't receiving email, I've observed that my own emails are not always received when the recipient is a Gmail address.

It seems that Google expects their users to work a little bit every day, going through every message in the spam folder and explicitly clicking the "Not Spam" button:

so that Google can improve their proprietary algorithms for classifying mail. If you just read or reply to a message in the folder without clicking the button, or if you don't do this for every message, including mailing list posts and other trivial notifications that are not actually spam, more important messages from the same senders will also continue to be misclassified.

If you are not willing to volunteer your time to do this, or if you are simply one of those people who has better things to do, Google's Gmail service is going to have a corrosive effect on your relationships.

A few months ago, we visited Australia and I sent emails to many people who I wanted to catch up with, including invitations to a family event. Some people received the emails in their inboxes, yet other people didn't see them because the systems at Google (and other companies, notably Hotmail) put them in a spam folder. The rate at which this appeared to happen was definitely higher than the 0.05% quoted in the Google article above. Maybe the Google spam filters noticed that I hadn't sent email to some members of the extended family for a long time and this triggered the spam algorithm? Yet it was precisely while we were visiting Australia that email needed to work reliably with that type of contact, as we don't fly out there every year.

A little bit earlier in the year, I was corresponding with a few students who were applying for Google Summer of Code. Some of them observed the same thing: they sent me an email and didn't receive my response until they looked in their spam folder a few days later. Last year, a GSoC mentor I know lost track of a student for over a week because of Google silently discarding chat messages, so it appears Google has not just shot themselves in the foot, they have managed to shoot their foot twice.

What is remarkable is that in both cases, the email problems and the XMPP problems, Google doesn't send any error back to the sender so that they know their message didn't get through. Instead, it is silently discarded or left in a spam folder. This is the most corrosive form of communication problem as more time can pass before anybody realizes that something went wrong. After it happens a few times, people lose a lot of confidence in the technology itself and try other means of communication which may be more expensive, more synchronous and time intensive or less private.

When I discussed these issues with friends, some people replied by telling me I should send them things through Facebook or WhatsApp, but each of those services has a higher privacy cost and there are also many other people who don't use either of those services. This tends to fragment communications even more as people who use Facebook end up communicating with other people who use Facebook and excluding all the people who don't have time for Facebook. On top of that, it creates more tedious effort going to three or four different places to check for messages.

Despite all of this, the suggestion that Google's only response is to build a service to "help qualified high-volume senders" get their messages through leaves me feeling that things will get worse before they start to get better. There is no mention in the Google announcement about what they will offer to help the average person eliminate these problems, other than to stop using Gmail or spend unpaid time meticulously training the Google spam filter and hoping everybody else does the same thing.

Some more observations on the issue

Many spam filtering programs used in corporate networks, such as SpamAssassin, add headers to each email to suggest why it was classified as spam. Google's systems don't appear to give any such feedback to their users or message senders though, just a very basic set of recommendations for running a mail server.
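To illustrate what that feedback looks like (the message and header values below are invented for illustration), such verdict headers are trivial for a client or script to read, here with Python's standard email module:

```python
from email import message_from_string

# A hypothetical message carrying a SpamAssassin-style verdict header,
# which states the score and which tests fired, i.e. *why* it was flagged.
raw = (
    "From: sender@example.org\n"
    "Subject: hello\n"
    "X-Spam-Status: Yes, score=7.2 required=5.0 tests=BAYES_99,URIBL_BLACK\n"
    "\n"
    "message body\n"
)
msg = message_from_string(raw)
verdict = msg["X-Spam-Status"]
print(verdict.split(",")[0])  # "Yes", followed by the reasons
```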

Many chat protocols work with an explicit opt-in. Before you can exchange messages with somebody, you must add each other to your buddy lists. Once you do this, virtually all messages get through without filtering. Could this concept be adapted to email, maybe giving users a summary of messages from people they don't have in their contact list and asking them to explicitly accept or reject each contact?
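The opt-in idea is almost trivial to express; a sketch (names invented) of a contact-list gate, as opposed to a scoring filter:

```python
def route(sender: str, contacts: set) -> str:
    """Deliver known senders directly; hold unknown senders for an
    explicit accept/reject decision instead of silently discarding."""
    return "inbox" if sender in contacts else "pending approval"

contacts = {"alice@example.org", "bob@example.org"}
print(route("alice@example.org", contacts))    # inbox
print(route("unknown@example.net", contacts))  # pending approval
```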

If a message spends more than a week in the spam folder and Google detects that the user isn't ever looking in the spam folder, should Google send a bounce message back to the sender to indicate that Google refused to deliver it to the inbox?

I've personally heard that misclassification occurs with mailing list posts as well as private messages.

23 July, 2015 08:49AM by Daniel.Pocock

Recording live events like a pro (part 1: audio)

Whether it is a technical talk at a conference, a political rally or a budget-conscious wedding, many people now have most of the technology they need to record it and post-process the recording themselves.

For most events, audio is an essential part of the recording. There are exceptions: if you take many short clips from a wedding and mix them together, you could leave out the audio and just dub the couple's favourite song over it all. For a video of a conference presentation, though, the speaker's voice is essential.

These days, it is relatively easy to get extremely high quality audio using a lapel microphone attached to a smartphone. Let's have a closer look at the details.

Using a lavalier / lapel microphone

Full wireless microphone kits with microphone, transmitter and receiver are usually $US500 or more.

The lavalier / lapel microphone by itself, however, is relatively cheap, under $US100.

The lapel microphone is usually an omnidirectional microphone that will pick up the voices of everybody within a couple of meters of the person wearing it. It is useful for a speaker at an event, some types of interviews where the participants are at a table together and it may be suitable for a wedding, although you may want to remember to remove it from clothing during the photos.

There are two key features you need when using such a microphone with a smartphone:

  • TRRS connector (this is the type of socket most phones and many laptops have today)
  • Microphone impedance should be at least 1kΩ (that is one kilo Ohm) or the phone may not recognize when it is connected

Many leading microphone vendors have released lapel mics with these two features aimed specifically at smartphone users. I have personally been testing the Rode smartLav+.

Choice of phone

There are almost 10,000 varieties of smartphone just running Android, as well as iPhones, Blackberries and others. It is not practical for most people to test them all and compare audio recording quality.

It is probably best to test the phone you have and ask some friends if you can make test recordings with their phones too for comparison. You may not hear any difference but if one of the phones has a poor recording quality you will hopefully notice that and exclude it from further consideration.

A particularly important issue is being able to disable AGC in the phone. Android has a standard API for disabling AGC but not all phones or Android variations respect this instruction.

I have personally had positive experiences recording audio with a Samsung Galaxy Note III.

Choice of recording app

Most Android distributions have at least one pre-installed sound recording app. Look more closely and you will find not all apps are the same. For example, some of the apps have aggressive compression settings that compromise recording quality. Others don't work when you turn off the screen of your phone and put it in your pocket. I've even tried a few that were crashing intermittently.

The app I found most successful so far has been Diktofon, which is available on both F-Droid and Google Play. Diktofon has been designed not just for recording, but it also has some specific features for transcribing audio (currently only supporting Estonian) and organizing and indexing the text. I haven't used those features myself but they don't appear to cause any inconvenience for people who simply want to use it as a stable recording app.

As the app is completely free software, you can modify the source code if necessary. I recently contributed patches enabling 48kHz recording and disabling AGC. At the moment, the version with these fixes has just been released and appears in F-Droid but not yet uploaded to Google Play. The fixes are in version 0.9.83 and you need to go into the settings to make sure AGC is disabled and set the 48kHz sample rate.

Whatever app you choose, the following settings are recommended:

  • 16 bit or greater sample size
  • 48kHz sample rate
  • Disable AGC
  • WAV file format
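Those settings map directly onto WAV file parameters. A quick sketch with Python's standard wave module, writing a one-second 440 Hz reference tone at the recommended sample size and rate (the file name is arbitrary):

```python
import math
import struct
import wave

RATE = 48000  # 48kHz sample rate, as recommended above
with wave.open("reference.wav", "wb") as w:
    w.setnchannels(1)   # mono, as delivered by a single lapel mic
    w.setsampwidth(2)   # 2 bytes per sample = 16 bit
    w.setframerate(RATE)
    for i in range(RATE):  # one second of a 440 Hz sine tone
        sample = int(32767 * 0.3 * math.sin(2 * math.pi * 440 * i / RATE))
        w.writeframes(struct.pack("<h", sample))
```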

Whatever app you choose, test it thoroughly with your phone and microphone. Make sure it works even when you turn off the screen and put it in your pocket while wearing the lapel mic for an hour. Observe the battery usage.


Now let's say you are recording a wedding and the groom has that smartphone in his pocket with the mic on his collar somewhere. What is the probability that some telemarketer calls just as the couple are exchanging vows? What is the impact on the recording?

Maybe some apps will automatically put the phone in silent mode when recording. More likely, you need to remember this yourself. These are things that are well worth testing though.

Also keep in mind the need to have sufficient storage space and to check whether the app you use is writing to your SD card or internal memory. The battery is another consideration.

In a large event where smartphones are being used instead of wireless microphones, possibly for many talks in parallel, install a monitoring app like Ganglia on the phones to detect and alert if any phone has weak wifi signal, low battery or a lack of memory.

Live broadcasts and streaming

Some time ago I tested RTP multicasting from Lumicall on Android. This type of app would enable a complete wireless microphone setup with live streaming to the internet at a fraction of the cost of a traditional wireless microphone kit. This type of live broadcast could also be done with WebRTC on the Firefox app.


If you research the topic thoroughly and spend some time practicing and testing your equipment, you can make great audio recordings with a smartphone and an inexpensive lapel microphone.

In subsequent blogs, I'll look at tips for recording video and doing post-production with free software.

23 July, 2015 07:14AM by Daniel.Pocock

July 22, 2015

Sven Hoexter

moto g falcon CM 12.1 nightly - eating the battery alive

At least the nightly builds from 2015-07-21 to 2015-07-24 eat the battery alive. Until that one is fixed one can downgrade to The downgrade fixed the issue for me.

Update: I'm now running fine with the build from 2015-07-26.

22 July, 2015 08:32PM

hackergotchi for Cyril Brulebois

Cyril Brulebois

D-I Stretch Alpha 1

Time for a quick recap of the beginning of the Stretch release cycle as far as the Debian Installer is concerned:

  • It took nearly 3 months after the Jessie release, but linux finally managed to get into shape and fit for migration to testing, which unblocked the way for a debian-installer upload.
  • Trying to avoid last-minute fun, I’ve updated the britney freeze hints file to put into place a block-udeb on all packages.
  • Unfortunately, a recent change in systemd (implementation of Proposal v2: enable stateless persistent network interface names) found its way into testing a bit before that, so I’ve had my share of last-minute fun anyway! Indeed, that resulted in installer system and installed system having different views on interface naming. Thankfully I was approached by Michael Biebl right before my final tests (and debian-installer upload) so there was little head scratching involved. Commits were already in the master branch so a little plan was proposed in Fixing udev-udeb vs. net.ifnames for Stretch Alpha 1. This was implemented in two shots, given the extra round trip due to having dropped a binary package in the meanwhile and due to dak’s complaining about it.
  • After the usual round of build (see logs), and dak copy-installer to get installer files from unstable to testing, and urgent to get the source into testing as well (see request), I’ve asked Steve McIntyre to start building images through debian-cd. As expected, some troubles were run into, but they were swiftly fixed!
  • While Didier Raboud and Steve were performing some tests with the built images, I’ve prepared the announcement for dda@, and updated the usual pages in the debian-installer corner of the website: news entry, errata, and homepage.
  • Once the website was rebuilt to include these changes, I’ve sent the announce, and lifted all block-udeb.

(On a related note, I’ve started tweeting rather regularly about my actions, wins & fails, using the #DebianInstaller hashtag. I might try and aggregate my tweets as @CyrilBrulebois into more regular blog posts, time permitting.)

Executive summary: D-I Stretch Alpha 1 is released, time to stretch a bit!

Stretching cat

(Credit: rferran on openclipart)

22 July, 2015 11:50AM

hackergotchi for James McCoy

James McCoy


Some time ago, pabs documented his setup for easily connecting to one of Debian's porterboxes based on the desired architecture. Similarly, he submitted a wishlist bug against devscripts specifying some requirements for a script to make this functionality generally accessible to the developer community.

I have yet to follow up on that request mainly due to ENOTIME for developing new scripts outright. I also have my own script I had been using to get information on available Debian machines.

Recently, this came up again on IRC and jwilk decided to actually implement pabs' DNS alias idea. Now, one can use $ to connect to a porterbox of the specified architecture.

Preference is given to domains when there are both and porterboxes, and it simply uses the first listed machine if there are multiple available porterboxes.

This is all well and good, but if you have SSH's StrictHostKeyChecking enabled, SSH will rightly refuse to connect. However, OpenSSH 6.5 added a feature called hostname canonicalization which can help. The below ssh_config snippet allows one to run ssh $arch-porterbox or ssh $ and connect to one of the porterboxes, verifying the host key against the canonical host name.

Host *-porterbox

Match host *
  CanonicalizeHostname yes
  CanonicalizePermittedCNAMEs **
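For reference, the shape of such a stanza, with hypothetical example.org domains standing in for the real ones (which have been stripped from this copy of the post):

```
Host *-porterbox
  Hostname %h.example.org

Match host *.example.org
  CanonicalizeHostname yes
  CanonicalizePermittedCNAMEs *.example.org:*.example.org
```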

22 July, 2015 05:33AM

Orestis Ioannou

GSoC Debsources midterm news

Midterm evaluations have already passed and I guess we have also reached a milestone since last week I finished working on the copyright tracker and started the patch tracker.

Here's the list of my reports on soc-coordination for those interested

Copyright tracker status

Copyright tracker

Most of the functionality of the copyright tracker is already merged: specifically, navigating in the tracker, rendering the machine-readable licenses, and API functionality such as obtaining the license of a file (searching by checksum or by a package / version / path) or obtaining the licenses of many files at once, together with their respective views.

Some more functionalities are still under review such as filling the database with copyright related information at update time, using the database to answer the aforementioned requests, license statistics in the spirit of the Debsources ones and exporting a license in SPDX format.

It's going to be pretty exciting when those pull requests are merged, since the copyright tracker will then be full and complete! Meanwhile I started working on the patch tracker.

Patch tracker

My second task is the implementation of a patch tracker. This feature existed in Debian but unfortunately died recently. I have already started reviewing the functionality of the old patch tracker, identifying target users, and creating user stories and use cases. Those should help me list the desired functionality of the tracker, imagine the structure of the blueprint, and start writing code to that end.

It is going to be a pretty exciting one-month run, as my knowledge of the Debian packaging system is not that good just yet. I hope that by Debconf some of the functionality of the patch tracker will be ready.


My request for sponsorship for Debconf was accepted, and I am pretty excited since this is going to be my first Debconf. I am looking forward to meeting my mentors (Zack and Matthieu), the fellow student working on Debsources (Clemux), as well as a lot of other people I have happened to chat with occasionally during this summer. I'll arrive on Friday the 14th and leave on Sunday the 23rd.

Debconf 2015

22 July, 2015 12:00AM by Orestis

July 21, 2015

Dmitry Shachnev

GNOME Flashback 3.16 available in archive, needs your help

Some time ago GNOME Flashback 3.16/3.17 packages landed in Debian testing and Ubuntu wily.

GNOME Flashback is the project which continues the development of components of classic GNOME session, including the GNOME Panel, the Metacity window manager, and so on.


The full changelog can be found in the official announcement mail by Alberts and in the changelog.gz files in each package, but I want to list the most important improvements in this release (compared to 3.14):

  • GNOME Panel and GNOME Applets (uploaded version 3.16.1):

    • The ability to use transparent panels has been restored.

    • The netspeed applet has been ported to the new API and integrated into gnome-applets source tree.

    • Many deprecation warnings have been fixed, and the code has been modernized.

    • This required a transition and a port of many third-party applets. Currently in Debian these third-party applets are compatible with gnome-panel 3.16: command-runner-applet, gnubiff, sensors-applet, uim, workrave.

  • GNOME Flashback helper application (uploaded version 3.17.2):

    • Added support for the on-screen display (OSD) when switching brightness, volume, etc.

    • Applications using GNOME AppMenu are now shown correctly.

  • Metacity window manager (uploaded version 3.17.2):

    • Metacity can now draw the window decorations based on the Gtk+ theme (without the need for a Metacity-specific theme). This follows Mutter behavior, but (unlike Mutter) the ability to use Metacity themes has been preserved.

    • Adwaita and HighContrast themes for Metacity have been removed from gnome-themes-standard, so they are now shipped as part of Metacity (in metacity-common package).

    • Metacity now supports invisible window borders (the default setting is 10px extra space for resize cursor area).

Sounds interesting? Contribute!

If you are interested in helping us, please write to our mailing list:

The current TODO list is:

  1. Notification Daemon needs GTK notification support.
  2. GNOME Flashback needs screenshot, screencast, keyboard layout switching and bluetooth status icon.
  3. Fix/replace deprecated function usage in all modules.
  4. libstatus-notifier — get it in usable state, create a new applet for gnome-panel.

21 July, 2015 09:00PM by Dmitry Shachnev

Sven Hoexter

O: courierpassd

In case you're one of the few still depending on courierpassd and would like to see it be part of stretch, please pick it up. I'm inclined to file a request for removal before we release stretch if nobody picks it up.

21 July, 2015 06:48PM


Jonathan Dowland

New camera

Sony RX100-III

Earlier in the year I treated myself to a new camera. It's been many years since I last bought one: a perfectly serviceable Panasonic FS-15 compact, purchased to replace my lost-or-stolen Panasonic TZ3, which I loved. The FS-15 didn't have a "wow" factor, and with the advent of smartphones and fantastic smartphone cameras, it rarely left a drawer at home.

Last year I upgraded my mobile from an iPhone to a Motorola Moto G, which is a great phone in many respects, but has a really poor camera. I was given a very generous gift voucher when I left my last job and so had the perfect excuse to buy a dedicated camera.

I'd been very tempted by a Panasonic CSC camera ever since I read this review of the GF1 years ago, and the GM1 was high on my list, but there were a lot of compromises: no EVF... In the end I picked up a Sony RX 100 Mark 3 which had the right balance of compromises for me.

I haven't posted a lot of photos to this site in the past but I hope to do so in future. I've got to make some alterations to the software first.

Post-script: Craig Mod, who wrote that GF1 review, wrote another interesting essay a few years later: Cameras, Goodbye, where he discusses whether smartphone cameras are displacing even the top end of the Camera market.

21 July, 2015 03:28PM


Martin Michlmayr

Debian archive rebuild on ARM64 with GCC 5

I recently got access to several ProLiant m400 ARM64 servers at work. Since Debian is currently working on the migration to GCC 5, I thought it would be nice to rebuild the Debian archive on ARM64 to see if GCC 5 is ready. Fortunately, I found no obvious compiler errors.

During the process, I noticed several areas where ARM64 support can be improved. First, a lot of packages failed to build due to missing dependencies. Some missing dependencies are libraries or tools that have not been ported to ARM64 yet, but the majority were due to the lack of popular programming languages on ARM64. This requires upstream porting work, which I'm sure is going on already in many cases. Second, over 160 packages failed to build due to out-of-date autoconf and libtool scripts. Most of these bugs were reported over a year ago by the ARM64 porters (Matthias Klose from Canonical/Ubuntu and Wookey from ARM/Linaro) and the PowerPC porters, but unfortunately they haven't been fixed yet.

Finally, I went through all packages that list specific architectures in debian/control and filed wishlist bugs on those that looked relevant to ARM64. This actually prompted some Debian and upstream developers to implement ARM64 support, which is great!

21 July, 2015 01:51PM


Jonathan McDowell

Recovering a DGN3500 via JTAG

Back in 2010, when I needed an ADSL2 router in the US, I bought a Netgear DGN3500. It did what I wanted out of the box, and being based on a MIPS AR9 (ARX100) it seemed likely OpenWRT support might happen. Long story short, I managed to overwrite u-boot (the bootloader) while flashing a test image I'd built. I ended up buying a new router (same model) to get my internet connection back ASAP and never got around to fully fixing the broken one. Until yesterday. Below is how I fixed it, both for my own future reference and in case it's of use to any other unfortunate soul.

The device has clear points for serial and JTAG and it was easy enough (even with my basic soldering skills) to put a proper header on. The tricky bit is that the flash is connected via SPI, so it’s not just a matter of attaching JTAG, doing a scan and reflashing from the JTAG tool. I ended up doing RAM initialisation, then copying a RAM copy of u-boot in and then using that to reflash. There may well have been a better way, but this worked for me. For reference the failure mode I saw was an infinitely repeating:

ROM VER: 1.1.3
CFG 05

My JTAG device is a Bus Pirate v3b which is much better than the parallel port JTAG device I built the first time I wanted to do something similar. I put the latest firmware (6.1) on it.

All of this was done from my laptop, which runs Debian testing (stretch). I used the OpenOCD 0.9.0-1+b1 package from there.

Daniel Schwierzeck has some OpenOCD scripts which include a target definition for the ARX100. I added a board definition for the DGN3500 (I've also sent Daniel a patch to add this to his repo).

I tied all of this together with an openocd.cfg that contained:

source [find interface/buspirate.cfg]

buspirate_port /dev/ttyUSB1
buspirate_vreg 0
buspirate_mode normal
buspirate_pullup 0
reset_config trst_only

source [find openocd-scripts/target/arx100.cfg]

source [find openocd-scripts/board/dgn3500.cfg]

gdb_flash_program enable
gdb_memory_map enable
gdb_breakpoint_override hard

I was then able to power on the router and type dgn3500_ramboot into the OpenOCD session. This fetched my RAM copy of u-boot from dgn3500_ram/u-boot.bin, copied it into the router’s memory and started it running. From there I had a u-boot environment with access to the flash commands and was able to restore the original Netgear image (and once I was sure that was working ok I subsequently upgraded to the Barrier Breaker OpenWRT image).
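For reference, the flow above looks roughly like this (a sketch, not a transcript of my session: openocd.cfg is the file shown earlier, and 4444 is OpenOCD's default telnet command port):

```shell
# Terminal 1: start OpenOCD with the config shown above; the target and
# board scripts must be where openocd.cfg expects to find them.
openocd -f openocd.cfg

# Terminal 2: attach to OpenOCD's command channel, then, with the router
# freshly powered on, run the board script's RAM-boot command:
telnet localhost 4444
#   > dgn3500_ramboot
```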

21 July, 2015 10:34AM

July 20, 2015

Niels Thykier

Performance tuning of lintian, take 2

The other day, I wrote about our recent performance tuning in lintian.  Among other things, we reduced the memory usage by ~33%.  The effect was also reproducible on libreoffice (4.2.5-1 plus its 170-ish binaries, arch amd64), which started at ~515 MB and was reduced to ~342 MB.  So this is pretty great in its own right…

But at this point, I have seen what was in “Pandora’s box”, by which I mean the two magical numbers 1.7kB per file and 2.2kB per directory in the package (add 250-300 bytes per entry in binary packages).  This is before even looking at data from file(1), readelf, etc.  Just the raw index of the package.

Depending on your point of view, 1.7-2.2kB might not sound like a lot.  But for the lintian source with ~1 500 directories and ~3 300 non-directories, this sums up to about 6.57MB out of the (then) usage at 12.53MB.  With the recent changes, it dropped to about 1.05kB for files and 1.5kB for dirs.  But even then, the index is still 4.92MB (out of 8.48MB).

This raises the question: what do you get for 1.05kB in Perl? The following is a dump of the fields and their sizes for a given entry:

lintian/vendors/ubuntu/main/data/changes-file/known-dists: 1077.00 B
  _path_info: 24.00 B
  date: 44.00 B
  group: 42.00 B
  name: 123.00 B
  owner: 42.00 B
  parent_dir: 24.00 B
  size: 42.00 B
  time: 42.00 B
  (overhead): 694.00 B

The time, date, owner and group fields are fixed-size strings (at most 15 characters), the size and _path_info fields are integers, and parent_dir is a reference (nulled).  Finally, the name is a variable-length string.  Summed, the values take up less than half of the total object size.  The remainder of ~700 bytes is just “overhead”.

Time for another clean up:

  • The ownership fields are almost always “root/root” (0/0).  So let’s just omit them when they satisfy that assumption. [f627ef8]
    • This is especially true for source packages where lintian ignores the actual value and just uses “root/root”.
  • The Lintian::Path API has always had a “cop-out” on the size field for non-files and it happens to be 0 for these.  Let’s omit the field if the value was zero and save 0.17MB on lintian. [5cd2c2b]
    • Bonus: Turns out we can save 18 bytes per non-zero “size” by insisting on the value being an int.
  • Unsurprisingly, the date and time fields can trivially be merged into one.  In fact, that makes “time” redundant as nothing outside Lintian::Path used its value.  So say goodbye to “time” and good day to 0.36MB more memory. [f1a7826]
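The clean-ups in that list all apply one pattern: don't store a field whose value is the common default, and let the accessor supply the default instead. A minimal sketch in Python (Lintian itself is Perl; the field names follow the dump above, everything else here is illustrative):

```python
# Illustrative sketch only, not Lintian's actual code: store path metadata
# with default-valued fields omitted, reconstructing defaults on access.
DEFAULT_OWNERSHIP = ("root", "root")

def make_entry(name, size=0, owner="root", group="root"):
    entry = {"name": name}
    if size:                                  # omit zero sizes (non-files)
        entry["size"] = size
    if (owner, group) != DEFAULT_OWNERSHIP:   # omit root/root ownership
        entry["owner"], entry["group"] = owner, group
    return entry

def size_of(entry):
    # Accessor with the documented "cop-out": size is 0 for non-files.
    return entry.get("size", 0)

plain_dir = make_entry("vendors/ubuntu/")     # the overwhelmingly common case
odd_file = make_entry("var/log/x", size=42, owner="syslog", group="adm")
print(len(plain_dir), size_of(plain_dir))     # only "name" is stored
print(len(odd_file), size_of(odd_file))       # unusual fields are kept
```

The common-case entry carries a single stored field; only the rare entries with non-default ownership or a non-zero size pay for the extra keys.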

Which leaves us now with:

lintian/vendors/ubuntu/main/data/changes-file/known-dists: 698.00 B
  _path_info: 24.00 B
  date_time: 56.00 B
  name: 123.00 B
  parent_dir: 24.00 B
  size: 24.00 B
  (overhead): 447.00 B

Still a ~64% overhead, but at least we reduced the total size by 380 bytes (585 bytes for entries in binary packages).  With these changes, the memory used for the lintian source index is now down to 3.62MB.  This brings the total usage down to 7.01MB, which is a reduction to 56% of the original usage (a.k.a. “the-almost-but-not-quite-50%-reduction”).

But at least the results also carried over to libreoffice, which is now down to 284.83 MB (55% of original).  The chromium-browser (source-only, version 32.0.1700.123-2) is down to 111.22MB from 179.44MB (61% of original, better results expected if processed with binaries).


In closing, Lintian 2.5.34 will use slightly less memory than 2.5.33.


Filed under: Debian, Lintian

20 July, 2015 09:48PM by Niels Thykier


Matthew Garrett

Your Ubuntu-based container image is probably a copyright violation

Update: A Canonical employee responded here, but doesn't appear to actually contradict anything I say below.

I wrote about Canonical's Ubuntu IP policy here primarily in terms of its broader impact, but I mentioned a few specific cases. People seem to have picked up on the case of container images (especially Docker ones), so here's an unambiguous statement:

If you generate a container image that is not a 100% unmodified version of Ubuntu (ie, you have not removed or added anything), Canonical insist that you must ask them for permission to distribute it. The only alternative is to rebuild every binary package you wish to ship[1], removing all trademarks in the process. As I mentioned in my original post, the IP policy does not merely require you to remove trademarks that would cause infringement, it requires you to remove all trademarks - a strict reading would require you to remove every instance of the word "ubuntu" from the packages.

If you want to contact Canonical to request permission, you can do so here. Or you could just derive from Debian instead.

[1] Other than ones whose license explicitly grants permission to redistribute binaries and which do not permit any additional restrictions to be imposed upon the license grants - so any GPLed material is fine


20 July, 2015 07:33PM


Daniel Pocock

RTC status on Debian, Ubuntu and Fedora

Zoltan (Zoltanh721) recently blogged about WebRTC for the Fedora community and Fedora desktop. has been running for a while now and this has given many people a chance to get a taste of regular SIP and WebRTC-based SIP. As suggested in Zoltan's blog, it has convenient integration with Fedora SSO and as the source code is available, people are welcome to see how it was built and use it for other projects.

Issues with Chrome/Chromium on Linux

If you tried any of, or using Chrome/Chromium on Linux, you may have found that the call appears to be connected but there is no media. This is a bug and the Chromium developers are on to it. You can work around this by trying an older version of Chromium (it still works with v37 from Debian wheezy) or Firefox/Iceweasel.

WebRTC is not everything

WebRTC offers many great possibilities for people to quickly build and deploy RTC services to a large user base, especially when using components like JSCommunicator or the DruCall WebRTC plugin for Drupal.

However, it is not a silver bullet. For example, there remain concerns about how to receive incoming calls. How do you know which browser tab is ringing when you have many tabs open at once? This may require greater browser/desktop integration and that has security implications for JavaScript. Whether users on battery-powered devices can really leave JavaScript running for extended periods of time waiting for incoming calls is another issue, especially when you consider that many web sites contain some JavaScript that is less than efficient.

Native applications and mobile apps like Lumicall continue to offer the most optimized solution for each platform although WebRTC currently offers the most convenient way for people to place a Call me link on their web site or portal.

Deploy it yourself

The RTC Quick Start Guide offers step-by-step instructions and a thorough discussion of the architecture for people to start deploying RTC and WebRTC on their own servers using standard packages on many of the most popular Linux distributions, including Debian, Ubuntu, RHEL, CentOS and Fedora.

20 July, 2015 02:04PM by Daniel.Pocock


Ritesh Raj Sarraf

Micro DD meetup

A couple of us DDs met here over the weekend. It is always a fun time being part of these meetings. We talked briefly about the status of Cross Compilation in Debian and the tools that simplify the process.

Next we touched upon licensing, discussing the benefits of particular licenses (BSD, Apache, GPL) from the point of view of the consumer, ranging from an individual who just wants to use/improve software to one building a (free / non-free) product on top of it. I think the overall conclusion was that, at a high level, there are two major kinds of license: those which allow you to take the code and not give back, and those which allow you to take the code only if you are ready to share your enhancements back and forward.

Next we briefly touched upon systemd. Given that I recently spent a good amount of time talking to the systemd maintainer while fixing bugs in my software, it was natural for me to steer that topic. In the end, more people are now enthused to learn about the paradigm shift.

The other topic where we spent time was Containers. It is impressive to see how quickly, and how many, products have now spun out of cgroups. The topic moved to cgroups thanks to systemd, one of the prime consumers of cgroups. While demonstrating the functionality of Linux Containers (LXC), I realized that systemd has a tool in place to serve the same use case.

So, once back home, I spent some time figuring out the possibility of replacing my lxc setup with systemd-nspawn. Apart from a minor bug, almost everything else seems to work fine with systemd-nspawn.

So, following is the config detail of my container, as used in lxc. And to replace lxc, I need systemd-nspawn to cover almost all of it.

rrs@learner:~$ sudo cat /var/lib/lxc/deb-template/config
# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234

# Mem
lxc.cgroup.memory.limit_in_bytes = 2000M
lxc.cgroup.memory.soft_limit_in_bytes = 1500M

# Network
lxc.network.type = veth
lxc.network.hwaddr = 00:16:3e:0c:c5:d4
lxc.network.flags = up
lxc.network.link = lxcbr0

# Root file system
lxc.rootfs = /var/lib/lxc/deb-template/rootfs

# Common configuration
lxc.include = /usr/share/lxc/config/debian.common.conf

# Container specific configuration
lxc.mount = /var/lib/lxc/deb-template/fstab
lxc.utsname = deb-template
lxc.arch = amd64

# For apt
lxc.mount.entry = /var/cache/apt/archives var/cache/apt/archives none defaults,bind 0 0
lxc.mount.entry = /var/tmp/lxc var/tmp/lxc none defaults,bind 0 0
2015-07-20 / 16:28:58 ♒♒♒  ☺    


The equivalent of the above, in systemd-nspawn is:

sudo systemd-nspawn -n -b --machine deb-template --network-bridge=lxcbr0 --bind /var/cache/apt/archives/

The only missing bit is the CPU and memory limits which, though I've yet to try it, are documented as doable with the --property= interface (taking the same format as systemctl set-property):

           Set a unit property on the scope unit to register for the machine. This only
           applies if the machine is run in its own scope unit, i.e. if --keep-unit is not
           used. Takes unit property assignments in the same format as systemctl
           set-property. This is useful to set memory limits and similar for machines.
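An untested sketch of that interface: passing the remaining lxc limits as scope-unit properties. The property names are my assumed mapping (lxc.cgroup.cpu.shares to CPUShares=, lxc.cgroup.memory.limit_in_bytes to MemoryLimit=); the cpuset pinning and the soft memory limit have no obvious counterpart here.

```shell
# Sketch only (not from the post): start the container as before, but pass
# the CPU and memory limits as properties on the machine's scope unit.
sudo systemd-nspawn -n -b --machine deb-template --network-bridge=lxcbr0 \
    --bind /var/cache/apt/archives/ \
    --property=CPUShares=1234 \
    --property=MemoryLimit=2000M

# The same properties can also be adjusted on a running machine:
sudo systemctl set-property 'machine-deb\x2dtemplate.scope' CPUShares=1234
```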

With all this in place, using containers under systemd is a breeze.

rrs@learner:~/Community/Packaging/multipath-tools (experimental)$ sudo machinectl list
deb-template container nspawn

1 machines listed.
2015-07-20 / 16:44:07 ♒♒♒  ☺    
rrs@learner:~/Community/Packaging/multipath-tools (experimental)$ sudo machinectl status deb-template
           Since: Mon 2015-07-20 16:13:58 IST; 30min ago
          Leader: 9064 (systemd)
         Service: nspawn; class container
            Root: /var/lib/lxc/deb-template/rootfs
           Iface: lxcbr0
              OS: Debian GNU/Linux stretch/sid
            Unit: machine-deb\x2dtemplate.scope
                  └─9064 /lib/systemd/systemd --system --deserialize 14
                      ├─9092 /lib/systemd/systemd-journald
                      ├─9160 /usr/sbin/sshd -D
                      ├─9166 /bin/login --
                      ├─9171 -bash
                      └─9226 dhclient host0

Jul 20 16:13:58 learner systemd[1]: Started Container deb-template.
Jul 20 16:13:58 learner systemd[1]: Starting Container deb-template.
2015-07-20 / 16:44:15 ♒♒♒  ☺    




20 July, 2015 11:26AM by Ritesh Raj Sarraf

July 19, 2015

Laura Arjona

Family games: Robots

I play “Robots” with my kid. I’ve tested the game with other kids, and it seems that children aged 5 to 7 like it. I’ve talked about the game with several adults, and it seems they like it too, so I thought writing about it here might be useful for somebody to enjoy some summer days.


One player is the Robot; the other one is the programmer. If there are more players, there can be several robots and several programmers. If the players are older, you can make the game more complicated by making the robots cooperate or the programmers cooperate. If not, make 1-1 pairs, or 1 programmer to 2 robots if the number of players is odd.

The game

The programmer must turn on the robot by pressing the ON/OFF button (the robot chooses where the button is: nose, ear, belly, whatever).
Then the robot says “hello”, and the programmer asks for the list of available commands (like “Hello, robot, give me the list of commands”). The robot says the list of commands available, for example “run, stop, jump, sing a song, somersault, say something in a different language”. Then the programmer thinks up a program and loads it into the robot (speaks the list of orders, loudly, to the robot). Then the programmer presses the START button (the robot chooses where it is), and the robot has to perform the program without errors.

If the robot performs correctly, it wins one point. If it fails, it loses one point. The programmer can then design another program (maybe longer, maybe with some conditional expression) and test the limits of the robot's memory.

If the robot is tired, needs to charge its batteries, or whatever, the programmer and robot roles are swapped, and the one with more points after a certain amount of time or number of rounds wins.

Variants, tips…

If the programmer does not like the list of commands, she can ask for updates, and maybe some new commands will be installed (and/or others uninstalled, who knows).

Please be creative with the list of commands, or the game will be very boring.

Depending on the operating system the robot runs, it will give the programmer more or fewer options, and its behaviour will be more evil or more good. Robots shouldn't behave too evilly, though, otherwise the programmer will erase their disk and install Debian on them to make them obedient ;)

You can play with a third person being the Robot manufacturer, who controls the robot, even sometimes overriding the programmer's instructions (if the robot runs an OS which is not free software). The robot wins one point by obeying the manufacturer, but if there are more robots, it loses one round of play because the programmer got angry and turned it off or reinstalled the software.

The manufacturer and the programmer cooperate if the robot runs free software, though. Together they can expand the robot's memory (for example, lend it a piece of paper on which to store the program), create new commands, fix bugs, or whatever.


You can comment about this post in the thread about this post.

Filed under: My experiences and opinion Tagged: Debian, Education, English, Free culture, Free Software, Games, kids

19 July, 2015 10:04PM by larjona


Gregor Herrmann

RC bugs 2015/17-29

after the release is before the release. – or: long time no RC bug report.

after the jessie release I spent most of my Debian time on work in the Debian Perl Group. we tried to get down the list of new upstream releases (from over 500 to currently 379; unfortunately the CPAN never sleeps), we were & still are busy preparing for the Perl 5.22 transition (e.g. we uploaded something between 300 & 400 packages to deal with Module::Build & being removed from perl core; only team-maintained packages so far), & we had a pleasant & productive sprint in Barcelona in May. – & I also tried to fix some of the RC bugs in our packages which popped up over the previous months.

yesterday & today I finally found some time to help with the GCC 5 transition, mostly by making QA or Non-Maintainer Uploads with patches that were already in the BTS. – a big thanks especially to the team at HP which provided a couple dozen patches!

& here's the list of RC bugs I've worked on in the last 3 months:

  • #752026 – libpdl-stats-perl: "libpdl-stats-perl: FTBFS on arm*"
    upload new upstream release (pkg-perl)
  • #755961 – autounit: "FTBFS with clang instead of gcc"
    apply patch from Alexander <>, QA upload
  • #755963 – clearsilver: "FTBFS with clang instead of gcc"
    apply patch from Alexander <>, upload to DELAYED/5
  • #777776 – src:apron: "apron: ftbfs with GCC-5"
    tag as unreproducible
  • #777780 – src:asmon: "asmon: ftbfs with GCC-5"
    apply patch from Martin Michlmayr, upload to DELAYED/5
  • #777783 – src:atftp: "atftp: ftbfs with GCC-5"
    apply patch from Martin Michlmayr, upload to DELAYED/5
  • #777797 – src:bbrun: "bbrun: ftbfs with GCC-5"
    add patch to build with "-std=gnu89", upload to DELAYED/5
  • #777806 – src:booth: "booth: ftbfs with GCC-5"
    tag as unreproducible
  • #777808 – src:bwm-ng: "bwm-ng: ftbfs with GCC-5"
    merge patch from Ubuntu, and build with "-std=gnu89", upload to DELAYED/5
  • #777831 – src:deborphan: "deborphan: ftbfs with GCC-5"
    apply patch from Jakub Wilk, upload to DELAYED/5, then rescheduled to 0-day with maintainer's permission
  • #777835 – src:dsbltesters: "dsbltesters: ftbfs with GCC-5"
    tag as unreproducible
  • #777853 – src:flow-tools: "flow-tools: ftbfs with GCC-5"
    apply patch from Alexander Balderson, upload to DELAYED/5
  • #777880 – src:gnac: "gnac: ftbfs with GCC-5"
    apply patch from Greg Pearson, upload to DELAYED/5
  • #777881 – src:gngb: "gngb: ftbfs with GCC-5"
    apply patch from Greg Pearson, upload to DELAYED/5
  • #777895 – src:haildb: "haildb: ftbfs with GCC-5"
    tag as unreproducible
  • #777902 – src:hfsplus: "hfsplus: ftbfs with GCC-5"
    merge patch from Ubuntu, QA upload
  • #777903 – src:hugs98: "hugs98: ftbfs with GCC-5"
    apply patch from Elizabeth J Dall, upload to DELAYED/5
  • #777965 – src:libpam-chroot: "libpam-chroot: ftbfs with GCC-5"
    apply patch from Linn Crosetto, upload to DELAYED/5
  • #777975 – src:libssh: "libssh: ftbfs with GCC-5"
    apply patch from Matthias Klose, upload to DELAYED/5
  • #778009 – src:mknbi: "mknbi: ftbfs with GCC-5"
    apply patch from Matthias Klose, QA upload
  • #778020 – src:mz: "mz: ftbfs with GCC-5"
    apply patch from Joshua Gadeken, upload to DELAYED/5
  • #778051 – src:overgod: "overgod: ftbfs with GCC-5"
    apply patch from Nicholas Luedtke, upload to DELAYED/5
  • #778056 – src:pads: "pads: ftbfs with GCC-5"
    apply patch from Andrew Patterson, upload to DELAYED/5
  • #778121 – src:sks-ecc: "sks-ecc: ftbfs with GCC-5"
    apply patch from Brett Johnson, QA upload
  • #778129 – src:squeak-plugins-scratch: "squeak-plugins-scratch: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778137 – src:tabble: "tabble: ftbfs with GCC-5"
    apply patch from David S. Roth, QA upload
  • #778146 – src:tinyscheme: "tinyscheme: ftbfs with GCC-5"
    apply patch from Nicholas Luedtke, upload to DELAYED/5
  • #778148 – src:trafficserver: "trafficserver: ftbfs with GCC-5"
    lower severity
  • #778151 – src:tuxonice-userui: "tuxonice-userui: ftbfs with GCC-5"
    apply patch from Nicholas Luedtke, upload to DELAYED/5, later sponsor maintainer upload
  • #778152 – src:uaputl: "uaputl: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778153 – src:udftools: "udftools: ftbfs with GCC-5"
    apply patch from Jakub Wilk, upload to DELAYED/5
  • #778159 – src:uswsusp: "uswsusp: ftbfs with GCC-5"
    apply patch from Andrew James, upload to DELAYED/5
  • #778167 – src:weplab: "weplab: ftbfs with GCC-5"
    apply patch from Elizabeth J Dall, QA upload
  • #778171 – src:wmmon: "wmmon: ftbfs with GCC-5"
    add patch to build with "-std=gnu89", upload to DELAYED/5
  • #778173 – src:wmressel: "wmressel: ftbfs with GCC-5"
    apply patch from Elizabeth J Dall, upload to DELAYED/5
  • #780199 – src:redhat-cluster: "redhat-cluster: FTBFS in unstable - error: conflicting types for 'int64_t'"
    apply patch from Michael Tautschnig, upload to DELAYED/2, then rescheduled by maintainer
  • #783899 – liblog-any-perl: "liblog-any-perl, liblog-any-adapter-perl: File conflict when being installed together"
    add Breaks/Replaces/Provides (pkg-perl)
  • #784844 – libmousex-getopt-perl: "libmousex-getopt-perl: FTBFS: test failures"
    upload new upstream release (pkg-perl)
  • #785020 – libmoosex-getopt-perl: "libmoosex-getopt-perl: FTBFS: test failures"
    upload new upstream release (pkg-perl)
  • #785158 – libnet-ssleay-perl: "libnet-ssleay-perl: FTBFS: Your vendor has not defined SSLeay macro LIBRESSL_VERSION_NUMBER"
    upload new upstream release (pkg-perl)
  • #785229 – sqitch: "sqitch: FTBFS: new warnings"
    upload new upstream release (pkg-perl)
  • #785232 – libdist-zilla-plugin-requiresexternal-perl: "libdist-zilla-plugin-requiresexternal-perl: FTBFS: More than one plan found in TAP output"
    make tests non-verbose (pkg-perl)
  • #785659 – libdist-zilla-perl: "libdist-zilla-perl: FTBFS: t/plugins/testrelease.t failure"
    make tests non-verbose (pkg-perl)
  • #786447 – libcgi-application-plugin-authentication-perl: "libcgi-application-plugin-authentication-perl FTBFS in unstable"
    add patch from Micah Gersten/Ubuntu (pkg-perl)
  • #786591 – libtext-quoted-perl: "libtext-quoted-perl: broken by libtext-autoformat-perl changes"
    upload new upstream release (pkg-perl)
  • #786667 – libcatalyst-plugin-authentication-credential-openid-perl: "libcatalyst-plugin-authentication-credential-openid-perl: FTBFS: Bareword "use_test_base" not allowed"
    patch Makefile.PL (pkg-perl)
  • #788350 – libhttp-proxy-perl: "FTBFS - proxy tests"
    add patch, improved from CPAN RT (pkg-perl)
  • #789141 – src:libdancer2-perl: "libdancer2-perl: FTBFS with Plack >= 1.0036: t/classes/Dancer2-Core-Response/new_from.t"
    upload new upstream release (pkg-perl)
  • #789669 – src:starlet: "starlet: FTBFS with Plack 1.0036"
    add patch for test compatibility with newer Plack (pkg-perl)
  • #789838 – src:starman: "starman: FTBFS with Plack 1.0036"
    upload new upstream release (pkg-perl)
  • #791493 – libpadre-plugin-datawalker-perl: "libpadre-plugin-datawalker-perl: missing dependency on padre"
    add missing dependency (pkg-perl)
  • #791510 – libcatalyst-authentication-credential-authen-simple-perl: "libcatalyst-authentication-credential-authen-simple-perl: FTBFS: Can't locate Test/ in @INC"
    add missing build dependency (pkg-perl)
  • #791512 – libcatalyst-plugin-cache-store-fastmmap-perl: "libcatalyst-plugin-cache-store-fastmmap-perl: FTBFS: Can't locate Test/ in @INC"
    add missing build dependency (pkg-perl)
  • #791709 – libjson-perl: "libjson-perl: FTBFS: Recursive inheritance detected"
    upload new upstream release (pkg-perl)
  • #792063 – src:libmath-mpfr-perl: "FTBFS: lngamma_bug.t and test1.t fail"
    upload new upstream release (pkg-perl)
  • #792844 – libatombus-perl: "libatombus-perl: ships usr/share/man/man3/README.3pm.gz"
    don't install README manpage (pkg-perl)
  • #792845 – libclang-perl: "libclang-perl: ships usr/share/man/man3/README.3pm.gz"
    don't install README POD/manpage (pkg-perl)

19 July, 2015 09:23PM

Enrico Zini


Random quote

Be selfish when you ask, honest when you reply, and when others reply, take them seriously.

(me, late at night)

19 July, 2015 04:53PM