August 17, 2018


Dirk Eddelbuettel


armadillo image

A new RcppArmadillo release, based on the new Armadillo release 9.100.5 from earlier today, is now on CRAN and in Debian.

It once again follows our (and Conrad's) bi-monthly release schedule. Conrad started with a new 9.100.* series a few days ago. I ran reverse-depends checks and found an issue which he promptly addressed; CRAN found another which he also very promptly addressed. It remains a true pleasure to work with such experienced professionals as Conrad (with whom I finally had a beer around the recent useR! in his home town) and of course the CRAN team whose superb package repository truly is the bedrock of the R community.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 479 other packages on CRAN.

This release once again brings a number of improvements to the sparse matrix functionality. We also fixed one use case of the OpenMP compiler and linker flags which will likely affect a number of the by now 501 (!!) CRAN packages using RcppArmadillo.

Changes in RcppArmadillo version (2018-08-16)

  • Upgraded to Armadillo release 9.100.4 (Armatus Ad Infinitum)

    • faster handling of symmetric/hermitian positive definite matrices by solve()

    • faster handling of inv_sympd() in compound expressions

    • added .is_symmetric()

    • added .is_hermitian()

    • expanded spsolve() to optionally allow keeping solutions of systems singular to working precision

    • new configuration options ARMA_OPTIMISE_SOLVE_BAND and ARMA_OPTIMISE_SOLVE_SYMPD

    • smarter use of the element cache in sparse matrices

  • Aligned OpenMP flags in the Makevars{,.win} files used by RcppArmadillo.package.skeleton to no longer use a single flag for both C and C++.

Courtesy of CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

Edited on 2018-08-17 to correct one sentence (thanks, Barry!) and adjust the RcppArmadillo to 501 (!!) as we crossed the threshold of 500 packages overnight.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 August, 2018 12:00PM


Sune Vuorela

Invite me to your meetings

I was invited by my boss to a dinner. He uses exchange or outlook365 or something like that. The KMail TNEF parser didn’t succeed in parsing all the info, so I’m kind of trying to fix it.

But I need test data. From Exchange or Outlook or Outlook 365. That I can add to the repository for unit tests.

So if you can help me generate test data, please set up a meeting and invite me.

Just to repeat. The data will be made public.

17 August, 2018 08:39AM by Sune Vuorela


August 16, 2018


Steve McIntyre

25 years...

We had a small gathering in the Haymakers pub tonight to celebrate 25 years since Ian Murdock started the Debian project.

people in the pub!

We had 3 DPLs, a few other DDs and a few more users and community members! Good to natter with people and share some history. :-) The Raspberry Pi people even chipped in for some drinks. Cheers! The celebrations will continue at the big BBQ at my place next weekend.

16 August, 2018 09:42PM


Steinar H. Gunderson

Solskogen 2018: Tireless wireless (a retrospective)

These days, Internet access is a bit like oxygen—hard to get excited about, but living without it can be profoundly annoying. With prevalent 4G coverage and free roaming within the EU, the need for wifi in the woods has diminished somewhat, but it's still important for computers (bleep bloop!), and even more importantly, streaming.

As Solskogen's stream wants 5 Mbit/sec out of the party place (we reflect it outside, where bandwidth is less scarce), we were a bit dismayed when we arrived a week before the party for pre-check and discovered that the Internet access from the venue was capped at 5/0.5. After some frenzied digging, we discovered the cause: Since Solskogen is the only event at Flateby that uses the Internet much, they have reverted to the cheapest option except in July, and that eventually caused us to be relegated to an ADSL line card in the DSLAM, as opposed to the VDSL we'd had earlier (which gave us 50/10). Even worse, with a full DSLAM, the change back would take weeks. We needed a plan B.

The obvious first choice would be 4G, but it's not a perfect match; just the stream alone would be 150+ GB (although it can be reduced or turned off when there's nothing happening on the big screen), and it's not the only thing that wants bandwidth. In other words, it would have a serious cost issue, and then there was the question to what degree it could deliver rock-stable streaming or not. There would be the option to use multiple providers and/or use the ADSL line for non-prioritized traffic (ie., participant access), but in the end, it didn't look so attractive, so we filed this as plan C and moved on to find another B.

Plan B eventually materialized in the form of the Ubiquiti Litebeam M5, a ridiculously cheap ($49 MSRP!) point-to-point link based on a somewhat tweaked Wi-Fi chipset. The idea was to get up on the roof (køb min fisk!), shoot to somewhere else with better networking and then use that link for everything. Øyafestivalen, by means of Daniel Husand, lent us a couple of M5s on short notice, and off we went to find trampolines on Google Maps. (For the uninitiated, trampolines = kids = Internet access.)

We considered the home of a fellow demoscener living nearby—at 1.4 km, it's well within the range of the M5 (we know of deployments running over 17 km). However, the local grocery store in Flateby, Spar, managed to come up with something even more interesting; it turns out that behind the store, more or less across the street, there's a volunteer organization called Frivillighetssentralen that was willing to lend out its 20/20 fiber Internet from Viken Fiber. Even better, after only a quick phone call, the ISP was more than willing to boost the line to 200/200 for the weekend. (The boost would happen Friday or so, so we'd run most of our testing with 20/20, but even that would be plenty.)

After a trip up on the roof of the party place, we decided approximately where to put the antenna, and put one of the M5s in the window of Frivillighetssentralen pointing roughly towards that spot. In a moment of hubris, we decided to try without going up on the roof again, just holding the other M5 out of the window, pointed it roughly in the right direction… and lo and behold, it synced on 150 Mbit/sec both ways, reporting a distance of 450 meters. (This was through another house that was in the way, ie., no clear path. Did we mention the M5s are impossibly good for the price?)

So, after mounting it on the wall, we started building the rest of the network. Having managed switches everywhere paid off; instead of having to pull a cable from the wireless to the central ARM machine (an ODROID XU4) running as a router, we could just plug it into the closest participant switch and configure the ports. I'm aware that most people would consider VLANs overkill for a 200-person network, but it really helps in flexibility when something unexpected happens—and also in terms of cable.

However, as the rigging progressed and we started getting to the point where we could run test streams, it became clear that something was wrong. The Internet link just wasn't pushing the amount of bandwidth we wanted it to; in particular, the 5 Mbit/sec stream just wouldn't go through. (In parallel, we also had some problems with access points refusing to join the wireless controller, which turned out to be a faulty battery that caused the clock on the WLC to revert to year 2000, which in turn caused its certificate to be invalid. If we'd had Internet at that stage, it would have had NTP and never seen the problem, but of course, we didn't because we were still busy trying to figure out the best place on the roof at the time!)

Of course, frantic debugging ensued. We looked through every setting we could find on the M5s, we moved them to a spot with clear path and pointed them properly at each other (bringing the estimated link up to 250 Mbit/sec) and upgraded their software to the latest version. Nothing helped at all.

Eventually, we started looking elsewhere in our network. We run a fairly elaborate shaping and tunneling setup; this allows us to be fully in control over relative bandwidth prioritization, both ways (the stream really gets dedicated 5 Mbit/sec, for example), but complexity can also be scary when you're trying to debug. TCP performance can also be affected by multiple factors, and then of course, there's the Internet on its way. We tried blasting UDP at the other end full speed, which the XU4 would police down to 13 Mbit/sec, accurate to two decimals, for us (20 Mbit uplink, minus 5 for the stream, minus some headroom)—but somehow, the other end only received 12. Hmm. We reduced the policer to 12 Mbit/sec, and only got 11… what the heck?
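The shaping side of such a setup can be pictured with a small tc sketch. The interface name, rates, and classifier port below are illustrative assumptions, not our actual configuration; an HTB tree like this gives the stream its dedicated 5 Mbit/sec while everything else shares the remainder minus headroom:

```
# Illustrative HTB shaping on the WAN egress interface (names, rates and
# port are assumed, not the real Solskogen config).
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 20mbit ceil 20mbit
# dedicated stream class: guaranteed 5 Mbit/sec
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 5mbit ceil 5mbit
# everything else shares the rest, kept below the uplink for headroom
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 13mbit ceil 13mbit
# classify the stream by destination port (assumed port 1935)
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
  match ip dport 1935 0xffff flowid 1:10
```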

At this point, we understood we had a packet loss problem on our hands. It would either be the XU4s or the M5s; something dropped 10% or so of all packets, indiscriminately. Again, the VLANs helped; we could simply insert a laptop on the right VLAN and try to send traffic outside of the XU4. We did so, and after some confusion, we figured out it wasn't that. So what was wrong with the M5s?
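The measurement itself is conceptually simple: blast numbered datagrams at a receiver and count how many arrive. A toy Python version of such a probe, running over the loopback interface (the real test of course ran across the suspect network path):

```python
import socket
import threading

def _receive(sock, n_expected, seen):
    sock.settimeout(0.5)
    try:
        while len(seen) < n_expected:
            data, _addr = sock.recvfrom(64)
            seen.add(int(data.decode()))
    except socket.timeout:
        pass  # sender finished; whatever is still missing was lost

def probe(n_packets=200):
    """Send numbered datagrams to ourselves and report the loss percentage."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("", 0))          # any free port
    port = rx.getsockname()[1]
    seen = set()
    t = threading.Thread(target=_receive, args=(rx, n_packets, seen))
    t.start()
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(n_packets):
        tx.sendto(str(i).encode(), ("", port))
    t.join()
    rx.close()
    tx.close()
    return 100.0 * (n_packets - len(seen)) / n_packets
```

A faulty path like ours would show up as a stable, rate-independent percentage of missing sequence numbers.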

It turns out the latest software version has iperf built-in; you can simply ssh to the box and run from there. We tried the one on the ISP side; it got great TCP speeds to the Internet. We tried the one on the local side; it got… still great speeds! What!?

So, after six hours of debugging, we found the issue; there was a faulty Cat5 cable between two switches in the hall, that happened to be on the path out to the inner M5. Somehow it got link at full gigabit, but it caused plenty of dropped packets—I've never seen this failure mode before, and I sincerely hope we'll never be seeing it again. We replaced the cable, and tada, Internet.

Next week, we'll talk about how the waffle irons started making only four hearts instead of five, and how we traced it to a poltergeist that we brought in a swimming pool when we moved from Ås to Flateby five years ago.

16 August, 2018 08:00PM


Bdale Garbee

Mixed Emotions On Debian Anniversary

When I woke up this morning, my first conscious thought was that today is the 25th anniversary of a project I myself have been dedicated to for nearly 24 years, the Debian GNU/Linux distribution. I knew it was coming, but beyond recognizing the day to family and friends, I hadn't really thought a lot about what I might do to mark the occasion.

Before I even got out of bed, however, I learned of the passing of Aretha Franklin, the Queen of Soul. I suspect it would be difficult to be a caring human being, born in my country in my generation, and not feel at least some impact from her mere existence. Such a strong woman, with amazing talent, whose name comes up in the context of civil rights and women's rights beyond the incredible impact of her music. I know it's a corny thing to write, but after talking to my wife about it over coffee, Aretha really has been part of "the soundtrack of our lives". Clearly, others feel the same, because in her half-century-plus professional career, "Ms Franklin" won something like 18 Grammy awards, the Presidential Medal of Freedom, and other honors too numerous to list. She will be missed.

What's the connection, if any, between these two? In 2002, in my platform for election as Debian Project Leader, I wrote that "working on Debian is my way of expressing my most strongly held beliefs about freedom, choice, quality, and utility." Over the years, I've come to think of software freedom as an obvious and important component of our broader freedom and equality. And that idea was strongly reinforced by the excellent talk Karen Sandler and Molly de Blanc gave at Debconf18 in Taiwan recently, in which they pointed out that in our modern world where software is part of everything, everything can be thought of as a free software issue!

So how am I going to acknowledge and celebrate Debian's 25th anniversary today? By putting some of my favorite Aretha tracks on our whole house audio system built entirely using libre hardware and software, and work to find and fix at least one more bug in one of my Debian packages. Because expressing my beliefs through actions in this way is, I think, the most effective way I can personally contribute in some small way to freedom and equality in the world, and thus also the finest tribute I can pay to Debian... and to Aretha Franklin.

16 August, 2018 05:26PM


Bits from Debian

25 years and counting

Debian is 25 years old by Angelo Rosa

When the late Ian Murdock announced 25 years ago in comp.os.linux.development, "the imminent completion of a brand-new Linux release, [...] the Debian Linux Release", nobody would have expected the "Debian Linux Release" to become what's nowadays known as the Debian Project, one of the largest and most influential free software projects. Its primary product is Debian, a free operating system (OS) for your computer, as well as for plenty of other systems which enhance your life. From the inner workings of your nearby airport to your car entertainment system, and from cloud servers hosting your favorite websites to the IoT devices that communicate with them, Debian can power it all.

Today, the Debian project is a large and thriving organization with countless self-organized teams comprised of volunteers. While it often looks chaotic from the outside, the project is sustained by its two main organizational documents: the Debian Social Contract, which provides a vision of improving society, and the Debian Free Software Guidelines, which provide an indication of what software is considered usable. They are supplemented by the project's Constitution which lays down the project structure, and the Code of Conduct, which sets the tone for interactions within the project.

Every day over the last 25 years, people have sent bug reports and patches, uploaded packages, updated translations, created artwork, organized events about Debian, updated the website, taught others how to use Debian, and created hundreds of derivatives.

Here's to another 25 years - and hopefully many, many more!

16 August, 2018 06:50AM by Ana Guerrero Lopez


Norbert Preining

DebConf 18 – Day 3

Most of Japan is on summer vacation now, only a small village in the north resists the siege, so I am continuing my reports on DebConf. See DebConf 18 – Day 1 and DebConf 18 – Day 2 for the previous ones.

With only a few talks of interest for me in the morning, I spent the time preparing my second presentation Status of Japanese (and CJK) typesetting (with TeX in Debian) during the morning, and joined for lunch and the afternoon session.

First to attend was the Deep Learning BoF by Mo Zou. Mo reported on the problems of getting Deep Learning tools into Debian: not only the software itself, for which proprietary drivers for GPU acceleration are often highly advisable, but also the data sets (pre-trained models), which often fall under a non-free license, pose problems for integration into Debian. With several deep learning practitioners around, we had a lively discussion on how to deal with all this.

Next up was Markus Koschany with Debian Java, where he gave an overview on the packaging tools for Java programs and libraries, and their interaction with the Java build tools like Maven, Ant, and Gradle.

After the coffee break I gave my talk about Status of Japanese (and CJK) typesetting (with TeX in Debian), and I must say I was quite nervous. As a foreigner without a native CJK background, speaking about the intricacies of typesetting with Kanji was a bit of a challenge. In the end I think it worked out quite well, and I got some interesting questions after the talk.

Last for today was Nathan Willis’ presentation Rethinking font packages—from the document level down. With design, layout, and fonts being close to my personal interests, too, this talk was one of the highlights for me. Starting from a typical user’s workflow in selecting a font set for a specific project, Nathan discussed the current situation of fonts in the Linux environment and Debian, and suggested improvements. Unfortunately, what would actually be needed is a complete rewrite of the font stack, management, and system organization, a rather big task.

After the group photo shot by Aigars Mahinovs (who also provided several more photos) and a relaxed dinner, I went climbing with Paul Wise at a nearby gym. It was – not surprisingly – quite humid and warm in the gym, so the amount of sweat I lost was considerable, but we had some great boulders and a fun time. In addition to that, I found a very nice book, nice for two reasons: first, it was about one of my (and my daughters’ – seems to be connected) favorite movies, Totoro by Miyazaki Hayao, and second, it was written in Taiwanese Mandarin with some kind of Furigana to aid reading for kids – something that is very common in Japan (even in books for adults in case of rare readings), but which I had never seen before with Chinese. The proper name is Zhùyīn Zìmǔ 註音字母 or (more popularly) Bopomofo.

This interesting and long day finished in my hotel with a cold beer to compensate for the loss of minerals during climbing.

16 August, 2018 12:46AM by Norbert Preining

August 14, 2018

Enrico Zini

DebConf 18

This is a quick recap of what happened during my DebConf 18.

24 July:

  • after buying a new laptop I hadn't yet set up a build system for Debian on it. I finally did, with cowbuilder. It was straightforward to set up and works quite fast.
  • shopping for electronics. Among other things, I bought myself a new USB-C power supply that I can use for laptop and phone, and now I can have a power supply for home and one always in my backpack for traveling. I also bought a new pair of headphones+microphone, since I cannot wear in-ear, and I only had the in-ear ones that came with my phone.
  • while trying out the new headphones, I unexpectedly started playing loud music in the hacklab. I then debugged audio pin mapping on my new laptop and reported #904437
  • fixed nightly maintenance scripts, which have been mailing me errors for a while.

25 July:

26 July:

  • I needed to debug a wreport FTBFS on a porterbox, and since the procedure to set up a build system on a porterbox was long and boring, I wrote debug-on-porterbox
  • Fixed a wreport FTBFS and replaced it with another FTBFS, that I still haven't managed to track down.

27 July:

  • worked on multiple people talk notes, alone and with Rhonda
  • informal FD/DAM brainstorming with jmw
  • local antiharassment coordination with Tassia and Taowa
  • talked to ansgar about how to have debtags tags reach ftp-master automatically, without my manual intervention
  • watched a wonderful lunar eclipse

28 July:

  • implemented automatic export of debtags data for ftp-master
  • local anti-harassment team work

29 July:

30 July:

31 July:

  • Implemented F-Droid antifeatures as privacy:: Debtags tags

01 August:

  • Day trip and barbecue

02 August:

03 August:

  • Multiple People talk
  • Debug Boot of my laptop with UEFI with Steve, and found out that HP firmware updates for it can only be installed using Windows. I am really disappointed with HP for this, given it's a rather high-end business laptop.

04 August:

14 August, 2018 01:08PM

Reproducible builds folks

Reproducible Builds: Weekly report #172

Here’s what happened in the Reproducible Builds effort between Sunday August 5 and Saturday August 11 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

There were a handful of updates to diffoscope, our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages:


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

14 August, 2018 06:17AM

Minkush Jain

Google Summer of Code 2018 Final Report

This is the summary of my work done during Google Summer of Code 2018 with Debian.

Project Title: Wizard/GUI helping new interns/students get started

Final Work Product:

Mentor: Daniel Pocock

Codebase: gsoc-2018-experiments

CardBook debian/sid

What is Google Summer of Code?

Google Summer of Code is a global program focused on introducing students to open source software development. Students work on a 3-month programming project with an open source organization during their break from university.

As you can probably guess, selection is highly competitive: thousands of students apply every year. The program offers students real-world experience building software, along with collaboration with the community and other student developers.

Project Overview

This project aims at developing tools and packages which simplify the process for new applicants in the open source community to get the required setup. It would consist of a GUI/Wizard with integrated scripts to set up various communication and development tools like PGP and SSH keys, DNS, IRC, XMPP and mail filters, along with Jekyll blog creation, mailing list subscription, a project planner, searching for developer meet-ups, a source code scanner and much more! The project is free and open source, hosted on Salsa (Debian’s GitLab instance).

I created various scripts and packages that automate tasks and help a user get started: managing contacts and email, subscribing to developers’ lists, getting started with GitHub, IRC and more.

Mailing Lists Subscription

I made a script that fully automates subscription to the various Debian mailing lists. The script also automates the confirmation reply to complete the procedure for the user.

It works for all ten important Debian mailing lists for a newcomer like ‘debian-outreach’, ‘debian-announce’, ‘debian-news’, ‘debian-devel-announce’ and more.

I also spent time refactoring the code with my mentors to make it work as a stand-alone script by adding utility functions and fixing the syntax.

A video demo of the script has also been added to my blog.

It takes the email address and the automated reply code received by the user, and subscribes them to the mailing list. The script uses the requests library to submit the form data to the list server.

For the application task, I also created a basic GUI for the program using PyQt.

Libraries used:

  • requests
  • smtplib
  • PyQt
  • email MIME handlers

This is a working demo of the script. The user can enter any Debian mailing lists to subscribe to it. They have to enter the unique code received by email to confirm their subscription:

Thunderbird Setup

This task involved writing a program to simplify the setup procedure of Thunderbird for a new user.

I made a script which kills the Thunderbird process if it is running and then edits the ‘prefs.js’ configuration file to modify configuration settings of the software.

The program overwrites the existing settings by creating ‘user.js’ with custom settings, which take effect as soon as Thunderbird is reopened.

I also added the option to apply the changes to all profiles or to a specific one, at the user’s choice.


  • Examines system process to find if Thunderbird is running in background and kills it.

  • Searches dynamically in user’s system to find the configuration file’s path.

  • The user can choose which profile to change.

  • Modifies the default settings to accomplish the following:

    • User’s v-card is automatically appended in mails and posts.
    • Top-posting configuration has been set up by default.
    • Reply heading format is changed.
    • Plain-text mode made default for new mails.
    • No sound and alerts for incoming mails.

and many more…

Libraries used:

  • psutil
  • os
  • subprocess
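The user.js mechanism itself is easy to sketch: write overriding prefs next to prefs.js and Thunderbird applies them on the next start. The pref names below are real Thunderbird preferences matching the settings described above, though the exact set the real script writes may differ:

```python
import os

# Thunderbird pref names corresponding to the settings described above
PREFS = {
    "mail.identity.default.attach_vcard": True,   # append v-card to mails
    "mail.identity.default.reply_on_top": 1,      # top-posting by default
    "mail.identity.default.compose_html": False,  # plain text for new mails
    "": False,            # no sound for incoming mail
}

def fmt(value):
    # note: the bool check must come before int, since bool is a subclass of int
    if isinstance(value, bool):
        return "true" if value else "false"
    if isinstance(value, int):
        return str(value)
    return '"%s"' % value

def write_user_js(profile_dir, prefs=PREFS):
    """Write a user.js that overrides prefs.js on the next Thunderbird start."""
    path = os.path.join(profile_dir, "user.js")
    with open(path, "w") as f:
        for key, value in prefs.items():
            f.write('user_pref("%s", %s);\n' % (key, fmt(value)))
    return path
```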

Source Code Scanner

I created a program that analyses the user’s project directory to find which programming languages they are proficient in.

The script helps them realise which languages and skills they prefer by computing the percentage of each language present.

It scans all files with known extensions (such as .py, .java, .cpp), which are stored in a separate file, and examines them to display the total number of lines and the percentage of each language present in the directory.

The script uses Pygount library to scan all folders for source code files. It uses pygments syntax highlighting package to analyse the source code and can examine any language.

Libraries used:

  • os (operating system interfaces)
  • pygount

I added a Python script with all common file extensions included in it.

The script can be executed easily by entering the directory’s path.


  • Searched Python’s glob library to iterate through the home directory.

  • Explored GitHub’s Linguist library for analysing code.

  • Used the Pygments library to detect languages through its syntax highlighters.

This is a working demo of the script. The user can enter their project’s directory and the script will analyse it to publish the result:

CardBook Debian Package

For managing contacts/calendar for a user, Thunderbird extensions need to be installed and setup.

I created a Debian package for CardBook, a Thunderbird add-on for managing contacts using the vCard and CardDAV standards.

I have written a blog post here, explaining the entire development process, as well as using tools to make it comply with Debian standards.

Creating a Debian package from scratch involved a lot of learning from resources and wiki pages.

I created the package using debhelper commands and included the CardBook extension inside the package. I modified the packaging files (changelog, control, rules, copyright) for its installation.
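With debhelper, the heart of such a package is a tiny debian/rules makefile; a minimal sketch (the real package adds mozilla-devscripts / xul-ext integration on top of this):

```
#!/usr/bin/make -f
# Minimal debhelper rules file: dh sequences all the standard build steps.
%:
	dh $@
```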

I also created a Local Debian Repository for testing the package.

I created four updated versions of the package, which are present in the changelog.

I used the Lintian tool to check for bugs, packaging errors and policy violations. I spent some time removing all the Lintian errors in version 1.3.0 of the package.

I took help from mentors on IRC (#debian-mentors) and on the mailing lists during the packaging process. Finally, I added mozilla-devscripts to build the package using the xul-ext tooling.

I updated the ‘watch’ file to automatically pull tags from upstream.
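A debian/watch file tracking upstream release tags typically looks like this sketch (uscan version 4 format; the repository URL and mangling here are illustrative, not CardBook's actual file):

```
version=4
opts=filenamemangle=s/.*\/v?(\d\S+)\.tar\.gz/cardbook-$1.tar.gz/ \
  .*/archive/v?(\d\S+)\.tar\.gz
```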

I mailed Carsten Schoenert, Debian Maintainer of Thunderbird and Lightning package, who helped me a lot along with my mentor, Daniel during the packaging process.

CardBook Debian Package:


I created and set up my public and private GPG keys using GnuPG and added them on

I signed the package files, including ‘.changes’, ‘.dsc’ and ‘.deb’, using ‘dpkg-sig’ and ‘debsign’, and then verified them with my keys.

Finally, the package has been uploaded on using dput HTTPS method.


This is video demo showing the package’s installation inside Thunderbird. As it can be clearly observed, CardBook was successfully installed as a Thunderbird add-on:

IRC Setup

One of the most challenging tasks for a new contributor is getting started with Internet Relay Chat (IRC) and its setup.

I made an IRC Python bot to overcome the initial setup hurdle. The script uses socket programming to connect to the Freenode server and send data.


  • It registers a new nickname for the user on the Freenode server by sending the user’s credentials to NickServ. An email is received on successful registration of the nickname.

  • The script checks if the entered email is invalid or the nickname chosen by the user is already registered on the server. If this is the case, the server disconnects and the script prompts the user to re-enter the details.

  • It identifies the nickname with the server before joining any channel, by messaging NickServ, if the nick registration was successful.

  • It displays the list of all ‘#debian’ channels live on the server with at least 30 members.

  • The script connects to any IRC channel entered by the user and displays the live chat occurring on the channel.

  • Implements the ping-pong protocol to keep the connection alive. This makes sure that the connection is not lost during the operation, simulating human interaction with the server by responding to its pings.

  • It continuously prints all data received from the server after decoding it with UTF-8 and closes the server after the operation is done.


Libraries used:

  • socket

This is a working video demo for the IRC script.

To demonstrate one of its features, I entered my already-registered nickname (Mjain) to test it. The script analyses the server’s response and asks the user to enter a different one.

Salsa and Github Registration

I created scripts using Selenium WebDriver to automate new account creation on Salsa and GitHub.

This task provides a quick start for users wanting to contribute to open source by registering accounts on version-control hosting platforms.

I learned Selenium automation techniques in Python to accomplish this. The scripts control the browser through WebDriver (tested with geckodriver for Firefox).

I used Pytest to write test scripts for both programs, which verify whether the account was successfully created or not.

Libraries used:

  • Selenium Web driver
  • Geckodriver
  • Pytest

Extract Mail Data

The aim of this task was to extract data from the user’s email to make managing contacts easier.

I created a script to analyse the user’s email and extract all phone numbers present in it. The program fetches all mails from the server using IMAP and decodes them using UTF-8 to obtain a readable format.


  • Easy login to the mail server with the user’s credentials

  • Obtains the date and time for all mails

  • Option to iterate through all or unseen mails

  • Extracts the Sender, Receiver, Subject and body of the email.

It scans the body of each message for phone numbers using python-phonenumbers and stores all of them, along with their details, in a text file on the system.


  • Converts all the telephone numbers to the standard international format E.164 (adding the country code if not already present)

  • Uses geocoding to find the location of the phone numbers

  • Also extracts the carrier name and timezone details for all the phone numbers.

  • Saves all this data along with sender’s details in a file and also displays it on the terminal.

Libraries used:

  • Imaplib
  • IMAPClient
  • Python port of libphonenumber (phonenumbers)

The original libphonenumber is a popular Google library for parsing, formatting, and validating international phone numbers.
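The E.164 normalisation step can be sketched in plain Python. This is a simplified illustration of the idea only; the real script relies on the phonenumbers library, which is far more robust, and the default country code here is an assumption:

```python
import re

# Simplified sketch of E.164 normalisation: strip formatting characters and
# prepend a default country code (assumed to be "91" here) when missing.

DEFAULT_COUNTRY_CODE = "91"

def to_e164(raw, country_code=DEFAULT_COUNTRY_CODE):
    digits = re.sub(r"[^\d+]", "", raw)   # drop spaces, dashes, parentheses
    if digits.startswith("+"):
        return digits                      # already carries a country code
    return "+" + country_code + digits

print(to_e164("98765 43210"))      # -> +919876543210
print(to_e164("+1 650-253-0000"))  # -> +16502530000
```

The phonenumbers library additionally validates the number against per-country metadata, which a regex sketch like this cannot do.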

I also researched the Telify Mozilla plugin, which uses a similar algorithm to provide click-to-save phone numbers.

This is a working video demo for the script:

HTTP Post Salsa Registration

I created another script to automate the process of new account creation on Salsa using HTTP POST.

The script uses the Requests library to send HTTP requests to the website and submit form data.

I used the Beautiful Soup 4 library to parse and navigate the HTML of the page and extract the tokens and form fields within the website.

The script checks for password mismatches and duplicate usernames and creates a new account instantly.

Libraries used:

  • Requests
  • Beautiful Soup
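The token-extraction step can be sketched as follows. To keep the example self-contained it uses Python’s stdlib html.parser instead of Beautiful Soup, and the form layout and field name are hypothetical:

```python
from html.parser import HTMLParser

# Sketch of pulling a hidden form token out of a registration page.
# The real script uses Beautiful Soup; the field name below is hypothetical.

class TokenFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.token = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("name") == "authenticity_token":
            self.token = a.get("value")

page = '<form><input type="hidden" name="authenticity_token" value="abc123"></form>'
finder = TokenFinder()
finder.feed(page)
print(finder.token)  # token is then included in the POST data
```

With Requests, the extracted token and the remaining form fields would be posted back inside a session that carries the site’s cookies.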

This is a working demo for the script. An email is received from Salsa confirming that the new account has been created:

Mail Filters Setup

One of the problems faced by a developer is filtering the hundreds of unnecessary mails that arrive from mailing lists, promotional websites, and spam.

The email client does the job to a certain extent, but many emails are still left that need to be sorted into categories.

For this purpose, I created a script which examines the user’s mailbox and sorts mails into Gmail labels and folders, creating them as needed. The script uses IMAP to fetch mails from the server.
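The filtering rule itself can be sketched as a simple mapping from sender domain to label. This is only an illustration of the idea; the rules and label names below are hypothetical:

```python
# Hypothetical filtering rules: map a sender's domain to a Gmail label.
RULES = {
    "lists.debian.org": "Debian-Lists",
    "github.com": "Notifications",
}

def label_for(sender):
    """Pick a label for a message based on the sender's address."""
    domain = sender.rsplit("@", 1)[-1].lower()
    return RULES.get(domain, "Misc")

print(label_for("debian-devel@lists.debian.org"))  # -> Debian-Lists
print(label_for("noreply@github.com"))             # -> Notifications
```

On the IMAP side, the label can then be created with imaplib’s `create()` and the message moved into it with `copy()`.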

Libraries used:


I would like to thank Debian and Google for giving me this opportunity to work on this project.

I am grateful to my mentors Daniel Pocock, Urvika Gola, Jaminy Prabharan and Sanyam Khurana for their constant help throughout GSoC.

Finally, this journey wouldn’t have been possible without my friends and family who supported me.

Special Mention

I would like to thank Carsten Schönert and Andrey Rahmatullin for their help with Debian packaging.

14 August, 2018 04:00AM by Minkush Jain

Athos Ribeiro

Google Summer of Code 2018 Final Report: Automatic Builds with Clang using Open Build Service

Project Overview

Debian package builds with Clang were performed from time to time through massive rebuilds of the Debian archive on AWS. The results of these builds are published on This summer’s project aimed to automate Debian archive Clang rebuilds by replacing the current setup with Open Build Service (OBS) builds.

Our final product consists of a repository with salt states to deploy an OBS instance which triggers Clang builds of Debian Unstable packages as soon as they get uploaded by their maintainers.

An instance of our clang builder is hosted at and the Clang builds triggered so far can be seen here.

My Google Summer of Code project can be seen at

My contributions

The major contribution for the summer is our running OBS instance at

Salt states to deploy our OBS instance

We created a series of Salt states to deploy and configure our OBS instance. The states for local deploy and development are available at


The commits above were condensed and submitted as a pull request to the project mentor’s GitHub account, with production deployment configurations.

OBS Source Service to make gcc/clang binary substitutions

To perform Clang builds of deb packages, we substitute the GCC binaries with Clang binaries in the builder’s chroot at build time. To do that, we use the OBS Source Services feature, which requires a package (which performs the desired task) to be available to the target OBS project.
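The substitution idea — pointing the compiler names the build system invokes at Clang — can be illustrated as below. This is not the actual obs-service-clang-build implementation, and the paths are hypothetical (in the real chroot this would touch /usr/bin):

```python
import os
import tempfile

# Illustration of the gcc -> clang substitution idea (not the actual OBS
# source service): divert the names the build system invokes to Clang.

chroot_bin = tempfile.mkdtemp()
open(os.path.join(chroot_bin, "clang"), "w").close()  # stand-in for the clang binary

for name in ("gcc", "cc", "g++"):
    link = os.path.join(chroot_bin, name)
    if os.path.lexists(link):
        os.remove(link)
    os.symlink(os.path.join(chroot_bin, "clang"), link)

print(sorted(os.listdir(chroot_bin)))  # gcc, cc and g++ now resolve to clang
```

After the substitution, dpkg-buildpackage picks up Clang transparently whenever it would have invoked GCC.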

Our obs-service-clang-build package is hosted at


Monitor Debian Unstable archive and trigger clang builds for newly uploaded packages

We also use two scripts to monitor the debian-devel-changes mailing list, watching for new package uploads in Debian Unstable, and trigger Clang builds in our OBS instance whenever a new upload is accepted.

Our scripts to monitor the debian-devel-changes mailing list and trigger Clang builds in our OBS instance are available at
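The monitoring step amounts to recognising “Accepted” mails and extracting the package name and version. A sketch of that parsing follows; the subject format here is simplified, and the real scripts live in the repository linked above:

```python
import re

# Sketch of the monitoring idea: pick the package name and version out of a
# debian-devel-changes subject line (format simplified for illustration).

SUBJECT_RE = re.compile(r"^Accepted (\S+) (\S+) \(source.*\)")

def parse_subject(subject):
    m = SUBJECT_RE.match(subject)
    return (m.group(1), m.group(2)) if m else None

print(parse_subject("Accepted hello 2.10-2 (source) into unstable"))
```

Each parsed (package, version) pair would then be handed to the OBS API to trigger a build in the Clang project.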


OBS documentation contributions

During the summer, most of my work was to read OBS documentation and code to understand how to trigger Debian Unstable builds in OBS and how to perform customized Clang builds (replacing GCC).

My contributions

Pending PRs

We want to change the Clang build links at To do so, we must change the Debian distro-tracker to point to our OBS instance. As of the time of writing, we have an open PR in distro-tracker to change the URLs:

Reports written through the summer

Adding new workers to the OBS instance

To configure new workers for our current OBS instance, hosted at, just set up new Salt minions and provision them with obs-common and obs-worker, from This should be done in the top.sls file.

Future work

  • We want to extend our OBS instance with more projects to provide Upstream LLVM packages to Debian and derived distributions.
  • More automation is needed in our salt states. For instance, we may want to automate SSL certificates generation using Let’s encrypt.
  • During the summer, several issues were detected in the Debian Stable OBS packages. We want to work more closely with the OBS packagers to help improve the OBS packages and OBS itself.

Google Summer of Code experience

Working with Debian during the summer was an interesting experience. I did not expect to have as many problems as I did (see reports) with the OBS packages. These problems turned into hours of debugging and reading Perl code in order to understand how OBS processes communicate and trigger new builds. I also learned more about Debian packaging, Salt and Vagrant. I do expect to keep working with OBS and help maintain the service we deployed during the summer. There’s still a lot of room for improvement and it is easy to see how the project benefits FLOSS communities.

14 August, 2018 03:20AM

August 13, 2018

Iustin Pop

Eiger Bike Challenge 2018

So… another “fun” ride. Probably the most fun ever, both subjectively and in terms of Strava’s relative effort level. And that despite it being the “short” version of the race (55km/2’500m ascent vs. 88km/3’900m).

It all started very nicely. About five weeks ago, I started the Sufferfest climbing plan, and together with some extra cross-training, I was going very strong, feeling great and seeing my fitness increasing constantly. I was quite looking forward to my first time at this race.

Then, two weeks ago, after already having registered, family gets sick, then I get sick—just a cold, but with a persistent cough that has not gone away even after two weeks. The week I got sick my training plan went haywire (it was supposed to be the last heavy week), and the week of the race itself I was only half-recovered so I only did a couple of workouts.

With two days before the race, I was still undecided whether to actually try to do it or not. Weather was quite cold, which was on the good side (I was even a bit worried about too cold in the morning), then it turned to the better.

So, what do I got to lose? I went to the start of the 55km version. As to length, this is on the easy side. But it does have 2’500m of ascent, which is a lot for me for such a short ride. I’ve done this amount of ascent before—2017 BerGiBike, long route—but that was “spread” over 88km of distance and in lower temperatures and with quite a few kilograms fewer (on my body, not on the bike), and still killed me.

The race starts. Ten minutes in, 100m gained; by 18 minutes, 200m already. By 1h45m I’m done with the first 1’000m of ascent, and at this time I’m still on the bike. But I was also near the end of my endurance reserve, and even worse, at around 1h30m in, the sun was finally high enough in the sky to start shining on me, and the temperature went from 7-8°C to 16°. I pass Grosse Scheidegg on the bike, a somewhat flat 5k segment follows to the First station, but this flat segment still has around 300m of ascent, with one portion that VeloViewer says is around 18% grade. After pedalling one minute at this grade, I give up, get off the bike, and start pushing.

And once this mental barrier of “I can bike the whole race” is gone, it’s so much easier to think “yeah, this looks steep, let’s get off and push” even though one might still have enough reserves to bike uphill. In the end, what’s the difference between biking at 5km/h and pushing at 4.0-4.3km/h? Not much, and heart rate data confirms it.

So, after biking all the way through the first 1’100m of ascent, the remaining 1’400m were probably half-biking, half-pushing. And that might still be a bit generous. Temperatures went all the way up to 32.9°C at one point, but went back down a bit and stabilised at around 25°. Min/Avg/Max overall were 7°/19°/33° - this is not my ideal weather, for sure.

Other fun things:

  • Average (virtual) power over time as computed by VeloViewer went from 258W at 30m, to 230W at the end of first hour, 207W at 2h, 164W at 4h, and all the way down to 148W at the end of the race.
  • The brakes faded enough on the first long descent that in one corner I had to half-way jump off the bike and stop it against the hill; I was much more careful later to avoid this, which led to very slow going down gravel roads (25-30km/h, not more); I need to fix this ASAP.
  • By the last third of the race, I was tired enough that even taking a 2-minute break didn’t relax my heart rate, and I was only able to push the bike uphill at ~3km/h.
  • The steepest part of the race (a couple of hundred meters at 22-24%) was also in the hottest temperature (33°).
  • At one point, there was a sign saying “Warning, ahead 2.5km uphill with 300m altitude gain”; I read that as “slowly pushing the bike for 2.5km”, and that was true enough.
  • In the last third of the race, there was a person going around the same speed as me (in the sense that we were passing each other again and again, neither gaining significantly). But he was biking uphill! Not much faster than my push, but still biking! Hat off, sir.
  • My coughing bothered me a lot (painful coughing) in the first two thirds, by the end of the race it was gone (now it’s back, just much better than before the race).
  • I met someone while pushing and we went together for close to two hours (on and off the bike), I think; lots of interesting conversation, especially as pushing is very monotonous…
  • At the end of the race (really, after the finish point), I was “ok, now what?” Brain was very confused that more pushing is not needed, especially as the race finishes with 77m of ascent.
  • BerGiBike 2017 (which I didn’t write about, apparently) was exactly the same recorded ascent to the meter: 2’506, which is a fun coincidence ☺

The route itself is not the nicest one I’ve done at a race. Or rather, the views are spectacular, but a lot of the descent is on gravel or even asphalt roads, and the single-trails are rare and on the short side. And a large part of the descents are difficult enough that I skipped them, which in many other races didn’t happen to me. On the plus side, they had very good placement of the official photographers, I think one of the best setups I’ve seen (as to the number of spots and their positioning).

And final fun thing: I was not the last! Neither overall nor in my age category:

  • In my age category, I was placed 129 out of 131 finishers, and there were another six DNFs.
  • Overall (55km men), I was 391 out of 396 finishers, plus 17 DNF.

So, given my expectations for the race—I only wanted to finish—this was a good result. Grand questions:

  • How much did my sickness affect me? Especially as lung capacity is involved, and this being at between 1’000 and 2’000m altitude, when I do my training at below 500?
  • How much more could I have pushed the bike? E.g. could I push all above 10%, but bike the rest? What’s the strategy when some short bits are 20%? Or when there’s a long one at ~12%?
  • If I had an actual power meter, could I do much better by staying below my FTP, or below 90% FTP at all times? I tried to be careful with heart rate, but coupled with temperature increase this didn’t go as well as I thought it would.
  • My average overall speed was 8.5km/h. First in 55km category was 19.72km/h. In my age category and non-licensed, first one was 18.5km/h. How, as in how much training/how much willpower does that take?
  • Even better, in the 88km and my age category, first placed speed was 16.87km/h, finishing this longer route more than one hour faster than me. Fun! But how?

In any case, at my current weight/fitness level, I know what my next race profile will be. I know I can bike more than one thousand meters of altitude in a single long (10km) uphill, so that’s where I should aim at. Or not?

Closing with one picture to show how the views on the route are:

Yeah, that's me ☺ Yeah, that’s me ☺

And with that, looking forward to the next trial, whatever it will be!

13 August, 2018 09:50PM

hackergotchi for Thomas Goirand

Thomas Goirand

Official Debian testing OpenStack image news

A few things happened to the testing image, thanks to Steve McIntyre, myself, and … some DebConf18 foo!

  • The buster/testing image wasn’t generated since last April, this is now fixed. Thanks to Steve for it.
  • The datasource_list is now correct, in both the Stretch and Testing images (previously, cloudstack was set too early in the list, which made the image wait 120 seconds for a data source which wasn’t available if booting on OpenStack).
  • The buster/testing image is now using the new package linux-image-cloud-amd64. This made the qcow file shrink from 614 MB to 493 MB. Unfortunately, we don’t have a matching arm64 cloud kernel image yet, but it’s still nice to have this for the amd64 arch.

Please use the new images, and report any issue or suggestion against the openstack-debian-images package.

13 August, 2018 10:46AM by Goirand Thomas

Petter Reinholdtsen

A bit more on privacy respecting health monitor / fitness tracker

A few days ago, I wondered if there are any privacy respecting health monitors and/or fitness trackers available for sale these days. I would like to buy one, but do not want to share my personal data with strangers, nor be forced to have a mobile phone to get data out of the unit. I've received some ideas, and would like to share them with you. One interesting data point was a pointer to a Free Software app for Android named Gadgetbridge. It provides cloudless collection and storage of data from a variety of trackers. Its list of supported devices is a good indicator for units where the protocol is fairly open, as it is obviously being handled by Free Software. Other units are reportedly encrypting the collected information with their own public key, making sure only the vendor cloud service is able to extract data from the unit. The people contacting me about Gadgetbridge said they were using the Amazfit Bip and Xiaomi Band 3.

I also got a suggestion to look at some of the units from Garmin. I was told their GPS watches can be connected via USB and show up as a USB storage device with Garmin FIT files containing the collected measurements. While proprietary, FIT files apparently can be read at least by GPSBabel and the GpxPod Nextcloud app. It is unclear to me if they can read step count and heart rate data. The person I talked to was using a Garmin Forerunner 935, which is a fairly expensive unit. I doubt it is worth it for a unit where the vendor clearly is trying its best to move from open to closed systems. I still remember when Garmin dropped NMEA support in its GPSes.

A final idea was to build one's own unit, perhaps by basing it on a wearable hardware platform like the Flora Geo Watch. Sounds like fun, but I had more money than time to spend on the topic, so I suspect it will have to wait for another time.

While I was working on tracking down links, I came across an inspiring TED talk by Dave deBronkart about being an e-patient, and discovered the web site Participatory Medicine. If you too want to track your own health and fitness without having information about your private life floating around on computers owned by others, I recommend checking it out.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

13 August, 2018 07:00AM

August 12, 2018

Shashank Kumar

Google Summer of Code 2018 with Debian - Final Report

Three months of Google Summer of Code turned out to be life-changing for me. This here is the summary of my work, which also serves as my final report for Google Summer of Code 2018.

GSoC and Debian


My project is Wizard/GUI helping students/interns apply and get started and the final application is named New Contributor Wizard. It originated as the brainchild and Project Idea of Daniel Pocock for GSoC 2018 under Debian. I prepared the application task for the same and shared my journey through Open Source till GSoC 2018 in two of my blogs, From Preparations to Debian to Proposal and The Application Task and Results.

Project Overview

Sign Up Screen

New Contributor Wizard is a GUI application built to help new contributors get started with Open Source. The idea is to bring together all the Tools and Tutorials necessary for a person to learn and start contributing to Open Source. The application contains different courseware sections like Communication, Version Control System etc., and within each section there are respective Tools and Tutorials.

A Tool is an up-and-running service right inside the application which can perform tasks to help the user understand the concepts. For example, encrypting a message using the public key, decrypting the encrypted message using the private key, and so on; these tools can help the user better understand the concepts of encryption.

A Tutorial is composed of lessons which contain text, images, questions and code snippets. It is a comprehensive guide for a particular concept. For example, Encryption 101, How to use git?, What is a mailing list? and so on.

In addition to providing the Tools and Tutorials, this application is built to grow. One can easily contribute new Tutorials by just creating a JSON file; the process is documented in the project repository itself. Similarly, documentation for contributing Tools is present as well.

Project Details

Programming Language and Tools

For Development

For Testing


  • Pipenv for Python Virtual Environment
  • Debian 9 for Project Development and testing

Version Control System

For pinned dependencies and sub-dependencies one can have a look at the Pipfile and Pipfile.lock

My Contributions

The project was just an idea before GSoC, and I had to make all the implementation decisions, with the help of mentors, whether it was the design or the architecture of the application. Below is the list of my contributions in the shape of merge requests; every merge request contains UI, application logic, tests, and documentation. My contributions can also be seen in the Changelog and Contribution Graph of the application.

Sign Up

Sign Up is the first screen a user is shown and asks for all the information required to create an account. It then takes the user to the Dashboard with all the courseware sections.

Merge request - Adds SignUp feature

Redmine Issue - Create SignUp Feature

Feature In Action (updated working of the feature)

Sign In

As an alternative to Sign Up, the user has the option to select Sign In to use an existing account in order to access the application.

Merge Request - Adds SignIn feature

Redmine Issue - Create SignIn Feature

Feature In Action (updated working of the feature)


The Dashboard is said to be the protagonist screen of the application. It contains all the courseware sessions and their respective Tools and Tutorials.

Merge Request - Adds Dashboard feature

Redmine Issue - Implementing Dashboard

Feature In Action (updated working of the feature)

Adding Tool Architecture

Every courseware section can have its respective Tools and Tutorials. To add Tools to a section, I devised an architecture and implemented it for Encryption, adding 4 different Tools. They are:

  • Create Key Pair
  • Display and manage Key Pair
  • Encrypt a message
  • Decrypt a message

Merge Request - Adding encryption tools

Redmine Issue - Adding Encryption Tools

Feature In Action (updated working of the feature)

Adding Tutorial Architecture

Similar to Tools, Tutorials can be found within any courseware section. I have created a Tutorial Parser which can take a JSON file and build the GUI for the Tutorial, with no coding required. This way folks can easily contribute Tutorials to the project. I added an Encryption 101 Tutorial to showcase the use of the Tutorial Parser.
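The parsing idea can be sketched as follows. The JSON schema below is a hypothetical illustration only (the actual format is documented in the project repository); a parser walks the lessons and emits one widget description per item:

```python
import json

# Hypothetical tutorial JSON (the real schema is documented in the project
# repository); the parser turns each lesson into a widget description.
TUTORIAL = json.loads("""
{
  "title": "Encryption 101",
  "lessons": [
    {"type": "text", "content": "Encryption protects your messages."},
    {"type": "question", "content": "Which key decrypts a message?",
     "answer": "private key"}
  ]
}
""")

def parse_tutorial(tutorial):
    """Return a flat list of (widget_kind, content) pairs for the GUI layer."""
    widgets = [("heading", tutorial["title"])]
    for lesson in tutorial["lessons"]:
        widgets.append((lesson["type"], lesson["content"]))
    return widgets

for kind, content in parse_tutorial(TUTORIAL):
    print(kind, "->", content)
```

In the application, the GUI layer would map each widget kind to the corresponding Kivy widget.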

Merge Request - Adding encryption tutorials

Redmine Issue - Adding Encryption Tutorials

Feature In Action (updated working of the feature)

Adding 'Invite Contributor' block to Tools and Tutorials

In order to invite contributors to New Contributor Wizard, every Tools and Tutorials menu displays an additional block linking to the project repository.

Merge Request - Inviting contributors

Redmine Issue - Inviting contributors to the project

Feature In Action (updated working of the feature)

Adding How To Use

One of the courseware sections, How To Use, helps the user understand the different sections of the application in order to get the best out of it.

Merge Request - Updating How To Use

Redmine Issue - Adding How To Use in the application

Feature In Action (updated working of the feature)

Adding description to all the modules

All the courseware sections or modules need a simple description of what the user will learn using their Tutorials and Tools.

Merge Request - Description added to all the modules

Redmine Issue - Add a introduction/description to all the modules

Feature In Action (updated working of the feature)

Adding Generic Tools and Tutorials Menu

This feature abstracts the Tools and Tutorials architecture I mentioned earlier so that the Menu architecture can be used by any of the courseware sections, following the DRY approach.

Merge Request - Adding Generic Menu

Redmine Issue - Adding Tutorial and Tools menu to all the modules

Tutorial Contribution Doc

A Tutorial in the application can be added using just a JSON file. As mentioned earlier, this is made possible by the Tutorial Parser. Comprehensive documentation has been added to help users understand how they can contribute Tutorials to the application for the world to take advantage of.

Merge Request - Tutorial contribution docs

Redmine Issue - Add documentation for Tutorial development

Tools Contribution Doc

A Tool in the application is built using the Kivy language and Python. Comprehensive documentation has been added to the project in order for folks to contribute Tools for the world to take advantage of.

Merge Request - Tools contribution docs

Redmine Issue - Add documentation for Tools development

Adding a License to project

After having discussions with the mentors and a bit of research, GNU GPLv3 was finalized as the license for the project and has been added to the repository.

Merge Request - Adds License to project

Redmine Issue - Add a license to Project Repository

Allowing different timezones during Sign Up

The Sign Up feature was refactored to support users in different timezones.

Merge Request - Allowing different timezones during signup

Redmine Issue - Allow different timezones

All other contributions

Here's a list of all the merge requests I raised to develop a feature or fix an issue with the application - All merge requests by Shashank Kumar

Here are all the issues/bugs/features I created, resolved, or was associated with on Redmine - All the Redmine issues associated with Shashank Kumar


The application has been packaged for PyPi and can be installed using either pip or pipenv.

Package - new-contributor-wizard

Packaging Tool - setuptools

To Do List

Weekly Updates And Reports

These reports were sent daily to a private mentors’ mail thread and weekly to the Debian Outreach mailing list.

Talk Delivered On My GSoC Project

On 12th August 2018, I gave a talk on How my Google Summer of Code project can help bring new contributors to Open Source during a meetup in Hacker Space, Noida, India. Here are the slides I prepared for my talk and a collection of photographs of the event.


New Contributor Wizard is ready for users who would like to get started with Open Source, as well as for folks who would like to contribute Tools and Tutorials to the application.


I would like to thank Google Summer of Code for giving me the opportunity of giving back to the community and Debian for selecting me for the project.

I would like to thank Daniel Pocock for his amazing blogs and the ideas he comes up with, which end up inspiring students and result in projects like this one.

I would like to thank Sanyam Khurana for constantly motivating me and reviewing every single line of code I wrote, to come up with the best solution to put in front of the community.

Thanks to all the loved ones who always believed in me and kept me motivated.

12 August, 2018 06:30PM by Shashank Kumar

hackergotchi for Vasudev Kamath

Vasudev Kamath

SPAKE2 In Golang: Finite fields of Elliptic Curve

In my previous post I talked about elliptic curve basics and how the operations are done on elliptic curves, including the algebraic representation which is needed for computers. For use in cryptography we need an elliptic curve group with some specified number of elements; that is what we call a finite field. We limit elliptic curve groups with some big prime number p. In this post I will try to briefly explain finite fields over elliptic curves.

Finite Fields

A finite field, also called a Galois field, is a set with a finite number of elements. An example is the integers modulo p, where p is prime. Finite fields can be denoted as \(\mathbb Z/p, GF(p)\) or \(\mathbb F_p\).

Finite fields have two operations, addition and multiplication. These operations are closed, associative and commutative. There exists a unique identity element and an inverse element for every element in the set.

Division in finite fields is defined as \(x / y = x \cdot y^{-1}\), that is, x multiplied by the inverse of y, and subtraction \(x - y\) is defined in terms of addition as \(x + (-y)\), which is x added to the negation of y. The multiplicative inverse can be easily calculated using the extended Euclidean algorithm, which I've not studied in detail myself, as there were readily available library functions which do this for us. But I hear from Ramakrishnan that it's a very easy one.
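For completeness, here is a short sketch of the extended Euclidean algorithm computing the multiplicative inverse in \(GF(p)\), the operation that defines division:

```python
# Extended Euclidean algorithm for the multiplicative inverse modulo p,
# i.e. the t with (y * t) % p == 1.

def inverse_mod(y, p):
    old_r, r = y % p, p
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r   # keep the gcd remainders
        old_s, s = s, old_s - q * s   # track the Bezout coefficient for y
    return old_s % p

print(inverse_mod(12, 97))  # 12 * 89 = 1068 = 11*97 + 1, so the inverse is 89
```

Python 3.8+ also exposes this directly as `pow(y, -1, p)`.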

Elliptic Curve in \(\mathbb F_p\)

Now that we understand finite fields, we need to restrict our elliptic curves to a finite field. Our original definition of an elliptic curve becomes slightly different: we will have modulo p to restrict the elements.

\begin{equation*} \begin{array}{rcl} \left\{(x, y) \in (\mathbb{F}_p)^2 \right. & \left. | \right. & \left. y^2 \equiv x^3 + ax + b \pmod{p}, \right. \\ & & \left. 4a^3 + 27b^2 \not\equiv 0 \pmod{p}\right\}\ \cup\ \left\{0\right\} \end{array} \end{equation*}

All our previous operations can now be written as follows

\begin{equation*} \begin{array}{rcl} x_R & = & (m^2 - x_P - x_Q) \bmod{p} \\ y_R & = & [y_P + m(x_R - x_P)] \bmod{p} \\ & = & [y_Q + m(x_R - x_Q)] \bmod{p} \end{array} \end{equation*}

Where slope, when \(P \neq Q\)

\begin{equation*} m = (y_P - y_Q)(x_P - x_Q)^{-1} \bmod{p} \end{equation*}

and when \(P = Q\)

\begin{equation*} m = (3 x_P^2 + a)(2 y_P)^{-1} \bmod{p} \end{equation*}
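These formulas translate almost directly into code. Below is a minimal sketch (my own illustration, requiring Python 3.8+ for the modular inverse via three-argument `pow`), with the point at infinity represented as None:

```python
# Point addition over GF(p), following the formulas above. The point at
# infinity is represented as None. Exercised on the example curve
# y^2 = x^3 + 2x + 3 (mod 97).

def add(P, Q, a, p):
    if P is None:
        return Q
    if Q is None:
        return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                     # P + (-P) = 0
    if P == Q:                          # doubling: m = (3x^2 + a) / (2y)
        m = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:                               # m = (y_P - y_Q) / (x_P - x_Q)
        m = (P[1] - Q[1]) * pow(P[0] - Q[0], -1, p) % p
    x = (m * m - P[0] - Q[0]) % p
    y = (m * (P[0] - x) - P[1]) % p
    return (x, y)

print(add((3, 6), (3, 6), a=2, p=97))  # -> (80, 10)
```

Doubling the point (3, 6) on that curve gives (80, 10), which matches the multiples listed below.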

So now we need to know the order of this finite field. The order of an elliptic curve finite field can be defined as the number of points in the field. Unlike the integers modulo p, where the elements are 0 to p-1, in the case of an elliptic curve you need to count the points for every x from 0 to p-1. This counting will be \(O(p)\). Given a large p this will be a hard problem. But there are faster algorithms to count the order of the group, which even I don't know much about in detail :). From my reference, it's called Schoof's algorithm.

Scalar Multiplication and Cyclic Group

When we consider scalar multiplication over elliptic curve finite fields, we discover a special property. Taking an example from Andrea Corbellini's post, consider the curve \(y^2 \equiv x^3 + 2x + 3 \pmod{97}\) and the point \(P = (3,6)\). If we try calculating multiples of P:

\begin{align*} 0P = 0 \\ 1P = (3,6) \\ 2P = (80,10) \\ 3P = (80,87) \\ 4P = (3, 91) \\ 5P = 0 \\ 6P = (3,6) \\ 7P = (80, 10) \\ 8P = (80, 87) \\ 9P = (3, 91) \\ ... \end{align*}

If you are wondering how to calculate the above (I did at first): you need to use the point addition formula from earlier in this post with P = Q, taken mod 97. We observe that there are only 5 distinct multiples of P and they repeat cyclically. We can write the above points as

  • \(5kP = 0P\)
  • \((5k + 1)P = 1P\)
  • \((5k + 2)P = 2P\)
  • \((5k + 3)P = 3P\)
  • \((5k + 4)P = 4P\)

Or simply we can write these as \(kP = (k \bmod 5)P\). We also note that these 5 points are closed under addition. This means that adding two multiples of P, we obtain a multiple of P, and the set of multiples of P forms a cyclic subgroup

\begin{equation*} nP + mP = \underbrace{P + \cdots + P}_{n\ \text{times}} + \underbrace{P + \cdots + P}_{m\ \text{times}} = (n + m)P \end{equation*}

Cyclic subgroups are foundation of Elliptic Curve Cryptography (ECC).

Subgroup Order

The subgroup order tells how many points are really there in the subgroup. We can redefine the order of a group in the subgroup context: the order of P is the smallest positive integer n such that nP = 0. In the above case the smallest such n is 5, since 5P = 0. So the order of the subgroup above is 5; it contains 5 elements.

The order of the subgroup is linked to the order of the elliptic curve by Lagrange's theorem, which says the order of a subgroup is a divisor of the order of the parent group. Lagrange is another name I had read about in college, but the algorithms were different.

From this we have the following steps to find out the order of the subgroup with base point P:

  1. Calculate the elliptic curve's order N using Schoof's algorithm.
  2. Find out all divisors of N.
  3. For every divisor n of N, compute nP.
  4. The smallest n such that nP = 0 is the order of the subgroup.

Note that it's important to choose the smallest such divisor, not a random one. In the above example 5P, 10P, and 15P all satisfy the condition, but the order of the subgroup is 5.
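For a toy curve, the order of the subgroup generated by P can simply be found by brute force — a self-contained sketch (real-sized curves need Schoof's algorithm for the group order; the addition rule is the one derived earlier in the post):

```python
# Brute-force subgroup order, feasible only for toy curves. Finds the order
# of the subgroup generated by P = (3, 6) on y^2 = x^3 + 2x + 3 (mod 97).

def add(P, Q, a, p):
    """Point addition over GF(p); None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        m = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:
        m = (P[1] - Q[1]) * pow(P[0] - Q[0], -1, p) % p
    x = (m * m - P[0] - Q[0]) % p
    return (x, (m * (P[0] - x) - P[1]) % p)

def subgroup_order(P, a, p):
    n, R = 1, P
    while R is not None:          # smallest n with nP = 0
        R = add(R, P, a, p)
        n += 1
    return n

print(subgroup_order((3, 6), a=2, p=97))  # -> 5
```

This reproduces the cyclic behaviour shown earlier: the multiples of (3, 6) repeat with period 5.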

Finding Base Point

For all of the above used in ECC, i.e. the group, subgroup and order, we need a base point P to work with. So the base point calculation is not done at the beginning but at the end: first choose an order which looks good, then look for the subgroup order, and finally find a suitable base point.

We learnt above, via Lagrange's Theorem, that the subgroup order is a divisor of the group order. The term \(h = N/n\) is called the co-factor of the subgroup. Why is this co-factor important? Without going into details, the co-factor is used to find a generator for the subgroup as \(G = hP\).


So, are you wondering why I went to such lengths to describe all this? For one thing, I wanted to make some notes for myself, because you can't find all this information in a single place. For another, the topics covered in my previous post and this one form the domain parameters of Elliptic Curve Cryptography.

Domain parameters in ECC are the parameters which are publicly known to everyone. They are the following six:

  • The prime p, the order of the finite field
  • The coefficients a and b of the curve
  • The base point \(\mathbb G\), the generator of the subgroup
  • The order n of the subgroup
  • The co-factor h

In short, the domain parameters of ECC are the sextuple \((p, a, b, G, n, h)\).
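As a trivial illustration, the sextuple fits naturally into a small record type. The values below are toy textbook values (the curve \(y^2 = x^3 + 2x + 2\) over \(\mathbb{F}_{17}\), where N = n = 19 and so h = 1), not a real production curve:

```python
from typing import NamedTuple, Tuple

class DomainParams(NamedTuple):
    p: int               # prime, the order of the finite field
    a: int               # curve coefficient a
    b: int               # curve coefficient b
    G: Tuple[int, int]   # base point, generator of the subgroup
    n: int               # order of the subgroup
    h: int               # co-factor, h = N / n

# Toy values for illustration only; real curves publish exactly this
# sextuple in their specifications.
toy = DomainParams(p=17, a=2, b=2, G=(5, 1), n=19, h=1)
print(toy)
```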

In my next post I will try to talk about the specific curve group which is used in SPAKE2 implementation called twisted Edwards curve and give a brief overview of SPAKE2 protocol.

12 August, 2018 05:21PM by copyninja

hackergotchi for Steve McIntyre

Steve McIntyre

DebConf in Taiwan!

DebConf 18 logo

So I'm slowly recovering from my yearly dose of full-on Debian! :-) DebConf is always fun, and this year in Hsinchu was no different. After so many years in the project, and so many DebConfs (13, I think!) it has become unmissable for me. It's more like a family gathering than a work meeting. In amongst the great talks and the fun hacking sessions, I love catching up with people. Whether it's Bdale telling me about his fun on-track exploits or Stuart sharing stories of life in an Australian university, it's awesome to meet up with good friends every year, old and new.

DC18 venue

For once, I even managed to find time to work on items from my own TODO list during DebCamp and DebConf. Of course, I also got totally distracted helping people hacking on other things too! In no particular order, stuff I did included:

  • Working with Holger and Wolfgang to get debian-edu netinst/USB images building using normal debian-cd infrastructure;
  • Debugging build issues with our buster OpenStack images, fixing them and also pushing some fixes to Thomas for build-openstack-debian-image;
  • Reviewing secure boot patches for Debian's GRUB packages;
  • As an AM, helping two DD candidates working their way through NM;
  • Monitoring and tweaking an archive rebuild I'm doing, testing building all of our packages for armhf using arm64 machines;
  • Releasing new upstream and Debian versions of abcde, the CD ripping and encoding package;
  • Helping to debug UEFI boot problems with Helen and Enrico;
  • Hacking on MoinMoin, the wiki engine we use for wiki.debian.org;
  • Engaging in lots of discussions about varying things: Arm ports, UEFI Secure Boot, Cloud images and more

I was involved in a lot of sessions this year, as normal. Lots of useful discussion about Ignoring Negativity in Debian, and of course lots of updates from various of the teams I'm working in: Arm porters, web team, Secure Boot. And even an impromptu debian-cd workshop.

Taipei 101 - datrip venue

I loved my time at the first DebConf in Asia (yay!), and I was yet again amazed at how well the DebConf volunteers made this big event work. I loved the genius idea of having a bar in the noisy hacklab, meaning that lubricated hacking continued into the evenings too. And (of course!) just about all of the conference was captured on video by our intrepid video team. That gives me a chance to catch up on the sessions I couldn't make it to, which is priceless.

So, despite all the stuff I got done in the 2 weeks my TODO list has still grown. But I'm continuing to work on stuff, energised again. See you in Curitiba next year!

12 August, 2018 03:11PM

Sam Hartman

Dreaming of a Job to Promote Love, Empathy and Sexual Freedom

Debian has always been filled with people who want to make the world a better place. We consider the social implications of our actions. Many are involved in work that focuses on changing the world. I’ve been hesitant to think too closely about how that applies to me: I fear being powerless to bring about the world in which I would like to live.

Recently though, I've been taking the time to dream. One day my wife came home and told another story of how she’d helped a client reduce their pain and regain mobility. I was envious. Every day she advances her calling and brings happiness into the world, typically by reducing physical suffering. What would it be like for me to find a job where I helped advance my calling and create a world where love could be more celebrated? That seems such a far cry from writing code and working on software design every day. But if I don’t articulate what I want, I'll never find it.

I’ve been working to start this journey by acknowledging the ways in which I already bring love into the world. One of the most important lessons of Venus’s path is that to bring love into the world, you have to start by leading a life of love. At work I do this by being part of a strong team. We’re there helping each other grow, whether it is people trying entirely new jobs or struggling to challenge each other and do the best work we can. We have each other’s back when things outside of work mean we're not at our best. We pitch in together when the big deadlines approach.

I do not shove my personal life or my love and spirituality work in people’s faces, but I do not hide it. I'm there as a symbol and reminder that different is OK. Because I am open people have turned to me in some unusual situations and I have been able to invite compassion and connection into how people thought about challenges they faced.

This is the most basic—most critical love work. In doing this I’m already succeeding at bringing love into the world. Sometimes it is hard to believe that. Recently I have been daring to dream of a job in which the technology I created also helped bring love into the world.

I'd love to find a company that's approaching the world in a love-positive, sex-positive manner. And of course they need to have IT challenges big enough to hire someone who is world class at networking, security and cloud architecture. While I'd be willing to take a pay cut for the right job, I'd still need to be making a US senior engineer's salary.

Actually saying that is really hard. I feel vulnerable because I’m being honest about what I want. Also, it feels like I’m asking for the impossible.

Yet, the day after I started talking about this on Facebook, OkCupid posted a job for a senior engineer. That particular job would require moving to New York, something I want to avoid. Still, it was reassuring as a reminder that asking for what you want is the first step.

I doubt that will be the only such job. It's reasonable to assume that as we embrace new technologies like blockchains and continue to appreciate what the evolving web platform standards have to offer, there will be new opportunities. Yes, a lot of the adult-focused industries are filled with corruption and companies that use those who they touch. However, there's also room for approaching intimacy in a way that celebrates desire, connection, and all the facets of love.

And yes, I do think sexuality and desire are an important part of how I’d like to promote love. With platforms like Facebook, Amazon and Google, it's easier than ever for people to express themselves, to connect, and if they are willing to give up privacy, to try and reach out and create. Yet all of these platforms have increasingly restrictive rules about adult content. Sometimes it’s not even intentional censorship. My first post about this topic on Facebook was marked as spam probably because some friends suggested some businesses that I might want to look at. Those businesses were adult-focused and apparently even positive discussion of such businesses is now enough to trigger a presumption of spam.

If we aren't careful, we're going to push sex further out of our view and add to an ever-higher wall of shame and fear. Those who wish to abuse and hurt will find their spaces, but if we aren't careful to create spaces where sex can be celebrated alongside love, those seedier corners of the Internet will be all that explores sexuality. Because I'm willing to face the challenge of exploring sexuality in a positive, open way, I think I should: few enough people are.

I have no idea what this sort of work might look like. Perhaps someone will take on the real challenge of creating content platforms that are more decentralized and that let people choose how they want content filtered. Perhaps technology can be used to improve the safety of sex workers or eventually to fight shame associated with sex work. Several people have pointed out the value of cloud platforms in allowing people to host whatever service they would choose. Right now I’m at the stage of asking for what I want. I know I will learn from the exploration and grow stronger by understanding what is possible. And if it turns out that filling my every day life with love is the answer I get, then I’ll take joy in that. Another one of the important Venus lessons is celebrating desires even when they cannot be achieved.

12 August, 2018 02:05PM

Sven Hoexter

iptables with random-fully support in stretch-backports

I've just uploaded iptables 1.6.2 to stretch-backports (thanks Arturo for the swift ACK). The relevant new feature here is the --random-fully support for the MASQUERADE target. This release could be relevant to you if you have to deal with a rather large number of NATed outbound connections, which is likely if you have to deal with the whale. The engineering team at Xing published a great writeup about this issue in February. So the lesson to learn here is that the nf_conntrack layer probably got a bit more robust during the Bittorrent heydays, but NAT is still evil shit we should get rid of.
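For reference, a rule using the new option might look like the following sketch; eth0 here is just a placeholder for your actual egress interface:

```shell
# Masquerade outbound traffic leaving via eth0, fully randomising
# source-port allocation to reduce port collisions under heavy NAT load.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE --random-fully
```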

12 August, 2018 12:45PM

Mike Hommey

Announcing git-cinnabar 0.5.0

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0?

  • git-cinnabar-helper is now mandatory. You can either download one with git cinnabar download on supported platforms or build one with make.
  • Performance and memory consumption improvements.
  • Metadata changes require running git cinnabar upgrade.
  • Mercurial tags are consolidated in a separate (fake) repository. See the README file.
  • Updated git to 2.18.0 for the helper.
  • Improved memory consumption and performance.
  • Improved experimental support for pushing merges.
  • Support for clonebundles for faster clones when the server provides them.
  • Removed support for the .git/hgrc file for mercurial specific configuration.
  • Support any version of Git (was previously limited to 1.8.5 minimum)
  • Git packs created by git-cinnabar are now smaller.
  • Fixed incompatibilities with Mercurial 3.4 and >= 4.4.
  • Fixed tag cache, which could lead to missing tags.
  • The prebuilt helper for Linux now works across more distributions (as long as is present, it should work)
  • Properly support the pack.packsizelimit setting.
  • Experimental support for initial clone from a git repository containing git-cinnabar metadata.
  • Now can successfully clone the pypy and GNU octave mercurial repositories.
  • More user-friendly errors.

Development process changes

It took about 6 months between version 0.3 and 0.4. It took more than 18 months to reach version 0.5 after that. That’s a long time to wait for a new version, considering all the improvements that have happened under the hood.

From now on, the release branch will point to the last tagged release, which is roughly the same as before, but won’t be the default branch when cloning anymore.

The default branch when cloning will now be master, which will receive changes that are acceptable for dot releases (0.5.x). These include:

  • Changes in behavior that are backwards compatible (e.g. adding new options which default to the current behavior).
  • Changes that improve error handling.
  • Changes to existing experimental features, and additions of new experimental features (that require knobs to be enabled).
  • Changes to Continuous Integration/Tests.
  • Git version upgrades for the helper.

The next branch will receive changes for the next “major” release, which as of writing is planned to be 0.6.0. These include:

  • Changes in behavior.
  • Changes in metadata.
  • Stabilizing experimental features.
  • Removal of backwards compatibility with older metadata (< 0.5.0).

12 August, 2018 01:57AM by glandium

August 11, 2018

hackergotchi for Shirish Agarwal

Shirish Agarwal


This would be a long blog post as I would be sharing a lot of journeys, so have your favorite beverage in your hand and prepare for an evening of musing.

Before starting the blog post: I have been surprised that over the last couple of weeks a lot of people have been liking my Debconf 2016 blog post on diaspora, which is almost two years old. Almost all the names mean nothing to me, and I was left unsure as to the reason for the spike. Were they debconf newcomers who saw my blog post and found their experience similar to mine, or something else? I don't know.

About a month and a half back, I started reading Gandhiji’s ‘My Experiments with Truth‘. To be truthful, a good friend had gifted me this book back in 2015, but I had been afraid to touch it. I have read a few autobiographies and my experience with them had been less than stellar. There are exceptions, but those are and will remain exceptions. Like everybody else I held Gandhiji in high regard and was afraid that reading the autobiography would lower him in my eyes. As it is, he is lovingly regarded as the ‘Father of the Nation‘ and given the honorific title of ‘Mahatma’ (Great Soul), so there was quite a bit of resistance within me to read the book, as it's generally felt to be impossible to be like him or even emulate him in any way.

So, with some hesitancy, I started reading his autobiography about a month and a half back. It is a shortish book, topping out at 470-odd pages, and I have read around 350 pages or so. While reading it I could identify with a lot of what was written, and in so many ways it also represents a lot of faults which are still prevalent in Indian society today.

The book is heavy with layered meanings. I do feel in parts there have been brushes of RSS. I dunno, maybe that’s the paranoia in me, but I would probably benefit from an older edition (perhaps the 1993 version) if I can find it somewhere, which may be somewhat more accurate. I don’t dare to review it unless I have read and re-read it at least 3-4 times or more. I can, however, share what I have experienced so far. He writes quite a bit about religion and shares his experience and understanding from reading various commentaries, both on the Gita and on the various different religious books like the ‘Koran’ and ‘The Bible’, and so on and so forth. When I was reading it, I felt almost like an unlettered person. I know that at some time in the near future I will have to start reading and listening to various commentaries on Hinduism as well as other religions to have at least some basic understanding.

The book makes him feel more human, as he had the same struggles that we all do, with temptations of flesh, food, medicine and public speaking. The only difference between him and me is that he was able to articulate them, probably in a far better way than people even today.

Many passages in the book are still highly relevant, or even more so, in today’s ‘life’. It really is a pity it isn’t an essential book for teenagers and young adults to read. At the very least they would start on their own inquiries at a young age.

The other part which was interesting to me is his description of life on Indian Railways. He traveled a lot by Indian Railways, in both third and first class. I have had the pleasure of traveling in first, second and general (third) class, the military cabin, the guard cabin, the luggage cabin, as well as the cabin for people with disabilities, and once, by mistake, even in a ladies’ cabin. The only one I haven’t tried is the loco pilot’s cabin, and that is more out of fear than anything else. While I know the layout of the cabin more or less and am somewhat familiar with the job they have to do, I still fear it, as I know the enormous responsibilities the loco pilots have, each train carrying anywhere between 2300 to 2800 passengers or more depending on the locomotive/s, rake, terrain, platform length and number of passengers.

The experiences which Gandhiji shared about his travels then, and my own limited traveling experience, seem to indicate that hardly anything has changed in Indian Railways as far as the traveling experience goes.

A few days back my mum narrated one of her journeys on the Indian Railways when she was a kid, about five decades or so back. Her experience was similar to what one can experience even today, and probably decades from now, until things improve, which I don’t think will happen at least in the short term; in the medium to long term, who knows.

Anyways, my grandfather (my mother’s father, now no more 😦) had a bunch of kids. In those days, having 5-6 kids was considered normal. My mother, her elder sister (who is not with us anymore, god rest her soul) and my grandpa took a train from Delhi/Meerut to Pune. As at that time there was no direct train to Pune, the idea was to travel from Delhi to Bombay (today’s Mumbai), take a break in Bombay, and then take a train to Pune. The journey was supposed to take only a couple of days or a bit more. My grandma had made puris and masala bhaji (basically boiled potatoes mixed with onions, fried a bit).

Puri bhaji

You can try making it with a recipe shared by Sanjeev Kapoor, a celebrity chef from India. This is not the only way to make it, Indian cooking is all about improvisation and experimentation but that’s a story for another day.

This is/was a staple diet for most North Indians traveling in trains, and you can even find the same today. She had made enough for 2 days, with some to spare, as she didn’t want my mum or her sister taking any outside food (food hygiene, health concerns etc.). My mum and her sister didn’t have much to do, and they loved my grandma’s cooking, so what was made for 2 days didn’t even last a day. What my mother, her sister and grandpa didn’t know was that it was one of those ill-fated journeys. Because of some accident which happened down the line, the train was stopped in Bhopal indefinitely. This was in the dead of night and there was nothing to eat there. Unfortunately for them, the accident or whatever happened down the line meant that all food made for travelers was either purchased by travelers before my mother’s train arrived or was being diverted to help the injured. The end result was that it took another one and a half days to reach Mumbai, which means around 4 days instead of two were spent traveling. My grandpa also tried to get something for the children to eat but was unable to find any food for them.

Unfortunately, when they reached Bombay (today’s Mumbai) it was also the dead of night, so grandpa knew that he wouldn’t be able to get anything to eat as all shops were shut at night; those were very different times.

Fortunately for them, one of my grandfather’s cousins had got a trunk-call (a term from a time in history when calling long-distance was pretty expensive) at his office from Delhi, from one of our relatives on some unrelated matter. Land-lines were incredibly rare, and it was sheer coincidence that he came to know that my grandpa would be coming to Bombay (Mumbai) and could, if possible, receive him. My grandpa’s cousin made inquiries, came to know of the accident, and knew that the train would arrive late, although he had no foreknowledge of how late it would be. Even so, he got meals for the extended family on both days, as he knew that they probably would not be getting meals.

On the second night, my grandpa was surprised and relieved to see his cousin, and both my mum and her sister, who had gone without food, finished whatever food was brought within 10 minutes.

The toilets on Indian Railways in Gandhiji’s traveling days (the book was written in 1918 while he resided in Pune’s Yerwada Jail [yup, my city], and the accounts he shared were from 1908 and even before), in the days my mother traveled, and even today are the same: they stink, irrespective of whichever class you travel in. After reading the book, I read elsewhere and came to know that Yerwada held a lot of political prisoners.

The only changes which have happened are in terms of ICT, but that too only when you know the specific tools and sites. There is the IRCTC train enquiry site and the live train tracker. For food you have sites like RailRestro, but all of these amenities are for the well-heeled, or those who can pay for them and know how to use the services. I say this for the reason below.

India is going to have elections next year. To win those elections, the Government started the largest online recruitment drive ever, with exams for loco pilot, junior loco pilot and other sundry posts. For around 90k jobs, something like 0.28 billion people applied, out of which around 0.17 billion were selected to sit the ‘online’ exam, with 80-85% of the selected students given centers in excess of 1000 km away. At the last moment some special trains were laid on for people who wanted to go for the exams.

Sadly, due to the conservatism in India, the majority of the women who were selected chose not to travel that far, as travel was time-consuming and expensive (about INR 5k or a bit more, excluding any lodging, just for traveling and a bit for eating). Most train journeys are, and will be, in excess of 24 hours, as the tracks are more or less the same (there has been some small improvement, e.g. from wooden tracks to concrete tracks), while the traffic has quadrupled, and they can’t just build new lines without encroaching on people’s land or wildlife sanctuaries (both are happening, but that’s a different story altogether).

The exams are being held in batches and will continue for another 2-3 days. Most of the boys/gentlemen are from rural areas, for whom getting INR 5k is a huge sum. There are already reports of collusion, cheating etc. After the exams are over, the students fear that some people might go to a court of law alleging cheating, and the court might declare the whole exam null and void, cheating the students of their hard-earned money and the suffering of the long journeys they had to take. The dates of the exams were shared just a few days in advance and clashed with some other government exams, so many will miss those exams, while some wouldn’t have had time to prepare for them.

It’s difficult to imagine the helplessness and stress the students might be feeling.

I just hope that somehow people’s hopes are fulfilled.

Ending the blog post on a hopeful and yet romantic ballad

12/08/18 – Update – CAG: Failure to invest in infra behind delay in train services

11 August, 2018 07:27PM by shirishag75

hackergotchi for Norbert Preining

Norbert Preining

TypeScript’s generics – still a long way to go

I first have to admit I am not a JavaScript or TypeScript expert, but the moment I wanted to implement some generic functionality (a Discrete Interval Encoding Tree with additional data that is merged according to a functional type parameter), I immediately stumbled upon lots of things I would like to have but that aren’t there. I guess I am too much used to Scala.

First of all, parametric classes whose parameters are themselves parametric (higher-kinded types): submitted in 2014, still open.

Then the nice one, type aliases do not work with classes (follow up here):

class A {}
type B = A;
var b = new B(); // <--- Cannot find name 'B'.

In this case not very useful, but in my case this would have been something like

type B = A<C<D>>

in which case this would make a lot of sense.
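A possible workaround (an assumption on my part, not something from the TypeScript issue) is to keep a value alias for the constructor alongside the type alias. TypeScript keeps value and type namespaces separate, so both may share the name B:

```typescript
class A {}

const B = A;           // value alias: usable with `new`
type B = A;            // type alias: usable in annotations

// Both names resolve: B the value constructs, B the type annotates.
const b: B = new B();
console.log(b instanceof A);   // true
```

This does not help with the generic case, of course, since a `const` cannot carry a free type parameter.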

Time to go back to some Scala ...

11 August, 2018 01:58PM by Norbert Preining

Andreas Metzler

You might not be off too bad ...

... if one of the major changes on going back to work after the holidays is that swimming in the morning or evening in the lake at Lipno nad Vltavou is replaced by going for a swim in Lake Constance during the lunch break.

11 August, 2018 09:01AM by Andreas Metzler

August 10, 2018

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RQuantLib 0.4.5: Windows is back, and small updates

A brand new release of RQuantLib, now at version 0.4.5, just arrived on CRAN, and will get to Debian shortly. This release re-enables Windows builds thanks to a PR by Jeroen who now supplies a QuantLib library build in his rwinlib repositories. (Sadly, though, it is already one QuantLib release behind, so it would be awesome if a volunteer could step forward to help Jeroen keeping this current.) A few other smaller fixes were made, see below for more.

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.5 (2018-08-10)

  • Changes in RQuantLib code:

    • The old rquantlib.h header is deprecated and moved to a subdirectory. (Some OS confuse it with RQuantLib.h which Rcpp Attributes like to be the same name as the package.) (Dirk in #100 addressing #99).

    • The files in src/ now include rquantlib_internal.h directly.

    • Several ‘unused variable’ warnings have been taken care of.

    • The Windows build has been updated, and now uses an external QuantLib library from 'rwinlib' (Jeroen Ooms in #105).

    • Three curve-building examples are no longer run by default as win32 has seen some numerical issues.

    • Two Rcpp::compileAttributes generated files have been updated.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list off the R-Forge page. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 August, 2018 04:51PM

hackergotchi for Norbert Preining

Norbert Preining

Specification and Verification of Software with CafeOBJ – Part 4 – Modules

This blog continues Part 1, Part 2, and Part 3 of our series on software specification and verification with CafeOBJ.

In the last part we have made our first steps with CafeOBJ, learned how start and quit the interpreter, how to access the help system, and how to write basic functions. This time we will turn our attention to the most fundamental building blocks of specifications, modules.

Answer to the challenge

But before diving into modules, let us give answers to the challenge posed at the end of the last blog, namely to give definitions of the factorial function and the Fibonacci function. Without much discussion, here is one possible solution for each:

-- Factorial
open NAT .
  op _! : Nat -> Nat .
  eq 0 ! = 1 .
  eq ( X:NzNat ! ) = X * ( (p X) ! ) .
  red 10 ! .
-- Fibonacci
open NAT .
  op fib : Nat -> Nat .
  eq fib ( 0 ) = 0 .
  eq fib ( 1 ) = 1 .
  ceq fib ( X:Nat ) = fib (p X) + fib (p (p X)) if X > 1 .
  red fib(10) .

Note that in the case of factorial we used the standard mathematical notation of postfix bang. We will come to the allowed syntax later on.


Modules
Modules are the basic building blocks of CafeOBJ specifications, and they correspond – or define – order-sorted algebras. Think about modules as a set of elements, operators on these elements, and rules how they behave. A very simple example of such a module is:

module ADDMOD {
  [ Elem ]
  op _+_ : Elem Elem -> Elem .
  op 0 : -> Elem .
  eq 0 + A:Elem = A .
}

This module is introduced using the keyword module and the identifier ADDMOD, and the definition is enclosed in curly braces. The first line, [ Elem ], declares the types (or sorts) available in the module. Most algebra one sees is mono-sorted, so there is only one possible sort, but algebras in CafeOBJ are many-sorted, and in fact order-sorted (we come to that later). So the module we are defining contains only elements of one sort, Elem.

Next up are two lines starting with op, which introduce operators. Operators (think “functions”) have a certain arity, that is, they take a certain number of arguments and return a value. In the many-sorted case we not only need to specify how many arguments an operator takes, but also of which sorts these arguments are. In the above case, let us first look at the line op _+_ : Elem Elem -> Elem .. Here an infix operator + is introduced that takes two arguments of sort Elem and returns again an Elem.

The next line defines an operator 0 – you might ask what a strange operator this is, but looking at the definition we see that there are no arguments to the operator, only a result. This is the standard way to define constants. So we now have an addition operator and a constant operator.

The last line of the definition starts with eq which introduces an equality, or axiom, or rule. It states that the left side and the right side are equal, in mathematical terms, that 0 is a left-neutral element of +.

Even with much more complicated modules, these three blocks will form most of the code one writes in CafeOBJ: sort declarations, operator declarations, axiom definitions.

A first module

Let us define a bit more involved module extending the ideas of the pet example above:

mod! PNAT {
  [ PNat ]
  op 0 : -> PNat .
  op s : PNat -> PNat .
  op _+_ : PNat PNat -> PNat .
  vars X Y : PNat
  eq 0 + Y = Y .
  eq s(X) + Y = s(X + Y) .
}

You will immediately recognize the same parts as in the first example: a sort declaration, three operator declarations, and two axioms. There are two things that are new: (1) the module starts with mod!: mod is a shorthand for the longer module, and the bang after it indicates that we are considering initial models – we won't go into details about initial versus free models here, though; (2) the line vars X Y : PNat: we saw an inline variable declaration in the first example (A:Elem), but instead of inline definitions one can declare variables beforehand and use them multiple times later on.

For those with mathematical background, you might have recognized that the name of the module (PNAT) relates to the fact that we are defining Peano Natural Numbers here. For those with even more mathematical background – yes I know this is not the full definition of Peano numbers 😉

Next, let us see what we can do with this module, in particular what the elements of this algebra are and how we can add them. From the definition we only see that 0 is a constant of the algebra, and that the monadic operator s builds new elements. That means that terms of the form 0, s(0), s(s(0)), s(s(s(0))) etc. are PNats. So, can we do computations with these kinds of numbers? Let us see:

CafeOBJ> open PNAT .
-- opening module PNAT.. done.
%PNAT> red s(s(s(0))) + s(s(0)) .
-- reduce in %PNAT : (s(s(s(0))) + s(s(0))):PNat
(0.0000 sec for parse, 0.0010 sec for 4 rewrites + 7 matches)
%PNAT> close

Oh, CafeOBJ computed something for us. Let us look into the details of the computation. CafeOBJ has a built-in tracing facility that shows each computation step. It is activated using the command set trace whole on:

%PNAT> set trace whole on

%PNAT> red s(s(s(0))) + s(s(0)) .
-- reduce in %PNAT : (s(s(s(0))) + s(s(0))):PNat
[1]: (s(s(s(0))) + s(s(0))):PNat
---> (s((s(s(0)) + s(s(0))))):PNat
[2]: (s((s(s(0)) + s(s(0))))):PNat
---> (s(s((s(0) + s(s(0)))))):PNat
[3]: (s(s((s(0) + s(s(0)))))):PNat
---> (s(s(s((0 + s(s(0))))))):PNat
[4]: (s(s(s((0 + s(s(0))))))):PNat
---> (s(s(s(s(s(0)))))):PNat
(0.0000 sec for parse, 0.0000 sec for 4 rewrites + 7 matches)

The above trace shows that the rule (axiom) eq s(X) + Y = s(X + Y) . is applied from left to right until the final term is obtained.
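The same trace can be reproduced with a small Python sketch (again an illustration, not CafeOBJ): terms are tuples, and a step function applies the two axioms left to right, printing each rewrite:

```python
# Python sketch of the stepwise reduction; terms are "0", ("s", t),
# or ("+", t1, t2) -- a model of the trace above, not CafeOBJ itself.

def step(t):
    """Apply one axiom left to right; return (term, changed)."""
    if isinstance(t, tuple) and t[0] == "+":
        x, y = t[1], t[2]
        if x == "0":
            return y, True                      # eq 0 + Y = Y .
        if isinstance(x, tuple) and x[0] == "s":
            return ("s", ("+", x[1], y)), True  # eq s(X) + Y = s(X + Y) .
    if isinstance(t, tuple) and t[0] == "s":
        inner, changed = step(t[1])             # otherwise rewrite below s
        return ("s", inner), changed
    return t, False                             # normal form reached

def show(t):
    """Pretty-print a term back into CafeOBJ-like syntax."""
    if t == "0":
        return "0"
    if t[0] == "s":
        return f"s({show(t[1])})"
    return f"({show(t[1])} + {show(t[2])})"

# s(s(s(0))) + s(s(0))
term = ("+", ("s", ("s", ("s", "0"))), ("s", ("s", "0")))
n, changed = 0, True
while changed:
    new, changed = step(term)
    if changed:
        n += 1
        print(f"[{n}]: {show(term)}\n---> {show(new)}")
        term = new
print(show(term))  # s(s(s(s(s(0)))))
```

As in the CafeOBJ trace, the normal form is reached after exactly 4 rewrites.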

Computational model of CafeOBJ

The underlying computation mechanism of CafeOBJ is term rewriting, in particular order-sorted conditional term rewriting with associativity and commutativity (AC). Just as a side note, CafeOBJ is one of the very few systems (if not the only one still in use) that implements conditional AC rewriting, on top of order-sorted terms.

We won't go into the intricacies of term rewriting, but mention only one thing that is important for understanding the computational model of the CafeOBJ interpreter: while axioms (equations) do not have a direction – they are just statements of equality between two terms – the evaluation carried out by red uses the axioms only in the direction from left to right, so it matters how an equation is written down. The general rule is to write the more complex term on the left side and the simpler term on the right (when it is clear which one is more complex).

Let us exhibit this directional usage on a simple example. Look at the following code, what will happen?

mod! FOO {
  [ Elem ]
  op f : Elem -> Elem .
  op a : -> Elem .
  var x : Elem
  eq f(x) = f(f(x)) .
}
set trace whole on
open FOO .
red f(a) .

I hope you guessed it, we will send CafeOBJ into an infinite loop:

%FOO> red f(a) .
-- reduce in %FOO : (f(a)):Elem
[1]: (f(a)):Elem
---> (f(f(a))):Elem
[2]: (f(f(a))):Elem
---> (f(f(f(a)))):Elem
[3]: (f(f(f(a)))):Elem
---> (f(f(f(f(a))))):Elem
[4]: (f(f(f(f(a))))):Elem

This is because the rules are strictly applied from left to right, and this can be done again and again.
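A hypothetical Python rendering of the same situation makes the non-termination obvious: the left-hand side f(X) matches the result f(f(X)) again, so we have to cap the number of steps ourselves:

```python
# Sketch of eq f(x) = f(f(x)) applied left to right; terms are strings.
# Without a step limit this would loop forever, like the CafeOBJ session.

def red(term, max_steps=4):
    steps = 0
    while term.startswith("f(") and steps < max_steps:
        new = f"f({term})"   # f(X) ---> f(f(X)): the result matches again
        steps += 1
        print(f"[{steps}]: {term}\n---> {new}")
        term = new
    return term

red("f(a)")  # stops only because of max_steps, never at a normal form
```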

Defining lists

To close this blog post, let us use modules to define very simple lists of natural numbers: a list is either empty, or it is a natural number followed by the symbol | and another list. In BNF this would look like

  L ::= nil  |   x|L

Some examples of lists and things that are not proper lists:

  • nil – this is a list, the most simple one
  • 1 | ( 3 | ( 2 | nil ) ) – again a list, with all parentheses
  • 1 | 3 | 2 | nil – again a list, if we assume right associativity of |
  • 1 | 3 | 2 – not a list, because the last element is a natural number and not a list
  • (1 | 3) | 2 | nil – again not a list, because the first element is not a natural number

Now let us reformulate this in CafeOBJ language:

mod! NATLIST {
  pr(NAT)
  [ NatList ]
  op nil : -> NatList .
  op _|_ : Nat NatList -> NatList .
}

Here one new syntax element appears: the line pr(NAT), which pulls in the natural numbers (the module NAT) in a protecting way (pr). This is the general method to include other modules and build up more complex entities. We will discuss import methods later on.

We also see that within the module NATLIST we now have multiple sorts (Nat and NatList), and operators that take arguments of different sorts.

As a final step, let us see whether the above definition is consistent with our intuition about lists, i.e., whether the CafeOBJ parser accepts exactly these terms. We could use the already familiar red here, but if we only want to check whether an expression can be correctly parsed, CafeOBJ also offers the parse command. Let us use it to check the above lists:

CafeOBJ> open NATLIST .
%NATLIST> parse nil .
(nil):NatList
%NATLIST> parse 1 | ( 3 | ( 2 | nil ) ) .
(1 | (3 | (2 | nil))):NatList
%NATLIST> parse 1 | 3 | 2 | nil .
(1 | (3 | (2 | nil))):NatList

Up to here, all is fine: all the terms we expected to be lists are properly parsed as NatList. Let us try the same with the two terms that should not parse correctly:

%NATLIST> parse 1 | 3 | 2 .
[Error]: no successful parse
(parsed:[ 1 ], rest:[ (| 3 | 2) ]):SyntaxErr
%NATLIST> parse (1 | 3) | 2 | nil .
[Error]: no successful parse
((( 1 | 3 ) | 2 | nil)):SyntaxErr

As we can see, parsing did not succeed, which is what we expected. CafeOBJ also tells us up to which point the parsing worked and where the first syntax error occurred.
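The grammar check that parse performs can be approximated by a short Python sketch (illustrative only; it leaves out the parenthesized forms for brevity):

```python
# Sketch of the NatList grammar  L ::= nil | x|L  as a recursive check
# on token lists; parentheses are omitted to keep the example short.

def is_natlist(tokens):
    if tokens == ["nil"]:
        return True                      # L ::= nil
    if len(tokens) >= 3 and tokens[0].isdigit() and tokens[1] == "|":
        return is_natlist(tokens[2:])    # L ::= x | L  (right associative)
    return False

print(is_natlist("1 | 3 | 2 | nil".split()))  # True
print(is_natlist("1 | 3 | 2".split()))        # False: last element not a list
```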

This concludes the introduction to modules. Let us close with a challenge for the interested reader.


Enrich the NatList module with an operator len that computes the length of a list. Here is the basic skeleton to be filled in:

op len : NatList -> Nat .
eq len(nil) = ??? .
eq len(E:Nat | L:NatList) = ??? .

Test your code by looking at the rewriting trace.

10 August, 2018 12:16PM by Norbert Preining

Jacob Adams

PGP Clean Room 1.0 Release

After several months of work, I am proud to announce that my GSoC 2018 project, the PGP/PKI Clean Room, has arrived at a stable (1.0) release!

PGP/PKI Clean Room

PGP is still used heavily by many open source communities, especially Debian. Debian’s web of trust is the last line of defense between a Debian user and a malicious software update. Given the availability of GPG subkeys, the safest thing would be to store one’s private GPG master key offline and use subkeys regularly. However, many do not do this as it can be a complex and arcane process.

The PGP Clean Room simplifies this by allowing a user to safely create and store their master key offline while exporting their subkeys, either to a USB drive for importing on their computer, or to a smartcard, where they can be safely used without ever directly exposing one’s private keys to an Internet-connected computer.

The PGP Clean Room also supports PKI, allowing one to create a Certificate Authority and issue certificates from Certificate Signing Requests.


Main Menu

Progress Bar

Setting up a Smartcard


You’ll probably want to read the README to understand how to build and use this project.

It contains instructions on how to obtain the latest build, as well as how to verify it, use it and build it from source.

Translators Wanted

The application has gettext support and a partial German translation, but now that strings are final I would love to support more languages than just English! See the PGPCR README to get started, and thank you for your help!

Source Code

pgpcr: This repository contains the source code of the PGP Clean Room application. It allows one to manage PGP and PKI/CA keys, and export PGP subkeys for day-to-day usage.

make-pgp-clean-room: This repository holds all of the configuration required to build a live CD for the PGP Clean Room application. This is the recommended way to run the application and allows for easy offline key pair management. Everything from commit a50e2aae forward was part of GSoC 2018.

Development Log

The project changelog, which was a day-by-day log of my activities, can be found here.

You can find links to all my weekly reports on the project wiki page.

Bugs Filed

Over the course of this project I also filed a few bugs with other projects.

  • Debian #903681, about psf-unifont’s unneeded dependency on bdf2psf.
  • GNUPG T4001, about exposing import and export functions in the GPGME python bindings.
  • GNUPG T4052, about GPG’s inability to guess an algorithm for P-curve signing and authentication keys.

More Screenshots

Generating GPG Key

Generating GPG Signing Subkey

GPG Key Backup

Loading a GPG Key from a Backup

CA Creation

Advanced Menu

10 August, 2018 12:00AM

August 09, 2018

hackergotchi for Norbert Preining

Norbert Preining

DebConf 18 – Day 2

Although I have already returned from this year’s DebConf, I try to continue to write up my comments on the talks I have attended. The first one was DebConf 18 – Day 1, here we go for Day 2.

The day started with me sleeping in – having the peace of getting up at one's own pace without a small daughter waking you up is a precious experience 😉 I spent my morning working on my second presentation and joined the conference for lunch.

After lunch came a Plenary Talk/Discussion/Round Table on Ignoring Negativity. I tried to follow the discussion but somehow couldn't keep myself awake for more than 30 minutes. For me this was the best sleeping pill I have ever encountered. The time I could listen was mostly filled with voluptuous and elaborate verbiage I couldn't digest. Missing IQ I guess on my side.

Next was a very interesting presentation on git-debrebase, a new tool for managing Debian packaging in git. I was very much impressed by the very tricky usage of git in areas I have never touched. Unfortunately one sour point remained through to the end – as of now it does not fully support collaboration in the sense that it can deal with diverging histories, one of the big features of git. Fortunately, as I learned after asking in the Q&A session, problems only arise when certain very restricted branches diverge, not in normal operation.

After the coffee break I attended Autodeploy from salsa, which was technically interesting but not directly usable for my own development, so I somehow dreamed through the talk.

The last talk for today was Server freedom: why choosing the cloud, OpenStack and Debian: With more and more services moving into the cloud, the question of lock-in is getting more and more pronounced. In my work environment I am dealing with this and we hope by using Kubernetes Cluster Federation and multi-cloud setups we can avoid the lock-in. Thomas gave a very interesting presentation on his work on OpenStack and the tools around it. Very promising and technically on a high level.

But the highlight of the day came after the dinner – the Cheese and Wine party. I cannot express my gratitude to all those who brought excellent cheese from their home countries. Life in Japan, where micro-slices of good cheese cost up to 10USD and more, is somehow a life of cheese deprivation. Enjoying this huge variety from all over the world was like heaven for me. I myself brought some sake and dried fish and burdock to contribute what I could do.

During the Cheese and Wine party we were also treated to a Kavalan whiskey which won some of the most prestigious prizes for whiskey making just this year in May. On my way back home to Japan I made sure to get two bottles for my own collection.

After having tasted countless wines and cheeses, a bit of this wonderful Kavalan, and enjoyed chatting with many of the participants mostly on matters unrelated to Debian, I returned late back to my hotel off campus.

Thanks goes to all the organizers of the conference and in particular the Wine and Cheese party for this spectacular event!

09 August, 2018 01:37AM by Norbert Preining

August 08, 2018

Athos Ribeiro

Building Debian packages with clang replacing gcc with Open Build Service

The production instance of our Open Build Service can be found here

This is the seventh post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Triggering Debian Unstable builds on Open Build Service

As I have been reporting in previous posts, whenever one tried to build packages against Debian unstable with the Debian Stretch OBS packages, the OBS scheduler would download the package dependencies and then clean up the cache before the downloads were completed. This caused builds to never get triggered: OBS would keep downloading dependencies forever.

This would only happen for builds on Debian sid (unstable) and buster (testing).

After some investigation (as already reported before), we realized that the OBS version packaged in Debian 9 (the one we are currently using) does not support debian source packages built with dpkg >= 1.19.

While the backports package included the patch needed to support source packages built with newer versions of dpkg, we would still get the same issue with unstable and testing builds.

We spent a lot of time stuck in this issue, and I ended up proposing to host the whole archive in our OBS instance so we could use it as dependencies for new packages. It ended up being a terrible idea, since it would be unfeasible to test it locally and we would have to update the packages on each accepted upload into debian unstable (note that several packages get updated every day). Hence, I kept studying OBS code to understand how builds get triggered and why it would not happen for unstable/testing packages.

After diving into OBS code, we realized that it uses libsolv to read repositories. In libsolv’s change history, we found that there was an arbitrary size limit on the debian control file, which was fixed in a newer version of the package.

After updating the libsolv0, libsolv-perl and libsolvext0 packages (using Debian Buster versions), we could finally trigger Debian unstable builds.

Substituting gcc for clang in build time

With OBS able to trigger builds for Debian unstable, we now needed to be able to substitute builds with gcc for builds with the clang compiler. As suggested by my mentor, Sylvestre, we wanted to replace the gcc binaries with clang ones. For that, we needed a way to run a script suggested by Sylvestre in our build environments prior to triggering the builds.

Fortunately, Open Build Service has a Source Services feature that can do exactly that: change source code before builds or run specific tasks in the build environment during build time. For the latter (which is what we needed to substitute the gcc binaries), OBS requires one to build an obs-service-SERVICE_NAME package, which will be injected in the build environment as a dependency, and whatever task that package specifies will be performed.

We built our obs-service-clang-build package with the needed scripts to perform the gcc/clang substitutions in build time.

To activate the substitution task for a package, it should contain a _service file in its root path (along with the debian and pristine sources) with the following content:

  <service name="clang_build" mode="buildtime" />

If everything is correct, we should see

[  TIME in seconds] Running build time source services...

in the log file, followed by the gcc substitution commands.

Automating clang builds for new Debian unstable uploads

Now, we must proceed to trigger Debian unstable builds for newly accepted uploads. To do so, we wrote a cron job to monitor the debian-devel-changes mailing list. We check for new source uploads to debian unstable and trigger a new build for those new packages (we added a 4 hour cool-down before triggering the build to allow the package to propagate through the mirrors).
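The cool-down itself is simple to state; a hypothetical sketch of that decision (the names and structure are my own for illustration, not the project's actual cron job) could look like:

```python
# Hypothetical sketch of the 4-hour cool-down before triggering a build,
# so a newly accepted upload can propagate through the Debian mirrors.
from datetime import datetime, timedelta, timezone

COOL_DOWN = timedelta(hours=4)

def ready_to_build(upload_time, now=None):
    """True once the upload is at least COOL_DOWN old."""
    now = now or datetime.now(timezone.utc)
    return now - upload_time >= COOL_DOWN

seen = datetime(2018, 8, 8, 12, 0, tzinfo=timezone.utc)
print(ready_to_build(seen, now=seen + timedelta(hours=3)))  # False
print(ready_to_build(seen, now=seen + timedelta(hours=5)))  # True
```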

The results can be seen at

Updating links in Debian distro-tracker

Debian distro-tracker has links to clang builds, which would point to the former service at We now want to substitute those links to point to our new OBS instance. Thus, we opened a new pull request to perform such change.

Next steps

Now that we have an instance of our project up and running, there are a few tasks left for our final GSoC submission.

  • Migrate salt scripts from my personal github namespace to A few adjustments may be needed due to the environment differences.
  • Write some documentation for the project and a guide on how to add new workers
  • Create a separate github project for the cron which analyzes debian-devel-changes mailing list
  • Create a separate github project for the obs-service-clang-build package

08 August, 2018 06:53PM


Final GSOC 2018 Report

This is the final report of my 2018 Google Summer of Code project. It also serves as my final code submission.

Short overview:


The main project was nacho, the web frontend for the guest accounts of the Debian project. The software is now in a state where it can be used in a production environment and there is already work being done to deploy the application on Debian infrastructure. It was a lot of fun programming that software and I learned a lot about Python and Django. My mentors gave me valuable feedback and pointed me in the right direction whenever I had questions. There are still some ideas or features that can be implemented and I’m sure some feature requests will come up in the future. Those can be tracked in the issue tracker in the salsa repository. An overview of the activity in the project, including both commits and issues, can be seen in the activity list.

The SSO evaluations I did give an overview of existing solutions and will help in the decision making process. The README in the evaluation repository has a table that summarizes the findings of the evaluations.

The branch of that implements oauth2 authentication against an oauth2 provider provides a proof of concept of how the authentication can be implemented and it can be used to integrate the functionality into other services.

I’ve learned a lot in the last few months and it was a pleasure to work with babelouest and formorer. Debian is an interesting project and I plan to keep on contributing or maybe even intensify the contributions. Maybe I can use the oauth2 authentication on for my own application soon ;)


The list of reports in chronological order from top to bottom:

08 August, 2018 01:30PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

I am Tomu!

While I was away for DebConf18, I received the Tomu boards I ordered on Crowdsupply a while ago while the project was still going through crowdfunding.

A Tomu board next to a US cent for size comparison

For those of you who don't know what the Tomu is, it's a tiny ARM microprocessor board which fits in your USB port. There are a bunch of neat things you can do with it, but I use it as a U2F token.

The design is less sleek than a YubiKey nano and it can't be used as a GPG smartcard (yet!), but it runs free software on open hardware and everything can be built using a free software toolchain.

It also cost me a fraction of the price of a Yubico device (14 CAD with shipping vs 70+ CAD for the YubiKey nano) so I could literally keep 1 for me and give away 4 Tomus to my friends and family for the price of a YubiKey nano.

But yeah, the deal breaker really is the openness of the device. I don't see how I could trust a proprietary device that tells me it's very secure when I can't see what it's doing with my U2F private key...

Flashing the board

The Tomu can be used as a U2F token by flashing chopstx on it, the same software used in the gnuk project led by the awesome Niibe-san.

Although I had a gnuk token a while ago, I ended up giving it away since I found the flashing process painful and I didn't really have a use case for a GPG smartcard at the time.

The Tomu board in the bootloader

On the contrary, flashing the Tomu was a walk in the park. The Tomu's bootloader supports dfu-util so it was only a matter of installing it on my computer, building the software and pushing it on the board.

I did encounter a few small problems during the process, but I sent a series of patches upstream to try to fix that and make the whole experience smoother.

Here's a few things you should look out for while flashing a Tomu for to be used as a U2F token.

  • Make sure you are running the latest version of the bootloader. You can find it here.
  • Your U2F private key will be erased if you update the firmware. Be sure to generate it on your host computer and keep an encrypted copy of it somewhere.
  • For now, the readout protection is not enabled by default. Be sure to use make ENFORCE_DEBUG_LOCK=1 when building the chopstx binary.
  • Firefox doesn't support U2F out of the box on Debian. You have to enable a few options in about:config and use a plugin for it to work properly.
  • You need to add a new udev rule for the Tomu to be seen as a U2F device by your system.

08 August, 2018 04:00AM by Louis-Philippe Véronneau

August 07, 2018

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill


Am I leading a double life as an actor in several critically acclaimed television series?

I ask because I was recently accused of being Paul Sparks—the actor who played gangster Mickey Doyle on Boardwalk Empire and writer Thomas Yates in the Netflix version of House of Cards. My accuser reacted to my protestations with incredulity. Confronted with the evidence, I’m a little incredulous myself.

Previous lookalikes are here.

07 August, 2018 09:00PM by Benjamin Mako Hill

Lucas Kanashiro

DebCamp and DebConf 18 summary

As should come as no surprise, DebCamp and DebConf 18 were amazing! I worked on many things that I had not had enough time to accomplish before; I also had the opportunity to meet old friends and new people. Finally, I engaged in important discussions regarding the Debian project.

Based on my last blog post, follows what I did in Hsinchu during these days.

  • The DebConf 19 website has an initial version running \o/ I want to thank Valessio Brito and Arthur Del Esposte for helping me build this first version, and also thank tumblingweed for the explanation of how wafer works.

  • The Perl Team Rolling Sprint was really nice! Four people participated, and we were able to get a bunch of things done, you can see the full report here.

  • Arthur Del Esposte (my GSoC intern) made some improvements to his work, and also collected some feedback from other developers. I hope he will blog about these things soon. You can find his presentation about his GSoC project here; he is the first student in the video :)

  • I worked on some Ruby packages. I uploaded some new dependencies of Rails 5 to unstable (which Praveen et al. were already working on). I hope we can make the Rails 5 package available in experimental soon, and ship it in the next Debian stable release. I also discussed the Redmine package with Duck (Redmine’s co-maintainer) but did not manage to work on it.

Besides the technical part, this was my first time in Asia! I loved the architecture, the tight streets, the night markets, the temples and so on. Some pictures that I took are below:

And in order to provide a great experience for the Debian community next year in Curitiba - Brazil, we already started to prepare the ground for you :)

See you all next year in Curitiba!

07 August, 2018 07:54PM

Vincent Sanders

The brain is a wonderful organ; it starts working the moment you get up in the morning and does not stop until you get into the office.

I fear that I may have worked in a similar office environment to Robert Frost. Certainly his description is familiar to those of us who have been subjected to modern "open plan" offices. Such settings may work for some types of job but for myself, as a programmer, it has a huge negative effect.

My old basement office

When I decided to move on from my previous job my new position allowed me to work remotely. I have worked from home before so knew what to expect. My experience led me to believe the main aspects to address when home working were:
Isolation
This is difficult to mitigate but frequent face to face meetings and video calls with colleagues can address this, providing you are aware that some managers have a terrible habit of "out of sight, out of mind" management.
Motivation
You are on your own a lot of the time, which means you must motivate yourself to work. Mainly this is achieved through a routine: I get dressed properly, start work at the same time every day and ensure I take breaks at regular times.
Work life balance
This is often more of a problem than you might expect and not in the way most managers assume. A good motivated software engineer can have a terrible habit of suddenly discovering it is long past when they should have finished work. It is important to be strict with yourself and finish at a set time.
Distractions
In my previous office testers, managers, production and support staff were all mixed in with the developers, resulting in a lot of distractions; when you are at home there are also a great number of possible distractions. It can be difficult to avoid friends and family assuming you are available during working hours to run errands. I find I need to carefully budget time for such tasks and take it out of my working time as if I were actually in an office.
Environment
My previous office had "tired" furniture and decoration in an open plan which often had a negative impact on my productivity. When working from home I find it beneficial to partition my working space from the rest of my life and ensure family know that when I am in that space I am unavailable. You inevitably end up spending a great deal of time in this workspace and it can have a surprisingly large effect on your productivity.
Being confident I was aware of what I was letting myself into, I knew I required a suitable place to work. In our previous home the only space available for my office was a four by ten foot cellar room with artificial lighting. Despite its size I was generally productive there, as there were few distractions and the door let me "leave work" at the end of the day.

Garden office was assembled June 2017
This time my resources to create the space are larger and I wanted a place I would be comfortable to spend a lot of time in. Initially I considered using the spare bedroom which my wife was already using as a study. This was quickly discounted as it would be difficult to maintain the necessary separation of work and home.

Instead we decided to replace the garden shed with a garden office. The contractor ensured the structure selected met all the local planning requirements while remaining within our budget. The actual construction was surprisingly rapid. The previous structure was removed and a concrete slab base was placed in a few hours on one day and the timber building erected in an afternoon the next.

Completed office in August 2018
The building arrived separated into large sections on a truck which the workmen assembled rapidly. They then installed wall insulation, glazing and roof coverings. I had chosen to have the interior finished in a hardwood plywood being hard wearing and easy to apply finish as required.

Work desk in July 2017
Although the structure could have been painted at the factory Melodie and I applied this ourselves to keep the project in budget. I laid a laminate floor suitable for high moisture areas (the UK is not generally known as a dry country) and Steve McIntyre and Andy Simpkins assisted me with various additional tasks to turn it into a usable space.

To begin with I filled the space with furniture I already had, for example the desk was my old IKEA Jerker which I have had for over twenty years.

Work desk in August 2018
Since then I have changed the layout a couple of times but have finally returned to having my work desk in the corner looking out over the garden. I replaced the Jerker with a new IKEA Skarsta standing desk, PEXIP bought me a nice work laptop and I acquired a nice print from Lesley Mitchell, but overall little has changed in my professional work area in the last year and I have a comfortable environment.

Cluttered personal work area
In addition the building is large enough that there is space for my electronics bench. The bench itself was given to me by Andy. I purchased some inexpensive kitchen cabinets and worktop (white is cheapest) to obtain a little more bench space and storage. Unfortunately all those flat surfaces seem to accumulate stuff at an alarming rate and it looks like I need a clear out again.

In conclusion I have a great work area which was created at a reasonable cost.

There are a couple of minor things I would do differently next time:
  • Position the building better with respect to the boundary fence. I allowed too much distance on one side of the structure which has resulted in an unnecessary two foot wide strip of unusable space.
  • Ensure the door was made from better materials. The first winter in the space showed that the door was a poor fit as it was not constructed to the same standard as the rest of the building.
  • The door should have been positioned on the end wall instead of the front. Use of the building showed moving the door would make the internal space more flexible.
  • Planned the layout more effectively ahead of time, ensuring I knew where services (electricity) would enter and where outlets would be placed.
  • Ensure I have an electrician on site for the first fix so electrical cables could be run inside the walls instead of surface trunking.
  • Budget for air conditioning as so far the building has needed heating in winter and cooling in summer.
In essence my main observation is that better planning of the details matters. If I had been more aware of this a year ago perhaps I would not now be budgeting to replace the door and fit air conditioning.

07 August, 2018 05:49PM by Vincent Sanders

hackergotchi for Joachim Breitner

Joachim Breitner

Swing Dancer Profile

During my two years in Philadelphia (I was a post-doc with Stephanie Weirich at the University of Pennsylvania) I danced a lot of Swing, in particular at the weekly social “Jazz Attack”.

They run a blog, that blog features dancers, and this week – just in time for my last dance – they featured me with a short interview.

07 August, 2018 04:16PM by Joachim Breitner

Petter Reinholdtsen

Privacy respecting health monitor / fitness tracker?

Dear lazyweb,

I wonder, is there a fitness tracker / health monitor available for sale today that respects the user's privacy? With this I mean a watch/bracelet capable of measuring pulse rate and other fitness/health related values (and by all means, also the correct time and location if possible), whose measurements I can extract/read from the unit with a computer, without a radio beacon and Internet connection. In other words, it should not depend on a cell phone app, and should not make the measurements available via other people's computers (aka "the cloud"). The collected data should be available using only free software. I'm not interested in depending on some non-free software that will leave me high and dry some time in the future. I've been unable to find any such unit. I would like to buy it. The ones I have seen for sale here in Norway are proud to report that they share my health data with strangers (aka "cloud enabled"). Is there an alternative? I'm not interested in giving money to people requiring me to accept "privacy terms" to allow myself to measure my own health.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

07 August, 2018 02:00PM

Paul Wise

FLOSS Activities July 2018





  • myrepos: merge patches, release
  • foxtrotgps: merge patch
  • whohas: merge pull request
  • fossjobs: forward some job advertisements
  • Debian: quiet buildd cron mail, redirect potential contributor, discuss backup hosts for some arches, discussions at DebConf18
  • Debian wiki: unblacklist networks, whitelist domains, whitelist email addresses, reject possibly inappropriate signup attempt
  • Debian website: remove lingering file



All work was done on a volunteer basis.

07 August, 2018 08:45AM

August 06, 2018

Andreas Bombe

GHDL Back in Debian

As I have noted, I have been working on packaging the VHDL simulator GHDL for Debian after it has dropped out of the archive for a few years. This work has been on slow burner for a while and last week I used some time at DebConf 18 to finally push this to completion and upload it. ftpmasters were also working fast, so yesterday the package got accepted and is now available from Debian unstable.

The package you get supports up to VHDL-93, which is entirely down to VHDL library issues. The libraries published by IEEE along with the VHDL standard are not free enough to be suitable for Debian main. Instead, the package uses the openieee libraries developed as part of GHDL, which are GPL’ed from-scratch implementations of the libraries required by the VHDL standard. Currently these only implement VHDL-89 and VHDL-93, hence the limitation.

I intend to package the IEEE libraries in a separate package that will go into non-free. The new license under which the libraries are distributed is frustratingly close to free except in the case of modifications, where only specific changes are allowed. No foreseeable problems for the non-free section though. This package should integrate itself into the GHDL package installations, so installing it will make the GHDL packages support VHDL-2008 — at least as far as GHDL itself supports VHDL-2008.

06 August, 2018 08:48PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

DebConf18 writeup

I’m just back from DebConf18, which was held in Hsinchu, Taiwan. I went without any real concrete plans about what I wanted to work on - I had some options if I found myself at a loose end, but no preconceptions about what would pan out. In the end I felt I had a very productive conference and I did bits on all of the following:

  • Worked on trying to fix my corrupted Lenovo Ideacentre Stick 300 BIOS (testing of current attempt has been waiting until I’m back home and have access to the device again, so hopefully within the next few days)
  • NMUed sdcc to fix FTBFS with GCC 8
  • Prepared Pulseview upload to fix FTBFS with GCC 8, upload stalled on libsigc++2.0 (Bug#897895)
  • Caught up with Gunnar re keyring stuff
  • Convinced John Sullivan to come and help out keyring-maint
  • New Member / Front Desk conversations
  • Worked on gcc toolchain packages for ESP8266 (xtensa-lx106) (Bug#868895). Not sure if these are useful enough to others to upload or not, but so far I’ve moved from 4.8.5 to 7.3 and things seem happy.
  • Worked on porting latest newlib to xtensa with help from Keith Packard (in particular his nano variant with much smaller stdio code)
  • Helped present the state of Debian + the GDPR
  • Sat on the New Members BoF panel
  • Went to a whole bunch of interesting talks + BoFs.
  • Put faces to a number of names, as well as doing the usual catchup with the familiar faces.

I managed to catch the DebConf bug towards the end of the conference, which was unfortunate - I had been eating the venue food at the start of the week and it would have been nice to explore the options in Hsinchu itself for dinner, but a dodgy tummy makes that an unwise idea. Thanks to Stuart Prescott I squeezed in a short daytrip to Taipei yesterday as my flight was in the evening and I was going to have to miss all the closing sessions anyway. So at least I didn’t completely avoid seeing some of Taiwan when I was there.

As usual thanks to all the organisers for their hard work, and looking forward to DebConf19 in Curitiba, Brazil!

06 August, 2018 02:29PM

Reproducible builds folks

Reproducible Builds: Weekly report #171

Here’s what happened in the Reproducible Builds effort between Sunday July 29 and Saturday August 4 2018:

Upstream work

Bernhard M. Wiedemann proposed toolchain patches to:

  • rpm to have determinism in the process of stripping debuginfo into separate packages
  • gzip to make tar -cz output reproducible on the gzip side. This might also help with compressed man-pages, and was merged by gzip upstream.

In addition, Bernhard M. Wiedemann worked on:


This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

06 August, 2018 01:25PM

Niels Thykier

Buster is headed for a long hard freeze

We are getting better and better accumulating RC bugs in testing. This is unfortunate because the length of the freeze is strongly correlated with the number of open RC bugs affecting testing. If you believe that Debian should have short freezes, then it will require putting effort behind that belief and fix some RC bugs – even in packages that are not maintained directly by you or your team and especially in key packages.

The introduction of key packages has been interesting. On the plus side, we can use it to auto-remove RC-buggy non-key packages from testing, which has been very helpful. On the flip side, it also makes it painfully obvious that over 50% of all RC bugs in testing are now filed against key packages (for the lazy: we are talking about 475 RC bugs in testing filed against key packages; about 25 of these appear to be fixed in unstable).

Below are some observations from the list of RC bugs in key packages (affecting both testing and unstable – based on a glance over all of the titles).

  • About 85 RC bugs relate to (now) defunct maintainer addresses caused by the shutdown of Alioth. From a quick glance, it appears that the Debian Xfce Maintainers have the largest backlog – maybe they could use another team member. Note they are certainly not the only team with this issue.
  • Over 100 RC bugs are FTBFS for various reasons. Some of these are related to transitions (e.g. new major versions of GCC, LLVM and OpenJDK).

Those points alone account for 40% of the RC bugs affecting both testing and unstable.

We also have several contributors who want to remove unmaintained, obsolete or old versions of packages (older versions of compilers such as GCC and LLVM, flash-players/tooling, etc.). If you are working on this kind of removal, please remember to follow through on it (even if it means NMUing packages). The freeze is not the right time to remove obsolete key packages, as it tends to involve non-trivial changes to features or produced binaries. As much of this as possible ought to be fixed before 2019-01-12 (transition freeze).


In summary: If you want Debian Buster released in early 2019 or short Debian freezes in general, then put your effort where your wish/belief is and fix RC bugs today.  Props for fixes to FTBFS bugs, things that hold back transitions or keep old/unmaintained/unsupportable key packages in Buster (testing).

06 August, 2018 10:48AM by Niels Thykier

August 05, 2018

hackergotchi for Bits from Debian

Bits from Debian

DebConf18 closes in Hsinchu and DebConf19 dates announced

DebConf18 group photo - click to enlarge

Today, Sunday 5 August 2018, the annual Debian Developers and Contributors Conference came to a close. With over 306 people attending from all over the world, and 137 events including 100 talks, 25 discussion sessions or BoFs, 5 workshops and 7 other activities, DebConf18 has been hailed as a success.

Highlights included DebCamp with more than 90 participants, the Open Day,
where events of interest to a broader audience were offered, plenaries like the traditional Bits from the DPL, a Questions and Answers session with Minister Audrey Tang, a panel discussion about "Ignoring negativity" with Bdale Garbee, Chris Lamb, Enrico Zini and Steve McIntyre, the talk "That's a free software issue!!" given by Molly de Blanc and Karen Sandler, lightning talks and live demos and the announcement of next year's DebConf (DebConf19 in Curitiba, Brazil).

The schedule has been updated every day, including 27 ad-hoc new activities, planned by attendees during the whole conference.

For those not able to attend, most talks and sessions were recorded and live streamed, and videos are being made available at the Debian meetings archive website. Many sessions also facilitated remote participation via IRC or a collaborative text document.

The DebConf18 website will remain active for archive purposes, and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf19 will be held in Curitiba, Brazil, from 21 July to 28 July, 2019. It will be the second DebConf held in Brazil (the first one was DebConf4 in Porto Alegre). For the days before DebConf the local organisers will again set up DebCamp (13 July – 19 July), a session for some intense work on improving the distribution, and organise the Open Day on 20 July 2019, open to the general public.

DebConf is committed to a safe and welcoming environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf18, particularly our Platinum Sponsor Hewlett Packard Enterprise.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (HPE) is an industry-leading technology company providing a comprehensive portfolio of products such as integrated systems, servers, storage, networking and software. The company offers consulting, operational support, financial services, and complete solutions for many different industries: mobile and IoT, data & analytics and the manufacturing or public sectors among others.

HPE is also a development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services (hardware donations are listed in the Debian machines page).

Contact Information

For further information, please visit the DebConf18 web page at or send mail to

05 August, 2018 08:00PM by Laura Arjona Reina

August 04, 2018

Thorsten Alteholz

My Debian Activities in July 2018

FTP master

This month was dominated by warm weather and I spent more time in a swimming pool than in the NEW queue. So I only accepted 149 packages and rejected 5 uploads. The overall number of packages that got accepted this month was 380.

Debian LTS

This was my forty-ninth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30.00h. During that time I did LTS uploads of:

    [DLA 1428-1] 389-ds-base security update for 5 CVEs
    [DLA 1430-1] taglib security update for one CVE
    [DLA 1433-1] openjpeg2 security update for two CVEs
    [DLA 1437-1] slurm-llnl security update for two CVEs
    [DLA 1438-1] opencv security update for 17 CVEs
    [DLA 1439-1] resiprocate security update for two CVEs
    [DLA 1444-1] vim-syntastic security update for one CVE
    [DLA 1451-1] wireshark security update for 7 CVEs

Further I started to work on libgit and fuse. Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the second ELTS month.

During my allocated time I uploaded:

  • ELA-23-1 for wireshark
  • ELA-24-1 for fuse

I also tried to work on qemu but had to confess that those CVEs are far beyond my capabilities. Luckily qemu is no longer on the list of supported packages for ELTS. As there seemed to be some scheduling difficulties I stepped in and did 1.5 weeks of frontdesk duties.

Other stuff

This month I uploaded new packages of

  • pywws, a software to obtain data from some weather stations
  • osmo-msc, a software from Osmocom

Further I continued to sponsor some glewlwyd packages for Nicolas Mora.

I also uploaded a new upstream version of …

I improved packaging of …

and fixed some bugs in …

The DOPOM (Debian Orphaned Package Of the Month) for this month was sockstat. As there was a bug about IPv6 support and upstream doesn't seem to be active anymore, I revived it on GitHub.

04 August, 2018 02:59PM by alteholz

hackergotchi for Sune Vuorela

Sune Vuorela

The smart button

Item {
    property string text;
    Text { ... }
    MouseArea {
        onClicked: {
            if (parent.text === "Quit") {
                // ...
            } else if (parent.text === "Start") {
                // ...
            } else if (parent.text === "Stop") {
                // ...
            }
        }
    }
}

I don’t always understand why people do things in some ways.

04 August, 2018 07:36AM by Sune Vuorela


Report from the AppArmor BoF at DebConf18

After a discussion started on debian-devel a year ago, AppArmor has been enabled by default in testing/sid since November 2017 as an experiment. We'll soon need to decide whether Buster ships with AppArmor by default or not. Clément Hermann and yours truly have hosted a BoF at DebConf18 in order to gather both subjective and factual data that can later be used to:

  1. draw conclusions from this experiment;
  2. identify problems we need to fix.

About 40 people attended this BoF; about half of them participated actively, which is better than I expected, even though I think we can do better.

Opting-in or -out

We started with a show of hands:

  • Out of 7 attendees who run Debian Stretch on their main system, 3 have voluntarily enabled AppArmor.
  • Out of 15 attendees who run Debian testing/sid on their main system, 4 have voluntarily disabled AppArmor.
    → It would be interesting to understand why; if you're in this situation, let's talk!

Sticky notes party

We had a very dynamic collaborative sticky notes party aiming at gathering feeling and ideas, in a way that let us identify which ones were most commonly shared among the attendees.


We asked the participants to write down their answers to the following questions on sticky notes (one idea per post-it):

  • How have you felt about your personal AppArmor experience so far?
  • How do you feel about the idea of keeping AppArmor enabled by default in the Debian Buster release?

Then we de-duplicated and categorized the resulting big pile of post-its together on a whiteboard. Finally, everyone got the chance to "+1" the four ideas/feelings they shared the most.


If you're curious, here's what the whiteboard contained at the end.

Here are the conclusions I draw from this data:

  • A clear majority of the actively participating attendees have a generally positive feeling about AppArmor since it was enabled.
  • A clear majority of the actively participating attendees like the idea of keeping it enabled in Debian Buster. This is not very surprising coming from a small crowd of people who were interested enough to attend this BoF, but still.
  • Many attendees would like AppArmor to confine more software.
  • We need integration tests for AppArmor policy… just like we need integration tests for many other things in Debian.
  • We need at the very least better documentation (to explain how to use the existing policy debugging/development tools) and probably better integration in Debian (e.g. reportbug).
  • Regarding desktop apps sandboxing, the audience seemed to be split:
    • Those who were led to believe that AppArmor is, in itself, a great technology to sandbox desktop apps. I think that's a misunderstanding; I know I'm at least partly responsible for it and will do my best to fix it.
    • Those who echoed the concerns I had written on post-its myself about this strategy and communication problem.

I will update/file bug reports to reflect these conclusions.

Open discussion

Finally, we had an open discussion, half brainstorming ideas and half "ask me anything about AppArmor". For the curious, I've compiled the notes that were taken by Clément Hermann.


I want to thank:

  • Clément Hermann for co-hosting this session with me;
  • all attendees for playing the sticky notes party game — which was probably not what they expected when entering the room — and for their valuable input.

The feedback I got about the sticky notes party format was very positive: a few attendees told me it made them feel more part of the decision making process. Credits are due to Gunner for the inspiration!

If you attended this BoF and want to share your thoughts about how it went, I'm all ears → :)

04 August, 2018 01:45AM

August 03, 2018

Dima Kogan

UNIX curiosities

Recently I've been doing more UNIXy things in various tools I'm writing, and I hit two interesting issues. Neither of these are "bugs", but behaviors that I wasn't expecting.

Thread-safe printf

I have a C application that reads some images from disk, does some processing, and writes output about these images to STDOUT. Pseudocode:

for(imagefilename in images)
    results = process(imagefilename);

The processing is independent for each image, so naturally I want to distribute this processing between various CPUs to speed things up. I usually use fork(), so I wrote this:

for(child in children)
{
    pipe = create_pipe();
    if( fork() == 0 )
    {
        // child worker: read filenames from the pipe, process, print
        while(1)
        {
            imagefilename = read(pipe);
            results = process(imagefilename);
            print(results);
        }
    }
}

// main parent process
for(imagefilename in images)
    write(pipe[i_image % N_children], imagefilename)

This is the normal thing: I make pipes for IPC, and send the child workers image filenames through these pipes. Each worker could write its results back to the main process via another set of pipes, but that's a pain, so here each worker writes to the shared STDOUT directly. This works OK, but as one would expect, the writes to STDOUT clash, so the results for the various images end up interspersed. That's bad. I didn't feel like setting up my own locks, but fortunately GNU libc provides facilities for that: flockfile(). I put those in, and … it didn't work! Why? Because whatever flockfile() does internally ends up restricted to a single subprocess because of fork()'s copy-on-write behavior. I.e. the extra safety provided by fork() (compared to threads) actually ends up breaking the locks.

I haven't tried using other locking mechanisms (like pthread mutexes for instance), but I can imagine they'll have similar problems. And I want to keep things simple, so sending the output back to the parent for output is out of the question: this creates more work for both me the programmer, and for the computer running the program.

The solution: use threads instead of forks. This has a nice side effect of making the pipes redundant. Final pseudocode:

for(child in children)
    pthread_create(worker, child_index);

worker(child_index)
{
    for(i_image = child_index; i_image < N_images; i_image += N_children)
        results = process(images[i_image]);  // print with flockfile() held
}

Much simpler, and actually works as desired. I guess sometimes threads are better.
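The same shape is easy to try out in a higher-level language. Here is a hedged Python sketch (the image names and `process()` are made-up stand-ins) where a shared lock plays the role flockfile() plays in the pseudocode above, so each record reaches stdout whole:

```python
import sys
import threading

# Hypothetical stand-ins for the real image list and processing step
images = [f"img{i}.png" for i in range(8)]
N_children = 4

write_lock = threading.Lock()  # plays the role of flockfile(stdout)

def process(name):
    return f"{name}: result-a result-b"

def worker(child_index):
    for i in range(child_index, len(images), N_children):
        results = process(images[i])
        with write_lock:  # the whole record reaches stdout atomically
            sys.stdout.write(results + "\n")

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(N_children)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the threads share one address space, they also share the one lock, which is exactly what the fork()ed version could not guarantee.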

Passing a partly-read file to a child process

For various vnlog tools I needed to implement this sequence:

  1. process opens a file with O_CLOEXEC turned off
  2. process reads a part of this file (up-to the end of the legend in the case of vnlog)
  3. process calls exec to invoke another program to process the rest of the already-opened file

The second program may require a file name on the commandline instead of an already-opened file descriptor because this second program may be calling open() by itself. If I pass it the filename, this new program will re-open the file, and then start reading the file from the beginning, not from the location where the original program left off. It is important for my application that this does not happen, so passing the filename to the second program does not work.

So I really need to pass the already-open file descriptor somehow. I'm using Linux (other OSs may behave differently here), so I can in theory do this by passing /dev/fd/N instead of the filename. But it turns out this does not work either. On Linux (again, maybe this is Linux-specific somehow) for normal files /dev/fd/N is a symlink to the original file. So this ends up doing exactly the same thing that passing the filename does.

But there's a workaround! If we're reading a pipe instead of a file, then there's nothing to symlink to, and /dev/fd/N ends up passing the original pipe down to the second process, and things then work correctly. And I can fake this by changing the open("filename") above to something like popen("cat filename"). Yuck! Is this really the best we can do? What does this look like on one of the BSDs, say?
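The pipe workaround can be sketched in a few lines of Python (Linux-specific; `cat` stands in for the hypothetical second program, and the "legend" line is a made-up example header):

```python
import os
import subprocess

# Create a pipe and fill it; the first line is a "legend" the
# parent wants to consume before handing off the rest.
r, w = os.pipe()
os.set_inheritable(r, True)          # let the child inherit the read end
os.write(w, b"# legend\nrow 1\nrow 2\n")
os.close(w)

# Parent reads up to and including the legend line. os.read() is
# unbuffered, so nothing past the newline is consumed.
legend = b""
while not legend.endswith(b"\n"):
    legend += os.read(r, 1)

# The child opens /dev/fd/N by name, which for a pipe picks up
# exactly where the parent stopped reading.
out = subprocess.run(["cat", f"/dev/fd/{r}"], close_fds=False,
                     capture_output=True).stdout
os.close(r)
```

Here `out` holds only the rows after the legend, which is the behavior the symlink semantics of /dev/fd/N for regular files would have broken.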

03 August, 2018 10:04PM by Dima Kogan

hackergotchi for Lars Wirzenius

Lars Wirzenius

On requiring English in a free software project

This week's issue of LWN has a quote by Linus Torvalds on translating kernel messages to something else than English. He's against it:

Really. No translation. No design for translation. It's a nasty nasty rat-hole, and it's a pain for everybody.

There's another reason I fundamentally don't want translations for any kernel interfaces. If I get an error report, I want to be able to just 'git grep' it. Translations make that basically impossible.

So the fact is, I want simple English interfaces. And people who have issues with that should just not use them. End of story. Use the existing error numbers if you want internationalization, and live with the fact that you only get the very limited error number.

I can understand Linus's point of view. The LWN readers are having a discussion about it, and one of the comments there provoked this blog post:

It somewhat bothers me that English, being the lingua franca of free software development, excludes a pretty huge part of the world from participation. I thought that for a significant part of the world, writing an English commit message has to be more difficult than writing code.

I can understand that point of view as well.

Here's my point of view:

  • It is entirely true that if a project requires English for communication within the project, it discriminates against those who don't know English well.

  • Not having a common language within a project, between those who contribute to the project, now and later, would pretty much destroy any hope of productive collaboration.

    If I have a free software project, and you ask me to merge something where commit messages are in Hindi, error messages in French, and code comments in Swahili, I'm not going to understand any of them. I won't merge what I don't understand.

    If I write my commit messages in Swedish, my code comments in Finnish, and my error messages by entering randomly chosen words from /usr/share/dict/words into a search engine, and taking the page title of the fourteenth hit, then you're not going to understand anything either. You're unlikely to make any changes to my project.

    When Bo finds the project in 2038, and needs it to prevent the apocalypse from 32-bit timestamps overflowing, and can't understand the README, humanity is doomed.

    Thus, on balance, I'm OK with requiring the use of a single language for intra-project communication.

  • Users should not be presented with text in a language foreign to them. However, this raises a support issue, where a user may copy-paste an error message in their native language, and ask for help, but the developers don't understand the language, and don't even know what the error is. If they knew the error was "permission denied", they could tell the user to run the chmod command to fix the permissions. This is a dilemma.

    I've solved the dilemma by having a unique error code for each error message. If the user tells me "R12345X: Xscpft ulkacsx ggg: /etc/obnam.conf!" I can look up R12345X and see that the error is that /etc/obnam.conf is not in the expected file format.

    This could be improved by making the "parameters" for the error message easy to parse. Perhaps something like this:

    R12345X: Xscpft ulkacsx ggg! filename=/etc/obnam.conf

    Maintaining such error codes by hand would be quite tedious, of course. I invented a module for doing that. Each error message is represented by a class, and the class creates its own error code by taking its Python module and class name, and computing an MD5 of that. The first five hexadecimal digits are the code, and get surrounded by R and X to make it easier to grep.

    (I don't know if something similar might be used for the Linux kernel.)

  • Humans and inter-human communication are difficult. In many cases, there is no solution that's good for everyone. But let's not give up.
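The error-code scheme described above might look roughly like this in Python (this is a hedged illustration, not Lars's actual module; the class names and the garbled message are made up or taken from his example):

```python
import hashlib

class StructuredError(Exception):
    """Base class: every subclass derives a stable, greppable error code."""

    @property
    def code(self):
        # MD5 of "module.ClassName"; keep the first five hex digits
        # and wrap them in R...X so the code is easy to grep for.
        name = f"{self.__class__.__module__}.{self.__class__.__name__}"
        return "R" + hashlib.md5(name.encode()).hexdigest()[:5].upper() + "X"

    def __str__(self):
        return f"{self.code}: {self.args[0]}"

class BadConfigFormat(StructuredError):
    pass

err = BadConfigFormat("Xscpft ulkacsx ggg! filename=/etc/obnam.conf")
# str(err) yields something like "R.....X: Xscpft ulkacsx ggg! filename=/etc/obnam.conf"
```

The code stays stable as long as the class keeps its name and module, so a support request quoting only the code can be mapped back to the error even when the message text is in a language the developer cannot read.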

03 August, 2018 01:24PM

Enrico Zini

Multiple people

These are the notes from my DebConf18 talk

Slides are available in pdf and odp.


Starting from Debian, I have been for a long time part of various groups where diversity is accepted and valued, and it has been an invaluable supply of inspiration, allowing my identity to grow with unexpected freedom.

During the last year, I have been thinking passionately about things such as diversity, gender identity, sexual orientation, neurodiversity, and preserving identity in a group.

I would like to share some of those thoughts, and some of that passion.

Multiple people

"Debian is a relationship between multiple people", it (I) says at the entrance.

I grew up in a small town, being different was not allowed. Difference attracted "education", well-meaning concern, bullying.

Everyone knew everyone, there was no escape to a more fitting bubble, there was very little choice of bubbles.

I had to learn from a very young age the skill of making myself accepted by my group of peers.

It was an essential survival strategy.

Not being a part of the one group meant becoming a dangerous way of defining the identity of the group: "we are not like him". And one would face the consequences.

"Debian is a relationship between multiple people", it (I) says at the entrance.

Debian was one of the first opportunities for me to directly experience that.

Where I could begin to exist

Where I could experience the value and pleasure of diversity.

Including mine.

I am extremely grateful for that.

I love that.

This talk is also a big thank you to you all for that.

"Debian is a relationship between multiple people", it (I) says at the entrance.

Multiple people does not mean being all the same kind of person, all doing all the same kind of things.

Each of us is a different individual, each of us brings their unique creativity to Debian.

Classifying people

How would you describe a person?

There are binary definitions:

  • Good / Bad
  • Smart / Stupid
  • Pretty / Ugly
  • White / Not white
  • Rich / Poor
  • From here / immigrant
  • Not nerd / Nerd
  • Straight / Gay
  • Cis / Trans
  • Polite / Impolite
  • Right handed / left handed (some kids are still being "corrected" for being left handed in Italy)
  • Like me / unlike me

Labels: (like package sections)

  • Straight
  • Gay
  • Bi
  • Cis
  • Trans
  • Caucasian
  • ...
  • Like me / unlike me

Spectra: (like debtags)

  • The gender spectrum
  • The sexual preference spectrum
  • The romantic preference spectrum
  • The neurodiversity spectrum
  • The skin color spectrum
  • The sexual attraction spectrum

We classify packages better than we classify people.

Identity / spectrums

I'm going to show a few examples of spectra; I chose them not because they are more or less important than others, but because they have recently been particularly relevant to me, and it's easier for me to talk about them.

If you wonder where you are in each spectrum, know that every place is ok.

Think about who you are, not about who you should be.

Gender identity

My non binary awareness began with d-w and gender neutral documentation.

Sexual orientation

table of sexual preference prefixes combinations


I'll introduce neurodiversity by introducing allism

An allistic person learns subconsciously that ey is dependent on others for eir emotional experience. Consequently, ey tends to develop the habit of manipulating the form and content of social interactions in order to elicit from others expressions of emotion that ey will find pleasing when incorporated into eir mind.

The more I reason about this (and I reasoned about this a lot, before, during and after therapy), the more I consider it a very rational adaptation, derived from a very clear message I received since I was a small child: nobody cared who I was, and to be accepted socially I needed to act a social part, which changed from group to group. Therefore, socially, I was wrong, and I had to work to deserve the right to exist.

What one usually sees of me in large groups or when out of comfort zone, is a social mask of me.

This paper is also interesting: analyzing tweets of people and their social circle, they got to the point of being able to predict what a person will write by running statistics only on what their friends are writing.

Is it measuring identity or social conformance?

Discussion about the autism spectrum tends to get very medical very fast, and the DSM gets often criticised for a tendency of turning diversity into mental health issues.

I stick to my experience from a different end of the spectrum, and there are various resources online to explore if you are interested in more.

Other spectra

I hope you get the idea about spectrum and identity.

There are many more, those were at the top of my head because of my recent experiences.

Skin color, age, wealth, disability, language proficiency, ...

How to deal with diversity

How to deal with my diversity

Let's all assume for a moment that each and every single one of us is ok.

I am ok.

You are ok.

You've been ok since the moment you were born.

Being born is all you need to deserve to exist.

You are ok, and you will always be ok.

Like every single person alive.

I'm ok.

You're ok.

We're all ok.

Hold on to that thought for the next 5 minutes. Hold onto it for the rest of your life.

Ok. A lot of problems are now solved. A lot of energy is not wasted anymore. What next?

Get to know myself


  • what do I like / what don't I like?
  • what am I interested in?
  • what would I like to do?
  • what do I know? What would I like to know?
  • what do I feel?
  • what do I need?

Get in touch with my feelings, get to know my needs.

Here's a simple algorithm to get to know your feelings and needs:

  1. If you are happy, take this phrase: I feel … because my need of … is being met
  2. If you are not happy, take this phrase: I feel … because my need of … is not being met
  3. Fill the first space with one of the words from here
  4. Fill the second space with one of the words from here
  5. Done!

To know more about Non-Violent Communication, I suggest this video

This other video I also liked.

Forget absolute truths, center on my own experience. Have a look here for more details.

Learn to communicate and express myself

Communicating/being oneself

  • enjoy what I like
  • pursue my interests
  • do what I want to do
  • study and practice what I'm interested in
  • let myself be known and seen by those who respect who I am

Find out where to be myself

Look for safe spaces where I can activate parts of myself

  • Friends
  • Partners (but not just partners)
  • Interest groups
  • Courses / classes
  • Debian
  • DebConf!

Learn to protect myself

I will make mistakes acting as myself:

  • Mistakes do not invalidate me; mistakes are opportunities for learning.
  • I need to make a distinction between "I did something wrong" and "I am wrong"

Learn to know my boundaries

Learn to recognise when they are being crossed


Use my anger to protect my integrity. I do not need to hurt others to protect myself

How to deal with the diversity of others

Diversity is a good thing

Once I realised I can be ok in my diversity, it was easier to appreciate the diversity of others

Opening to others doesn't need to sacrifice oneself.

  • I can embrace my own identity without denying the identity of others.
  • Affirming me does not imply destroying you
  • If I feel I'm right, it doesn't mean that you are wrong

Curiosity is a good default.

Do not assume. Assume better and I'll be disappointed. Assume worse and I'll miss good interactions

  • Listen
  • Connect, don't be creepy
  • Interact with people, not things
  • People are unique like me
  • Respect others and myself
  • listen to my red flags
  • choose my involvement
  • choose again

When facing the new and unsettling, use curiosity if I have the energy, or be aware that I don't, and take a step back

The goal of the game is to affirm all identities, especially oneself.

Love freely.

Expect nothing.

Liberate myself from imagined expectations.


What is not acceptable

The paradox of tolerance, as a comic strip

Less well known is the paradox of tolerance: Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. — In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant.

Use diversity for growth

Identifying where I am gives me more awareness about myself.

Identifying where I am shows me steps I might be interested in making.

Identity can change, evolve, move. I like the idea of talking about my identity in the past tense

Diversity as empowerment, spectrum as empowerment

  • I'm in the trans* spectrum at least as far as not needing to follow gender expectations, possibly more
  • I'm in the autism spectrum at least as far as not needing to follow social expectations, possibly more
  • I'm in the asexual spectrum at least as far as not seeing people as sexual objects, possibly more

Once I'm in, I'm free to move, I can reason on myself, see other possibilities

Take control of your narrative: what is your narrative? Do you like it? Does it tell you now what you're going to like next year, or in 5 years? Is it a problem if it does?

Conceptual space is not limited. Allocating mental space for new diversity doesn't reduce one's own mental space, but it expands it

Is someone trying to control your narrative? gaslighting, negging, patronising.

Debian and diversity

Impostor syndrome

Entering a new group: impostor syndrome. Am I good enough for this group?

Expectations, perceived expectations, perceived changes in perceived identity, perceived requirements on identity

I worked some months with a therapist to deal with that and, as it turned out, to learn to give up the need to work to belong.

In the end, it was all there in the Diversity Statement:

No matter how I identify myself or how others perceive me: I am welcome, as long as I interact constructively with my community.

Ability of the group to grow, evolve, change, adapt, create

And here, according to Trout, was the reason human beings could not reject ideas because they were bad: “Ideas on Earth were badges of friendship or enmity. Their content did not matter. Friends agreed with friends, in order to express friendliness. Enemies disagreed with enemies, in order to express enmity.

“The ideas Earthlings held didn’t matter for hundreds of thousands of years, since they couldn’t do much about them anyway. Ideas might as well be badges as anything.”

(Kurt Vonnegut, "Breakfast of Champions", 1973)

Keep one's identity in Debian

If your identity is your identity, and the group changes, it actually extends, because you keep being who you are.

If your identity is a function of the group identity, you become a control freak for where the group is going.

When people define their identity in terms of belonging to a group, that group cannot change anymore, because if it does, it faces resistance from its members, that will see their own perceived identity under threat.

The threat is that rituals, or practices, that validated my existence, that previously used to work, cease to function. systemd?

  • can I adapt when facing something new and unexpected?
  • do I have the energy to do it?
  • do I allow myself to ask for help?

Free software

We and our users are a diverse ecosystem

Free Software is a diverse ecosystem

Free software can be a spectrum (free hardware, free firmware, free software, free javascript in browsers...)


Debian exists, and can move in a diverse and constantly changing upstream ecosystem

Vision / non limiting the future of Debian (if your narrative tells you what you're going to like next year, you might have a problem) (but before next year I'd like to get to a point that I can cope with X)

Debian doesn't need to be what people need to define their own identity, but it is defined by the relationship between different, diverse, evolving people

Appreciate diversity, because there's always something you don't know / don't understand, and more in the future.

Nobody can know all of Debian now, and in the future, if we're successful, we're going to get even bigger and more complex.

We're technically complex and diverse, we're socially complex and diverse. We've got to learn to deal with that.

Because we're awesome. We've got to learn to deal with that.

Ode to the diversity statement

03 August, 2018 09:26AM

Jose M. Calhariz


Aigars Mahinovs

August 02, 2018


Bits from Debian

DebConf18 thanks its sponsors!

DebConf18 logo

DebConf18 is taking place in Hsinchu, Taiwan, from July 29th to August 5th, 2018. It is the first Debian Annual Conference in Asia, with over 300 attendees and major advances for Debian and for Free Software in general.

Thirty-two companies have committed to sponsor DebConf18! With a warm "thank you", we'd like to introduce them to you.

Our Platinum sponsor is Hewlett Packard Enterprise (HPE). HPE is an industry-leading technology company providing a comprehensive portfolio of products such as integrated systems, servers, storage, networking and software. The company offers consulting, operational support, financial services, and complete solutions for many different industries: mobile and IoT, data & analytics and the manufacturing or public sectors among others.

HPE is also a development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services (hardware donations are listed in the Debian machines page).

We have four Gold sponsors:

  • Google, a technology company specializing in Internet-related services such as online advertising and search,
  • Infomaniak, Switzerland's largest web-hosting company, also offering live-streaming and video on demand services,
  • Collabora, which offers a comprehensive range of services to help its clients to navigate the ever-evolving world of Open Source, and
  • Microsoft, the American multinational technology company developing, licensing and selling computer software, consumer electronics, personal computers and related services.

As Silver sponsors we have credativ (a service-oriented company focusing on open-source software and also a Debian development partner), Gandi (a French company providing domain name registration, web hosting, and related services), Skymizer (a Taiwanese company focused on compiler and virtual machine technology), Civil Infrastructure Platform (a collaborative project hosted by the Linux Foundation, establishing an open source “base layer” of industrial grade software), Brandorr Group (a company that develops, deploys and manages new or existing infrastructure in the cloud for customers of all sizes), 3CX (a software-based, open standards IP PBX that offers complete unified communications), Free Software Initiative Japan (a non-profit organization dedicated to supporting Free Software growth and development), Texas Instruments (the global semiconductor company), the Bern University of Applied Sciences (with over 6,800 students enrolled, located in the Swiss capital), ARM (a multinational semiconductor and software design company, designers of the ARM processors), Ubuntu (the Operating System delivered by Canonical), Cumulus Networks (a company building web-scale networks using innovative, open networking technology), Roche (a major international pharmaceutical provider and research company dedicated to personalized healthcare) and Hudson-Trading (a company researching and developing automated trading algorithms using advanced mathematical techniques).

ISG.EE, Univention, Private Internet Access, Logilab, Dropbox and IBM are our Bronze sponsors.

And finally, SLAT (Software Liberty Association of Taiwan), The Linux Foundation, deepin, Altus Metrum, Evolix, BerryNet and Purism are our supporter sponsors.

Thanks to all our sponsors for their support! Their contributions made it possible for a large number of Debian contributors from all over the globe to work together, help and learn from each other at DebConf18.

02 August, 2018 10:00AM by Laura Arjona Reina


Benjamin Mako Hill

I have no friends or colleagues

ICA screenshot: "You have no friends or colleagues."

Although it’s never fun to have the most important professional association in your field tell you that “you have no friends or colleagues,” being able to make one’s very first submission to screenshots of despair softens the blow a little.

02 August, 2018 02:20AM by Benjamin Mako Hill

August 01, 2018

Vincent Sanders

Irony is the hygiene of the mind

While Elizabeth Bibesco might well be right about the mind, software cleanliness requires a different approach.

Previously I have written about code smells which give a programmer hints where to clean up source code. A different technique, which has recently become readily available, is using tool-chain based instrumentation to perform run time analysis.

At a recent NetSurf developer weekend Michael Drake mentioned a talk he had seen at the GUADEC conference which referenced the use of sanitizers for improving the security and correctness of programs.

Sanitizers differ from other code quality measures such as compiler warnings and static analysis in that they detect issues when the program is executed rather than by inspecting the source code. There are currently two commonly used instrumentation types:

  • address sanitizer: detects several common errors when using memory, such as "use after free"
  • undefined behaviour sanitizer: instruments computations where the language standard has behaviour which is not clearly specified, for example left shifts of negative values (ISO 9899:2011 6.5.7, Bit-wise shift operators)
As these are runtime checks it is necessary to actually execute the instrumented code. Fortunately most of the NetSurf components have good unit test coverage so Daniel Silverstone used this to add a build target which runs the tests with the sanitizer options.

The previous investigation of this technology had been unproductive because of the immaturity of support in our CI infrastructure. This time the tool chain could be updated to be sufficiently robust to implement the technique.

Jobs were then added to the CI system to build this new target for each component in a similar way to how the existing coverage reports are generated. This resulted in failed jobs for almost every component which we proceeded to correct.

An example of how most issues were addressed is provided by Daniel fixing the bitmap library. Most of the fixes ensured correct type promotion in bit manipulation, however the address sanitizer did find a real out of bounds access when a malformed BMP header is processed. This is despite this library being run with a fuzzer and electric fence for many thousands of CPU hours previously.

Although we did find a small number of real issues the majority of the fixes were to tests which failed to correctly clean up the resources they used. This seems to parallel what I observed with the other run time testing, like AFL and Valgrind, in that often the test environment has the largest impact on detected issues to begin with.

In conclusion it appears that an instrumented build combined with our existing unit tests gives another tool to help us improve our code quality. Given the very low amount of engineering time the NetSurf project has available automated checks like these are a good way to help us avoid introducing issues.

01 August, 2018 09:20AM by Vincent Sanders

July 31, 2018


Junichi Uekawa

Sound of Cicada.

Sound of Cicada. I hear there's DebConf in Taipei and my wishes to those who are there. I haven't done much Debian work recently, and I hope I can do something in the future.

31 July, 2018 11:40PM by Junichi Uekawa

Petter Reinholdtsen

Sharing images with friends and family using RSS and EXIF/XMP metadata

For a while now, I have looked for a sensible way to share images with my family using a self hosted solution, as it is unacceptable to place images from my personal life under the control of strangers working for data hoarders like Google or Dropbox. The last few days I have drafted an approach that might work out, and I would like to share it with you. I would like to publish images on a server under my control, and point some Internet connected display units using some free and open standard to the images I published. As my primary language is not limited to ASCII, I need to store metadata using UTF-8. Many years ago, I hoped to find a digital photo frame capable of reading an RSS feed with image references (aka using the <enclosure> RSS tag), but was unable to find a current supplier of such frames. In the end I gave up that approach.

Some months ago, I discovered that XScreensaver is able to read images from an RSS feed, and used it to set up a screen saver on my home info screen, showing images from the Daily images feed from NASA. This proved to work well. More recently I discovered that Kodi (both using OpenELEC and LibreELEC) provides the Feedreader screen saver capable of reading an RSS feed with images and news. For fun, I used it this summer to test Kodi on my parents' TV by hooking up a Raspberry Pi unit with LibreELEC, and wanted to provide them with a screen saver showing selected pictures from my collection.

Armed with motivation and a test photo frame, I set out to generate an RSS feed for the Kodi instance. I adjusted my Freedombox instance, created /var/www/html/privatepictures/, and wrote a small Perl script to extract title and description metadata from the photo files and generate the RSS file. I ended up using Perl instead of Python, as the libimage-exiftool-perl Debian package seemed to handle the EXIF/XMP tags I ended up using, while python3-exif did not. The relevant EXIF tags only support ASCII, so I had to find better alternatives. XMP seems to have the support I need.

I am a bit unsure which EXIF/XMP tags to use, as I would like to use tags that can be easily added/updated using normal free software photo managing software. I ended up using the tags set using this exiftool command, as these tags can also be set using digiKam:

exiftool -headline='The RSS image title' \
  -description='The RSS image description.' \
  -subject+=for-family photo.jpeg

I initially tried the "-title" and "keyword" tags, but they were invisible in digiKam, so I changed to "-headline" and "-subject". I use the keyword/subject 'for-family' to flag that the photo should be shared with my family. Images with this keyword set are located and copied into my Freedombox for the RSS generating script to find.
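The feed-generation step can be sketched roughly as follows. This is a hypothetical Python rendition (the author's actual script is in Perl, using libimage-exiftool-perl); the base URL, file names and sizes are invented, and the metadata is assumed to have been extracted from the photos already (e.g. with `exiftool -j`):

```python
# Hypothetical sketch only: builds an RSS 2.0 feed where each shared photo
# becomes an <item> with an <enclosure> tag, as the post describes.
from xml.sax.saxutils import escape

def rss_feed(base_url, images):
    """images: list of dicts with keys filename, title, description, size."""
    items = []
    for img in images:
        url = f"{base_url}/{img['filename']}"
        items.append(
            "<item>"
            f"<title>{escape(img['title'])}</title>"
            f"<description>{escape(img['description'])}</description>"
            # <enclosure> carries the image reference the photo frame fetches
            f"<enclosure url=\"{escape(url)}\" length=\"{img['size']}\" type=\"image/jpeg\"/>"
            "</item>"
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<rss version="2.0"><channel><title>Family photos</title>'
        f'<link>{escape(base_url)}</link>'
        '<description>Shared pictures</description>'
        + "".join(items) +
        "</channel></rss>"
    )

print(rss_feed("https://example.org/privatepictures",
               [{"filename": "photo.jpeg", "title": "The RSS image title",
                 "description": "The RSS image description.", "size": 12345}]))
```

Since the declared encoding is UTF-8, non-ASCII titles and descriptions from the XMP tags pass through unchanged, which is the whole point of avoiding the ASCII-only EXIF fields.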

Are there better ways to do this? Get in touch if you have better suggestions.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

31 July, 2018 09:30PM

Mike Gabriel

My Work on Debian LTS (July 2018)

This month, after a longer pause, I have started working again for the Debian LTS project as a paid contributor. Thanks to all LTS sponsors for making this possible.

This is my list of work done in July 2018:

  • Triage CVE issues of ~27 packages during my front desk week.
  • Upload gosa 2.7.4+reloaded2-13+deb9u1 (DLA-1436-1) to jessie-security.
  • Upload network-manager-vpnc (DLA-1454-1) to jessie-security.
  • At the end of the month, I started looking at one of two open issues in phpldapadmin. I have sent more details on this to the Debian LTS mailing list [1].



31 July, 2018 01:51PM by sunweaver


Chris Lamb

Free software activities in July 2018

Here is my monthly update covering what I have been doing in the free software world during July 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:

  • Performed a Non Maintainer Upload of the GNU mtools package in order to address two reproducibility-related bugs (#900409 & #900410) that are blocking the inclusion of my previous merge request to the Debian Installer to make the installation images (ISO, hd-media, netboot, etc.) bit-for-bit reproducible.
  • Kept up to date. [...]
  • Submitted the following patches to fix reproducibility-related toolchain issues within Debian:
    • ogdi-dfsg: Please make the build (mostly) reproducible. (#903442)
    • schroot: Please make the documentation build reproducibly. (#902804)
  • I also submitted a patch to fix a specific reproducibility issue in v4l2loopback.
  • Worked on publishing our weekly reports. (#166, #167, #168, #169 & #170)
  • I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:
    • Support .deb archives that contain an uncompressed data tarball. (#903401)
    • Wrap jsondiff calls with a try-except to prevent errors becoming fatal. (#903447, #903449)
    • Clear the progress bar after completion. (#901758)
    • Support .deb archives that contain an uncompressed control tarball. (#903391)
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 11.75 hours on its sister Extended LTS project:

  • "Frontdesk" duties, triaging CVEs, responding to user questions/queries, etc.
  • Hopefully final updates to various scripts — both local and shared — to accommodate and support the introduction of the new "Extended LTS" initiative.
  • Issued DLA 1417-1 for ca-certificates, updating the set of Certificate Authority (CA) certificates that are considered "valid" or otherwise should be trusted by systems.
  • Issued DLA 1419-1 for ruby-sprockets to fix a path traversal issue exploitable via file:// URIs.
  • Issued DLA 1420-1 for the Cinnamon Desktop Environment where a symlink attack could permit an attacker to overwrite an arbitrary file on the filesystem.
  • Issued DLA 1427-1 for znc to address a path traversal vulnerability via ../ filenames in "skin" names as well as to fix an issue where insufficient validation could allow writing of arbitrary values to the znc.conf config file.
  • Issued DLA 1443-1 for evolution-data-server to fix an issue where rejected requests to upgrade to a secure connection did not result in the termination of the connection.
  • Issued DLA 1448-1 for policykit-1, uploading Abhijith PA's fix for a denial of service vulnerability.
  • Issued ELA-13-1 for ca-certificates, also updating the set of Certificate Authority (CA) certificates that are considered "valid" or otherwise should be trusted by wheezy systems.


Finally, I also sponsored elpy (1.22.0-1) & wolfssl (3.15.3+dfsg-1) and I orphaned dbus-cpp (#904426) and process-cpp (#904425) as they were no longer required as build-dependencies of Anbox.

Debian bugs filed

  • cod-tools: Missing build-depends. (#903689)
  • network-manager-openvpn: "Cannot specify device when activating VPN" error when connecting. (#903109)
  • ukwm: override_dh_auto_test doesn't respect nocheck build profile. (#904889)
  • ITP: gpg-encrypted-root — Encrypt root volumes with an OpenPGP smartcard. (#903163)
  • gnumeric: ssconvert segmentation faults. (#903194)

FTP Team

As a Debian FTP assistant I ACCEPTed 213 packages: ahven, apache-mode-el, ats2-lang, bar-cursor-el, bidiui, boxquote-el, capstone, cargo, clevis, cockpit, crispy-doom, cyvcf2, debian-gis, devscripts-el, elementary-xfce, emacs-pod-mode, emacs-session, eproject-el, feedreader, firmware-nonfree, fwupd, fwupdate, gmbal, gmbal-commons, gmbal-pfl, gnome-subtitles, gnuastro, golang-github-avast-retry-go, golang-github-gdamore-encoding, golang-github-git-lfs-gitobj, golang-github-lucasb-eyer-go-colorful, golang-github-smira-go-aws-auth, golang-github-ulule-limiter, golang-github-zyedidia-clipboard, graphviz-dot-mode, grub2, haskell-iwlib, haskell-lzma, hyperscan, initsplit-el, intel-ipsec-mb, intel-mkl, ivulncheck, jaxws-api, jitterentropy-rngd, jp, json-c, julia, kitty, leatherman, leela-zero, lektor, libanyevent-fork-perl, libattribute-storage-perl, libbio-tools-run-alignment-clustalw-perl, libbio-tools-run-alignment-tcoffee-perl, libcircle-be-perl, libconvert-color-xterm-perl, libconvert-scalar-perl, libfile-copy-recursive-reduced-perl, libfortran-format-perl, libhtml-escape-perl, libio-fdpass-perl, libjide-oss-java, libmems, libmodule-build-pluggable-perl, libmodule-build-pluggable-ppport-perl, libnet-async-irc-perl, libnet-async-tangence-perl, libnet-cidr-set-perl, libperl-critic-policy-variables-prohibitlooponhash-perl, libppix-quotelike-perl, libpqxx, libproc-fastspawn-perl, libredis-fast-perl, libspatialaudio, libstring-tagged-perl, libtickit-async-perl, libtickit-perl, libtickit-widget-scroller-perl, libtickit-widget-tabbed-perl, libtickit-widgets-perl, libu2f-host, libuuid-urandom-perl, libvirt-dbus, libxsmm, lief, lightbeam, limesuite, linux, log4shib, mailscripts, mimepull, monero, mutter, node-unicode-data, octavia, octavia-dashboard, openstack-cluster-installer, osmo-iuh, osmo-mgw, osmo-msc, pg-qualstats, pg-stat-kcache, pgzero, php-composer-xdebug-handler, plasma-browser-integration, powerline-gitstatus, ppx-tools-versioned, pyside2, 
python-certbot-dns-gehirn, python-certbot-dns-linode, python-certbot-dns-sakuracloud, python-cheroot, python-django-dbconn-retry, python-fido2, python-ilorest, python-ipfix, python-lupa, python-morph, python-pygtrie, python-stem, pywws, r-cran-callr, r-cran-extradistr, r-cran-pkgbuild, r-cran-pkgload, r-cran-processx, rawtran, ros-ros-comm, ruby-bindex, ruby-marcel, rust-ar, rust-arrayvec, rust-atty, rust-bitflags, rust-bytecount, rust-byteorder, rust-chrono, rust-cloudabi, rust-crossbeam-utils, rust-csv, rust-csv-core, rust-ctrlc, rust-dns-parser, rust-dtoa, rust-either, rust-encoding-rs, rust-filetime, rust-fnv, rust-fuchsia-zircon, rust-futures, rust-getopts, rust-glob, rust-globset, rust-hex, rust-httparse, rust-humantime, rust-idna, rust-indexmap, rust-is-match, rust-itoa, rust-language-tags, rust-lazy-static, rust-libc, rust-memoffset, rust-nodrop, rust-num-integer, rust-num-traits, rust-openssl-sys, rust-os-pipe, rust-rand, rust-rand-core, rust-redox-termios, rust-regex, rust-regex-syntax, rust-remove-dir-all, rust-same-file, rust-scoped-tls, rust-semver-parser, rust-serde, rust-sha1, rust-sha2-asm, rust-shared-child, rust-shlex, rust-string-cache-shared, rust-strsim, rust-tar, rust-tempfile, rust-termion, rust-time, rust-try-lock, rust-ucd-util, rust-unicode-bidi, rust-url, rust-vec-map, rust-void, rust-walkdir, rust-winapi, rust-winapi-i686-pc-windows-gnu, rust-winapi-x86-64-pc-windows-gnu, rustc, simavr, tabbar-el, tarlz, ukui-media, ukui-menus, ukui-power-manager, ukui-window-switch, ukwm, vanguards, weevely & xml-security-c.

I also filed wishlist-level bugs against the following packages with potential licensing improvements:

  • pgzero: Please inline/summarise web-based licensing discussion in debian/copyright. (#904674)
  • plasma-browser-integration: "This_file_is_part_of_KDE" in debian/copyright? (#903713)
  • rawtran: Please split out debian/copyright. (#904589)
  • tabbar-el: Please inline web-based comments in debian/copyright. (#904782)
  • feedreader: Please use wildcards in debian/copyright. (#904631)

Lastly, I filed 10 RC bugs against packages that had potentially-incomplete debian/copyright files against: ahven, ats2-lang, fwupd, ivulncheck, libmems, libredis-fast-perl, libtickit-widget-tabbed-perl, lief, rust-humantime & rust-try-lock.

31 July, 2018 12:02PM


Matthew Garrett

Porting Coreboot to the 51NB X210

The X210 is a strange machine. A set of Chinese enthusiasts developed a series of motherboards that slot into old Thinkpad chassis, providing significantly more up-to-date hardware. The X210 has a Kabylake CPU, supports up to 32GB of RAM, has an NVMe-capable M.2 slot and has eDP support - and it fits into an X200 or X201 chassis, which means it also comes with a classic Thinkpad keyboard. We ordered some from a Facebook page (a process that involved wiring a large chunk of money to a Chinese bank which wasn't at all stressful), and a couple of weeks later they arrived. Once I'd put mine together I had a quad-core i7-8550U with 16GB of RAM, a 512GB NVMe drive and a 1920x1200 display. I'd transplanted over the drive from my XPS13, so I was running stock Fedora for most of this development process.

The other fun thing about it is that none of the firmware flashing protection is enabled, including Intel Boot Guard. This means running a custom firmware image is possible, and what would a ridiculous custom Thinkpad be without ridiculous custom firmware? A shadow of its potential, that's what. So, I read the Coreboot[1] motherboard porting guide and set to.

My life was made a great deal easier by the existence of a port for the Purism Librem 13v2. This is a Skylake system, and Skylake and Kabylake are very similar platforms. So, the first job was to just copy that into a new directory and start from there. The first step was to update the Inteltool utility so it understood the chipset - this commit shows what was necessary there. It's mostly just adding new PCI IDs, but it also needed some adjustment to account for the GPIO allocation being different on mobile parts when compared to desktop ones. One thing that bit me - Inteltool relies on being able to mmap() arbitrary bits of physical address space, and the kernel doesn't allow that if CONFIG_STRICT_DEVMEM is enabled. I had to disable that first.

The GPIO pins got dropped into gpio.h. I ended up just pushing the raw values into there rather than parsing them back into more semantically meaningful definitions, partly because I don't understand what these things do that well and largely because I'm lazy. Once that was done, on to the next step.

High Definition Audio devices (or HDA) have a standard interface, but the codecs attached to the HDA device vary - both in terms of their own configuration, and in terms of dealing with how the board designer may have laid things out. Thankfully the existing configuration could be copied from /sys/class/sound/card0/hwC0D0/init_pin_configs[2] and then hda_verb.h could be updated.

One more piece of hardware-specific configuration is the Video BIOS Table, or VBT. This contains information used by the graphics drivers (firmware or OS-level) to configure the display correctly, and again is somewhat system-specific. This can be grabbed from /sys/kernel/debug/dri/0/i915_vbt.

A lot of the remaining platform-specific configuration has been split out into board-specific config files, and this also needed updating. Most stuff was the same, but I confirmed the GPE and genx_dec register values by using Inteltool to dump them from the vendor system and copy them over. lspci -t gave me the bus topology and told me which PCIe root ports were in use, and lsusb -t gave me port numbers for USB. That let me update the root port and USB tables.

The final code update required was to tell the OS how to communicate with the embedded controller. Various ACPI functions are actually handled by this autonomous device, but it's still necessary for the OS to know how to obtain information from it. This involves writing some ACPI code, but that's largely a matter of cutting and pasting from the vendor firmware - the EC layout depends on the EC firmware rather than the system firmware, and we weren't planning on changing the EC firmware in any way. Using ifdtool told me that the vendor firmware image wasn't using the EC region of the flash, so my assumption was that the EC had its own firmware stored somewhere else. I was ready to flash.

The first attempt involved isis' machine, using their Beaglebone Black as a flashing device - the lack of protection in the firmware meant we ought to be able to get away with using flashrom directly on the host SPI controller, but using an external flasher meant we stood a better chance of being able to recover if something went wrong. We flashed, plugged in the power and… nothing. Literally. The power LED didn't turn on. The machine was very, very dead.

Things like managing battery charging and status indicators are up to the EC, and the complete absence of anything going on here meant that the EC wasn't running. The most likely reason for that was that the system flash did contain the EC's firmware even though the descriptor said it didn't, and now the system was very unhappy. Worse, the flash wouldn't speak to us any more - the power supply from the Beaglebone to the flash chip was sufficient to power up the EC, and the EC was then holding onto the SPI bus desperately trying to read its firmware. Bother. This was made rather more embarrassing because isis had explicitly raised concern about flashing an image that didn't contain any EC firmware, and now I'd killed their laptop.

After some digging I was able to find EC firmware for a related 51NB system, and looking at that gave me a bunch of strings that seemed reasonably identifiable. Looking at the original vendor ROM showed very similar code located at offset 0x00200000 into the image, so I added a small tool to inject the EC firmware (basing it on an existing tool that does something similar for the EC in some HP laptops). I now had an image that I was reasonably confident would get further, but we couldn't flash it. Next step seemed like it was going to involve desoldering the flash from the board, which is a colossal pain. Time to sleep on the problem.
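The injection step described above amounts to splicing the EC blob into the firmware image at a fixed offset. A minimal sketch, assuming the 0x00200000 offset from the post and images small enough to hold in memory (the function name and the surrounding details are hypothetical, not taken from the actual tool, which was based on an existing HP EC injector):

```python
# Hypothetical sketch: overwrite a region of a firmware image with an EC blob.
# The 0x00200000 offset matches where the EC code was found in the vendor ROM.
EC_OFFSET = 0x00200000

def inject_ec(image: bytes, ec_blob: bytes, offset: int = EC_OFFSET) -> bytes:
    """Return a copy of `image` with `ec_blob` written starting at `offset`."""
    if offset + len(ec_blob) > len(image):
        raise ValueError("EC blob does not fit in the image at that offset")
    # Byte-level splice: keep everything before and after the EC region intact.
    return image[:offset] + ec_blob + image[offset + len(ec_blob):]
```

In practice the real tool operates on the full ROM produced by the Coreboot build; this sketch only shows the byte-level splice, not the flash descriptor or region handling.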

The next morning we were able to borrow a Dediprog SPI flasher. These are much faster than doing SPI over GPIO lines, and also support running the flash at different voltages. At 3.5V the behaviour was the same as we'd seen the previous night - nothing. According to the datasheet, the flash required at least 2.7V to run, but flashrom listed 1.8V as the next lower voltage so we tried. And, amazingly, it worked - not reliably, but sufficiently. Our hypothesis is that the chip is marginally able to run at that voltage, but that the EC isn't - we were no longer powering the EC up, so we could communicate with the flash. After a couple of attempts we were able to write enough that we had EC firmware on there, at which point we could shift back to flashing at 3.5V because the EC was leaving the flash alone.

So, we flashed again. And, amazingly, we ended up staring at a UEFI shell prompt[3]. USB wasn't working, and nor was the onboard keyboard, but we had graphics and were executing actual firmware code. I was able to get USB working fairly quickly - it turns out that Linux numbers USB ports from 1 and the FSP numbers them from 0, and fixing that up gave us working USB. We were able to boot Linux! Except there were a whole bunch of errors complaining about EC timeouts, and also we only had half the RAM we should.

After some discussion on the Coreboot IRC channel, we figured out the RAM issue - the Librem13 only has one DIMM slot. The FSP expects to be given a set of i2c addresses to probe, one for each DIMM socket. It is then able to read back the DIMM configuration and configure the memory controller appropriately. Running i2cdetect against the system SMBus gave us a range of devices, including one at 0x50 and one at 0x52. The detected DIMM was at 0x50, which made 0x52 seem like a reasonable bet - and grepping the tree showed that several other systems used 0x52 as the address for their second socket. Adding that to the list of addresses and passing it to the FSP gave us all our RAM.
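The resulting fix amounts to one extra entry in the SPD address list handed to the FSP. Roughly (the addresses come from the post; the table name mirrors the FSP's SpdAddressTable UPD, but the exact field layout is platform-specific and an assumption here):

```c
#include <stdint.h>

/* 7-bit SMBus addresses of the SPD EEPROMs, one per DIMM socket.
 * 0x50 was the detected DIMM; 0x52 was the guess (backed up by other
 * boards in the tree) that brought back the missing half of the RAM. */
static const uint8_t spd_address_table[] = {
    0x50,  /* socket 0 */
    0x52,  /* socket 1 */
};
```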

So, now we just had to deal with the EC. One thing we noticed was that if we flashed the vendor firmware, ran it, flashed Coreboot and then rebooted without cutting the power, the EC worked. This strongly suggested that there was some setup code happening in the vendor firmware that configured the EC appropriately, and that if we duplicated it the EC would probably work. Unfortunately, figuring out what that code was proved difficult. I ended up dumping the PCI device configuration for the vendor firmware and for Coreboot in case that would give us any clues, but the only thing that seemed relevant at all was that the LPC controller was configured to pass IO ports 0x4e and 0x4f to the LPC bus with the vendor firmware, but not with Coreboot. Unfortunately the EC was supposed to be listening on 0x62 and 0x66, so this wasn't the problem.

I ended up solving this by using UEFITool to extract all the code from the vendor firmware, and then disassembled every object and grepped them for port IO. x86 systems have two separate IO buses - memory and port IO. Port IO is well suited to simple devices that don't need a lot of bandwidth, and the EC is definitely one of these - there's no way to talk to it other than using port IO, so any configuration was almost certainly happening that way. I found a whole bunch of stuff that touched the EC, but was clearly depending on it already having been enabled. I found a wide range of cases where port IO was being used for early PCI configuration. And, finally, I found some code that reconfigured the LPC bridge to route 0x4e and 0x4f to the LPC bus (explaining the configuration change I'd seen earlier), and then wrote a bunch of values to those addresses. I mimicked those, and suddenly the EC started responding.

It turns out that the writes that made this work weren't terribly magic. PCs used to have a SuperIO chip that provided most of the legacy port functionality, including the floppy drive controller and parallel and serial ports. Individual components (called logical devices, or LDNs) could be enabled and disabled using a sequence of writes that was fairly consistent between vendors. Someone on the Coreboot IRC channel recognised that the writes that enabled the EC were simply using that protocol to enable a series of LDNs, which apparently correspond to things like "Working EC" and "Working keyboard". And with that, we were done.
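That generic SuperIO configuration protocol can be sketched in a few lines: write a register index to the config port, its value to the data port, select a logical device via register 0x07, and set its activate bit in register 0x30. The 0x4e/0x4f ports come from the post; registers 0x07 and 0x30 follow the common SuperIO convention, and the vendor-specific enter/exit-config key sequence is omitted:

```c
#include <stdint.h>

#define SIO_INDEX 0x4e  /* index port, routed over the LPC bus */
#define SIO_DATA  0x4f  /* data port */

struct io_write {
    uint16_t port;
    uint8_t value;
};

/* Emit the write sequence that enables one logical device (LDN).
 * On real hardware each entry would be issued with outb(value, port);
 * here the sequence is returned as data so it can be inspected without
 * touching hardware.  Illustrative sketch, not the actual firmware. */
static int sio_enable_ldn(uint8_t ldn, struct io_write seq[4])
{
    seq[0] = (struct io_write){ SIO_INDEX, 0x07 };  /* LDN select register */
    seq[1] = (struct io_write){ SIO_DATA,  ldn  };
    seq[2] = (struct io_write){ SIO_INDEX, 0x30 };  /* activate register */
    seq[3] = (struct io_write){ SIO_DATA,  0x01 };  /* set enable bit */
    return 4;
}
```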

Coreboot doesn't currently have ACPI support for the latest Intel graphics chipsets, so right now my image doesn't have working backlight control. Backlight control also turned out to be interesting. Most modern Intel systems handle the backlight via registers in the GPU, but the X210 uses the embedded controller (possibly because it supports both LVDS and eDP panels). This means that adding a simple display stub is sufficient - all we have to do on a backlight set request is store the value in the EC, and it does the rest.

Other than that, everything seems to work (although there's probably a bunch of power management optimisation to do). I started this process knowing almost nothing about Coreboot, but thanks to the help of people on IRC I was able to get things working in about two days of work[4] and now have firmware that's about as custom as my laptop.

[1] Why not Libreboot? Because modern Intel SoCs haven't had their memory initialisation code reverse engineered, so the only way to boot them is to use the proprietary Intel Firmware Support Package.
[2] Card 0, device 0
[3] After a few false starts - it turns out that the initial memory training can take a surprisingly long time, and we kept giving up before that had happened
[4] Spread over 5 or so days of real time


31 July, 2018 05:28AM

Russ Allbery

Free software log (June 2018)

Well, this is embarrassingly late, but not a full month late. That's what counts, right?

It's quite late partly because I haven't had the right combination of time and energy to do much free software work since the beginning of June. I did get a couple of releases out then, though. wallet 1.4 incorporated better Active Directory support and fixed a bunch of build system and configuration issues. And rra-c-util 7.2 includes a bunch of fixes to M4 macros and cleans up some test issues.

The July report may go missing for roughly the same reason. I have done some outside-of-work programming, but it's gone almost entirely into learning Rust and playing around with implementing various algorithms to understand them better. Rather fun, but not something that's good enough to be worth releasing. It's reinventing wheels intentionally to understand underlying concepts better.

I do have a new incremental release of DocKnot almost ready to go out (incorporating changes I needed for the last wallet release), but I'm not sure if that will make it into July.

31 July, 2018 04:06AM

Jonathan McDowell

(Badly) cloning a TEMPer USB


Having set up a central MQTT broker I’ve wanted to feed it extra data. The study temperature was a start, but not the most useful piece of data when working towards controlling the central heating. As it happens I have a machine in the living room hooked up to the TV, so I thought about buying something like a TEMPer USB so I could sample the room temperature and add it as a data source. And then I realised that I still had a bunch of Digispark clones and some Maxim DS18B20 1-Wire temperature sensors and I should build something instead.

I decided to try and emulate the TEMPer device rather than doing something unique. V-USB was pressed into service and some furious Googling took place to try and find out the details of how the TEMPer appears to the host in order to craft the appropriate USB/HID descriptors to present - actually finding some lsusb output was the hardest part. Looking at the code of various tools designed to talk to the device provided details of the different init commands that needed to be recognised and a basic skeleton framework (reporting a constant 15°C temperature) was crafted. Once that was working with the existing client code knocking up some 1-Wire code to query the DS18B20 wasn’t too much effort (I seem to keep implementing this code on various devices).
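One fiddly detail in this kind of emulation is the unit mismatch between the two sensors. A sketch of the conversion (my own helper, not the post's code; the 8.8 fixed-point format matches how TEMPer client tools typically decode readings, but check against your client):

```c
#include <stdint.h>

/* The DS18B20 reports temperature in units of 1/16 degC (12-bit mode),
 * while TEMPer clients decode an LM75-style 8.8 fixed-point value,
 * i.e. units of 1/256 degC.  Converting is just a multiply by 16. */
static int16_t ds18b20_to_temper(int16_t raw_sixteenths)
{
    return (int16_t)(raw_sixteenths * 16);
}
```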

At this point things became less reliable. The V-USB code is an evil (and very clever) set of interrupt driven GPIO bit banging routines, working around the fact that the ATTiny doesn’t have a USB port. 1-Wire is a timed protocol, so the simple implementation involves a bunch of delays. To add to this, the temper-python library decides to do a USB device reset if it sees a timeout, and does a double read to work around some behaviour of the real hardware. Doing a 1-Wire transaction directly in response to these requests causes lots of problems, so I implemented a timer to do a 1-Wire temperature check once every 10 seconds, and then the request from the host just returns the last value read. This is a lot more reliable, but still sees a few resets a day. It would be nice to fix this, but for the moment it’s good enough for my needs - I’m reading temperature once a minute to report back to the MQTT server, but it offends me to see the USB resets in the kernel log.
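The caching scheme described above can be sketched like this (names, the one-second tick and the stubbed sensor read are my own framing, not the actual firmware):

```c
#include <stdint.h>

#define TICKS_PER_CONVERSION 10   /* one 1-Wire read every 10 seconds */

static int16_t last_temp = 15 * 256;  /* 8.8 fixed point; safe default */
static uint8_t ticks;

/* Called once per second from a timer.  The slow, delay-based 1-Wire
 * transaction only ever happens here, never in the USB request path. */
static void timer_tick(void)
{
    if (++ticks >= TICKS_PER_CONVERSION) {
        ticks = 0;
        /* last_temp = ds18b20_read();  -- real firmware would poll the
         * sensor here; stubbed out so the sketch stands alone */
    }
}

/* Called from the USB request handler - returns immediately with the
 * cached value instead of touching the 1-Wire bus. */
static int16_t current_temperature(void)
{
    return last_temp;
}
```

Because the USB request handler never blocks on the sensor, the timing-critical V-USB interrupt code is left alone.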

Additionally I had some problems with accuracy. Firstly it seems the batch of DS18B20s I have can vary by 1-2°C, so I ended up adjusting for this in the code that runs on the host. Secondly I mounted the DS18B20 on the Digispark board, as in the picture. The USB cable ensures it’s far enough away from the host (rather than sitting plugged directly into the back of the machine and measuring the PSU fan output temperature), but the LED on the board turned out to be close enough that it affected the reading. I have no need for it so I just ended up removing it.

The code is available locally and on GitHub in case it’s of use/interest to anyone else.

(I’m currently at DebConf18 but I’ll wait until it’s over before I write it up, and I’ve been meaning to blog about this for a while anyway.)

31 July, 2018 12:31AM