February 18, 2019

Russ Allbery

INN 2.6.3

INN 2.6.3 has been released. This is a bug fix and minor feature release over INN 2.6.2, and the upgrade should be painless. The main ISC downloads page will be updated shortly; in the meantime, you can download the new release from ftp.isc.org or my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

The big change in this release is support for Python 3. Embedded Python filtering and authentication hooks for innd and nnrpd can now use version 3.3.0 or later of the Python interpreter. Python 2.x is still supported (2.3.0 or later).

Also fixed in this release are several issues with TLS: corrected selection of elliptic curves, a new configuration parameter to fine-tune the cipher suites used with TLS 1.3, and better logging of failed TLS session negotiations. This release also returns the error message from Python and Perl filter hook rejects to CHECK and TAKETHIS commands, and fixes various other, more minor bugs.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

18 February, 2019 10:30PM


Chris Lamb

Book Review: Painting the Sand

Painting the Sand (2017)

Kim Hughes

Staff Sergeant Kim Hughes is a bomb disposal operator in the British Army undergoing a gruelling six-month tour of duty in Afghanistan, during which he defuses over 100 improvised explosive devices, better known as IEDs.

Cold opening in the heat of the desert, it begins extremely strongly with a set piece of writing that his editor should rightfully be proud of. The book contains colourful detail throughout and readers will genuinely feel like "one of the lads" with all the shibboleth military lingo and acronyms. Despite that, this brisk and no-nonsense account — written in that almost-stereotypical squaddie's tone of voice — is highly accessible, and furthermore is refreshing culturally speaking given the paucity of stories in the zeitgeist recounting British derring-do in "Afghan", to adopt Hughes' typical truncation of the country.

However, apart from a few moments (such as when the Taliban adopt devices undetectable with a metal detector) the tension deflates slowly rather than mounting like a lit fuse. For example, I would have wished for a bit more suspense or drama where they were being played, trapped or otherwise manipulated by the Taliban for once, and only so much of that meekness can be attributed to Hughes' self-effacing and humble manner. Indeed, the book was somewhat reminiscent of The Secret Barrister in that the ending is a little too quick for my taste, rushing through "current" events and becoming a little too self-referential in parts, the comparison between the two works being aided too by both writers having something of a cynical affect about them.

One is left with the distinct impression that Hughes went to Afghan to do a job. He gets it done with minimal fuss and no unnecessary nonsense … and that's exactly what he does as a memoirist.

18 February, 2019 06:43PM


Raphaël Hertzog

Freexian’s report about Debian Long Term Support, January 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 204.5 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 12 hours (out of 12 hours allocated).
  • Antoine Beaupré did 9 hours (out of 20.5 hours allocated, thus keeping 11.5h extra hours for February).
  • Ben Hutchings did 24 hours (out of 20 hours allocated plus 5 extra hours from December, thus keeping one extra hour for February).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 42.5 hours (out of 20.5 hours allocated + 25.25 extra hours, thus keeping 3.25 extra hours for February).
  • Hugo Lefeuvre did 20 hours (out of 20 hours allocated).
  • Lucas Kanashiro did 5 hours (out of 4 hours allocated plus one extra hour from December).
  • Markus Koschany did 20.5 hours (out of 20.5 hours allocated).
  • Mike Gabriel did 10 hours (out of 10 hours allocated).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated + 6.5 extra hours, thus keeping 8 extra hours for February, as he also gave 2h back to the pool).
  • Roberto C. Sanchez did 10.75 hours (out of 20.5 hours allocated, thus keeping 9.75 extra hours for February).
  • Thorsten Alteholz did 20.5 hours (out of 20.5 hours allocated).

Evolution of the situation

In January we again managed to dispatch all available hours (well, except one) to contributors. We also still had one new contributor in training, though starting in February Adrian Bunk has become a regular contributor. But we will lose another contributor in March, so we are still very much looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file has 42 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


18 February, 2019 02:05PM by Raphaël Hertzog


Thomas Lange

Netplan support in FAI

The new FAI version 5.8.1 now generates the configuration file for Ubuntu's netplan tool: a YAML description for setting up the network devices, replacing the /etc/network/interfaces file. The FAI CD/USB installation image for Ubuntu now offers two different variants to be installed, Ubuntu desktop and Ubuntu server without a desktop environment. Both use Ubuntu 18.04 aka Bionic Beaver.
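For readers unfamiliar with netplan, a minimal illustrative configuration looks roughly like this (not necessarily what FAI generates; the file name, interface name and DHCP choice are assumptions):

```yaml
# e.g. /etc/netplan/01-netcfg.yaml (illustrative name)
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: true
```

Running `netplan apply` then translates this description into configuration for the chosen backend renderer.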

FAI 5.8.1 also improves UEFI support for network installations. UEFI boot is still missing for the ISO images.

The FAI ISO images are available from [1]. The FAIme build service [2] for customized cloud and installation images also uses the newest FAI version.

[1] https://fai-project.org/fai-cd/

[2] https://fai-project.org/FAIme

FAI Ubuntu

18 February, 2019 01:34PM

February 17, 2019

Niels Thykier

Making debug symbols discoverable and fetchable

Michael wrote a few days ago about the experience of debugging programs on Debian.  And he is certainly not the only one who has found it difficult to find debug symbols on Linux systems in general.

But fortunately, it is a fixable problem.  Basically, we just need a service to map a build-id to a downloadable file containing that build-id.  You can find the source code for my prototype of such a dbgsym service on salsa.debian.org.

It exposes one API endpoint, “/api/v1/find-dbgsym”, which accepts a build-id and returns some data about that build-id (or HTTP 404 if we do not know the build-id).  An example:

$ curl --silent | python -m json.tool
{
    "package": {
        "architecture": "amd64",
        "build_ids": [...],
        "checksums": {
            "sha1": "a7a38b49689031dc506790548cd09789769cfad3",
            "sha256": "3706bbdecd0975e8c55b4ba14a481d4326746f1f18adcd1bd8abc7b5a075679b"
        },
        "download_size": 18032,
        "download_urls": [...],
        "name": "6tunnel-dbgsym",
        "version": "1:0.12-1"
    }
}

Notice how it includes a download URL and a SHA256 checksum, so with this you can download the package containing the build-id directly from the service and verify the download. The sample_client.py included in the repo does that and might be a useful basis for others interested in developing a client for this service.
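A client can be sketched in a few lines of Python. This is not the actual sample_client.py; the base URL, port and query-parameter name are assumptions, since the prototype currently only runs on the author's loopback interface:

```python
import hashlib
import json
import urllib.parse
import urllib.request

# Hypothetical base URL and parameter name for the prototype service.
API = "http://127.0.0.1:8000/api/v1/find-dbgsym"


def find_dbgsym(build_id: str) -> dict:
    """Ask the service about a build-id; an unknown id yields HTTP 404."""
    url = API + "?" + urllib.parse.urlencode({"build_id": build_id})
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Check a downloaded package body against the checksum from the API response."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

With a response in hand, one would fetch one of the entries in download_urls and pass the body plus the sha256 value to verify_download before trusting the file.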


To seed the database, so it can actually answer these queries, there is a bulk importer that parses Packages files from the Debian archive (for people testing: the ones from the debian-debug archive are usually more interesting, as they have more build-ids).
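For a sense of what the importer has to do, here is a simplified, illustrative stand-in (not the actual importer). It assumes the Build-Ids field that dbgsym packages carry in the debian-debug archive's Packages files, and it deliberately keeps the RFC822-style stanza parsing minimal:

```python
def parse_packages(text: str):
    """Yield one dict per stanza of a Debian Packages file.

    Simplified parser: continuation lines (leading whitespace) are folded
    into the previous field with a newline separator.
    """
    stanza, key = {}, None
    for line in text.splitlines():
        if not line.strip():
            # Blank line ends the current stanza.
            if stanza:
                yield stanza
            stanza, key = {}, None
        elif line[:1] in (" ", "\t") and key:
            stanza[key] += "\n" + line.strip()
        else:
            key, _, value = line.partition(":")
            stanza[key] = value.strip()
    if stanza:
        yield stanza


def build_id_index(text: str) -> dict:
    """Map each build-id to (package, version) for stanzas with a Build-Ids field."""
    index = {}
    for st in parse_packages(text):
        for bid in st.get("Build-Ids", "").split():
            index[bid] = (st["Package"], st["Version"])
    return index
```

The real importer additionally writes the results into the Postgresql database backing the service rather than returning an in-memory dict.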

Possible improvements

  • Have this service deployed somewhere on the internet rather than the loopback interface on my machine.
  • The concept is basically distribution agnostic (Michael’s post in fact links to a similar service) and this could be a standard service/tool for all Linux distributions (or even just organizations).  I am happy to work with people outside Debian to make the code useful for their distribution (including non-Debian based distros).
    • The prototype was primarily based around Debian because it was my point of reference (plus what I had data for).
  • The bulk importer could (hopefully) be faster on new entries.
  • The bulk importer could import the download urls as well, so we do not have to fetch the relevant data online the first time when people are waiting.
  • Most of the django code / setup could probably have been done better as this has been an excuse to learn django as well.

Patches and help welcome.


This prototype would not have been possible without python3, django, django’s restframework, python’s APT module, Postgresql, PoWA and, of course, snapshot.debian.org (including its API).

17 February, 2019 09:03PM by Niels Thykier

Birger Schacht

Sway in experimental

A couple of days ago the 1.0-RC2 version of Sway, a Wayland compositor, landed in Debian experimental. Sway is a drop-in replacement for the i3 tiling window manager for Wayland. Drop-in replacement means that, apart from minor adaptions, you can reuse your existing i3 configuration file for Sway. On the website of Sway you can find a short introduction video that shows the most basic concepts of using Sway, though if you have worked with i3 you will soon feel at home.

In the video the utility swaygrab is mentioned, but this tool is not part of Sway anymore. There is another screenshot tool now, called grim, which you can combine with the tool slurp if you want to select regions for screenshots. The video also mentions swaylock, which is a screen locking utility similar to i3lock. It was split out of the main Sway release a couple of weeks ago, but there also exists a Debian package by now. And there is a package for swayidle, an idle management daemon, which comes in handy for locking the screen or for turning off your display after a timeout. If you need a clipboard manager, you can use wl-clipboard. There is also a notification daemon called mako (the Debian package is called mako-notifier and is in NEW), and if you don't like the default swaybar, you can have a look at waybar (not yet in Debian, see this RFS). If you want to get in touch with other Sway users there is a #sway IRC channel on freenode. For some tricks for setting up Sway you can browse the wiki.

If you want to try Sway, beware that it is a release candidate and there are still bugs. I have been using Sway for a couple of months, and though I had crashes when it was still 1.0-beta.1, I haven't had any since beta.2. But I'm using a pretty conservative setup.

Sway was started by Drew DeVault, who is also the upstream maintainer of wlroots, the Wayland compositor library Sway is using, and whom some might know from his sourcehut project (LWN Article). He also just published an article about Wayland misconceptions. The upstream maintainer of grim, slurp and mako is Simon Ser, who also contributes to Sway. A lot of thanks for the Debian packaging is due to nicoo, who did most of the heavy lifting, and to Sean for having patience when reviewing my contributions. Also thanks to Guido for maintaining wlroots!

17 February, 2019 07:52PM

Andrew Cater

Debian 9.8 released : Desktop environments and power settings

Debian 9.8, the latest update to Debian Stretch, was released yesterday. Updated installation media can be found at the Debian CD netinst page. As part of the testing, I was using a very old i686 laptop which powers down at random intervals because the battery is old; it tends to suspend almost immediately. I found that the power management settings for the Cinnamon desktop were hard to find: using a MATE desktop allowed me to find the appropriate settings in the Debian menu layout much more easily. Kudos to Steve McIntyre and Andy Simpkins (amongst others) for such a good job producing and testing the Debian CDs.

17 February, 2019 03:12PM by Andrew Cater (noreply@blogger.com)

February 16, 2019


Jonathan Dowland

embedding Haskell in AsciiDoc

I'm a fan of the concept of Literate Programming (I explored it a little in my Undergraduate Dissertation a long time ago), which can be briefly (if inadequately) summarised as follows: the normal convention for computer code is that, by default, the text within a source file is considered to be code; comments (or, human-oriented documentation) are the exception and must be demarcated in some way (such as via a special symbol). Literate Programming (amongst other things) inverts this: by default, the text in a source file is treated as comments and ignored by the compiler; code must be specially delimited.

Haskell has built-in support for this scheme: by naming your source code files .lhs, you can make use of one of two conventions for demarcating source code: either prefix each source code line with a chevron (called Bird-style, after Richard Bird), or wrap code sections in a pair of delimiters \begin{code} and \end{code} (TeX-style, because it facilitates embedding Haskell into a TeX-formatted document).
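For illustration, here is the same one-line definition in each convention, Bird-style first, then TeX-style:

```
> next a = if a == maxBound then minBound else succ a
```

```
\begin{code}
next a = if a == maxBound then minBound else succ a
\end{code}
```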

For various convoluted reasons I wanted to embed Haskell into an AsciiDoc-formatted document and I couldn't use Bird-style literate Haskell, which would be my preference. The AsciiDoc delimiter for a section of code is a line of dash symbols, which can be interleaved with the TeX-style delimiters:

----
\begin{code}
next a = if a == maxBound then minBound else succ a
\end{code}
----

Unfortunately the TeX-style delimiters show up in the output once the AsciiDoc is processed. Luckily, we can swap the order of the AsciiDoc and Literate-Haskell delimiters, because the AsciiDoc ones are treated as a source-code comment by Haskell and ignored. This moves the visible TeX-style delimiters out of the code block, which is a minor improvement:

\begin{code}
----
next a = if a == maxBound then minBound else succ a
----
\end{code}

We can disguise the delimiters outside of the code block further by defining an empty AsciiDoc macro called "code". Macros are marked up with surrounding braces, leaving just stray \begin and \end tokens in the text. Towards the top of the AsciiDoc file, in the preamble:

= Document title
Document author
:code:

This could probably be further improved by some AsciiDoc markup to change the style of the text outside of the code block immediately prior to the \begin token (perhaps make the font 0pt or the text colour the same as the background colour) but this is legible enough for me, for now.

The resulting file can be fed to an AsciiDoc processor (like asciidoctor, or interpreted by GitHub's built-in AsciiDoc formatter) and to a Haskell compiler. Unfortunately GitHub insists on a .adoc extension to interpret the file as AsciiDoc; GHC insists on a .lhs extension to interpret it as Literate Haskell (who said extensions were semantically meaningless these days…). So I commit the file as .adoc for GitHub's benefit and maintain a local symlink with a .lhs extension for my own use.
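That arrangement can be set up like so (the file names are illustrative):

```shell
# One file, two names: AsciiDoc processors read doc.adoc, GHC reads doc.lhs.
printf '= Document title\n' > doc.adoc
ln -sf doc.adoc doc.lhs
```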

Finally, some Haskell code needs to be present in the file for it to work as Haskell source, even though I am not interested in including it in the document. This can be achieved by changing from the code delimiter to AsciiDoc comment delimiters on the outside:

////
\begin{code}
utilityFunction = "necessary but not interesting for the document"
\end{code}
////

You can see an example of a combined AsciiDoc-Haskell file here (which is otherwise a work in progress):


16 February, 2019 10:50PM


Vasudev Kamath

Note to Self: Growing the Root File System on First Boot

These are the two use cases I came across for expanding a root file system:

  1. RaspberryPi images which come in a small size like 1GB, which you write to bigger SD cards like 32GB, but you want to use the full 32GB of space when the system comes up.
  2. You have a VM image which contains a basic server operating system, and you want to provision it as a VM with a much larger root file system.

My current use case was the second, but I learnt the trick from the first, namely the RaspberryPi3 image spec by the Debian project.

The idea behind expanding the root file system is to first expand the root partition to the full available size and then run resize2fs on the expanded partition to grow the file system. resize2fs is a tool specific to ext2/3/4 file systems. But this needs to be done before the file system is mounted.

Here is my modified script from the raspi3-image-spec repo. The only difference is that I've changed the logic of extracting the root partition device to suit my needs and, of course, added comments based on my understanding.


# Extract the root partition, e.g. "sda1 /" becomes /dev/sda1
roottmp=$(lsblk -l -o NAME,MOUNTPOINT | grep '/$')
rootpart=/dev/${roottmp%% */}
# Remove the partition number to get the device name, e.g. /dev/sda1 becomes /dev/sda
rootdev=${rootpart%[0-9]}

# Use sfdisk to extend the partition to all available free space on the device
# (",+" keeps the start sector and grows partition 2 to the maximum size).
flock $rootdev sfdisk -f $rootdev -N 2 <<EOF
,+
EOF

sleep 5

# Wait for all pending udev events to be handled
udevadm settle

sleep 5

# detect the changes to partition (we extended it).
flock $rootdev partprobe $rootdev

# remount the root partition in read write mode
mount -o remount,rw $rootpart

# Finally grow the file system on root partition
resize2fs $rootpart

exit 0

raspi3-image-spec uses a systemd service file to execute this script just before any file system is mounted. This is done by making the service execute before the local-fs-pre.target unit. From the man page for systemd.special:

systemd-fstab-generator(3) automatically adds dependencies of type Before= to all mount units that refer to local mount points for this target unit. In addition, it adds dependencies of type Wants= to this target unit for those mounts listed in /etc/fstab that have the auto mount option set.

The service also disables itself after executing, to avoid re-runs on every boot. I've used the service file from raspi3-image-spec as is.
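For reference, a minimal sketch of what such a self-disabling unit can look like (names, paths and dependencies here are illustrative; the actual rpi3-resizerootfs.service shipped in raspi3-image-spec may differ):

```ini
[Unit]
Description=Grow the root file system on first boot
DefaultDependencies=no
Before=local-fs-pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/rpi3-resizerootfs
# Disable the unit after the first successful run so it never re-runs
ExecStartPost=/bin/systemctl disable rpi3-resizerootfs.service
```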

Testing with VM

raspi3-image-spec is well tested, but I wanted to make sure this works for my VM use case. Since I didn't have any spare physical disks to experiment with, I used kpartx with raw file images. Here is what I did:

  1. Created a stretch image using vmdb2 with grub installed. Image size is 1.5G
  2. I created another raw disk using fallocate of 4G size.
  3. I created a partition on 4G disk.
  4. Loop mounted the disk and wrote the 1.5G image onto it using dd.
  5. Finally created a VM using virt-install with this loop mounted device as root disk.

Below is my vmdb configuration YAML, again derived from the raspi3-image-spec one, with some modifications to suit my needs.

# See https://wiki.debian.org/RaspberryPi3 for known issues and more details.
- mkimg: "{{ output }}"
  size: 1.5G

- mklabel: msdos
  device: "{{ output }}"

- mkpart: primary
  device: "{{ output }}"
  start: 0%
  end: 100%
  tag: /

- kpartx: "{{ output }}"

- mkfs: ext4
  partition: /
  label: RASPIROOT

- mount: /

- unpack-rootfs: /

- debootstrap: stretch
  mirror: http://localhost:3142/deb.debian.org/debian
  target: /
  variant: minbase
  components:
  - main
  - contrib
  - non-free
  unless: rootfs_unpacked

# TODO(https://bugs.debian.org/877855): remove this workaround once
# debootstrap is fixed
- chroot: /
  shell: |
    echo 'deb http://deb.debian.org/debian buster main contrib non-free' > /etc/apt/sources.list
    apt-get update
  unless: rootfs_unpacked

- apt: install
  packages:
  - ssh
  - parted
  - dosfstools
  - linux-image-amd64
  tag: /
  unless: rootfs_unpacked

- grub: bios
  tag: /

- cache-rootfs: /
  unless: rootfs_unpacked

- shell: |
     echo "experimental" > "${ROOT?}/etc/hostname"

     # '..VyaTFxP8kT6' is crypt.crypt('raspberry', '..')
     sed -i 's,root:[^:]*,root:..VyaTFxP8kT6,' "${ROOT?}/etc/shadow"

     sed -i 's,#PermitRootLogin prohibit-password,PermitRootLogin yes,g' "${ROOT?}/etc/ssh/sshd_config"

     install -m 644 -o root -g root fstab "${ROOT?}/etc/fstab"

     install -m 644 -o root -g root eth0 "${ROOT?}/etc/network/interfaces.d/eth0"

     install -m 755 -o root -g root rpi3-resizerootfs "${ROOT?}/usr/sbin/rpi3-resizerootfs"
     install -m 644 -o root -g root rpi3-resizerootfs.service "${ROOT?}/etc/systemd/system"
     mkdir -p "${ROOT?}/etc/systemd/system/systemd-remount-fs.service.requires/"
     ln -s /etc/systemd/system/rpi3-resizerootfs.service "${ROOT?}/etc/systemd/system/systemd-remount-fs.service.requires/rpi3-resizerootfs.service"

     install -m 644 -o root -g root rpi3-generate-ssh-host-keys.service "${ROOT?}/etc/systemd/system"
     mkdir -p "${ROOT?}/etc/systemd/system/multi-user.target.requires/"
     ln -s /etc/systemd/system/rpi3-generate-ssh-host-keys.service "${ROOT?}/etc/systemd/system/multi-user.target.requires/rpi3-generate-ssh-host-keys.service"
     rm -f ${ROOT?}/etc/ssh/ssh_host_*_key*

  root-fs: /

# Clean up archive cache (likely not useful) and lists (likely outdated) to
# reduce image size by several hundred megabytes.
- chroot: /
  shell: |
     apt-get clean
     rm -rf /var/lib/apt/lists

# TODO(https://github.com/larswirzenius/vmdb2/issues/24): remove once vmdb
# clears /etc/resolv.conf on its own.
- shell: |
     rm "${ROOT?}/etc/resolv.conf"
  root-fs: /

I could not run this with the vmdb2 installed from the Debian archive, so I cloned raspi3-image-spec and used the vmdb2 submodule from it. And here are the rest of the commands used for testing the script.

fallocate -l 4G rootdisk.img
# Create one partition with the full disk (",,L" = default start, all
# available space, Linux type)
sfdisk -f rootdisk.img <<EOF
,,L
EOF
kpartx -av rootdisk.img # mounts on /dev/loop0 for me
dd if=vmdb.img of=/dev/loop0
sudo virt-install --name experimental --memory 1024 --disk path=/dev/loop0 --controller type=scsi,model=virtio-scsi --boot hd --network bridge=lxcbr0

Once the VM booted I could see the root file system was 4G in size, instead of the 1.5G it was after using dd to write the image onto it. Success!

16 February, 2019 04:43PM by copyninja


Ben Hutchings

Debian LTS work, January 2019

I was assigned 20 hours of work by Freexian's Debian LTS initiative and carried over 5 hours from December. I worked 24 hours and so will carry over 1 hour.

I prepared another stable update for Linux 3.16 (3.16.63), but did not upload a new release yet.

I also raised the issue that the installer images for Debian 8 "jessie" would need to be updated to include a fix for CVE-2019-3462.

16 February, 2019 04:01PM


David Moreno

Dell XPS 13 9380

Got myself an XPS 13” 9380, which Dell just released in January 2019 as a “Developer Edition” with Ubuntu 18.04 LTS pre-installed.

Ubuntu didn’t last long though. I prefer OS X Mojave to any of the Ubuntu installations. It’s okay though, it’s just not for me.

Big big big shoutout to Jérémy Garrouste for his documentation, only a few days ago, of the Debian Buster installation on this laptop, which helped me figure out that I needed to upgrade the BIOS to fix a couple of issues and be able to run the d-i alpha 5 with the Atheros drivers. Time to dust a few things off here and there :)

IMG_20190215_171421.jpg IMG_20190215_171647.jpg IMG_20190215_171736.jpg IMG_20190215_190933.jpg

Well, that’s not a picture of the alpha d-i, but from the first time I tried to install, before upgrading the BIOS.

16 February, 2019 03:17PM


Steve Kemp

Updated my compiler, and bought a watch.

The simple math-compiler I introduced in my previous post has had a bit of an overhaul, so that now it is fully RPN-based.

Originally the input was RPN-like; now it is RPN for real. It handles error-detection at run-time and generates cleaner assembly-language output.

In other news I bought a new watch, which was a fun way to spend some time.

I love mechanical watches, clocks, and devices such as steam-engines. While watches are full of tiny and intricate parts I like the pretence that you can see how they work, and understand them. Steam engines are seductive because their operation is similar; you can almost look at them and understand how they work.

I've got a small collection of watches at the moment, ranging from €100-€2000 in price, these are universally skeleton-watches, or open-heart watches.

My recent purchase is something different. I was looking at used Rolexes, and found some from the 1970s. That made me suddenly wonder what had been made in the same year as I was born. So I started searching for vintage watches manufactured in 1976. In the end I found a nice Soviet Union piece, made by Raketa. I can't prove that this specific model was actually manufactured that year, but I'll keep up the pretence. If it is +/- 10 years, that's probably close enough.

My personal dream-watch is the Rolex Oyster (I like to avoid complications). The Oyster is beautiful, and I can afford it. But even with insurance I'd feel too paranoid leaving the house with that much money on my wrist. No doubt I'll find a used one, for half that price, sometime. I'm not in a hurry.

(In a horological-sense a "complication" is something above/beyond the regular display of time. So showing the day, the date, or phase of the moon would each be complications.)

16 February, 2019 02:45PM

Molly de Blanc


My father and Fanny Farmer taught me how to make pancakes. Fanny Farmer provided a basic recipe, and Peter de Blanc filled in the details and the important things she missed — the texture of the batter, how you know when it’s time to flip them, the repeatedly emphasized importance of butter.


  • 1 1/2 cup flour
  • 2 1/2 teaspoons baking powder
  • 3 tablespoons sweetener (optional, see note below)
  • 3/4 teaspoon salt
  • 1 egg (slightly beaten)
  • 2 cups milk
  • 3 tablespoons butter (melted)

Why is the sweetener optional? I think if you sweeten the pancake recipe, it’s enough that you don’t want to cover it in maple syrup, jam, etc, etc. So, I usually go without sugar or honey or putting the maple right in, unless I’m making these to eat on the go. ALSO! If you don’t use sugar, you can make them savory.

A glass bowl containing flour, salt, and baking powder.

Make the batter

  1. Mix all the dry ingredients (flour, baking powder, salt, and sweetener if you’re using sugar).
  2. Add most of the wet ingredients (egg, milk, and sweetener if you’re using honey or maple syrup).
  3. Mix together.
  4. Melt the butter in your pan, so the pan gets full of butter. Yum.
  5. While stirring your batter, add in the melted butter slowly.

Cooking the pancakes

A pancake cooking, the top has small bubbles in it. Bubbles slowly forming.

Okay, this is the hardest part for me because it requires lots of patience, which I don’t have enough of.

A few tips:

  • Shamelessly use lots of butter to keep your pancakes from sticking. It will also help them taste delicious.
  • After  you put the pancakes in the pan, they need to cook at a medium temperature for a fairly long time. You want them to be full of little holes and mostly solid before you flip them over. (See photos.)
  • I recommend listening to music and taking dancing breaks.

How to cook pancakes:

A pancake that has not been flipped yet cooking. Almost ready to flip!
  1. Over a medium heat, use your nice, hot, buttery pan.
  2. Use a quarter cup of batter for each pancake. Based on the size of your pan, you should make three to four pancakes per batch.
  3. Let cook for a while. As mentioned above, they should be mostly cooked on the top as well. There ought to be a ring of crispy pancake around the outside (see pictures), but if there’s not it’s still okay!
  4. Flip the pancakes!
  5. Let cook a little bit longer, until the bottom has that pretty golden brown color to match the top.

And that’s it. Eat your pancakes however you’d like. The day I wrote this recipe I heated some maple syrup with vanilla, bourbon, and cinnamon. Yummmmm.

A delicious golden brown pancake, ready to be enjoyed.

P.S. I tagged this post “free software” so it would show up in Planet Debian because I was asked to include a baking post there. This isn’t quite baking, but it’s still pretty good.

16 February, 2019 01:31PM by mollydb


Gunnar Wolf

Debian on the Raspberryscape: Great news!

I already mentioned here having adopted and updated the Raspberry Pi 3 Debian Buster Unofficial Preview image generation project. As you might know, the hardware differences between the three families are quite deep — The original Raspberry Pi (models A and B), as well as the Zero and Zero W, are ARMv6 (which, in Debian-speak, belong to the armel architecture, a.k.a. EABI / Embedded ABI). Raspberry Pi 2 is an ARMv7 (so, we call it armhf or ARM hard-float, as it does support floating point instructions). Finally, the Raspberry Pi 3 is an ARMv8-A (in Debian it corresponds to the ARM64 architecture).

The machines are quite different, but with the CPUs all provided by Broadcom, they share a strange bootloader requirement: the GPU must be initialized to kickstart the CPU (and only then can Linux be started); hence, they require non-free firmware.

Anyway, the image project was targeted at model 3 Raspberries. However...

Thanks (again!) to Romain Perier, I got word that the "lesser" Raspberries can be made to boot from Debian proper, after they are initialized with this dirty, ugly firmware!

I rebuilt the project, targeting armhf instead of arm64. Dropped an extra devicetree blob on the image, to help Linux understand what is connected to the RPi2. Flashed it to my so-far-trusty SD. And... Behold! In the photo above, you can appreciate the Raspberry Pi 2 booting straight Debian, no Raspbian required!

As for the little guy, the Zero that sits atop them, I only have to upload a new version of raspberry3-firmware built also for armel. I will add the needed devicetree files to it. I have to check with the release team members whether it would be possible to rename the package to simply raspberry-firmware (as it's no longer v3-specific).

Why is this relevant? Well, the Raspberry Pi is by far the most popular ARM machine ever. It is a board people love playing with. It is the base for many, many, many projects. And now, finally, it can run with straight Debian! And, of course, if you don't trust me providing clean images, you can prepare them by yourself, trusting the same distribution you have come to trust and love over the years.



16 February, 2019 03:25AM by gwolf

February 15, 2019


Chris Lamb

Book Review: Stories of Your Life and Others

Stories of Your Life and Others (2002)

Ted Chiang

This compilation has been enjoying a renaissance in recent years due to the success of the film Arrival (2016), which is based on the fourth and titular entry in this amazing collection. Don't infer too much from that, however, as whilst this is prima facie just another set of sci-fi tales, it is science fiction in the way that Children of Men is, rather than Babylon 5.

A well-balanced mixture of worlds are evoked throughout with a combination of tales that variously mix the Aristotelian concepts of spectacle (opsis), themes (dianoia), character (ethos) and dialogue (lexis), perhaps best expressed practically in that some stories were extremely striking at the time — one even leading me to rebuff an advance at a bar — and a number were not as remarkable at the time yet continue to occupy my idle thoughts.

The opening tale which reworks the Tower of Babel into a construction project probably remains my overall favourite, but the Dark Materials-esque world summoned in Seventy-Two Letters continues to haunt my mind and the lips of anyone else who has happened to come across it, perhaps becoming the quite-literal story of my life for a brief period. Indeed it could be said that, gifted as a paperback, whilst the whole collection followed me around across a number of locales, it continues to follow me — figuratively speaking, that is — to this day.

Highly recommended to all readers but for those who enjoy discussing books with others it would more than repay any investment.

15 February, 2019 11:10PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Spinning rust woes: Cowbuilder

I use a traditional, spinning rust hard drive as my main volume:

  /dev/sda2 on / type btrfs (rw,relatime,compress=zlib:3,space_cache,subvolid=5,subvol=/)

I just adopted Lars' vmdb2. Of course, I was eager to build and upload my first version and... hit an FTBFS bug due to missing dependencies... Bummer!
So I went to my good ol' cowbuilder (package cowdancer) to fix whatever needed fixing. But it took a long time! Do note that my /var/cache/pbuilder/base.cow was already set up and updated.
  # time cowbuilder --build /home/gwolf/vcs/build-area/vmdb2_0.13.2+git20190215-1.dsc
  (...)
  real 15m55.403s
  user 0m53.734s
  sys 0m23.138s

But... What if I take the spinning rust out of the equation?
  # mkdir /var/cache/pbuilder
  # mount none -t tmpfs /var/cache/pbuilder
  # time rsync -a /var/cache/pbuilderbk/* /var/cache/pbuilder
  real 0m5.363s
  user 0m2.333s
  sys 0m0.709s
  # time cowbuilder --build /home/gwolf/vcs/build-area/vmdb2_0.13.2+git20190215-1.dsc
  (...)
  real 0m52.586s
  user 0m53.076s
  sys 0m8.277s

Close to ¹/₁₆th of the running time — Even including the copy of the base.cow!

OK, I cheated a bit before the rsync, as my cache was already warm... But still, convenient!
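If the RAM is available, the tmpfs can even be made permanent via /etc/fstab. A sketch, assuming 4G is enough to hold base.cow plus a build in flight (remember tmpfs contents vanish on reboot, so the rsync from an on-disk copy has to be repeated after each boot):

```
# /etc/fstab entry for a RAM-backed pbuilder work area.
# The size= value is an assumption; tune it to your base.cow plus build.
tmpfs  /var/cache/pbuilder  tmpfs  defaults,size=4G  0  0
```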

15 February, 2019 06:37PM by gwolf

Michael Stapelberg

Debugging experience in Debian

Recently, a user reported that they don’t see window titles in i3 when running i3 on a Raspberry Pi with Debian.

I copied the latest Raspberry Pi Debian image onto an SD card, booted it, and was able to reproduce the issue.

Conceptually, at this point, I should be able to install and start gdb, set a break point and step through the code.

Enabling debug symbols in Debian

Debian, by default, strips debug symbols when building packages to conserve disk space and network bandwidth. The motivation is very reasonable: most users will never need the debug symbols.

Unfortunately, obtaining debug symbols when you do need them is unreasonably hard.

We begin by configuring an additional apt repository which contains automatically generated debug packages:

raspi# cat >>/etc/apt/sources.list.d/debug.list <<'EOT'
deb http://deb.debian.org/debian-debug buster-debug main contrib non-free
EOT
raspi# apt update

Notably, not all Debian packages have debug packages. As the DebugPackage Debian Wiki page explains, debhelper/9.20151219 started generating debug packages (ending in -dbgsym) automatically. Packages which have not been updated might come with their own debug packages (ending in -dbg) or might not preserve debug symbols at all!

Now that we can install debug packages, how do we know which ones we need?

Finding debug symbol packages in Debian

For debugging i3, we obviously need at least the i3-dbgsym package, but i3 uses a number of other libraries through whose code we may need to step.

The debian-goodies package ships a tool called find-dbgsym-packages which prints the required packages to debug an executable, core dump or running process:

raspi# apt install debian-goodies
raspi# apt install $(find-dbgsym-packages $(which i3))

Now we should have symbol names and line number information available in gdb. But for effectively stepping through the program, access to the source code is required.

Obtaining source code in Debian

Naively, one would assume that apt source should be sufficient for obtaining the source code of any Debian package. However, apt source defaults to the package candidate version, not the version you have installed on your system.

I have addressed this issue with the pk4 tool, which defaults to the installed version.

Before we can extract any sources, we need to configure yet another apt repository:

raspi# cat >>/etc/apt/sources.list.d/source.list <<'EOT'
deb-src http://deb.debian.org/debian buster main contrib non-free
EOT
raspi# apt update
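With a deb-src entry in place, either tool can fetch the sources. A sketch (package names are illustrative; in Debian the i3 binary package is built from the i3-wm source package, and pk4 needs the pk4 package installed):

```
raspi# apt source i3-wm   # candidate version, not necessarily what is installed
raspi# apt install pk4
raspi# pk4 i3             # defaults to the installed version
```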

Regardless of whether you use apt source or pk4, one remaining problem is the directory mismatch: the debug symbols contain a certain path, and that path is typically not where you extracted your sources to. While debugging, you will need to tell gdb about the location of the sources. This is tricky when you debug a call across different source packages:

(gdb) pwd
Working directory /usr/src/i3.
(gdb) list main
229     * the main loop. */
230     ev_unref(main_loop);
231   }
232 }
234 int main(int argc, char *argv[]) {
235  /* Keep a symbol pointing to the I3_VERSION string constant so that
236   * we have it in gdb backtraces. */
237  static const char *_i3_version __attribute__((used)) = I3_VERSION;
238  char *override_configpath = NULL;
(gdb) list xcb_connect
484	../../src/xcb_util.c: No such file or directory.

See Specifying Source Directories in the gdb manual for the dir command which allows you to add multiple directories to the source path. This is pretty tedious, though, and does not work for all programs.
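As a stop-gap, the source path can be patched up per session, either interactively or in ~/.gdbinit. The directories below are examples, assuming the libxcb sources were extracted to /usr/src/libxcb:

```
(gdb) dir /usr/src/i3
(gdb) set substitute-path ../../src /usr/src/libxcb/src
(gdb) list xcb_connect
```

`set substitute-path` rewrites the leading path components recorded in the debug info, which helps when, as above, the compiled-in path is relative.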

Positive example: Fedora

While Fedora conceptually shares all the same steps, the experience on Fedora is so much better: when you run gdb /usr/bin/i3, it will tell you what the next step is:

# gdb /usr/bin/i3
Reading symbols from /usr/bin/i3...(no debugging symbols found)...done.
Missing separate debuginfos, use: dnf debuginfo-install i3-4.16-1.fc28.x86_64

Watch what happens when we run the suggested command:

# dnf debuginfo-install i3-4.16-1.fc28.x86_64
enabling updates-debuginfo repository
enabling fedora-debuginfo repository
  i3-debuginfo.x86_64 4.16-1.fc28
  i3-debugsource.x86_64 4.16-1.fc28

A single command understood our intent, enabled the required repositories and installed the required packages, both for debug symbols and source code (stored in e.g. /usr/src/debug/i3-4.16-1.fc28.x86_64). Unfortunately, gdb doesn’t seem to locate the sources, which seems like a bug to me.

One downside of Fedora’s approach is that gdb will only print all required dependencies once you actually run the program, so you may need to run multiple dnf commands.

In an ideal world

Ideally, none of the manual steps described above would be necessary. It seems absurd to me that so much knowledge is required to efficiently debug programs in Debian. Case in point: I only learnt about find-dbgsym-packages a few days ago when talking to one of its contributors.

Installing gdb should be all that a user needs to do. Debug symbols and sources can be transparently provided through a lazy-loading FUSE file system. If our build/packaging infrastructure assured predictable paths and automated debug symbol extraction, we could have transparent, quick and reliable debugging of all programs within Debian.

NixOS’s dwarffs is an implementation of this idea: https://github.com/edolstra/dwarffs


While I agree with the removal of debug symbols as a general optimization, I think every Linux distribution should strive to provide an entirely transparent debugging experience: you should not even have to know that debug symbols are not present by default. Debian really falls short in this regard.

Getting Debian to a fully transparent debugging experience requires a lot of technical work and a lot of social convincing. In my experience, programmatically working with the Debian archive and packages is tricky, and ensuring that all packages in a Debian release have debug packages (let alone predictable paths) seems entirely unachievable due to the fragmentation of packaging infrastructure and holdouts blocking any progress.

My go-to example is rsync’s debian/rules, which intentionally (!) still has not adopted debhelper. It is not a surprise that there are no debug symbols for rsync in Debian.

15 February, 2019 12:11PM

February 14, 2019

hackergotchi for Chris Lamb

Chris Lamb

Book review: Toujours Provence

Toujours Provence (2001)

Peter Mayle

In my favourite books of 2018 I mentioned that after enjoying the first book in the series I was looking forward to imbibing the second installment of humorous memoirs from British expat M. Mayle in the south of France.

However — and I should have seen this coming really — not only did it not live up to expectations I further can't quite put my finger on exactly why. All of the previous ingredients of the «mise en place» remain; the anecdotes on the locals' quirks & peccadilloes, the florid & often-beautiful phrasing and, of course, further lengthy expositions extolling the merits of a lengthy «dégustation»… but it somehow didn't engage or otherwise "hang together" quite right.

Some obvious candidates might be the lack of a structural through-line (the previous book used the conceit of the authors's renovation of a house in a small village through a particular calendar year) but that seems a little too superficial. Perhaps it was the over-reliance of "silly" stories such as the choir of toads or the truffle dog? Or a somewhat contrived set-piece about Pavarotti? I remain unsure.

That being said, the book is not at all bad or unworthy of your time, it's merely that whilst the next entry in the series remains on my reading list, I fear it has subconsciously slipped down a few places.

14 February, 2019 02:56PM

February 13, 2019

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Futatabi video out

After some delay, the video of my FOSDEM talk about Futatabi, my instant replay system, is out on YouTube. Actually, the official recording has been out for a while, but this is a special edit; nearly all of the computer content has been replaced with clean 720p59.94 versions.

The reason is simple; the talk is so much about discussing smooth versus choppy video, and when you decimate 59.94 fps to 25 fps, everything gets choppy. (When making the talk, I did know about this and took it into account to the degree I could, but it's always better just to get rid of the problem entirely.) As an extra bonus, when I only have one talk to care about and not 750+, I could give it some extra love in the form of hand-transcribed subtitles, audio sync fixes, and so on. Thanks to the FOSDEM team for providing me with the required source material so that I could make this edit.

Oh, and of course, since I now have a framerate interpolation system, the camera has been upsampled from 50 to 59.94 fps, too. :-)

13 February, 2019 11:31PM

Dimitri John Ledkov

Encrypt all the things

xkcd #538: Security
Went into blogger settings and enabled TLS on my custom domain blogger blog. So it is now finally https://blog.surgut.co.uk. However, I do use feedburner and syndicate that to the planet. I am not sure if that gives end-to-end TLS connections, thus I will look into removing feedburner between my blog and the ubuntu/debian planets. My experience with changing feeds in the planets is that I end up spamming everyone. I wonder if I should make a new tag and add that one, and add both feeds to the planet config to avoid spamming old posts.

Next up went into gandi LiveDNS platform and enabled DNSSEC on my domain. It propagated quite quickly, but I believe my domain is now correctly signed with DNSSEC stuff. Next up I guess, is to fix DNSSEC with captive portals. I guess what we really want to have on "wifi" like devices, is to first connect to wifi and not set it as default route. Perform captive portal check, potentially with a reduced DNS server capabilities (ie. no EDNS, no DNSSEC, etc) and only route traffic to the captive portal to authenticate. Once past the captive portal, test and upgrade connectivity to have DNSSEC on. In the cloud, and on the wired connections, I'd expect that DNSSEC should just work, and if it does we should be enforcing DNSSEC validation by default.

So I'll start enforcing DNSSEC on my laptop I think, and will start reporting issues to all of the UK banks if they dare not to have DNSSEC. If I managed to do it, on my own domain, so should they!

Now I need to publish CAA Records to indicate that my sites are supposed to be protected by Let's Encrypt certificates only, to prevent anybody else issuing certificates for my sites and clients trusting them.
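For reference, a CAA record set restricting issuance to Let's Encrypt looks something like this (RFC 6844 syntax; the domain and contact address are placeholders):

```
example.org.  3600  IN  CAA  0 issue "letsencrypt.org"
example.org.  3600  IN  CAA  0 iodef "mailto:hostmaster@example.org"
```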

I think I want to publish SSHFP records for the servers I care about, such that I could potentially use those to trust the fingerprints. Also, at the FOSDEM getdns talk it was mentioned that openssh might not be verifying these by default and/or might need additional settings pointing at the anchor. I will need to dig into that, to see if I need to modify something about this. It did sound odd.
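SSHFP records can be generated straight from a host's public keys with ssh-keygen. A sketch using a throw-away demo key; on a real server you would point -f at /etc/ssh/ssh_host_ed25519_key.pub and paste the output into the zone file:

```shell
# Generate a throw-away key standing in for a host key (demo only).
ssh-keygen -q -t ed25519 -N '' -f demo_key

# Emit SSHFP resource records for the zone file.
ssh-keygen -r host.example.org -f demo_key.pub

# Client side, in ~/.ssh/config, to make ssh consult the records:
#   VerifyHostKeyDNS yes
```

With VerifyHostKeyDNS set to yes (and a validating resolver), ssh can trust the DNSSEC-signed fingerprint instead of prompting on first connect.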

Generated 4k RSA subkeys for my main key. Previously I was using 2k RSA keys, but since I got a new yubikey that supports 4k keys I am upgrading to that. I use the yubikey's OpenPGP application for my signing, encryption, and authentication subkeys, meaning for ssh too. I had to remember to use `gpg --with-keygrip -k` to add the right keygrip to the `~/.gnupg/sshcontrol` file to get the new subkey available in the ssh agent. Also it seems like the order of keygrips in the sshcontrol file matters. Updating the new ssh key in all the places is not fun; I think I have done github, salsa and launchpad so far, but I still need to push the keys onto many of the installed systems.
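The keygrip dance, for the record (the keygrip value below is a placeholder, not a real one):

```
$ gpg --with-keygrip -k you@example.org
...
sub   rsa4096 2019-02-13 [A]
      Keygrip = 0123456789ABCDEF0123456789ABCDEF01234567
$ echo 0123456789ABCDEF0123456789ABCDEF01234567 >> ~/.gnupg/sshcontrol
$ ssh-add -L    # gpg-agent should now offer the new key
```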

Tried to use FIDO2 passwordless login for Windows 10, only to find out that my Dell XPS appears to be incompatible with it, as my laptop seems not to have a TPM. Oh well, I guess I need to upgrade my laptop to one with a TPM2 chip such that I can have self-unlocking encrypted drives, an OTP token displayed on boot, and the like, as was presented at this FOSDEM talk.

Now that cryptsetup 2.1.0 is out and is in Debian and Ubuntu, I guess it's time to reinstall and re-encrypt my laptop, to migrate from LUKS1 to LUKS2. It has a bigger header, so obviously so much better!

Changing phone soon, so will need to regenerate all of the OTP tokens. *sigh* Does anyone backup all the QR codes for them, to quickly re-enroll all the things?

BTW I gave a talk about systemd-resolved at FOSDEM. People didn't like that we do not enable/enforce DNS over TLS, or DNS over HTTPS, or DNSSEC by default. At least, people seemed happy about not leaking queries. But not happy again about caching.

I feel safe.

ps. funny how xkcd uses 2k RSA, not 4k.

13 February, 2019 11:09PM by Dimitri John Ledkov (noreply@blogger.com)

hackergotchi for Jonathan Dowland

Jonathan Dowland

My first FOSDEM

FOSDEM 2019 was my first FOSDEM. My work reason to attend was to meet many of my new team-mates from the Red Hat OpenJDK team, as well as people from the wider OpenJDK community, and learn a bit about what people are up to. I spent most of the first day entirely in the Free Java room, which was consistently over-full. On Monday I attended an OpenJDK Committer's meeting hosted by Oracle (despite not — yet — being an OpenJDK source contributor… soon!)

Aside from work and Java, I thought this would be a great opportunity to catch up with various friends from the Debian community. I didn't do quite as well as I hoped! By coincidence, I sat on a train next to Ben Hutchings. On Friday, I tried to meet up with Steve McIntyre and others (I spotted at least Neil Williams and half a dozen others) for dinner, but alas the restaurant had (literally) nothing on the menu for vegetarians, so I waved and said hello for a mere 5 minutes before moving on.

On Saturday I bumped into Thomas Goirand (who sports a fantastic Debian Swirl umbrella) with whom I was not yet acquainted. I'm fairly sure I saw Mark Brown from across a room but didn't manage to say hello. I also managed a brief hello with Nattie Hutchings who was volunteering at one of the FOSDEM booths. I missed all the talks given by Debian people, including Karen Sandler, Molly De Blanc, irl, Steinar, Samuel Thibault.

Sunday was a little more successful: I did manage to shake Enrico's hand briefly in the queue for tea, and chat with Paul Sladen for all of 5 minutes. I think I bumped into Karen after FOSDEM in the street near my hotel whilst our respective groups searched for somewhere to eat dinner, but I didn't introduce myself. Finally I met Matthias Klose on Monday.

Quite apart from Debian people, I also failed to meet some Red Hat colleagues and fellow PhD students from Newcastle University who were in attendance, as well as several people from other social networks to which I'd hoped to say hello.

FOSDEM is a gigantic, unique conference, and there are definitely some more successful strategies for getting the most out of it. If I were to go again, I'd be more relaxed about seeing the talks I wanted to in real time (although I didn't have unrealistic expectations about that for this one); I'd collect more freebie stickers (not for me, but for my daughter!); and I'd try much harder to pre-arrange social get-togethers with friends from various F/OSS communities for the "corridor track", as well as dinners and such around the edges. Things that worked: my tea flask was very handy, and using a lightweight messenger bag instead of my normal backpack made getting in and out of places much easier. Things that didn't: I expected it to be much colder than it turned out to be, and wore my warmest jumper, which meant I was hot a lot of the time and had to stuff it (plus bulky winter gloves and hat) into the aforementioned messenger bag; bringing my own stash of tea bags and a large chocolate tear-and-share brioche for the hotel; and in general I over-packed, although that wasn't a problem for the conference itself, just travelling to/from Brussels. I did manage to use the hotel swimming pool once, but it was generally a trade-off between a swim or sleeping for another 30 minutes.

I've written nothing at all about the talks themselves, yet, perhaps I'll do so in another post.

13 February, 2019 05:31PM

hackergotchi for Olivier Berger

Olivier Berger

Two demos of programming inside the Web browser with Theia for Java and PHP, using Docker

Theia is a Web IDE that can be used to offer a development environment displayed in a Web browser.

I’ve recorded 2 screencasts of running Theia inside Docker containers, to develop Java and PHP applications, respectively using Maven and Composer (the latter is a Symfony application).

Here are the videos :

I’ve pushed the Dockerfiles I’ve used here (Java) and here (PHP).

This is the kind of environment we're considering for the "virtual labs" that we could use for teaching purposes.

We’re considering integrating them in Janitor for giving access to these environments on a server, and maybe mixing them with Labtainers for auto-grading of labs, for instance.
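To give a flavour of what such a container can look like, here is a hypothetical sketch, not the published Dockerfile (the base image and package list are assumptions), starting from a community Theia image and adding a Java toolchain:

```
# Hypothetical sketch; see the linked repositories for the real Dockerfiles.
FROM theiaide/theia:latest
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends default-jdk maven && \
    rm -rf /var/lib/apt/lists/*
```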

I must thank etiennewan for introducing me to Janitor and Theia. So nice to have wonderful students that teach their professors 😉

13 February, 2019 08:58AM by Olivier Berger

hackergotchi for Keith Packard

Keith Packard


snekde — an IDE for snek development

I had hoped to create a stand-alone development environment on the Arduino, but I've run out of room. The current snek image uses 32606 bytes of flash (out of 32768) and 1980 bytes of RAM (out of 2048). I can probably squeeze a few more bytes out, but making enough room for a text editor seems like a stretch.

As a back-up plan, I've written a host-side application that communicates with the Arduino over the serial port.

Python + Curses

I looked at a range of options for writing this code, from Java to PyTk and finally settled on Python and Curses. This requires a minimal number of Python packages on the target system (just curses and pyserial), and seems to run just fine on both Linux and Windows. I haven't tried it on OS X, but I imagine it will work fine there too.

This creates a pretty spartan UI, but it's not a lot of code (about 800 lines), and installing it should be pretty easy.

What Can It Do

It's not the world's most sophisticated IDE, but it does the basics:

  • Get and Put source code to EEPROM on the Arduino.

  • Edit source code, including auto-indent and a cut/paste buffer.

  • Load and Save source code to files on the host.

  • Interact with the snek command line, saving all output for review.

Where Now for Snek?

I think I'll let a few students give this a try and see if they can make anything of it. I expect to get lots of feedback for stuff which is wrong, confusing or undocumented which should help make it better pretty quickly.

snekde source: https://keithp.com/cgit/snek.git/tree/snekde/snekde.py

13 February, 2019 06:43AM

February 12, 2019

hackergotchi for Jonathan McDowell

Jonathan McDowell

Gemini NC14 + Debian

My main machine is a Dell E7240. It’s 5 years old and, while a bit slow sometimes, is generally still capable of doing all I need. However it mostly lives in an E-Port Plus II dock and gets treated like a desktop. As a result I don’t tend to move it around the house; the external monitor has a higher resolution than the internal 1080p and I’m often running things on it where it would be inconvenient to have to suspend it. So I decided I’d look for a basic laptop that could act as a simple terminal and web browser. This seems like an ideal job for a Chromebook, but I wanted a decent resolution screen and all of the cheap Chromebooks were 1366x768.

Looking around I found the Gemini Devices NC14. This is a Celeron N3350-based device with 4GB RAM and a 14” 1080p LCD. For £180 that seemed like a decent spec, much better than anything else I could see for under £200. Included storage is limited to a 32GB eMMC, with a slot for an m.2 SSD if desired, but as I’m not planning to store anything other than the OS/applications on the device that wasn’t a drawback to me. Box seems to be the only supplier, though they also list on Amazon. I chose Amazon because that avoided paying extra for shipping to Northern Ireland.

The laptop comes with just a wall-wart style power supply - there’s no paperwork or anything else in the box. The PSU is a 12V/2A model and the cable is only slightly more than 1m long. However there’s also a USB-C port on the left side of the laptop and it will charge from that; it didn’t work with any of my USB-C phone chargers, but worked just fine with my Lenovo laptop charger. The USB-C port does USB, as you’d expect, but surprisingly is also set up for DisplayPort - I plugged in a standard USB-C → HDMI adaptor and it worked perfectly. Additional ports include 2 standard USB 3.0 ports, a mini-HDMI port, a 3.5mm audio jack and a micro SD card slot. The whole device is pretty light too, coming in at about 1.37kg. It feels cheap, but not flimsy - not unreasonable given the price point. The keyboard is ok; not a great amount of travel and slightly offset from what I’m used to on the right hand side (there is a column of home/pgup/pgdn/end to the right of the enter key). The worst aspect is that the power button is a regular key in the top right, so it's easy to hit when looking for delete. The trackpad is serviceable; the middle button is a little tricky to hit sometimes, but there and useful.

Software-wise it is supplied with Windows 10 Home. I didn’t boot it, instead choosing to install Debian Buster via the Alpha 4 installer (Alpha 5 has since been released). There were no hiccups here; I did a UEFI based install overwriting the Windows installation and chose LXDE as my desktop environment. I’m still not entirely convinced by it (my other machines run GNOME3), but with the hardware being lower spec I didn’t want to try too much. I added Chrome - I plan to leave the laptop running Buster rather than testing, so regular updates to the browser direct from Google are appealing. LXDE’s default LXTerminal works just fine as the terminal emulator (though I did hit #908760 when trying to click on URLs to open them).

How do I find it? I’m pretty pleased with my purchase. I’ve had it just over 2 weeks at the point of writing, and I’m using it to write this post (ssh’d into my laptop - I’ve longer-term plans to use a different machine as the grunt). Chrome can sometimes be a little sluggish to open a new URL - I think this is due to the slow internal eMMC and trying to lookup autocomplete suggestions from previous visits - but there’s no problem with responsiveness after that point. Youtube videos play just fine. Running a whole bunch of terminals doesn’t cause any issues, as you’d hope. I’m running a single virtual desktop with Chrome full-screened and one with a bunch of lxterminals and it’s all very usable. Battery life is excellent, though acpi reports obviously inaccurate timings (currently, with 16% battery left, it’s reporting 5hr35 runtime) I think I’m probably seeing 8+ hours. One oddity I did see is with the keyboard; the enter key actually returns KEY_KPENTER, which makes less(1) unhappy, as well as some other things. I fixed it using xmodmap -e 'keycode 104 = Return NoSymbol Return', which maps it back to KEY_ENTER, and I’ve had a fix accepted into udev/systemd to fix it up automatically.

microcode: microcode updated early to revision 0x32, date = 2018-05-11
Linux version 4.19.0-2-amd64 (debian-kernel@lists.debian.org) (gcc version 8.2.0 (Debian 8.2.0-14)) #1 SMP Debian 4.19.16-1 (2019-01-17)
Command line: BOOT_IMAGE=/boot/vmlinuz-4.19.0-2-amd64 root=UUID=57a681dd-c949-4287-be18-9d7b0f3f2b45 ro quiet
x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
x86/fpu: xstate_offset[3]:  576, xstate_sizes[3]:   64
x86/fpu: xstate_offset[4]:  640, xstate_sizes[4]:   64
x86/fpu: Enabled xstate features 0x1b, context size is 704 bytes, using 'compacted' format.
BIOS-provided physical RAM map:
BIOS-e820: [mem 0x0000000000000000-0x000000000003efff] usable
BIOS-e820: [mem 0x000000000003f000-0x000000000003ffff] reserved
BIOS-e820: [mem 0x0000000000040000-0x000000000009dfff] usable
BIOS-e820: [mem 0x000000000009e000-0x00000000000fffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000000fffffff] usable
BIOS-e820: [mem 0x0000000010000000-0x0000000012150fff] reserved
BIOS-e820: [mem 0x0000000012151000-0x00000000768bcfff] usable
BIOS-e820: [mem 0x00000000768bd000-0x0000000079a0afff] reserved
BIOS-e820: [mem 0x0000000079a0b000-0x0000000079a26fff] ACPI data
BIOS-e820: [mem 0x0000000079a27000-0x0000000079a8afff] ACPI NVS
BIOS-e820: [mem 0x0000000079a8b000-0x0000000079ddffff] reserved
BIOS-e820: [mem 0x0000000079de0000-0x0000000079e34fff] type 20
BIOS-e820: [mem 0x0000000079e35000-0x000000007a1acfff] usable
BIOS-e820: [mem 0x000000007a1ad000-0x000000007a1adfff] ACPI NVS
BIOS-e820: [mem 0x000000007a1ae000-0x000000007a1c7fff] reserved
BIOS-e820: [mem 0x000000007a1c8000-0x000000007a762fff] usable
BIOS-e820: [mem 0x000000007a763000-0x000000007a764fff] reserved
BIOS-e820: [mem 0x000000007a765000-0x000000007affffff] usable
BIOS-e820: [mem 0x000000007b000000-0x000000007fffffff] reserved
BIOS-e820: [mem 0x00000000d0000000-0x00000000d0ffffff] reserved
BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
BIOS-e820: [mem 0x00000000fe042000-0x00000000fe044fff] reserved
BIOS-e820: [mem 0x00000000fe900000-0x00000000fe902fff] reserved
BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
BIOS-e820: [mem 0x00000000fed01000-0x00000000fed01fff] reserved
BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
BIOS-e820: [mem 0x00000000ff800000-0x00000000ffffffff] reserved
BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
NX (Execute Disable) protection: active
efi: EFI v2.50 by American Megatrends
efi:  ACPI=0x79a10000  ACPI 2.0=0x79a10000  SMBIOS=0x79c98000  SMBIOS 3.0=0x79c97000  ESRT=0x73860f18 
secureboot: Secure boot could not be determined (mode 0)
SMBIOS 3.0.0 present.
DMI: Gemini Devices NC14V1006/To be filled by O.E.M., BIOS XW-BI-14-S133AR400-AA54M-046-A 01/04/2018
tsc: Fast TSC calibration using PIT
tsc: Detected 1094.400 MHz processor
e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
e820: remove [mem 0x000a0000-0x000fffff] usable
last_pfn = 0x180000 max_arch_pfn = 0x400000000
MTRR default type: uncachable
MTRR fixed ranges enabled:
  00000-6FFFF write-back
  70000-7FFFF uncachable
  80000-9FFFF write-back
  A0000-BFFFF uncachable
  C0000-FFFFF write-protect
MTRR variable ranges enabled:
  0 base 0000000000 mask 7F80000000 write-back
  1 base 007C000000 mask 7FFC000000 uncachable
  2 base 007B000000 mask 7FFF000000 uncachable
  3 base 0100000000 mask 7F80000000 write-back
  4 base 00FF800000 mask 7FFF800000 write-combining
  5 base 0090000000 mask 7FF0000000 write-through
  6 disabled
  7 disabled
  8 disabled
  9 disabled
x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
last_pfn = 0x7b000 max_arch_pfn = 0x400000000
esrt: Reserving ESRT space from 0x0000000073860f18 to 0x0000000073860f50.
Base memory trampoline at [(____ptrval____)] 97000 size 24576
Using GB pages for direct mapping
BRK [0x19001000, 0x19001fff] PGTABLE
BRK [0x19002000, 0x19002fff] PGTABLE
BRK [0x19003000, 0x19003fff] PGTABLE
BRK [0x19004000, 0x19004fff] PGTABLE
BRK [0x19005000, 0x19005fff] PGTABLE
BRK [0x19006000, 0x19006fff] PGTABLE
BRK [0x19007000, 0x19007fff] PGTABLE
RAMDISK: [mem 0x34d25000-0x36689fff]
ACPI: Early table checksum verification disabled
ACPI: RSDP 0x0000000079A10000 000024 (v02 ALASKA)
ACPI: XSDT 0x0000000079A100C0 0000F4 (v01 ALASKA A M I    01072009 AMI  00010013)
ACPI: FACP 0x0000000079A19030 000114 (v06 ALASKA A M I    01072009 AMI  00010013)
ACPI: DSDT 0x0000000079A10260 008DCF (v02 ALASKA A M I    01072009 INTL 20120913)
ACPI: FACS 0x0000000079A8A080 000040
ACPI: FPDT 0x0000000079A19150 000044 (v01 ALASKA A M I    01072009 AMI  00010013)
ACPI: FIDT 0x0000000079A191A0 00009C (v01 ALASKA A M I    01072009 AMI  00010013)
ACPI: MSDM 0x0000000079A19240 000055 (v03 ALASKA A M I    01072009 AMI  00010013)
ACPI: MCFG 0x0000000079A192A0 00003C (v01 ALASKA A M I    01072009 MSFT 00000097)
ACPI: DBG2 0x0000000079A192E0 000072 (v00 INTEL  EDK2     00000003 BRXT 0100000D)
ACPI: DBGP 0x0000000079A19360 000034 (v01 INTEL  EDK2     00000003 BRXT 0100000D)
ACPI: HPET 0x0000000079A193A0 000038 (v01 INTEL  EDK2     00000003 BRXT 0100000D)
ACPI: LPIT 0x0000000079A193E0 00005C (v01 INTEL  EDK2     00000003 BRXT 0100000D)
ACPI: APIC 0x0000000079A19440 000084 (v03 INTEL  EDK2     00000003 BRXT 0100000D)
ACPI: NPKT 0x0000000079A194D0 000065 (v01 INTEL  EDK2     00000003 BRXT 0100000D)
ACPI: PRAM 0x0000000079A19540 000030 (v01 INTEL  EDK2     00000003 BRXT 0100000D)
ACPI: WSMT 0x0000000079A19570 000028 (v01 INTEL  EDK2     00000003 BRXT 0100000D)
ACPI: SSDT 0x0000000079A195A0 00414C (v02 INTEL  DptfTab  00000003 BRXT 0100000D)
ACPI: SSDT 0x0000000079A1D6F0 003621 (v02 INTEL  RVPRtd3  00000003 BRXT 0100000D)
ACPI: SSDT 0x0000000079A20D20 00077D (v02 INTEL  UsbCTabl 00000003 BRXT 0100000D)
ACPI: SSDT 0x0000000079A214A0 001611 (v01 Intel_ Platform 00001000 INTL 20120913)
ACPI: SSDT 0x0000000079A22AC0 0003DF (v02 PmRef  Cpu0Ist  00003000 INTL 20120913)
ACPI: SSDT 0x0000000079A22EA0 00072B (v02 CpuRef CpuSsdt  00003000 INTL 20120913)
ACPI: SSDT 0x0000000079A235D0 00032D (v02 PmRef  Cpu0Tst  00003000 INTL 20120913)
ACPI: SSDT 0x0000000079A23900 00017C (v02 PmRef  ApTst    00003000 INTL 20120913)
ACPI: SSDT 0x0000000079A23A80 002760 (v02 SaSsdt SaSsdt   00003000 INTL 20120913)
ACPI: UEFI 0x0000000079A261E0 000042 (v01 ALASKA A M I    00000000      00000000)
ACPI: TPM2 0x0000000079A26230 000034 (v03        Tpm2Tabl 00000001 AMI  00000000)
ACPI: DMAR 0x0000000079A26270 0000A8 (v01 INTEL  EDK2     00000003 BRXT 0100000D)
ACPI: WDAT 0x0000000079A26320 000104 (v01                 00000000      00000000)
ACPI: Local APIC address 0xfee00000
No NUMA configuration found
Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffcfff]
Zone ranges:
  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
  Normal   [mem 0x0000000100000000-0x000000017fffffff]
  Device   empty
Movable zone start for each node
Early memory node ranges
  node   0: [mem 0x0000000000001000-0x000000000003efff]
  node   0: [mem 0x0000000000040000-0x000000000009dfff]
  node   0: [mem 0x0000000000100000-0x000000000fffffff]
  node   0: [mem 0x0000000012151000-0x00000000768bcfff]
  node   0: [mem 0x0000000079e35000-0x000000007a1acfff]
  node   0: [mem 0x000000007a1c8000-0x000000007a762fff]
  node   0: [mem 0x000000007a765000-0x000000007affffff]
  node   0: [mem 0x0000000100000000-0x000000017fffffff]
Reserved but unavailable: 98 pages
Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
On node 0 totalpages: 1005750
  DMA zone: 64 pages used for memmap
  DMA zone: 23 pages reserved
  DMA zone: 3996 pages, LIFO batch:0
  DMA32 zone: 7461 pages used for memmap
  DMA32 zone: 477466 pages, LIFO batch:63
  Normal zone: 8192 pages used for memmap
  Normal zone: 524288 pages, LIFO batch:63
Reserving Intel graphics memory at [mem 0x7c000000-0x7fffffff]
ACPI: PM-Timer IO Port: 0x408
ACPI: Local APIC address 0xfee00000
ACPI: LAPIC_NMI (acpi_id[0x01] high level lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x02] high level lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x03] high level lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x04] high level lint[0x1])
IOAPIC[0]: apic_id 1, version 32, address 0xfec00000, GSI 0-119
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
ACPI: IRQ0 used by override.
ACPI: IRQ9 used by override.
Using ACPI (MADT) for SMP configuration information
ACPI: HPET id: 0x8086a701 base: 0xfed00000
smpboot: Allowing 4 CPUs, 2 hotplug CPUs
PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
PM: Registered nosave memory: [mem 0x0003f000-0x0003ffff]
PM: Registered nosave memory: [mem 0x0009e000-0x000fffff]
PM: Registered nosave memory: [mem 0x10000000-0x12150fff]
PM: Registered nosave memory: [mem 0x768bd000-0x79a0afff]
PM: Registered nosave memory: [mem 0x79a0b000-0x79a26fff]
PM: Registered nosave memory: [mem 0x79a27000-0x79a8afff]
PM: Registered nosave memory: [mem 0x79a8b000-0x79ddffff]
PM: Registered nosave memory: [mem 0x79de0000-0x79e34fff]
PM: Registered nosave memory: [mem 0x7a1ad000-0x7a1adfff]
PM: Registered nosave memory: [mem 0x7a1ae000-0x7a1c7fff]
PM: Registered nosave memory: [mem 0x7a763000-0x7a764fff]
PM: Registered nosave memory: [mem 0x7b000000-0x7fffffff]
PM: Registered nosave memory: [mem 0x80000000-0xcfffffff]
PM: Registered nosave memory: [mem 0xd0000000-0xd0ffffff]
PM: Registered nosave memory: [mem 0xd1000000-0xdfffffff]
PM: Registered nosave memory: [mem 0xe0000000-0xefffffff]
PM: Registered nosave memory: [mem 0xf0000000-0xfe041fff]
PM: Registered nosave memory: [mem 0xfe042000-0xfe044fff]
PM: Registered nosave memory: [mem 0xfe045000-0xfe8fffff]
PM: Registered nosave memory: [mem 0xfe900000-0xfe902fff]
PM: Registered nosave memory: [mem 0xfe903000-0xfebfffff]
PM: Registered nosave memory: [mem 0xfec00000-0xfec00fff]
PM: Registered nosave memory: [mem 0xfec01000-0xfed00fff]
PM: Registered nosave memory: [mem 0xfed01000-0xfed01fff]
PM: Registered nosave memory: [mem 0xfed02000-0xfedfffff]
PM: Registered nosave memory: [mem 0xfee00000-0xfee00fff]
PM: Registered nosave memory: [mem 0xfee01000-0xff7fffff]
PM: Registered nosave memory: [mem 0xff800000-0xffffffff]
[mem 0x80000000-0xcfffffff] available for PCI devices
Booting paravirtualized kernel on bare hardware
clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
random: get_random_bytes called from start_kernel+0x93/0x531 with crng_init=0
setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
percpu: Embedded 44 pages/cpu @(____ptrval____) s143192 r8192 d28840 u524288
pcpu-alloc: s143192 r8192 d28840 u524288 alloc=1*2097152
pcpu-alloc: [0] 0 1 2 3 
Built 1 zonelists, mobility grouping on.  Total pages: 990010
Policy zone: Normal
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.19.0-2-amd64 root=UUID=57a681dd-c949-4287-be18-9d7b0f3f2b45 ro quiet
Calgary: detecting Calgary via BIOS EBDA area
Calgary: Unable to locate Rio Grande table in EBDA - bailing!
Memory: 3729732K/4023000K available (10252K kernel code, 1236K rwdata, 3196K rodata, 1572K init, 2332K bss, 293268K reserved, 0K cma-reserved)
SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
ftrace: allocating 31615 entries in 124 pages
rcu: Hierarchical RCU implementation.
rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
NR_IRQS: 33024, nr_irqs: 1024, preallocated irqs: 16
Console: colour dummy device 80x25
console [tty0] enabled
ACPI: Core revision 20180810
clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 99544814920 ns
hpet clockevent registered
APIC: Switch to symmetric I/O mode setup
DMAR: Host address width 39
DMAR: DRHD base: 0x000000fed64000 flags: 0x0
DMAR: dmar0: reg_base_addr fed64000 ver 1:0 cap 1c0000c40660462 ecap 7e3ff0505e
DMAR: DRHD base: 0x000000fed65000 flags: 0x1
DMAR: dmar1: reg_base_addr fed65000 ver 1:0 cap d2008c40660462 ecap f050da
DMAR: RMRR base: 0x000000799b6000 end: 0x000000799d5fff
DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff
DMAR-IR: IOAPIC id 1 under DRHD base  0xfed65000 IOMMU 1
DMAR-IR: HPET id 0 under DRHD base 0xfed65000
DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
DMAR-IR: Enabled IRQ remapping in x2apic mode
x2apic enabled
Switched APIC routing to cluster x2apic.
..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0xfc66f4fc7c, max_idle_ns: 440795224246 ns
Calibrating delay loop (skipped), value calculated using timer frequency.. 2188.80 BogoMIPS (lpj=4377600)
pid_max: default: 32768 minimum: 301
Security Framework initialized
Yama: disabled by default; enable with sysctl kernel.yama.*
AppArmor: AppArmor initialized
Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes)
Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)
Mount-cache hash table entries: 8192 (order: 4, 65536 bytes)
Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes)
mce: CPU supports 7 MCE banks
Last level iTLB entries: 4KB 48, 2MB 0, 4MB 0
Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Spectre V2 : Mitigation: Full generic retpoline
Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Spectre V2 : Enabling Restricted Speculation for firmware calls
Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Freeing SMP alternatives memory: 24K
TSC deadline timer enabled
smpboot: CPU0: Intel(R) Celeron(R) CPU N3350 @ 1.10GHz (family: 0x6, model: 0x5c, stepping: 0x9)
Performance Events: PEBS fmt3+, Goldmont events, 32-deep LBR, full-width counters, Intel PMU driver.
... version:                4
... bit width:              48
... generic registers:      4
... value mask:             0000ffffffffffff
... max period:             00007fffffffffff
... fixed-purpose events:   3
... event mask:             000000070000000f
rcu: Hierarchical SRCU implementation.
NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
smp: Bringing up secondary CPUs ...
x86: Booting SMP configuration:
.... node  #0, CPUs:      #1
smp: Brought up 1 node, 2 CPUs
smpboot: Max logical packages: 2
smpboot: Total of 2 processors activated (4377.60 BogoMIPS)
devtmpfs: initialized
x86/mm: Memory block size: 128MB
PM: Registering ACPI NVS region [mem 0x79a27000-0x79a8afff] (409600 bytes)
PM: Registering ACPI NVS region [mem 0x7a1ad000-0x7a1adfff] (4096 bytes)
clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
futex hash table entries: 1024 (order: 4, 65536 bytes)
pinctrl core: initialized pinctrl subsystem
NET: Registered protocol family 16
audit: initializing netlink subsys (disabled)
audit: type=2000 audit(1549808778.056:1): state=initialized audit_enabled=0 res=1
cpuidle: using governor ladder
cpuidle: using governor menu
ACPI: bus type PCI registered
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
PCI: Using configuration type 1 for base access
HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
ACPI: Added _OSI(Module Device)
ACPI: Added _OSI(Processor Device)
ACPI: Added _OSI(3.0 _SCP Extensions)
ACPI: Added _OSI(Processor Aggregator Device)
ACPI: Added _OSI(Linux-Dell-Video)
ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
ACPI: 10 ACPI AML tables successfully acquired and loaded
ACPI: Dynamic OEM Table Load:
ACPI: SSDT 0xFFFF975CBA502800 000102 (v02 PmRef  Cpu0Cst  00003001 INTL 20120913)
ACPI: Dynamic OEM Table Load:
ACPI: SSDT 0xFFFF975CBA5A2A00 00015F (v02 PmRef  ApIst    00003000 INTL 20120913)
ACPI: Dynamic OEM Table Load:
ACPI: SSDT 0xFFFF975CBA4CA840 00008D (v02 PmRef  ApCst    00003000 INTL 20120913)
ACPI: EC: EC started
ACPI: EC: interrupt blocked
ACPI: \_SB_.PCI0.SBRG.H_EC: Used as first EC
ACPI: \_SB_.PCI0.SBRG.H_EC: GPE=0x2c, EC_CMD/EC_SC=0x66, EC_DATA=0x62
ACPI: \_SB_.PCI0.SBRG.H_EC: Used as boot DSDT EC to handle transactions
ACPI: Interpreter enabled
ACPI: (supports S0 S3 S4 S5)
ACPI: Using IOAPIC for interrupt routing
PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
ACPI: Enabled 9 GPEs in block 00 to 7F
ACPI: Power Resource [SPPR] (on)
ACPI: Power Resource [SPPR] (on)
ACPI: Power Resource [UPPR] (on)
ACPI: Power Resource [PX03] (on)
ACPI: Power Resource [UPPR] (on)
ACPI: Power Resource [UPPR] (on)
ACPI: Power Resource [UPPR] (on)
ACPI: Power Resource [UPPR] (on)
ACPI: Power Resource [UPPR] (on)
ACPI: Power Resource [USBC] (on)
ACPI: Power Resource [LSPR] (on)
ACPI: Power Resource [SDPR] (on)
ACPI: Power Resource [PXP] (on)
ACPI: Power Resource [PXP] (on)
ACPI: Power Resource [PXP] (off)
ACPI: Power Resource [PXP] (off)
ACPI: Power Resource [PAUD] (on)
ACPI: Power Resource [FN00] (on)
ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI]
acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR]
PCI host bridge to bus 0000:00
pci_bus 0000:00: root bus resource [io  0x0070-0x0077]
pci_bus 0000:00: root bus resource [io  0x0000-0x006f window]
pci_bus 0000:00: root bus resource [io  0x0078-0x0cf7 window]
pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
pci_bus 0000:00: root bus resource [mem 0x7c000001-0x7fffffff window]
pci_bus 0000:00: root bus resource [mem 0x7b800001-0x7bffffff window]
pci_bus 0000:00: root bus resource [mem 0x80000000-0xcfffffff window]
pci_bus 0000:00: root bus resource [mem 0xe0000000-0xefffffff window]
pci_bus 0000:00: root bus resource [bus 00-ff]
pci 0000:00:00.0: [8086:5af0] type 00 class 0x060000
pci 0000:00:00.1: [8086:5a8c] type 00 class 0x118000
pci 0000:00:00.1: reg 0x10: [mem 0x80000000-0x80007fff 64bit]
pci 0000:00:02.0: [8086:5a85] type 00 class 0x030000
pci 0000:00:02.0: reg 0x10: [mem 0x81000000-0x81ffffff 64bit]
pci 0000:00:02.0: reg 0x18: [mem 0x90000000-0x9fffffff 64bit pref]
pci 0000:00:02.0: reg 0x20: [io  0xf000-0xf03f]
pci 0000:00:02.0: BAR 2: assigned to efifb
pci 0000:00:0e.0: [8086:5a98] type 00 class 0x040300
pci 0000:00:0e.0: reg 0x10: [mem 0x82210000-0x82213fff 64bit]
pci 0000:00:0e.0: reg 0x20: [mem 0x82000000-0x820fffff 64bit]
pci 0000:00:0e.0: PME# supported from D0 D3hot D3cold
pci 0000:00:0f.0: [8086:5a9a] type 00 class 0x078000
pci 0000:00:0f.0: reg 0x10: [mem 0x82239000-0x82239fff 64bit]
pci 0000:00:0f.0: PME# supported from D3hot
pci 0000:00:14.0: [8086:5ad7] type 01 class 0x060400
pci 0000:00:14.0: PME# supported from D0 D3hot D3cold
pci 0000:00:15.0: [8086:5aa8] type 00 class 0x0c0330
pci 0000:00:15.0: reg 0x10: [mem 0x82200000-0x8220ffff 64bit]
pci 0000:00:15.0: PME# supported from D3hot D3cold
pci 0000:00:16.0: [8086:5aac] type 00 class 0x118000
pci 0000:00:16.0: reg 0x10: [mem 0x82236000-0x82236fff 64bit]
pci 0000:00:16.0: reg 0x18: [mem 0x82235000-0x82235fff 64bit]
pci 0000:00:16.1: [8086:5aae] type 00 class 0x118000
pci 0000:00:16.1: reg 0x10: [mem 0x82234000-0x82234fff 64bit]
pci 0000:00:16.1: reg 0x18: [mem 0x82233000-0x82233fff 64bit]
pci 0000:00:16.2: [8086:5ab0] type 00 class 0x118000
pci 0000:00:16.2: reg 0x10: [mem 0x82232000-0x82232fff 64bit]
pci 0000:00:16.2: reg 0x18: [mem 0x82231000-0x82231fff 64bit]
pci 0000:00:16.3: [8086:5ab2] type 00 class 0x118000
pci 0000:00:16.3: reg 0x10: [mem 0x82230000-0x82230fff 64bit]
pci 0000:00:16.3: reg 0x18: [mem 0x8222f000-0x8222ffff 64bit]
pci 0000:00:17.0: [8086:5ab4] type 00 class 0x118000
pci 0000:00:17.0: reg 0x10: [mem 0x8222e000-0x8222efff 64bit]
pci 0000:00:17.0: reg 0x18: [mem 0x8222d000-0x8222dfff 64bit]
pci 0000:00:17.1: [8086:5ab6] type 00 class 0x118000
pci 0000:00:17.1: reg 0x10: [mem 0x8222c000-0x8222cfff 64bit]
pci 0000:00:17.1: reg 0x18: [mem 0x8222b000-0x8222bfff 64bit]
pci 0000:00:17.2: [8086:5ab8] type 00 class 0x118000
pci 0000:00:17.2: reg 0x10: [mem 0x8222a000-0x8222afff 64bit]
pci 0000:00:17.2: reg 0x18: [mem 0x82229000-0x82229fff 64bit]
pci 0000:00:17.3: [8086:5aba] type 00 class 0x118000
pci 0000:00:17.3: reg 0x10: [mem 0x82228000-0x82228fff 64bit]
pci 0000:00:17.3: reg 0x18: [mem 0x82227000-0x82227fff 64bit]
pci 0000:00:18.0: [8086:5abc] type 00 class 0x118000
pci 0000:00:18.0: reg 0x10: [mem 0x82226000-0x82226fff 64bit]
pci 0000:00:18.0: reg 0x18: [mem 0x82225000-0x82225fff 64bit]
pci 0000:00:18.1: [8086:5abe] type 00 class 0x118000
pci 0000:00:18.1: reg 0x10: [mem 0x82224000-0x82224fff 64bit]
pci 0000:00:18.1: reg 0x18: [mem 0x82223000-0x82223fff 64bit]
pci 0000:00:18.2: [8086:5ac0] type 00 class 0x118000
pci 0000:00:18.2: reg 0x10: [mem 0xfea10000-0xfea10fff 64bit]
pci 0000:00:18.2: reg 0x18: [mem 0x00000000-0x00000fff 64bit]
pci 0000:00:18.3: [8086:5aee] type 00 class 0x118000
pci 0000:00:18.3: reg 0x10: [mem 0x82222000-0x82222fff 64bit]
pci 0000:00:18.3: reg 0x18: [mem 0x82221000-0x82221fff 64bit]
pci 0000:00:19.0: [8086:5ac2] type 00 class 0x118000
pci 0000:00:19.0: reg 0x10: [mem 0x82220000-0x82220fff 64bit]
pci 0000:00:19.0: reg 0x18: [mem 0x8221f000-0x8221ffff 64bit]
pci 0000:00:19.1: [8086:5ac4] type 00 class 0x118000
pci 0000:00:19.1: reg 0x10: [mem 0x8221e000-0x8221efff 64bit]
pci 0000:00:19.1: reg 0x18: [mem 0x8221d000-0x8221dfff 64bit]
pci 0000:00:19.2: [8086:5ac6] type 00 class 0x118000
pci 0000:00:19.2: reg 0x10: [mem 0x8221c000-0x8221cfff 64bit]
pci 0000:00:19.2: reg 0x18: [mem 0x8221b000-0x8221bfff 64bit]
pci 0000:00:1b.0: [8086:5aca] type 00 class 0x080501
pci 0000:00:1b.0: reg 0x10: [mem 0x8221a000-0x8221afff 64bit]
pci 0000:00:1b.0: reg 0x18: [mem 0x82219000-0x82219fff 64bit]
pci 0000:00:1c.0: [8086:5acc] type 00 class 0x080501
pci 0000:00:1c.0: reg 0x10: [mem 0x82218000-0x82218fff 64bit]
pci 0000:00:1c.0: reg 0x18: [mem 0x82217000-0x82217fff 64bit]
pci 0000:00:1e.0: [8086:5ad0] type 00 class 0x080501
pci 0000:00:1e.0: reg 0x10: [mem 0x82216000-0x82216fff 64bit]
pci 0000:00:1e.0: reg 0x18: [mem 0x82215000-0x82215fff 64bit]
pci 0000:00:1f.0: [8086:5ae8] type 00 class 0x060100
pci 0000:00:1f.1: [8086:5ad4] type 00 class 0x0c0500
pci 0000:00:1f.1: reg 0x10: [mem 0x82214000-0x822140ff 64bit]
pci 0000:00:1f.1: reg 0x20: [io  0xf040-0xf05f]
pci 0000:01:00.0: [8086:3165] type 00 class 0x028000
pci 0000:01:00.0: reg 0x10: [mem 0x82100000-0x82101fff 64bit]
pci 0000:01:00.0: Upstream bridge's Max Payload Size set to 128 (was 256, max 256)
pci 0000:01:00.0: Max Payload Size set to 128 (was 128, max 128)
pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
pci 0000:00:14.0: PCI bridge to [bus 01]
pci 0000:00:14.0:   bridge window [mem 0x82100000-0x821fffff]
ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 11 12 14 *15), disabled.
ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 10 11 12 14 *15), disabled.
ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 10 11 12 14 *15), disabled.
ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 10 11 12 14 *15), disabled.
ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 10 11 12 14 *15), disabled.
ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 *15), disabled.
ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 10 11 12 14 *15), disabled.
ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 11 12 14 *15), disabled.
ACPI Warning: GPE type mismatch (level/edge) (20180810/evxface-792)
ACPI: EC: interrupt unblocked
ACPI: EC: event unblocked
ACPI: \_SB_.PCI0.SBRG.H_EC: GPE=0x2c, EC_CMD/EC_SC=0x66, EC_DATA=0x62
ACPI: \_SB_.PCI0.SBRG.H_EC: Used as boot DSDT EC to handle transactions and events
pci 0000:00:02.0: vgaarb: setting as boot VGA device
pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
pci 0000:00:02.0: vgaarb: bridge control possible
vgaarb: loaded
pps_core: LinuxPPS API ver. 1 registered
pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
PTP clock support registered
EDAC MC: Ver: 3.0.0
Registered efivars operations
PCI: Using ACPI for IRQ routing
PCI: pci_cache_line_size set to 64 bytes
pci 0000:00:18.2: can't claim BAR 0 [mem 0xfea10000-0xfea10fff 64bit]: no compatible bridge window
e820: reserve RAM buffer [mem 0x0003f000-0x0003ffff]
e820: reserve RAM buffer [mem 0x0009e000-0x0009ffff]
e820: reserve RAM buffer [mem 0x768bd000-0x77ffffff]
e820: reserve RAM buffer [mem 0x7a1ad000-0x7bffffff]
e820: reserve RAM buffer [mem 0x7a763000-0x7bffffff]
e820: reserve RAM buffer [mem 0x7b000000-0x7bffffff]
hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
hpet0: 8 comparators, 64-bit 19.200000 MHz counter
clocksource: Switched to clocksource tsc-early
VFS: Disk quotas dquot_6.6.0
VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
AppArmor: AppArmor Filesystem Enabled
pnp: PnP ACPI init
system 00:00: [io  0x0680-0x069f] has been reserved
system 00:00: [io  0x0400-0x047f] has been reserved
system 00:00: [io  0x0500-0x05fe] has been reserved
system 00:00: [io  0x0600-0x061f] has been reserved
system 00:00: [io  0x164e-0x164f] has been reserved
system 00:00: Plug and Play ACPI device, IDs PNP0c02 (active)
system 00:01: [mem 0xe0000000-0xefffffff] has been reserved
system 00:01: [mem 0xfea00000-0xfeafffff] has been reserved
system 00:01: [mem 0xfed01000-0xfed01fff] has been reserved
system 00:01: [mem 0xfed03000-0xfed03fff] has been reserved
system 00:01: [mem 0xfed06000-0xfed06fff] has been reserved
system 00:01: [mem 0xfed08000-0xfed09fff] has been reserved
system 00:01: [mem 0xfed80000-0xfedbffff] has been reserved
system 00:01: [mem 0xfed1c000-0xfed1cfff] has been reserved
system 00:01: [mem 0xfee00000-0xfeefffff] could not be reserved
system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active)
pnp 00:02: Plug and Play ACPI device, IDs PNP0303 (active)
pnp 00:03: Plug and Play ACPI device, IDs PNP0b00 (active)
pnp: PnP ACPI: found 4 devices
clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
pci 0000:00:18.2: BAR 0: assigned [mem 0x80008000-0x80008fff 64bit]
pci 0000:00:18.2: BAR 2: assigned [mem 0x80009000-0x80009fff 64bit]
pci 0000:00:14.0: PCI bridge to [bus 01]
pci 0000:00:14.0:   bridge window [mem 0x82100000-0x821fffff]
pci_bus 0000:00: resource 4 [io  0x0070-0x0077]
pci_bus 0000:00: resource 5 [io  0x0000-0x006f window]
pci_bus 0000:00: resource 6 [io  0x0078-0x0cf7 window]
pci_bus 0000:00: resource 7 [io  0x0d00-0xffff window]
pci_bus 0000:00: resource 8 [mem 0x7c000001-0x7fffffff window]
pci_bus 0000:00: resource 9 [mem 0x7b800001-0x7bffffff window]
pci_bus 0000:00: resource 10 [mem 0x80000000-0xcfffffff window]
pci_bus 0000:00: resource 11 [mem 0xe0000000-0xefffffff window]
pci_bus 0000:01: resource 1 [mem 0x82100000-0x821fffff]
NET: Registered protocol family 2
tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes)
TCP established hash table entries: 32768 (order: 6, 262144 bytes)
TCP bind hash table entries: 32768 (order: 7, 524288 bytes)
TCP: Hash tables configured (established 32768 bind 32768)
UDP hash table entries: 2048 (order: 4, 65536 bytes)
UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
NET: Registered protocol family 1
pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
PCI: CLS 0 bytes, default 64
Unpacking initramfs...
Freeing initrd memory: 26004K
PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
software IO TLB: mapped [mem 0x6cb91000-0x70b91000] (64MB)
clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0xfc66f4fc7c, max_idle_ns: 440795224246 ns
clocksource: Switched to clocksource tsc
Initialise system trusted keyrings
workingset: timestamp_bits=40 max_order=20 bucket_order=0
zbud: loaded
pstore: using deflate compression
Key type asymmetric registered
Asymmetric key parser 'x509' registered
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
io scheduler noop registered
io scheduler deadline registered
io scheduler cfq registered (default)
io scheduler mq-deadline registered
pcieport 0000:00:14.0: Signaling PME with IRQ 122
shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
efifb: probing for efifb
efifb: framebuffer at 0x90000000, using 8128k, total 8128k
efifb: mode is 1920x1080x32, linelength=7680, pages=1
efifb: scrolling: redraw
efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Console: switching to colour frame buffer device 240x67
fb0: EFI VGA frame buffer device
intel_idle: MWAIT substates: 0x11242020
intel_idle: v0.4.1 model 0x5C
intel_idle: lapic_timer_reliable_states 0xffffffff
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Linux agpgart interface v0.103
AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>
AMD IOMMUv2 functionality not available on this system
i8042: PNP: PS/2 Controller [PNP0303:PS2K] at 0x60,0x64 irq 1
i8042: PNP: PS/2 appears to have AUX port disabled, if this is incorrect please boot with i8042.nopnp
serio: i8042 KBD port at 0x60,0x64 irq 1
mousedev: PS/2 mouse device common for all mice
rtc_cmos 00:03: RTC can wake from S4
rtc_cmos 00:03: registered as rtc0
rtc_cmos 00:03: alarms up to one month, y3k, 242 bytes nvram, hpet irqs
intel_pstate: Intel P-state driver initializing
ledtrig-cpu: registered to indicate activity on CPUs
NET: Registered protocol family 10
input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Segment Routing with IPv6
mip6: Mobile IPv6
NET: Registered protocol family 17
mpls_gso: MPLS GSO support
microcode: sig=0x506c9, pf=0x1, revision=0x32
microcode: Microcode Update Driver: v2.2.
sched_clock: Marking stable (2917959387, -2501360)->(2921251562, -5793535)
registered taskstats version 1
Loading compiled-in X.509 certificates
Loaded X.509 cert 'secure-boot-test-key-lfaraone: 97c1b25cddf9873ca78a58f3d73bf727d2cf78ff'
zswap: loaded using pool lzo/zbud
AppArmor: AppArmor sha1 policy hashing enabled
rtc_cmos 00:03: setting system clock to 2019-02-10 14:26:20 UTC (1549808780)
Freeing unused kernel image memory: 1572K
Write protecting the kernel read-only data: 16384k
Freeing unused kernel image memory: 2028K
Freeing unused kernel image memory: 900K
x86/mm: Checked W+X mappings: passed, no W+X pages found.
Run /init as init process
hidraw: raw HID events driver (C) Jiri Kosina
thermal LNXTHERM:00: registered as thermal_zone0
ACPI: Thermal Zone [TZ01] (24 C)
ACPI: bus type USB registered
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
i801_smbus 0000:00:1f.1: can't derive routing for PCI INT A
i801_smbus 0000:00:1f.1: PCI INT A: not connected
i801_smbus 0000:00:1f.1: SPD Write Disable is set
i801_smbus 0000:00:1f.1: SMBus using polling
lpc_ich 0000:00:1f.0: I/O space for ACPI uninitialized
sdhci: Secure Digital Host Controller Interface driver
sdhci: Copyright(c) Pierre Ossman
sdhci-pci 0000:00:1b.0: SDHCI controller found [8086:5aca] (rev b)
sdhci-pci 0000:00:1b.0: enabling device (0000 -> 0002)
xhci_hcd 0000:00:15.0: xHCI Host Controller
xhci_hcd 0000:00:15.0: new USB bus registered, assigned bus number 1
xhci_hcd 0000:00:15.0: hcc params 0x200077c1 hci version 0x100 quirks 0x0000000081109810
xhci_hcd 0000:00:15.0: cache line size of 64 is not supported
cryptd: max_cpu_qlen set to 1000
usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 4.19
usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
usb usb1: Product: xHCI Host Controller
usb usb1: Manufacturer: Linux 4.19.0-2-amd64 xhci-hcd
usb usb1: SerialNumber: 0000:00:15.0
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 8 ports detected
mmc0: SDHCI controller on PCI [0000:00:1b.0] using ADMA 64-bit
sdhci-pci 0000:00:1c.0: SDHCI controller found [8086:5acc] (rev b)
SSE version of gcm_enc/dec engaged.
mmc1: SDHCI controller on PCI [0000:00:1c.0] using ADMA 64-bit
sdhci-pci 0000:00:1e.0: SDHCI controller found [8086:5ad0] (rev b)
sdhci-pci 0000:00:1e.0: enabling device (0000 -> 0002)
i2c_hid i2c-SYNA3602:00: i2c-SYNA3602:00 supply vdd not found, using dummy regulator
i2c_hid i2c-SYNA3602:00: Linked as a consumer to regulator.0
i2c_hid i2c-SYNA3602:00: i2c-SYNA3602:00 supply vddl not found, using dummy regulator
xhci_hcd 0000:00:15.0: xHCI Host Controller
xhci_hcd 0000:00:15.0: new USB bus registered, assigned bus number 2
xhci_hcd 0000:00:15.0: Host supports USB 3.0  SuperSpeed
usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 4.19
usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
usb usb2: Product: xHCI Host Controller
usb usb2: Manufacturer: Linux 4.19.0-2-amd64 xhci-hcd
usb usb2: SerialNumber: 0000:00:15.0
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 7 ports detected
mmc2: SDHCI controller on PCI [0000:00:1e.0] using ADMA 64-bit
mmc1: new HS400 MMC card at address 0001
mmcblk1: mmc1:0001 DF4032 29.1 GiB 
mmcblk1boot0: mmc1:0001 DF4032 partition 1 4.00 MiB
mmcblk1boot1: mmc1:0001 DF4032 partition 2 4.00 MiB
mmcblk1rpmb: mmc1:0001 DF4032 partition 3 4.00 MiB, chardev (245:0)
 mmcblk1: p1 p2 p3 p4
random: fast init done
dw-apb-uart.8: ttyS0 at MMIO 0x82226000 (irq = 4, base_baud = 115200) is a 16550A
dw-apb-uart.9: ttyS1 at MMIO 0x82224000 (irq = 5, base_baud = 115200) is a 16550A
dw-apb-uart.10: ttyS2 at MMIO 0x80008000 (irq = 6, base_baud = 115200) is a 16550A
dw-apb-uart.11: ttyS3 at MMIO 0x82222000 (irq = 7, base_baud = 115200) is a 16550A
input: SYNA3602:00 0911:5288 Mouse as /devices/pci0000:00/0000:00:16.2/i2c_designware.2/i2c-3/i2c-SYNA3602:00/0018:0911:5288.0001/input/input1
input: SYNA3602:00 0911:5288 Touchpad as /devices/pci0000:00/0000:00:16.2/i2c_designware.2/i2c-3/i2c-SYNA3602:00/0018:0911:5288.0001/input/input2
hid-generic 0018:0911:5288.0001: input,hidraw0: I2C HID v1.00 Mouse [SYNA3602:00 0911:5288] on i2c-SYNA3602:00
usb 1-6: new high-speed USB device number 2 using xhci_hcd
usb 1-6: New USB device found, idVendor=0bda, idProduct=0129, bcdDevice=39.60
usb 1-6: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-6: Product: USB2.0-CRW
usb 1-6: Manufacturer: Generic
usb 1-6: SerialNumber: 20100201396000000
usbcore: registered new interface driver rtsx_usb
usb 1-7: new full-speed USB device number 3 using xhci_hcd
EXT4-fs (mmcblk1p3): mounted filesystem with ordered data mode. Opts: (null)
usb 1-7: New USB device found, idVendor=8087, idProduct=0a2a, bcdDevice= 0.01
usb 1-7: New USB device strings: Mfr=0, Product=0, SerialNumber=0
usb 1-8: new high-speed USB device number 4 using xhci_hcd
systemd[1]: RTC configured in localtime, applying delta of 0 minutes to system time.
systemd[1]: Inserted module 'autofs4'
usb 1-8: New USB device found, idVendor=058f, idProduct=3841, bcdDevice= 0.01
usb 1-8: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-8: Product: USB 2.0 PC Camera
usb 1-8: Manufacturer: Alcor Micro, Corp.
systemd[1]: systemd 240 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid)
systemd[1]: Detected architecture x86-64.
systemd[1]: Set hostname to <nc14>.
systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
systemd[1]: Created slice system-getty.slice.
systemd[1]: Listening on udev Kernel Socket.
systemd[1]: Listening on initctl Compatibility Named Pipe.
systemd[1]: Listening on Journal Socket (/dev/log).
systemd[1]: Listening on Syslog Socket.
systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
EXT4-fs (mmcblk1p3): re-mounted. Opts: errors=remount-ro
random: systemd-random-: uninitialized urandom read (512 bytes read)
systemd-journald[240]: Received request to flush runtime journal from PID 1
input: Intel HID events as /devices/platform/INT33D5:00/input/input3
intel-hid INT33D5:00: platform supports 5 button array
input: Intel HID 5 button array as /devices/platform/INT33D5:00/input/input4
input: Lid Switch as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:13/PNP0C09:00/PNP0C0D:00/input/input5
ACPI: Lid Switch [LID0]
input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input6
ACPI: Power Button [PWRB]
int3403 thermal: probe of INT3403:05 failed with error -22
idma64 idma64.0: Found Intel integrated DMA 64-bit
ACPI: AC Adapter [ADP1] (off-line)
battery: ACPI: Battery Slot [BAT0] (battery present)
idma64 idma64.1: Found Intel integrated DMA 64-bit
Intel(R) Wireless WiFi driver for Linux
Copyright(c) 2003- 2015 Intel Corporation
iwlwifi 0000:01:00.0: enabling device (0000 -> 0002)
checking generic (90000000 7f0000) vs hw (90000000 10000000)
fb: switching to inteldrmfb from EFI VGA
Console: switching to colour dummy device 80x25
[drm] Replacing VGA console driver
[drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[drm] Driver supports precise vblank timestamp query.
i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
iwlwifi 0000:01:00.0: firmware: direct-loading firmware iwlwifi-7265D-29.ucode
iwlwifi 0000:01:00.0: loaded firmware version 29.1044073957.0 op_mode iwlmvm
i915 0000:00:02.0: firmware: direct-loading firmware i915/bxt_dmc_ver1_07.bin
[drm] Finished loading DMC firmware i915/bxt_dmc_ver1_07.bin (v1.7)
alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
media: Linux media interface: v0.10
idma64 idma64.2: Found Intel integrated DMA 64-bit
videodev: Linux video capture interface: v2.00
uvcvideo: Found UVC 1.00 device USB 2.0 PC Camera (058f:3841)
uvcvideo 1-8:1.0: Entity type for entity Processing 2 was not initialized!
uvcvideo 1-8:1.0: Entity type for entity Extension 6 was not initialized!
uvcvideo 1-8:1.0: Entity type for entity Camera 1 was not initialized!
input: USB 2.0 PC Camera: PC Camera as /devices/pci0000:00/0000:00:15.0/usb1/1-8/1-8:1.0/input/input7
usbcore: registered new interface driver uvcvideo
USB Video Class driver (1.1.1)
usbcore: registered new interface driver snd-usb-audio
idma64 idma64.3: Found Intel integrated DMA 64-bit
iwlwifi 0000:01:00.0: Detected Intel(R) Dual Band Wireless AC 3165, REV=0x210
input: SYNA3602:00 0911:5288 Touchpad as /devices/pci0000:00/0000:00:16.2/i2c_designware.2/i2c-3/i2c-SYNA3602:00/0018:0911:5288.0001/input/input9
hid-multitouch 0018:0911:5288.0001: input,hidraw0: I2C HID v1.00 Mouse [SYNA3602:00 0911:5288] on i2c-SYNA3602:00
Bluetooth: Core ver 2.22
NET: Registered protocol family 31
Bluetooth: HCI device and connection manager initialized
Bluetooth: HCI socket layer initialized
Bluetooth: L2CAP socket layer initialized
Bluetooth: SCO socket layer initialized
iwlwifi 0000:01:00.0: base HW address: b8:08:cf:fd:fd:d6
usbcore: registered new interface driver btusb
Bluetooth: hci0: read Intel version: 370810011003110e00
bluetooth hci0: firmware: direct-loading firmware intel/ibt-hw-37.8.10-fw-
Bluetooth: hci0: Intel Bluetooth firmware file: intel/ibt-hw-37.8.10-fw-
[drm] Initialized i915 1.6.0 20180719 for 0000:00:02.0 on minor 0
ACPI: Video Device [GFX0] (multi-head: yes  rom: no  post: no)
input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input10
snd_hda_intel 0000:00:0e.0: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
EFI Variables Facility v0.08 2004-May-17
fbcon: inteldrmfb (fb0) is primary device
idma64 idma64.4: Found Intel integrated DMA 64-bit
input: PC Speaker as /devices/platform/pcspkr/input/input11
random: crng init done
ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs'
thermal thermal_zone3: failed to read out thermal zone (-61)
pstore: Registered efi as persistent store backend
RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer
RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
RAPL PMU: hw unit of domain package 2^-14 Joules
RAPL PMU: hw unit of domain dram 2^-14 Joules
RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
snd_hda_codec_realtek hdaudioC0D0: autoconfig for ALC269VC: line_outs=1 (0x14/0x0/0x0/0x0/0x0) type:speaker
snd_hda_codec_realtek hdaudioC0D0:    speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
snd_hda_codec_realtek hdaudioC0D0:    hp_outs=1 (0x15/0x0/0x0/0x0/0x0)
snd_hda_codec_realtek hdaudioC0D0:    mono: mono_out=0x0
snd_hda_codec_realtek hdaudioC0D0:    inputs:
snd_hda_codec_realtek hdaudioC0D0:      Mic=0x18
snd_hda_codec_realtek hdaudioC0D0:      Internal Mic=0x12
idma64 idma64.5: Found Intel integrated DMA 64-bit
input: HDA Intel PCH Mic as /devices/pci0000:00/0000:00:0e.0/sound/card0/input12
input: HDA Intel PCH Headphone as /devices/pci0000:00/0000:00:0e.0/sound/card0/input13
input: HDA Intel PCH HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:0e.0/sound/card0/input14
input: HDA Intel PCH HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:0e.0/sound/card0/input15
input: HDA Intel PCH HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:0e.0/sound/card0/input16
input: HDA Intel PCH HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:0e.0/sound/card0/input17
input: HDA Intel PCH HDMI/DP,pcm=10 as /devices/pci0000:00/0000:00:0e.0/sound/card0/input18
idma64 idma64.6: Found Intel integrated DMA 64-bit
EDAC pnd2: b_cr_tolud_pci=080000001 ret=0
EDAC pnd2: b_cr_touud_lo_pci=080000000 ret=0
EDAC pnd2: b_cr_touud_hi_pci=000000001 ret=0
EDAC pnd2: b_cr_asym_mem_region0_mchbar=000000000 ret=0
EDAC pnd2: b_cr_asym_mem_region1_mchbar=000000000 ret=0
EDAC pnd2: b_cr_mot_out_base_mchbar=000000000 ret=0
EDAC pnd2: b_cr_mot_out_mask_mchbar=000000000 ret=0
EDAC pnd2: b_cr_slice_channel_hash=80000dbc00000244 ret=0
EDAC pnd2: b_cr_asym_2way_mem_region_mchbar=000000000 ret=0
EDAC pnd2: d_cr_drp0=01048c023 ret=0
EDAC pnd2: d_cr_drp0=01048c023 ret=0
EDAC pnd2: d_cr_drp0=01048c023 ret=0
EDAC pnd2: d_cr_drp0=01048c023 ret=0
EDAC pnd2: Unsupported DIMM in channel 0
EDAC pnd2: Unsupported DIMM in channel 1
EDAC pnd2: Unsupported DIMM in channel 2
EDAC pnd2: Unsupported DIMM in channel 3
EDAC pnd2: Failed to register device with error -22.
Bluetooth: hci0: Intel firmware patch completed and activated
intel_rapl: Found RAPL domain package
intel_rapl: Found RAPL domain core
intel_rapl: Found RAPL domain uncore
intel_rapl: Found RAPL domain dram
idma64 idma64.7: Found Intel integrated DMA 64-bit
idma64 idma64.9: Found Intel integrated DMA 64-bit
idma64 idma64.12: Found Intel integrated DMA 64-bit
idma64 idma64.13: Found Intel integrated DMA 64-bit
Bluetooth: BNEP (Ethernet Emulation) ver 1.3
Bluetooth: BNEP filters: protocol multicast
Bluetooth: BNEP socket layer initialized
idma64 idma64.14: Found Intel integrated DMA 64-bit
NET: Registered protocol family 3
NET: Registered protocol family 5
Console: switching to colour frame buffer device 240x67
i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
fuse init (API version 7.27)
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       39 bits physical, 48 bits virtual
CPU(s):              2
On-line CPU(s) list: 0,1
Thread(s) per core:  1
Core(s) per socket:  2
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               92
Model name:          Intel(R) Celeron(R) CPU N3350 @ 1.10GHz
Stepping:            9
CPU MHz:             987.647
CPU max MHz:         2400.0000
CPU min MHz:         800.0000
BogoMIPS:            2188.80
Virtualization:      VT-x
L1d cache:           24K
L1i cache:           32K
L2 cache:            1024K
NUMA node0 CPU(s):   0,1
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm 3dnowprefetch cpuid_fault cat_l2 pti tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms mpx rdt_a rdseed smap clflushopt intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts
00:00.0 Host bridge [0600]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series Host Bridge [8086:5af0] (rev 0b)
00:00.1 Signal processing controller [1180]: Intel Corporation Device [8086:5a8c] (rev 0b)
00:02.0 VGA compatible controller [0300]: Intel Corporation Device [8086:5a85] (rev 0b)
00:0e.0 Audio device [0403]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series Audio Cluster [8086:5a98] (rev 0b)
00:0f.0 Communication controller [0780]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series Trusted Execution Engine [8086:5a9a] (rev 0b)
00:12.0 SATA controller [0106]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series SATA AHCI Controller [8086:5ae3] (rev 0b)
00:14.0 PCI bridge [0604]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series PCI Express Port B #2 [8086:5ad7] (rev fb)
00:15.0 USB controller [0c03]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series USB xHCI [8086:5aa8] (rev 0b)
00:16.0 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series I2C Controller #1 [8086:5aac] (rev 0b)
00:16.1 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series I2C Controller #2 [8086:5aae] (rev 0b)
00:16.2 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series I2C Controller #3 [8086:5ab0] (rev 0b)
00:16.3 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series I2C Controller #4 [8086:5ab2] (rev 0b)
00:17.0 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series I2C Controller #5 [8086:5ab4] (rev 0b)
00:17.1 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series I2C Controller #6 [8086:5ab6] (rev 0b)
00:17.2 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series I2C Controller #7 [8086:5ab8] (rev 0b)
00:17.3 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series I2C Controller #8 [8086:5aba] (rev 0b)
00:18.0 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series HSUART Controller #1 [8086:5abc] (rev 0b)
00:18.1 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series HSUART Controller #2 [8086:5abe] (rev 0b)
00:18.2 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series HSUART Controller #3 [8086:5ac0] (rev 0b)
00:18.3 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series HSUART Controller #4 [8086:5aee] (rev 0b)
00:19.0 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series SPI Controller #1 [8086:5ac2] (rev 0b)
00:19.1 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series SPI Controller #2 [8086:5ac4] (rev 0b)
00:19.2 Signal processing controller [1180]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series SPI Controller #3 [8086:5ac6] (rev 0b)
00:1b.0 SD Host controller [0805]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series SDXC/MMC Host Controller [8086:5aca] (rev 0b)
00:1c.0 SD Host controller [0805]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series eMMC Controller [8086:5acc] (rev 0b)
00:1e.0 SD Host controller [0805]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series SDIO Controller [8086:5ad0] (rev 0b)
00:1f.0 ISA bridge [0601]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series Low Pin Count Interface [8086:5ae8] (rev 0b)
00:1f.1 SMBus [0c05]: Intel Corporation Atom/Celeron/Pentium Processor N4200/N3350/E3900 Series SMBus Controller [8086:5ad4] (rev 0b)
01:00.0 Network controller [0280]: Intel Corporation Wireless 3165 [8086:3165] (rev 79)
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 058f:3841 Alcor Micro Corp.
Bus 001 Device 003: ID 8087:0a2a Intel Corp.
Bus 001 Device 002: ID 0bda:0129 Realtek Semiconductor Corp. RTS5129 Card Reader Controller
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

12 February, 2019 09:27PM

Sven Hoexter

The day you start to use rc builds in production - Kafka 2.1.1 edition

tl;dr If you want to run Kafka 2.x use 2.1.1rc1 or later.

So someone started to update from Kafka 1.1.1 to 2.1.0 yesterday and it kept crashing every other hour. It looks very much like https://issues.apache.org/jira/browse/KAFKA-7697, so we're now trying out 2.1.1rc1, because we missed rc2 at http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/. Ideally you go with rc2, which has a few more fixes for unrelated issues.

Besides that, be aware that the update to 2.1.0 is a one-way street! Read http://kafka.apache.org/21/documentation.html#upgrade carefully. There is a schema change in the consumer offset topic, which is used internally to track your consumer offsets since those moved out of ZooKeeper some time ago.

For us the primary lesson is that we have to put way more synthetic traffic on our testing environments, because 2.1.0 was running in the non-production environments for several days without an issue, while the relevant team hit the deadlock in production within hours.

12 February, 2019 03:15PM

Reproducible builds folks

Reproducible Builds: Weekly report #198

Here’s what happened in the Reproducible Builds effort between Sunday February 3rd and Saturday February 9th 2019:

Packages reviewed and fixed, and bugs filed

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This week, Holger Levsen made a large number of improvements including:

  • Arch Linux-specific changes:
    • Correct information about the hostnames used. []
    • Document that the kernel is not currently varied on rebuilds. []
    • Improve IRC messages. [][]
  • Debian-specific changes:
    • Perform a large number of cleanups, updating documentation to match. [][][][]
    • Avoid unnecessary apt install calls on every deployment run. []
  • LEDE/OpenWrt-specific changes:
    • Attempt to build all the packages again. []
    • Mark a workaround for an iw issue in a better way. []
  • Misc/generic changes:
    • Clarify where NetBSD is actually built. []
    • Improve jobs to check the version of diffoscope relative to upstream in various distributions. [][]
    • Render the artificial date correctly in the build variation tables. []
    • Work around a rare and temporary problem when restarting Munin. []
    • Drop code relating to OpenSSH client ports as this is handled via ~/ssh/config now. []
    • Fix various bits of documentation. [][][][][]
  • Fedora-specific changes:
    • Correctly note that testing Fedora is currently disabled. []
    • Abstract some behaviour to make possible testing of other distributions easier. []
    • Only update mock on build nodes. []

In addition, Mattia Rizzolo updated the configuration for df_inode [] and reverted a change to our pbuilder setup [] whilst Bernhard M. Wiedemann ported make_graph to using Python 3 [].

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

12 February, 2019 12:30PM

hackergotchi for Mike Gabriel

Mike Gabriel

Drupal 6 - Security Support Continued

Believe it or not, I just upgraded my old Drupal 6 instances serving e.g. my blog [1] to Drupal 6 LTS v6.49.

Thanks a lot to Elliot Christenson for continuing Drupal 6 security support. See [2] for more information about Drupal 6 LTS support provided by his company.

[1] https://sunweavers.net/blog
[2] https://www.mydropwizard.com/blog

12 February, 2019 09:30AM by sunweaver

February 11, 2019

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in January 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Time’s almost up and the soft freeze is near. In January I packaged a couple of new upstream versions for Teeworlds (0.7.2), Neverball (this one was a Git snapshot because they apparently don’t like regular releases), cube2-data (easy, because I am upstream myself), adonthell and adonthell-data, fifechan, fife and unknown-horizons.
  • After I uploaded the latest Teeworlds release to stretch-backports too, I sponsored pegsolitaire for Juhani Numminen and a shiny new Supertux release for Reiner Herrmann.
  • I updated KXL, the Kacchan X Windows System Library. You have never heard of it? Well, never mind. However it powers three Debian games.
  • Last but not least I updated btanks,  your fast 2D tank arcade game.

Debian Java


Debian LTS

This was my thirty-fifth month as a paid contributor and I have been paid to work 20.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 28.01.2019 until 03.02.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in mupdf, coturn, php5, netkit-rsh, guacamole-client, openjdk-7, python-numpy, python-gnupg, mumble, mysql-connector-python, enigmail, python-colander, slurm-llnl, sox, uriparser, and drupal7.
  • DLA-1631-1. Issued a security update for libcaca fixing 4 CVE.
  • DLA-1633-1. Issued a security update for sqlite3 fixing 5 CVE.
  • DLA-1650-1. Issued a security update for rssh fixing 1 CVE.
  • DLA-1656-1. Issued a security update for agg fixing 1 CVE. This one required a sourceful upload of desmume and exactimage as well because agg provides only a static library.
  • DLA-1662-1. Issued a security update for libthrift-java fixing 1 CVE.
  • DLA-1673-1. Issued a security update for wordpress fixing 7 CVE.


Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my eighth month and I have been paid to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 28.01.2019 until 03.02.2019 and I triaged CVE in php5 and systemd.
  • ELA-81-1. Issued a security update for systemd fixing 2 CVE. I investigated CVE-2018-16865 and found that systemd was not exploitable. I marked CVE-2018-16864, CVE-2018-16866 and CVE-2018-15688 as <not-affected> because the vulnerable code was introduced later.
  • ELA-83-1. Issued a security update for php5 fixing 7 upstream bugs. No CVEs have been assigned yet but upstream intends to do so shortly.

Thanks for reading and see you next time.

11 February, 2019 11:40PM by Apo

hackergotchi for Andy Simpkins

Andy Simpkins

AVR ATMEGA328PB support in avr-gcc

I can save £0.40 per device if I move from the ATMEGA328P to an ATMEGA328PB. Given that I am expecting to use around 4000 of these microcontrollers a year, this looks like a no-brainer decision to me… The ATMEGA328PB has been around for a few years now and I did check that the device was supported in avr-gcc *before* I committed to a PCB.  On the surface the device appears to be a simple B revision, with a few changes over the original part as described in Application Note AN_42559 – AT15007: Differences between ATmega328/P and ATmega328PB.  There are potential issues around the crystal drive for the clock (but I am pretty sure that I have done the appropriate calculations so this will work – famous last words, I know).

I have had my PCBs back for a couple of weeks now; I have populated the boards and confirmed that electrically they work fine, and the interface to the Raspberry Pi has also been tested.  I took a week out for the DebConf Videoteam sprint and a couple of extra days to recover from FOSDEM (catch up on much-needed sleep), so today I picked up the project once again.

My first task was to change my firmware build away from an Arduino Uno R3 platform to my own PCB – simple, right?  Just move my #defines around to match the GPIO pins I am using to drive my hardware, change my build flags from -mmcu=atmega328p to -mmcu=atmega328pb, and rebuild.

If only things were that simple….

It turns out that even though the avr-gcc in Stretch knows about the atmega328pb, it doesn't know how to deal with it.

The internet, in all its wisdom, suggested that this was a known bug (where "known bug" means old, and apparently resolved in a gcc-avr version lower than the one in Buster), so I updated my trusty laptop to Buster to take a look.

Good news – my upgrade to buster went without hitch :-)
Bad news – my program still does not compile – I still have the same error

avr/include/avr/io.h:404:6: warning: #warning "device type not defined"

My googlefoo must be getting better these days because it didn’t take long to track down a solution:

I needed to download a “pack” from Atmel / Microchip that tells gcc about the 328pb, so I downloaded the Atmel ATmega Series Device Support Pack from packs.download.atmel.com.

I copied all the *.h files from within the pack to /usr/lib/avr/include to get the avr io headers to work properly.  I also put all the other files from the pack into a folder called `packs` in my project folder; inside the packs folder are folders like “gcc”, “include” and “templates”.

I now needed to update my build script to add another GCC flag to tell it to use the atmega328pb definition from the ‘pack’:

-B ./packs/gcc/dev/atmega328pb/

We are finally getting somewhere: my errors about an undefined device type have gone away; however, I am now getting new errors about the SPI control registers being unrecognized.

One of the differences between the two chip ‘revisions’ is that the atmega328pb has an additional SPI port, so it makes sense that the registers have been renamed, following the convention of adding the device number after the register name.
Thus the interrupt vector SPI_STC_vect becomes SPI0_STC_vect and the registers simply gain a 0 suffix.

A few #if statements later and my code now compiles and I can commence testing…  Happy days!

The ‘pack’ which contains the details for the ATMEGA328PB is 1.2.132 (2016-12-14) which means it has had quite a while to be included in the tool-chain.  Should this be included as part of avr-gcc for buster?  I would hope so, but we are now in a freeze so perhaps it will be missed.

Next task – avr-dude.  Does the buster build recognize the ATMEGA328PB signatures / program correctly?

[EDIT 20190213 AS]  Yes avr-dude recognises and programs the ATMEGA328PB just fine.

11 February, 2019 03:37PM by andy

February 09, 2019

hackergotchi for Mike Gabriel

Mike Gabriel

My Work on Debian LTS/ELTS (January 2019)

In January 2019, I have worked on the Debian LTS project for 10 hours and on the Debian ELTS project for 2 hours (of originally planned 6 + 1 hours) as a paid contributor. The 5 ELTS hours I did not work have been given back to the pool of available work hours.

LTS Work

  • Fix one CVE issue in sssd (DLA-1635-1 [1]).
  • Fix five CVE issues in libjpeg-turbo (DLA 1638-1 [2]).
  • Fix some more CVE issues in libav (DLA-1654-1 [3]).
  • FreeRDP: patch rebasing and cleanup.
  • FreeRDP: Testing the proposed uploads, sending out a request for testing [4], provide build previews, some more patch work.
  • Give feedback to the Debian stable release team on the planned FreeRDP 1.1 uploads to Debian jessie LTS and Debian stretch.
  • Some little bit of CVE triaging during my LTS frontdesk week (a very quiet week it seemed to me)

The new FreeRDP versions have been uploaded at the beginning of February 2019 (I will mention that properly in my next month's LTS work summary, but you may already want to take a glimpse at DLA-1666-1 [5]).


  • Start working on OpenSSH security issues currently affecting Debian wheezy ELTS (and also Debian jessie LTS).

The pressing and urgent fixes for OpenSSH will be uploaded during the coming week.

Thanks to all LTS/ELTS sponsors for making these projects possible.



09 February, 2019 02:35PM by sunweaver

February 08, 2019

Iustin Pop

Notes on XFS: raid reshapes, metadata size, schedulers…

Just a couple of notes on XFS, and a story regarding RAID reshapes.

sunit, swidth and raid geometry

When I started looking at improving my backup story, the first thing I did was to resize my then-existing backup array from three to four disks, to allow playing with new tools before moving to larger disks. Side-note: yes, I now have an array where the newest drive is 7 years newer than the oldest ☺

Reshaping worked well as usual (thanks md/mdadm!), and I thought all was good. I forgot however that xfs takes its stripe width from the geometry at file-system creation, and any subsequent reshapes are ignored. This is documented (and somewhat logical, if not entirely) in the xfs(5) man-page:

sunit=value and swidth=value … Typically the only time these mount options are necessary is after an underlying RAID device has had its geometry modified, such as adding a new disk to a RAID5 lun and reshaping it.

But I forgot to do that. On top of a raid5. Which means, instead of writing full stripes, xfs was writing ⅔ of a stripe, resulting in lots of read-modify-write, which explains why I saw some unexpected performance issues even in single-threaded workloads.

Lesson learned…

xfs metadata size

Once I got my new large HDDs, I created the new large (for me, at least) file-system with all bells and whistles, which includes lots of metadata. And by lots, I really mean lots:

# mkfs.xfs -m rmapbt=1,reflink=1 /dev/mapper/…
# df -h 
Filesystem         Size  Used Avail Use% Mounted on
/dev/mapper/        33T  643G   33T   2% /…

Almost 650G of metadata! 650G!!!!! Let’s try some variations on a 50T file-system (on top of a sparse file; not that I have that much space lying around!):

Plain mkfs.xfs:
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0       50T   52G   50T   1% /mnt
mkfs.xfs with rmapbt=1:
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0       50T  675G   50T   2% /mnt
mkfs.xfs with rmapbt=1,reflink=1:
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0       50T  981G   50T   2% /mnt

So indeed, the extra “bells” do eat a lot of disk. Fortunately that ~1T of metadata is not written at mkfs time, otherwise it’d be fun! The actual allocation for that sparse file was “just” 2G. I wonder how come the metadata is consistent if mkfs doesn’t ensure zeroing?

Not sure I’ll actually test the reflink support soon, but since I won’t be able to recreate the file-system easily once I put lots of data on it, I’ll leave it as such. It’s, after all, only 2% of the disk space, even if the absolute value is large.

XFS schedulers

Apparently CFQ is not a good scheduler for XFS. I’m probably the last person on earth to learn this, as it was documented for ages, but better late than never. Yet another explanation for the bad performance I was seeing…

Summary: yes, I have fun playing a sysadmin at home ☺

08 February, 2019 12:02PM

February 07, 2019

Thorsten Alteholz

My Debian Activities in January 2019

FTP master

This month I accepted 363 packages, which is again more than last month. On the other side I rejected 68 uploads, which is almost twice as many as last month. The overall number of packages that got accepted this month was 494.

Debian LTS

This was my fifty-fifth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 20.5h. During that time I did LTS uploads or prepared security uploads of:

    [DLA 1634-1] wireshark security update for 41 CVEs
    [DLA 1643-1] krb5 security update for three CVEs
    [DLA 1645-1] wireshark security update for three CVEs
    [DLA 1647-1] apache2 security update for one CVE
    [DLA 1651-1] libgd2 security update for four CVEs

I continued to work on CVEs of exiv2.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the eighth ELTS month.

During my allocated time I uploaded:

  • ELA-75-1 of wireshark for 26 CVEs
  • ELA-77-1 of krb5 for four CVEs
  • ELA-78-1 of wireshark for three CVEs

As like in LTS, I also did some days of frontdesk duties.

Other stuff

I improved packaging of …

I uploaded new upstream versions of …

I uploaded new packages …

  • ptunnel-ng, which is a fork of ptunnel with ongoing development

Last but not least I sponsored uploads of …

07 February, 2019 09:39PM by alteholz

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

Privacy-preserving monitoring of an anonymity network (FOSDEM 2019)

This is a transcript of a talk I gave at FOSDEM 2019 in the Monitoring and Observability devroom about the work of Tor Metrics.

Direct links:

Producing this transcript was more work than I had anticipated, and I’ve done this in my free time, so if you find it useful then please do let me know; otherwise I probably won’t be doing this again.

I’ll start off by letting you know who I am. Generally this is a different audience for me but I’m hoping that there are some things that we can share here. I work for Tor Project. I work in a team that is currently formed of two people on monitoring the health of the Tor network and performing privacy-preserving measurement of it. Before Tor, I worked on active Internet measurement in an academic environment but I’ve been volunteering with Tor Project since 2015. If you want to contact me afterwards, my email address, or if you want to follow me on the Fediverse there is my WebFinger ID.

So what is Tor? I guess most people have heard of Tor but maybe they don’t know so much about it. Tor is quite a few things: it’s a community of people. We have a paid staff of approximately 47, the number keeps going up but 47 last time I looked. We also have hundreds of volunteer developers that contribute code and we also have relay operators that help to run the Tor network, academics and a lot of people involved in organising the community locally. We are registered in the US as a non-profit organisation and the two main things that really come out of Tor Project are the open source software that runs the Tor network and the network itself which is open for anyone to use.

Currently there are an estimated average 2,000,000 users per day. This is estimated and I’ll get to why we don’t have exact numbers.

Most people when they are using Tor will use Tor Browser. This is a bundle of Firefox and a Tor client set up in such a way that it is easy for users to use safely.

When you are using Tor Browser your traffic is proxied through three relays. With a VPN there is only one server in the middle and that server can see either side. It knows who you are and where you are going, so it can spy on you just as your ISP could before. The first step in setting up a Tor connection is that the client needs to know where all of those relays are, so it downloads a list of all the relays from a directory server. We’re going to call that directory server Dave. Our user Alice talks to Dave to get a list of the relays that are available.

In the second step, the Tor client forms a circuit through the relays and connects finally to the website that Alice would like to talk to, in this case Bob.

If Alice later decides they want to talk to Jane, they will form a different path through the relays.

We know a lot about these relays. Because the relays need to be public knowledge for people to be able to use them, we can count them quite well. Over time we can see how many relays there are that are announcing themselves and we also have the bridges, which are a separate topic, but these are special purpose relays.

Because we have to connect to the relays we know their IP addresses and we know if they have IPv4 or IPv6 addresses so as we want to get better IPv6 support in the Tor network we can track this and see how the network is evolving.

Because we have the IP addresses we can combine those IP addresses with GeoIP databases and that can tell us what country those relays are in with some degree of accuracy. Recently we have written up a blog post about monitoring the diversity of the Tor network. The Tor network is not very useful if all of the relays are in the same datacenter.

We also perform active measurement of these relays so we really analyse these relays because this is where we put a lot of the trust in the Tor network. It is distributed between multiple relays but if all of the relays are malicious then the network is not very useful. We make sure that we’re monitoring its diversity and the relays come in different sizes so we want to know are the big relays spread out or are there just a lot of small relays inflating the absolute counts of relays in a location.

When we look at these two graphs, we can see that the number of relays in Russia is about 250 at the moment but when we look at the top 5 by the actual bandwidth they contribute to the network they drop off and Sweden takes Russia’s place in the top 5 contributing around 4% of the capacity.

The Tor Metrics team, as I mentioned, is two people, and we care about measuring and analysing things in the Tor network. There are 3 or 4 regular contributors and then occasionally people will come along with patches or perform a one-off analysis of our data.

We use this data for lots of different use cases. One of which is detecting censorship: if websites are blocked in a country, people may turn to Tor in order to access those websites. In other cases, Tor itself might be censored, and then we see a drop in Tor users and we also see a rise in the use of the special purpose bridge relays that can be used to circumvent censorship. We can interpret the data in that way.

We can detect attacks against the network. If we suddenly see a huge rise in the number of relays then we can suspect that OK, maybe there is something malicious going on here, and we can deal with that. We can evaluate how performance changes when we make changes to the software. We have recently made changes to an internal scheduler, and the idea there is to reduce congestion at relays; from our metrics we can say that this appears to be working.

Probably one of the more important aspects is being able to take this data and make the case for a more private and secure Internet, not just from a position of I think we should do this, I think it’s the right thing, but here is data and here is facts that cannot be so easily disputed.

We only handle public non-sensitive data. Each analysis goes through a rigorous review and discussion process before publication.

As you might imagine, the goals of a privacy and anonymity network don’t lend themselves to easy data gathering and extensive monitoring of the network. If you are interested in doing research on Tor, or attempting to collect data through Tor, the Research Safety Board can offer advice on how to do that safely. Often this is used by academics that want to study Tor, but the Metrics Team has also used it on occasion when we want second opinions on deploying new measurements.

What we try and do is follow three key principles: data minimisation, source aggregation, and transparency.

The first one of these is quite simple and I think with GDPR probably is something people need to think about more even if you don’t have an anonymity network. Having large amounts of data that you don’t have an active use for is a liability and something to be avoided. Given a dataset and given an infinite amount of time that dataset is going to get leaked. The probability increases as you go along. We want to make sure that we are collecting as little detail as possible to answer the questions that we have.

When we collect data we want to aggregate it as soon as we can, to make sure that sensitive data exists for as little time as possible. This usually happens in the Tor relays themselves, before they even report information back to Tor Metrics: they will aggregate data and then we will aggregate the aggregates. This can also include adding noise or binning values. All of these things can help to protect the individual.
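
To make those two ideas concrete, here is a toy illustration of binning and additive noise (my own sketch, not the code that Tor relays actually run):

```python
import math
import random

def bin_value(count, bin_size=8):
    """Round a counter up to the next multiple of bin_size,
    hiding the exact value inside its bin."""
    return -(-count // bin_size) * bin_size  # ceiling division

def laplace_noise(scale=2.0):
    """Draw symmetric Laplace noise via inverse transform sampling."""
    u = random.random() - 0.5
    if u == -0.5:  # avoid log(0) on the (astronomically unlikely) edge
        u = 0.0
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(count, scale=2.0):
    """Report a counter with additive noise, clamped to stay non-negative."""
    return max(0, round(count + laplace_noise(scale)))

print(bin_value(13))  # 16: any count from 9 to 16 reports the same bin
```

An observer then only learns the bin and a noisy value, not the exact counter.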

And then being as transparent as possible about our processes so that our users are not surprised when they find out we are doing something, relay operators are not surprised, and academics have a chance to say whoa that’s not good maybe you should think about this.

The example that I’m going to talk about is counting unique users. Users of the Tor network would not expect that we are storing their IP address or anything like this. They come to Tor because they want the anonymity properties. So the easy way, the traditional web analytics, keep a list of the IP addresses and count up the uniques and then you have an idea of the unique users. You could do this and combine with a GeoIP database and get unique users per country and these things. We can’t do this.

So we measure indirectly and in 2010 we produced a technical report on a number of different ways we could do this.

It comes back to Alice talking to Dave. Because every client needs to have a complete view of the entire Tor network, we know that each client will fetch the directory approximately 10 times a day. By measuring how many directory fetches there are we can get an idea of the number of concurrent users there are of the Tor network.

Relays don’t store IP addresses at all, they count the number of directory requests and then those directory requests are reported to a central location. We don’t know how long an average session is so we can’t say we had so many unique users but we can say concurrently we had so many users on average. We get to see trends but we don’t get the exact number.
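
The arithmetic behind this indirect count is just a division (a sketch of the idea; the deployed estimator is more careful about reporting intervals and missing reports):

```python
def estimated_users(directory_requests_per_day, requests_per_client_per_day=10):
    """Each client fetches the network directory roughly 10 times a day,
    so dividing the aggregate request count by 10 estimates the average
    number of concurrent users."""
    return directory_requests_per_day / requests_per_client_per_day

# ~20M directory requests a day would suggest ~2M concurrent users
print(estimated_users(20_000_000))  # 2000000.0
```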

So here is what our graph looks like. At the moment we have the average 2,000,000 concurrent Tor users. The first peak may have been an attempted attack, as with the second peak. Often things happen and we don’t have full context for them but we can see when things are going wrong and we can also see when things are back to normal afterwards.

This is in a class of problems called the count-distinct problem and these are our methods from 2010 but since then there has been other work in this space.

One example is HyperLogLog. I’m not going to explain this in detail but I’m going to give a high-level overview. Imagine you have a bitfield and you initialise all of these bits to zero. You take an IP address, you take a hash of the IP address, and you look for the position of the leftmost one. How many zeros were there at the start of that string? Say it was 3, you set the third bit in your bitfield. At the end you have a series of ones and zeros and you can get from this to an estimate of the total number that there are.

Every time you set a bit there is 50% chance that that bit would be set given the number of distinct items. (And then 50% chance of that 50% for the second bit, and so on…) There is a very complicated proof in the paper that I don’t have time to go through here but this is one example that actually turns out to be quite accurate for counting unique things.
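
As a toy version of what was just described (closer to the original Flajolet-Martin sketch than to full HyperLogLog, and purely illustrative):

```python
import hashlib

def leading_zeros(h, bits=32):
    """Count the zero bits before the first one in a bits-wide hash value."""
    for i in range(bits - 1, -1, -1):
        if h >> i & 1:
            return bits - 1 - i
    return bits

def estimate_distinct(items, bits=32):
    """Remember which leading-zero counts were observed, then estimate
    from the lowest count that was never observed."""
    bitfield = 0
    for item in items:
        h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:4], "big")
        bitfield |= 1 << leading_zeros(h, bits)
    r = 0
    while bitfield >> r & 1:
        r += 1
    return 2 ** r / 0.77351  # correction factor from the original paper

ips = ["10.0.%d.%d" % (i // 256, i % 256) for i in range(5000)]
print(round(estimate_distinct(ips)))  # a rough estimate of the 5000 distinct addresses
```

Note that no IP address is ever stored, only a single small bitfield.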

This was designed for very large datasets where you don’t have enough RAM to keep everything in memory. We have a variant on this problem where, for us, even keeping 2 IP addresses in memory would be too much. We can use this technique to avoid storing even small datasets.

Private Set-Union Cardinality is another example. With this one you can look at distributed databases and find unique counts across them. Unfortunately it currently requires far too much RAM to actually do the computation for us to use it, but over time these methods are evolving and should become feasible, hopefully soon.

And then moving on from just count-distinct, the aggregation of counters. We have counters such as how much bandwidth has been used in the Tor network. We want to aggregate these but we don’t want to release the individual relay counts. We are looking at using a method called PrivCount that allows us to get the aggregate total bandwidth used while keeping the individual relay bandwidth counters secret.
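
The core trick can be sketched with additive secret sharing (a simplification on my part; the real PrivCount protocol also adds noise and handles relays going offline):

```python
import random

M = 2 ** 61 - 1  # shares live in a large finite group (any big prime works here)

def share(value, n=3):
    """Split a secret counter into n additive shares mod M."""
    parts = [random.randrange(M) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % M)
    return parts

relay_counters = [120, 340, 55]  # secret per-relay bandwidth counts
n_aggregators = 3
shares = [share(c, n_aggregators) for c in relay_counters]
# aggregator i receives one share from every relay and publishes only their sum
partials = [sum(s[i] for s in shares) % M for i in range(n_aggregators)]
total = sum(partials) % M
print(total)  # 515: the aggregate is revealed, the individual counters are not
```

No single aggregator learns anything about an individual relay’s counter, yet the published partial sums add up to the true total.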

And then there are similar schemes to this: RAPPOR and PROCHLO from Google, and Prio, which Mozilla have written a blog post about. All of the links are in the slides and on the FOSDEM schedule page, so don’t worry about writing these down.

Finally, I’m looking at putting together some guidelines for performing safe measurement on the Internet. This is targeted primarily at academics, but if people wanted to apply this to analytics platforms, or to monitoring of anything that has users whose privacy you want to respect, then there could be some techniques in here that are applicable.

Ok. So that’s all I have if there are any questions?

Q: I have a question about how many users, or relays, have to be honest so that the network stays secure and private?

A: At the moment when we are collecting statistics we can see – so as I showed earlier the active measurement – we know how much bandwidth a relay can cope with and then we do some load balancing so we have an idea of what fraction of traffic should go to each relay and if one relay is expecting a certain level of traffic and it has wildly different statistics to another relay then we can say that relay is cheating. There isn’t really any incentive to do this and it’s something we can detect quite easily but we are also working on more robust metrics going forward to avoid this being a point where it could be attacked.

Q: A few days ago I heard that with Tor Metrics you are between 2 and 8 million users but you don’t really know in detail what the real numbers are? Can you talk about the variance and which number is more accurate?

A: The 8 million number comes from the PrivCount paper; they did a small study where they looked at unique IP addresses over a day, where we look at concurrent users. They are two different measurements. What we can say is that we know for certain that there are between 2 million and 25 million unique users per day but we’re not sure where in there we fall, and 8 million is a reasonable-ish number. But also they measured IP addresses, and some countries use a lot of NAT, so it could be more than 8 million. It’s tricky but we see the trends.

Q: Your presentation actually implies that you are able to collect more private data than you are doing. It says that the only thing preventing you from collecting private user data is the team’s good will and good intentions. Have I got it wrong? Are there any possibilities for the Tor Project team to collect some private user data?

A: Tor Project does not run the relays. We write the code but there are individual relay operators that run the relays, and if we were to release code that suddenly started collecting lots of private data, people would realise and they wouldn’t run that code. There is a step between the development and it being deployed. It’s possible other people could write that code and run those relays, but if they started to run enough relays that it looked suspicious then people would ask questions there too, so there is a distributed trust model with the relays.

Q: You talk about privacy preserving monitoring, but a couple of years ago we learned that the NSA was able to monitor relays and learn which user was connecting to relays, so is there also research into making sure that Tor users cannot be targeted for using a relay and cannot be monitored?

A: Yes, there is! There is a lot of research in this area and one of them is through obfuscation techniques where you can make your traffic look like something else. They are called pluggable transports and they can make your traffic look like all sorts of things.

07 February, 2019 02:00PM

February 06, 2019

hackergotchi for Gunnar Wolf

Gunnar Wolf

Raspberry Pi 3 Debian Buster *unofficial preview* image update

As I mentioned two months ago, I adopted the Debian Raspberry 3 build scripts, and am now building a clean Buster (Debian Testing) unofficial preview image. And there are some good news to tell!
Yesterday, I was prompted by Martin Zobel-Helas and Holger Levsen to prepare this image after the Buster Debian Installer (d-i) alpha release. While we don't use d-i to build the image, I pushed the build, and found that...
I did quite a bit of work together with Romain Perier, a soon-to-become Debian Maintainer, and he helped get the needed changes in the main Debian kernel, thanks to which we now finally have working wireless support!

Romain also told me he did some tests, building an image very much like this one, but built for armel instead of armhf, and apparently it works correctly on a Raspberry Pi Zero. That means that, if we do a small amount of changes (and tests, of course), we will be able to provide straight Debian images to boot unmodified on all of the Raspberry lineup!
So, as usual:
You can look at the instructions at the Debian Wiki page on RaspberryPi3. Or you can just jump to the downloads, at my people.debian.org:



06 February, 2019 04:43PM by gwolf

Antoine Beaupré

January 2019 report: LTS, Mailman 3, Vero 4k, Kubernetes, Undertime, Monkeysign, oh my!

January is often a long month in our northern region. Very cold, lots of snow, which can mean a lot of fun as well. But it's also a great time to cocoon (or maybe hygge?) in front of the computer and do great things. I think the last few weeks were particularly fruitful, which led to this rather lengthy report that I hope will nonetheless be interesting.

So grab some hot cocoa, a coffee, tea or whatever warm beverage you like (or a cool one if you're in the southern hemisphere) and hopefully you'll learn awesome things. I know I did.

Free software volunteer work

As always, the vast majority of my time was actually spent volunteering on various projects, while scrambling near the end of the month to work on paid stuff. For the first time here I mention my Kubernetes work, but I've also worked on the new Mailman 3 packages, my monkeysign and undertime packages (including a new configuration file support for argparse), random Debian work, and Golang packaging. Oh, and I bought a new toy for my home cinema, which I warmly recommend.

Kubernetes research

While I've written multiple articles on Kubernetes for LWN in the past, I am somewhat embarrassed to say that I don't have much experience running Kubernetes itself for real out there. But for a few months, with a group of fellow sysadmins, we've been exploring various container solutions and gravitated naturally towards Kubernetes. In the last month, I particularly worked on deploying a Ceph cluster with Rook, a tool to deploy storage solutions on a Kubernetes cluster (submitting a patch while I was there). Like many things in Kubernetes, Rook is shipped as a Helm chart, more specifically as an "operator", which might be described (if I understand this right) as a container that talks with Kubernetes to orchestrate other containers.

We've similarly worked on containerizing Nextcloud, which proved to be pretty shitty at behaving like a "cloud" application: secrets, dynamic data and configuration are all mixed up in the config directory, which makes it really hard to manage sanely in a container environment. The only way we found it could work was to mount the configuration as a volume, which means configuration becomes data and can't be controlled through git. Which is bad. This is also how the proposed Nextcloud Helm chart solves this problem (on which I've provided a review), for what it's worth.

We've also worked on integrating GitLab in our workflow, so that we keep configuration as code and deploy on pushes. While GitLab talks a lot about Kubernetes integration, the actual integration features aren't that great: unless I totally misunderstood how it's supposed to work, it seems you need to provide your own container and run kubectl from it, using the tokens provided by GitLab. And if you want to do anything of significance, you will probably need to give GitLab cluster access to your Kubernetes cluster, which kind of freaks me out considering the number of security issues that keep coming out with GitLab recently.

In general, I must say I was very skeptical of Kubernetes when I first attended those conferences: too much hype, buzzwords and suits. I felt that Google just threw us a toy project to play with while they kept the real stuff to themselves. I don't think that analysis is wrong, but I do think Kubernetes has something to offer, especially for organizations still stuck in the "shared hosting" paradigm where you give users a shell account or (S?!)FTP access and run mod_php on top. Containers at least provide some level of isolation out of the box and make such multi-tenant offerings actually reasonable and much more scalable. With a little work, we've been able to setup a fully redundant and scalable storage cluster and Nextcloud service: doing this from scratch wouldn't be that hard either, but it would have been done only for Nextcloud. The trick is the knowledge and experience we gained by doing this with Nextcloud will be useful for all the other apps we'll be hosting in the future. So I think there's definitely something there.

Debian work

I participated in the Montreal BSP, of which Louis-Philippe Véronneau made a good summary. I also sponsored a few uploads and fixed a few bugs. We didn't fix that many bugs, but I gave two workshops, including my now well-tuned packaging 101 workshop, which seems to be always quite welcome. I really wish I could make a video of that talk, because I think it's useful in going through the essentials of Debian packaging and could use a wider audience. In the meantime, my reference documentation is the best you can get.

I've decided to let bugs-everywhere die in Debian. There's a release critical bug and it seems no one is really using this anymore, at least I'm not. I would probably orphan the package once it gets removed from buster, but I'm not actually the maintainer, just an uploader... A promising alternative to BE seems to be git-bug, with support for synchronization with GitHub issues.

I've otherwise tried to get my figurative "house" of Debian packages in order for the upcoming freeze, which meant new updates for

I've also sponsored the introduction of web-mode (RFS #921130), a nice package to edit HTML in Emacs, and filed the usual barrage of bug reports and patches.

Elegant argparse configfile support and new date parser for undertime

I've issued two new releases of my undertime project which helps users coordinate meetings across timezones. I first started working on improving the date parser, which mostly involved finding a new library to handle dates. I started using dateparser, which behaves slightly better, and I ended up packaging it for Debian as well, although I still have to re-upload undertime to use the new dependency.

That was the 1.6.0 release, but that wasn't enough - my users wanted a configuration file! I ended up designing a simple, YAML-based configuration file parser that integrates quite well with argparse, after finding too many issues with existing solutions like ConfigArgParse. I summarized those for the certbot project which suffered from similar issues. I'm quite happy with my small, elegant solution for config file support. It is significantly better than the one I used for Monkeysign, which was (ab)using the fromfile option of argparse.
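
The pattern looks roughly like this (a hedged sketch rather than the actual undertime code, using JSON instead of YAML so it only needs the standard library; the option names are made up):

```python
import argparse
import json
import os

def load_config(path):
    """Read option defaults from a config file, if one exists."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def parse_args(argv=None, config_path="config.json"):
    parser = argparse.ArgumentParser()
    parser.add_argument("--timezone", default="UTC")
    parser.add_argument("--format", default="plain")
    # config file values override the built-in defaults,
    # explicit command-line flags override the config file
    parser.set_defaults(**load_config(config_path))
    return parser.parse_args(argv)

args = parse_args(["--format", "html"], config_path="/nonexistent")
print(args.timezone, args.format)  # UTC html
```

The nice property is the precedence order: built-in defaults, then the config file, then the command line, with argparse doing all the work.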

Mailman 3

Motivated by this post extolling the virtues of good old mailing lists to resist social media hegemony, I did a lot (too much) work on installing Mailman 3 on my own server. I have run Mailman 2 mailing lists for hundreds of clients in my previous job at Koumbit and I have so far used my access there to host a few mailing lists. This time, I wanted to try something new and figured Mailman 3 might be ready, 4 years after the 3.0 release and almost 10 years after the project started.

How wrong I was! Many things don't work: there is no French translation at all (nor any other translation, for that matter), no invite feature, templates translation is buggy, the Debian backport fails with the MySQL version in stable... it's a mess. The complete history of my failure is better documented in mail.

I worked around many of those issues. I like the fact that I was almost able to replace the missing "invite" feature through the API, and Mailman 3 is much better to look at than the older version. They did fix a lot of things and I absolutely love the web interface, which allows users to interact with the mailing list as a forum. But maybe it will take a bit more time before it's ready for my use case.

Right now, I'm hesitant: I could go with a mailing list to connect with friends and family. It works for everyone because everyone uses email, if only for their password resets. The alternative is to use something like a (private?) Discourse instance, which could also double as a comments provider for my blog if I ever decide to switch away from Ikiwiki... Neither seems like a good solution, and both require extra work and maintenance, Discourse particularly so because it is very unlikely it will get shipped as a Debian package.

Vero: my new home cinema box

Speaking of Discourse, the reason I'm thinking about it is I am involved in many online forums running it. It's generally a great experience, although I wish email integration was mandatory - it's great to be able to reply through your email client, and it's not always supported. One of the forums I participate in is the Pixls.us forum where I posted a description of my photography kit, explained different NAS options I'm considering and explained part of my git-annex/darktable workflow.

Another forum I recently started working on is the OSMC.tv forum. I first asked what were the full specifications for their neat little embedded set-top box, the Vero 4k+. I wasn't fully satisfied with the answers (the hardware is not fully open), but I ended up ordering the device and moving the "home cinema services" off of the venerable marcos server, which is going to turn 8 years old this year. This was an elaborate enterprise which involved wiring power outlets (because a ground was faulty), vacuuming the basement (because it was filthy), doing elaborate research on SSHFS setup and performance, dealing with systemd bugs and so on.

In the end it was worth it: my roommates enjoy the new remote control. It's much more intuitive than the previous Bluetooth keyboard, it performs well enough, and is one less thing to overload poor marcos with.

Monkeysign alternatives testing

I already mentioned I was considering Monkeysign retirement and recently a friend asked me to sign his key so I figured it was a great time to test out possible replacements for the project. Turns out things were not as rosy as I thought.

I first tested pius and it didn't behave as well as I hoped. Generally, it asks too many cryptic questions that the user shouldn't have to guess the answers to. Specifically, here are the issues I found in my review:

  1. it forces you to specify your signing key, which is error-prone and needlessly difficult for the user

  2. I don't quite understand what the first question means - there's too much to unpack there: is it for inline PGP/MIME? for sending email at all? for sending individual emails? what's going on?

  3. the second question should be optional: I already specified my key on the command line, it should use that as a From...

  4. the signature level is useless and generally disregarded by all software, including OpenPGP implementations. Even if it were used, 0/1/2/3/s/n/h/q is a pretty horrible user interface.

And then it simply fails to send the email completely on dkg's key, but that might be because his key was so exotic...

Gnome-keysign didn't fare much better - I opened six different issues on the promising project:

  1. what does the internet button do?
  2. signing arbitrary keys in GUI
  3. error in french translation
  4. using mutt as a MUA does not work
  5. signing a key on the commandline never completes
  6. flatpak instructions failure

So, surprisingly, Monkeysign might survive a bit longer, as much as I have come to dislike the poor little thing...

Golang packaging

To help a friend getting the new RiseupVPN package in Debian, I uploaded a bunch of Golang dependencies (bug #919936, bug #919938, bug #919941, bug #919944, bug #919945, bug #919946, bug #919947, bug #919948) in Debian. This involved filing many bugs upstream as many of those (often tiny) packages didn't have explicit licences, so many of those couldn't actually be uploaded, but the ITPs are there and hopefully someone will complete that thankless work.

I also tried to package two other useful Golang programs, dmarc-cat and gotop, both of which also required a significant number of dependencies to be packaged (bug #920387, bug #920388, bug #920389, bug #920390, bug #921285, bug #921286, bug #921287, bug #921288). dmarc-cat has just been accepted in Debian - it's very useful to decipher DMARC reports you get when you configure your DNS to receive such reports. This is part of a larger effort to modernize my DNS and mail configuration.

But gotop is just starting - none of the dependencies have been updated just yet, and I'm running out of steam a little, even though it looks like an awesome package.

Other work

  • I hosed my workstation / laptop backup by trying to be too clever with Borg. It bit back and left me holding the candle, the bastard.

  • Expanded on my disk testing documentation to include better examples of fio as part of my neglected stressant package

GitHub said I "opened 21 issues in 14 other repositories" which seems a tad insane. And there's of course probably more stuff I'm forgetting here.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

sbuild regression

My first stop this month was to notice a problem with sbuild from buster running on jessie chroots (bug #920227). After discussions on IRC, where fellow Debian Developers basically fabricated me a patch on the fly, I sent merge request #5 which was promptly accepted and should be part of the next upload.


systemd

I again worked a bit on systemd. I marked CVE-2018-16866 as not affecting jessie, because the vulnerable code was introduced in later versions. I backported fixes for CVE-2018-16864 and CVE-2018-16865 and published the resulting package as DLA-1639-1, after doing some smoke-testing.

I still haven't gotten the courage to dig back in the large backport of tmpfiles.c required to fix CVE-2018-6954.

tiff review

I did a quick review of the fix for CVE-2018-19210 proposed upstream, which seems to have brought upstream's attention back to the issue and led to the fix finally being merged.

Enigmail EOL

After reflecting on the issue one last time, I decided to mark Enigmail as EOL in jessie, which involved an upload of debian-security-support to jessie (DLA-1657-1), unstable and a stable-pu.

DLA / website work

I worked again on fixing the LTS workflow with the DLAs on the main website. Reminder: hundreds of DLAs are missing from the website (bug #859122) and we need to figure out a way to automate the import of newer ones (bug #859123).

The details of my work are in this post but basically, I readded a bunch more DLAs to the MR and got some good feedback from the www team (in MR #47). There's still some work to be done on the DLA parser, although I have merged my own improvements (MR #46) as I felt they had been sitting for review long enough.

Next step is to deal with noise like PGP signatures correctly and thoroughly review the proposed changes.

While I was in the webmaster's backyard, I tried to help with a few things by merging a LTS errata and a paypal integration note although the latter ended up being a mistake that was reverted. I also rejected some issues (MR #13, MR #15) during a quick triage.

phpMyAdmin review

After reading this email from Lucas Kanashiro, I reviewed CVE-2018-19968 and reviewed and tested CVE-2018-19970.

06 February, 2019 03:32PM

hackergotchi for Keith Packard

Keith Packard


Snek-Duino: Snek with Arduino I/O

At the suggestion of a fellow LCA attendee, I've renamed my mini Python-like language from 'Newt' to 'Snek'. Same language, new name. I've made a bunch of progress in the implementation and it's getting pretty usable.

Here's our target system running Snek code that drives the GPIO pins. This example uses Pin 3 in PWM mode to fade up and down:

# Fade an LED on digital pin 3 up and down

led = 3
talkto(led)  # use pin 3 for output (restored; required by the GPIO API below)

def up():
    for i in range(0,1,0.001):
        setpower(i)

def down():
    for i in range(1,0,-0.001):
        setpower(i)

def blink(n):
    for i in range(n):
        up()
        down()

Snek-Duino GPIO API

The GPIO interface I've made borrows ideas from the Lego Logo system. Here's an old presentation about Lego Logo. In brief:

  • talkto(output) or talkto((output,direction)) — set the specified pin as an output and use this pin for future output operations. If the parameter is a list or tuple, use the first element as the output pin and the second element as the direction pin.

  • listento(input) — set the specified pin as an input (with pull-ups enabled) and use this pin for future input operations.

  • on() — turn the current output pin on. If there is a recorded power level for the pin, use it.

  • off() — turn the current output pin off.

  • setpower(power) — record the power level for the current output pin. If it is already on, change the power level immediately. Power levels go from 0-1 inclusive.

  • read() — report the state of the current input pin. Pins 0-13 are the digital pins and report values 0 or 1. Pins 14-19 are the analog input pins and report values from 0-1.

  • setleft() — set the current direction pin to 1.

  • setright() — set the current direction pin to 0.

With this simple API, the user has fairly complete control over the pins at the level necessary to read from switches and light sensors and write to lights and motor controllers.

Another Example

This implements a dimmer — reading from an analog input and mirroring that value to a pin.

led = 3
dial = 14

def track():
    talkto(led)
    listento(dial)
    on()
    while True:
        p = read()
        if p == 1:
            on()
        else:
            setpower(p)  # mirror the dial position (loop body reconstructed)

Further Work

Right now, all Snek code has to be typed at the command line as there's no file system on the device. The ATmega328P on this board has a 1k EEPROM that I'd like to use to store Snek code to be loaded and run when the device powers on.

And, I'd like to have a limited IDE to run on the host. I'm wondering if I can create something using the standard Python3 curses module so that the IDE could be written in fairly simple Python code. I want a split screen with one portion being a text editor with code to be sent to the device and the other offering a command line. Flipping away from the text editor would send the code to the device so that it could be run.

There's no way to interrupt a running program on the Arduino. I need interrupt-driven keyboard input so that I can catch ^C and halt the interpreter. This would also make it easy to add flow control so you can send long strings of code without dropping most of the characters.

06 February, 2019 07:54AM

François Marier

Encrypted connection between SIP phones using Asterisk

Here is the setup I put together to have two SIP phones connect together over an encrypted channel. Since the two phones do not support encryption, I used Asterisk to provide the encrypted channel over the Internet.

Installing Asterisk

First of all, each VoIP phone is in a different physical location and so I installed an Asterisk server in each house.

One of the servers is a Debian stretch machine and the other runs Ubuntu 18.04 (bionic). Regardless, I used a fairly standard configuration and simply installed the asterisk package on both machines:

apt install asterisk

SIP phones

The two phones, both Snom 300, connect to their local asterisk server on its local IP address and use the same details as I have put in /etc/asterisk/sip.conf:


Dialplan and voicemail

The extension number above (1000) maps to the following configuration blurb in /etc/asterisk/extensions.conf:

exten => 1000,1,Dial(SIP/1000,20)
exten => 1000,n,Goto(in1000-${DIALSTATUS},1)
exten => 1000,n,Hangup
exten => in1000-BUSY,1,Hangup(17)
exten => in1000-CONGESTION,1,Hangup(3)
exten => in1000-CHANUNAVAIL,1,VoiceMail(1000@mailboxes,su)
exten => in1000-CHANUNAVAIL,n,Hangup(3)
exten => in1000-NOANSWER,1,VoiceMail(1000@mailboxes,su)
exten => in1000-NOANSWER,n,Hangup(16)
exten => _in1000-.,1,Hangup(16)

The internal context maps to the following blurb in /etc/asterisk/extensions.conf:

include => home
include => iax2users
exten => 707,1,VoiceMailMain(1000@mailboxes)

and 1000@mailboxes maps to the following entry in /etc/asterisk/voicemail.conf:

1000 => 1234,home,person@email.com

(with 1234 being the voicemail PIN).

Encrypted IAX links

In order to create a virtual link between the two servers using the IAX protocol, I created user credentials on each server in /etc/asterisk/iax.conf:


then I created an entry for the other server in the same file:


The second machine contains the same configuration with the exception of the server name (server1 instead of server2) and hostname (server1.dyn.fmarier.org instead of server2.dyn.fmarier.org).

Speed dial for the other phone

Finally, to allow each phone to ring one another by dialing 2000, I put the following in /etc/asterisk/extensions.conf:

include => home
exten => 2000,1,Set(CALLERID(all)=Francois Marier <2000>)
exten => 2000,2,Dial(IAX2/server1/1000)

and of course a similar blurb on the other machine:

include => home
exten => 2000,1,Set(CALLERID(all)=Other Person <2000>)
exten => 2000,2,Dial(IAX2/server2/1000)

Firewall rules

Since we are using the IAX protocol instead of SIP, there is only one port to open in /etc/network/iptables.up.rules for the remote server:

# IAX2 protocol
-A INPUT -s x.x.x.x/y -p udp --dport 4569 -j ACCEPT

where x.x.x.x/y is the IP range allocated to the ISP that the other machine is behind.

If you want to restrict traffic on the local network as well, then these ports need to be open for the SIP phone to be able to connect to its local server:

# VoIP phones (internal)
-A INPUT -s <phone-ip> -p udp --dport 5060 -j ACCEPT
-A INPUT -s <phone-ip> -p udp --dport 10000:20000 -j ACCEPT

where <phone-ip> is the static IP address allocated to the SIP phone.

06 February, 2019 06:40AM

Antoine Beaupré

Debian build helpers: dh dominates

It's been a while since someone did this. Back in 2009, Joey Hess made a talk at Debconf 9 about debhelper and mentioned in his slides (PDF) that it was used in most Debian packages. Here was the ratio (page 10):

  • debhelper: 54%
  • cdbs: 25%
  • dh: 9%
  • other: 3%

Then Lucas Nussbaum made graphs from snapshot.debian.org that did the same, but with history. His latest post, from 2015 (archive link, because the original is missing images), confirmed Joey's 2009 results. It also showed cdbs slowly declining and a sharp uptick in dh usage (over plain debhelper). Here were the approximate numbers:

  • debhelper: 15%
  • cdbs: 15%
  • dh: 69%
  • other: 1%

I ran the numbers again. Jakub Wilk pointed me to the lintian.debian.org output that can be used to get the current state easily:

$ curl -so lintian.log.gz https://lintian.debian.org/lintian.log.gz
$ zgrep debian-build-system lintian.log.gz | awk '{print $NF}' | sort | uniq -c | sort -nr
  25772 dh
   2268 debhelper
   2124 cdbs-with-debhelper.mk
    257 dhmk
    123 other
      8 cdbs-without-debhelper.mk

Shoving this in a LibreOffice spreadsheet (sorry, my R/Python brain is slow today) gave me this nice little graph:

Pie chart of showing a large proportion of dh packages, and much less of debhelper and cdbs

As of today, the numbers are now:

  • debhelper: 7%
  • cdbs: 7%
  • dh: 84%
  • other: 1%

(No, the numbers don't add up. Yes, it's a rounding error. Blame LibreOffice.)
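Incidentally, the shares can be computed straight from those counts in the shell, no spreadsheet needed; a quick sketch (counts as above, with the two cdbs variants collapsed into one label):

```shell
printf '%s\n' '25772 dh' '2268 debhelper' '2124 cdbs' '257 dhmk' '123 other' '8 cdbs' |
  awk '{count[$2] += $1; total += $1}
       END {for (s in count) printf "%s: %.0f%%\n", s, 100 * count[s] / total}'
# dh works out to 84%, debhelper and cdbs to 7% each
```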

So while cdbs lost 10% of the packages in 6 years, it lost another half of its share in the last 4. It's also interesting to note that debhelper and cdbs are both shrinking at a similar rate.

This confirms that debhelper development is where everything is happening right now. The new dh(1) sequencer is also a huge improvement that almost everyone has adopted wholeheartedly.

Now of course, that remaining 15% of debhelper/cdbs (or just 7% of cdbs, depending on how pedantic you are) will be the hard part to transition. Notice how the 1% of "other" packages hasn't really moved in the last four years: that's because some packages in Debian are old, abandoned, ignored, complicated, or all of the above. So it will be difficult to convert the remaining packages and finalize this great unification Joey (unknowingly) started ten years ago, as the remaining packages are probably the hard, messy, old ones no one wants to fix because, well, "they're not broken so don't fix it".

Still, it's nice to see us agree on something for a change. I'd be quite curious to see an update of Lucas' historical graphs. It would be particularly useful to see the impact of the old Alioth server replacement with salsa.debian.org, because it runs GitLab and only supports Git. Without an easy-to-use internal hosting service, I doubt SVN, Darcs, Bzr and whatever is left in "other" there will survive very long.

06 February, 2019 12:54AM

February 05, 2019

hackergotchi for Nicolas Dandrimont

Nicolas Dandrimont

It is complete!

(well, at least the front of it is)

After last week’s DebConf Video Team sprint (thanks again to Jasper @ Linux Belgium for hosting us), the missing components for the stage box turned up right as I was driving back to Paris, and I could take the time to assemble them tonight.

The final inventory of this 6U rack is, from top to bottom:

  • U1: 1 Shure UA844+SWB antenna and power distribution system
  • U2 and U3: 4 Shure BLX4R/S8 receivers
  • U4: 1 Numato Opsis, 1 Minnowboard Turbot, 1 Netgear Gigabit Ethernet Switch to do presenter laptop (HDMI) capture, all neatly packed in a rackmount case.
  • U5 and U6: 1 tray with 2 BLX1/S8 belt packs with MX153T earset microphones and 2 BLX2/SM58 handheld microphone assemblies

This combination of audio and video equipment is all the things that the DebConf Video Team needs to have near the stage to record presentations in one room.

The next step will be to get the back panel milled to properly mount connectors, so that we don’t have to take the back of the box apart to wire things up 🙂

You may see this equipment in action at one of Debian’s upcoming events.

05 February, 2019 08:07PM by olasd

Molly de Blanc

Free software activities (January, 2019)

January was another quiet month for free software. This isn't to say I wasn't busy, but merely that there were fewer things going on, with those things being more demanding of my attention. I'm including some more banal activities both to pad out the list and to draw some attention to the labor that goes into free software participation.

A photo of two mugs of eggnog, sprinkled with nutmeg and cinnamon. The mugs feature a winter scene of characters from the Moomin books.

January activities (personal)

  • Debian Anti-harassment covered several incidents. These have not yet been detailed in an email to the Debian Project mailing list. I won’t get into details here, due to the sensitive nature of some of the conversations.
  • We began planning for Debian involvement in Google Summer of Code and Outreachy.
  • I put together a slide deck and prepared for FOSDEM. More details about FOSDEM next month! In the mean time, check out my talk description.

January activities (professional)

  • We wrapped up the end of the year fundraiser.
  • We’re planning LibrePlanet 2019! I hope to see you there.
  • I put together a slide deck for CopyLeft Conf, which I’ll detail more in February. I drew and scanned in my slides, which is a time-consuming process.

05 February, 2019 05:03PM by mollydb

Reproducible builds folks

Reproducible Builds: Weekly report #197

Here’s what happened in the Reproducible Builds effort between Sunday January 27th and Saturday February 2nd 2019:

  • There was yet more progress towards making the Debian Installer images reproducible. Following on from last week, Chris Lamb performed some further testing of the generated images resulting in two patches to ensure that builds were reproducible regardless of both the user’s umask(2) (filed as #920631) and even the underlying ordering of files on disk (#920676). It is hoped these can be merged for the next Debian Installer alpha/beta after the recent “Alpha 5” release.

  • Tails, the privacy-oriented “live” operating system, released its first USB image, which is reproducible.

  • Chris Lamb added a check for .sass-cache directories to Lintian, the static analysis tool that performs automated checks against Debian packages. As they contain non-deterministic subdirectories, they immediately contribute towards an unreproducible build (#920593).
  • disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into filesystems for easy and reliable testing. Chris Lamb fixed an issue this week in the handling of the fsyncdir system call to ensure dpkg(1) can “flush” /var/lib/dpkg correctly [].

  • Hervé Boutemy made more updates to the reproducible-builds.org project website, including documenting mvn.build-root []. In addition, Chris Smith fixed a typo on the tools page [] and Holger Levsen added a link to Lukas’s report to the recent Paris Summit page [].

  • strip-nondeterminism is our tool that post-processes files to remove known non-deterministic output. This week, Chris Lamb investigated an issue regarding the tool not normalising file ownerships in .epub files that was originally identified by Holger Levsen, as well as clarified the negative message in test failures [] and performed some code cleanups (eg. []).

  • Chris Lamb updated the SSL certificate for try.diffoscope.org to ensure validation after the deprecation of TLS-SNI-01 validation in LetsEncrypt.

  • Reproducible Builds were present at FOSDEM 2019 handing out t-shirts to contributors. Thank you!

  • On Tuesday February 26th Chris Lamb will speak at Speck&Tech 31 “Open Security” on Reproducible Builds in Trento, Italy.

  • 6 Debian package reviews were added, 3 were updated and 5 were removed this week, adding to our knowledge about identified issues. Chris Lamb unearthed a new toolchain issue, randomness_in_documentation_underscore_downloads_generated_by_sphinx.

Packages reviewed and fixed, and bugs filed

Test framework development

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This week, Holger Levsen made a large number of improvements including:

  • Arch Linux-specific changes:
    • The scheduler is now run every 4h, so stats are now presented for this time period. []
    • Fix detection of bad builds. []
  • LEDE/OpenWrt-specific changes:
    • Make OpenSSH usable with a TCP port other than 22. This is needed for our OSUOSL nodes. []
    • Perform a minor refactoring of the build script. []
  • NetBSD-specific changes:
    • Add a ~jenkins/.ssh/config to fix jobs regarding OpenSSH running on non-standard ports. []
    • Add a note that osuosl171 is constantly online. []
  • Misc/generic changes:
    • Use same configuration for df_inode as for df to reduce noise. []
    • Remove a now-bogus warning; we have its parallel in Git now. []
    • Define ControlMaster and ControlPath in our OpenSSH configurations. []

In addition, Mattia Rizzolo and Vagrant Cascadian performed maintenance of the build nodes. ([], [], [], etc.)

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, intrigeri & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

05 February, 2019 03:15PM

Michael Stapelberg

Looking for a new Raspberry Pi image maintainer

This is taken care of: Gunnar Wolf has taken on maintenance of the Raspberry Pi image. Thank you!

(Cross-posting this message I sent to pkg-raspi-maintainers for broader visibility.)

I started building Raspberry Pi images because I thought there should be an easy, official way to install Debian on the Raspberry Pi.

I still believe that, but I’m not actually using Debian on any of my Raspberry Pis anymore¹, so my personal motivation to do any work on the images is gone.

On top of that, I realize that my commitments exceed my spare time capacity, so I need to get rid of responsibilities.

Therefore, I’m looking for someone to take up maintainership of the Raspberry Pi images. Numerous people have reached out to me with thank you notes and questions, so I think the user interest is there. Also, I’ll be happy to answer any questions that you might have and that I can easily answer. Please reply here (or in private) if you’re interested.

If I can’t find someone within the next 7 days, I’ll put up an announcement message in the raspi3-image-spec README, wiki page, and my blog posts, stating that the image is unmaintained and looking for a new maintainer.

Thanks for your understanding,

① just in case you’re curious, I’m now running cross-compiled Go programs directly under a Linux kernel and minimal userland, see https://gokrazy.org/

05 February, 2019 08:42AM

TurboPFor: an analysis


I have recently been looking into speeding up Debian Code Search. As a quick reminder, search engines answer queries by consulting an inverted index: a map from term to documents containing that term (called a “posting list”). See the Debian Code Search Bachelor Thesis (PDF) for a lot more details.

Currently, Debian Code Search does not store positional information in its index, i.e. the index can only reveal that a certain trigram is present in a document, not where or how often.

From analyzing Debian Code Search queries, I knew that identifier queries (70%) massively outnumber regular expression queries (30%). When processing identifier queries, storing positional information in the index enables a significant optimization: instead of identifying the possibly-matching documents and having to read them all, we can determine matches from querying the index alone, no document reads required.

This moves the bottleneck: having to read all possibly-matching documents requires a lot of expensive random I/O, whereas having to decode long posting lists requires a lot of cheap sequential I/O.

Of course, storing positions comes with a downside: the index is larger, and a larger index takes more time to decode when querying.

Hence, I have been looking at various posting list compression/decoding techniques, to figure out whether we could switch to a technique which would retain (or improve upon!) current performance despite much longer posting lists and produce a small enough index to fit on our current hardware.


I started looking into this space because of Daniel Lemire’s Stream VByte post. As usual, Daniel’s work is well presented, easily digestible and accompanied by not just one, but multiple implementations.

I also looked for scientific papers to learn about the state of the art and classes of different approaches in general. The best I could find is Compression, SIMD, and Postings Lists. If you don’t have access to the paper, I hear that Sci-Hub is helpful.

The paper is from 2014, and doesn’t include all algorithms. If you know of a better paper, please let me know and I’ll include it here.

Eventually, I stumbled upon an algorithm/implementation called TurboPFor, which the rest of the article tries to shine some light on.


If you’re wondering: PFor stands for Patched Frame Of Reference and describes a family of algorithms. The principle is explained e.g. in SIMD Compression and the Intersection of Sorted Integers (PDF).

The TurboPFor project’s README file claims that TurboPFor256 compresses with a rate of 5.04 bits per integer, and can decode with 9400 MB/s on a single thread of an Intel i7-6700 CPU.

For Debian Code Search, we use unsigned integers of 32 bit (uint32), which TurboPFor will compress into as few bits as required.

Dividing Debian Code Search’s file sizes by the total number of integers, I get similar values, at least for the docid index section:

  • 5.49 bits per integer for the docid index section
  • 11.09 bits per integer for the positions index section

I can confirm the order of magnitude of the decoding speed, too. My benchmark calls TurboPFor from Go via cgo, which introduces some overhead. To exclude disk speed as a factor, data comes from the page cache. The benchmark sequentially decodes all posting lists in the specified index, using as many threads as the machine has cores¹:

  • ≈1400 MB/s on a 1.1 GiB docid index section
  • ≈4126 MB/s on a 15.0 GiB position index section

I think the numbers differ because the position index section contains larger integers (requiring more bits). I repeated both benchmarks, capped to 1 GiB, and decoding speeds still differed, so it is not just the size of the index.

Compared to Streaming VByte, a TurboPFor256 index comes in at just over half the size, while still reaching 83% of Streaming VByte’s decoding speed. This seems like a good trade-off for my use-case, so I decided to have a closer look at how TurboPFor works.

① See cmd/gp4-verify/verify.go run on an Intel i9-9900K.


To confirm my understanding of the details of the format, I implemented a pure-Go TurboPFor256 decoder. Note that it is intentionally not optimized as its main goal is to use simple code to teach the TurboPFor256 on-disk format.

If you’re looking to use TurboPFor from Go, I recommend using cgo. cgo’s function call overhead is about 51ns as of Go 1.8, which will easily be offset by TurboPFor’s carefully optimized, vectorized (SSE/AVX) code.

With that caveat out of the way, you can find my teaching implementation at https://github.com/stapelberg/goturbopfor

I verified that it produces the same results as TurboPFor’s p4ndec256v32 function for all posting lists in the Debian Code Search index.

On-disk format

Note that TurboPFor does not fully define an on-disk format on its own. When encoding, it turns a list of integers into a byte stream:

size_t p4nenc256v32(uint32_t *in, size_t n, unsigned char *out);

When decoding, it decodes the byte stream into an array of integers, but needs to know the number of integers in advance:

size_t p4ndec256v32(unsigned char *in, size_t n, uint32_t *out);

Hence, you’ll need to keep track of the number of integers and length of the generated byte streams separately. When I talk about on-disk format, I’m referring to the byte stream which TurboPFor returns.

The TurboPFor256 format uses blocks of 256 integers each, followed by a trailing block — if required — which can contain fewer than 256 integers:

SIMD bitpacking is used for all blocks but the trailing block (which uses regular bitpacking). This is not merely an implementation detail for decoding: the on-disk structure is different for blocks which can be SIMD-decoded.

Each block starts with a 2 bit header, specifying the type of the block:

Each block type is explained in more detail in the following sections.

Note that none of the block types store the number of elements: you will always need to know how many integers you need to decode. Also, you need to know in advance how many bytes you need to feed to TurboPFor, so you will need some sort of container format.

Further, TurboPFor automatically chooses the best block type for each block.

Constant block

A constant block (all integers of the block have the same value) consists of a single value of a specified bit width ≤ 32. This value will be stored in each output element for the block. E.g., after calling decode(input, 3, output) with input being the constant block depicted below, output is {0xB8912636, 0xB8912636, 0xB8912636}.

The example shows the maximum number of bytes (5). Smaller integers will use fewer bytes: e.g. an integer which can be represented in 3 bits will only use 2 bytes.

Bitpacking block

A bitpacking block specifies a bit width ≤ 32, followed by a stream of bits. Each value starts at the Least Significant Bit (LSB), i.e. the 3-bit values 0 (000b) and 5 (101b) are encoded as 101000b.
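To make the LSB-first layout concrete, here is a toy (non-SIMD, unoptimized) bit packer in Go; my illustration, not TurboPFor's code:

```go
package main

import "fmt"

// pack packs each value LSB-first into a byte stream, width bits per value.
func pack(vals []uint32, width uint) []byte {
	var out []byte
	var acc uint64 // bit accumulator
	var nbits uint // number of bits currently in acc
	for _, v := range vals {
		acc |= uint64(v) << nbits
		nbits += width
		for nbits >= 8 {
			out = append(out, byte(acc))
			acc >>= 8
			nbits -= 8
		}
	}
	if nbits > 0 { // pad the last byte with zero bits
		out = append(out, byte(acc))
	}
	return out
}

// unpack reverses pack, reading n values of the given width.
func unpack(b []byte, width uint, n int) []uint32 {
	out := make([]uint32, 0, n)
	var acc uint64
	var nbits uint
	i := 0
	for len(out) < n {
		for nbits < width {
			acc |= uint64(b[i]) << nbits
			i++
			nbits += 8
		}
		out = append(out, uint32(acc&((1<<width)-1)))
		acc >>= width
		nbits -= width
	}
	return out
}

func main() {
	// the 3-bit values 0 and 5 from the text encode to 101000b
	fmt.Printf("%06b\n", pack([]uint32{0, 5}, 3)[0]) // 101000
	fmt.Println(unpack([]byte{0x28}, 3, 2))          // [0 5]
}
```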

Bitpacking with exceptions (bitmap) block

The constant and bitpacking block types work well for integers which don’t exceed a certain width, e.g. for a series of integers of width ≤ 5 bits.

For a series of integers where only a few values exceed an otherwise common width (say, two values require 7 bits, the rest requires 5 bits), it makes sense to cut the integers into two parts: value and exception.

In the example below, decoding the third integer out2 (000b) requires combination with exception ex0 (10110b), resulting in 10110000b.

The number of exceptions can be determined by summing the 1 bits in the bitmap using the popcount instruction.

Bitpacking with exceptions (variable byte)

When the exceptions are not uniform enough, it makes sense to switch from bitpacking to a variable byte encoding:

Decoding: variable byte

The variable byte encoding used by the TurboPFor format is similar to the one used by SQLite, which is described, alongside other common variable byte encodings, at github.com/stoklund/varint.

Instead of using individual bits for dispatching, this format classifies the first byte (b[0]) into ranges:

  • [0—176]: the value is b[0]
  • [177—240]: a 14 bit value is in b[0] (6 high bits) and b[1] (8 low bits)
  • [241—248]: a 19 bit value is in b[0] (3 high bits), b[1] and b[2] (16 low bits)
  • [249—255]: a 32 bit value is in b[1], b[2], b[3] and possibly b[4]

Here is the space usage of different values:

  • [0—176] are stored in 1 byte (as-is)
  • [177—16560] are stored in 2 bytes, with the highest 6 bits added to 177
  • [16561—540848] are stored in 3 bytes, with the highest 3 bits added to 241
  • [540849—16777215] are stored in 4 bytes, with 0 added to 249
  • [16777216—4294967295] are stored in 5 bytes, with 1 added to 249

An overflow marker will be used to signal that encoding the values would be less space-efficient than simply copying them (e.g. if all values require 5 bytes).

This format is very space-efficient: it packs 0-176 into a single byte, as opposed to 0-127 (most other encodings). At the same time, it can be decoded very quickly, as only the first byte needs to be compared to decode a value (similar to PrefixVarint).
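Translating the ranges above into code, a decoding sketch in Go could look like this (my reading of the description; in particular, the byte order for the ≥ 249 cases is an assumption, as the text only says which bytes hold the value):

```go
package main

import "fmt"

// decodeVarint decodes one integer from b, following the ranges described
// above. It returns the value and the number of bytes consumed.
func decodeVarint(b []byte) (uint32, int) {
	b0 := uint32(b[0])
	switch {
	case b0 <= 176: // value stored as-is in one byte
		return b0, 1
	case b0 <= 240: // 14-bit payload, offset by 177
		return 177 + ((b0-177)<<8 | uint32(b[1])), 2
	case b0 <= 248: // 19-bit payload, offset by 16561
		return 16561 + ((b0-241)<<16 | uint32(b[1])<<8 | uint32(b[2])), 3
	case b0 == 249: // raw 24-bit value in b[1..3]; byte order assumed little-endian
		return uint32(b[1]) | uint32(b[2])<<8 | uint32(b[3])<<16, 4
	default: // raw 32-bit value in b[1..4]; byte order assumed little-endian
		return uint32(b[1]) | uint32(b[2])<<8 | uint32(b[3])<<16 | uint32(b[4])<<24, 5
	}
}

func main() {
	fmt.Println(decodeVarint([]byte{42}))            // 42 1
	fmt.Println(decodeVarint([]byte{240, 255}))      // 16560 2  (top of the 2-byte range)
	fmt.Println(decodeVarint([]byte{248, 255, 255})) // 540848 3 (top of the 3-byte range)
}
```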

Decoding: bitpacking

Regular bitpacking

In regular (non-SIMD) bitpacking, integers are stored on disk one after the other, padded to a full byte, as a byte is the smallest addressable unit when reading data from disk. For example, if you bitpack only one 3 bit int, you will end up with 5 bits of padding.

SIMD bitpacking (256v32)

SIMD bitpacking works like regular bitpacking, but processes 8 uint32 little-endian values at the same time, leveraging the AVX instruction set. The following illustration shows the order in which 3-bit integers are decoded from disk:

In Practice

For a Debian Code Search index, 85% of posting lists are short enough to only consist of a trailing block, i.e. no SIMD instructions can be used for decoding.

The distribution of block types looks as follows:

  • 72% bitpacking with exceptions (bitmap)
  • 19% bitpacking with exceptions (variable byte)
  • 5% constant
  • 4% bitpacking

Constant blocks are mostly used for posting lists with just one entry.


The TurboPFor on-disk format is very flexible: with its 4 different kinds of blocks, chances are high that a very efficient encoding will be used for most integer series.

Of course, the flip side of covering so many cases is complexity: the format and implementation take quite a bit of time to understand — hopefully this article helps a little! For environments where the C TurboPFor implementation cannot be used, smaller algorithms might be simpler to implement.

That said, if you can use the TurboPFor implementation, you will benefit from a highly optimized SIMD code base, which will most likely be an improvement over what you’re currently using.

05 February, 2019 08:18AM

February 04, 2019


I have heard a number of times that sbuild is too hard to get started with, and hence people don’t use it.

To reduce hurdles from using/contributing to Debian, I wanted to make sbuild easier to set up.

sbuild ≥ 0.74.0 provides a Debian package called sbuild-debian-developer-setup. Once installed, run the sbuild-debian-developer-setup(1) command to create a chroot suitable for building packages for Debian unstable.

On a system without any sbuild/schroot bits installed, a transcript of the full setup looks like this:

% sudo apt install -t unstable sbuild-debian-developer-setup
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libsbuild-perl sbuild schroot
Suggested packages:
  deborphan btrfs-tools aufs-tools | unionfs-fuse qemu-user-static
Recommended packages:
  exim4 | mail-transport-agent autopkgtest
The following NEW packages will be installed:
  libsbuild-perl sbuild sbuild-debian-developer-setup schroot
0 upgraded, 4 newly installed, 0 to remove and 1454 not upgraded.
Need to get 1.106 kB of archives.
After this operation, 3.556 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://localhost:3142/deb.debian.org/debian unstable/main amd64 libsbuild-perl all 0.74.0-1 [129 kB]
Get:2 http://localhost:3142/deb.debian.org/debian unstable/main amd64 sbuild all 0.74.0-1 [142 kB]
Get:3 http://localhost:3142/deb.debian.org/debian testing/main amd64 schroot amd64 1.6.10-4 [772 kB]
Get:4 http://localhost:3142/deb.debian.org/debian unstable/main amd64 sbuild-debian-developer-setup all 0.74.0-1 [62,6 kB]
Fetched 1.106 kB in 0s (5.036 kB/s)
Selecting previously unselected package libsbuild-perl.
(Reading database ... 276684 files and directories currently installed.)
Preparing to unpack .../libsbuild-perl_0.74.0-1_all.deb ...
Unpacking libsbuild-perl (0.74.0-1) ...
Selecting previously unselected package sbuild.
Preparing to unpack .../sbuild_0.74.0-1_all.deb ...
Unpacking sbuild (0.74.0-1) ...
Selecting previously unselected package schroot.
Preparing to unpack .../schroot_1.6.10-4_amd64.deb ...
Unpacking schroot (1.6.10-4) ...
Selecting previously unselected package sbuild-debian-developer-setup.
Preparing to unpack .../sbuild-debian-developer-setup_0.74.0-1_all.deb ...
Unpacking sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...
Setting up schroot (1.6.10-4) ...
Created symlink /etc/systemd/system/multi-user.target.wants/schroot.service → /lib/systemd/system/schroot.service.
Setting up libsbuild-perl (0.74.0-1) ...
Processing triggers for man-db ( ...
Setting up sbuild (0.74.0-1) ...
Setting up sbuild-debian-developer-setup (0.74.0-1) ...
Processing triggers for systemd (236-1) ...

% sudo sbuild-debian-developer-setup
The user `michael' is already a member of `sbuild'.
I: SUITE: unstable
I: TARGET: /srv/chroot/unstable-amd64-sbuild
I: MIRROR: http://localhost:3142/deb.debian.org/debian
I: Running debootstrap --arch=amd64 --variant=buildd --verbose --include=fakeroot,build-essential,eatmydata --components=main --resolve-deps unstable /srv/chroot/unstable-amd64-sbuild http://localhost:3142/deb.debian.org/debian
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages 
I: Validating Packages 
I: Found packages in base already in required: apt 
I: Resolving dependencies of required packages...
I: Successfully set up unstable chroot.
I: Run "sbuild-adduser" to add new sbuild users.
ln -s /usr/share/doc/sbuild/examples/sbuild-update-all /etc/cron.daily/sbuild-debian-developer-setup-update-all
Now run `newgrp sbuild', or log out and log in again.

% newgrp sbuild

% sbuild -d unstable hello
sbuild (Debian sbuild) 0.74.0 (14 Mar 2018) on x1

| hello (amd64)                                Mon, 19 Mar 2018 07:46:14 +0000 |

Package: hello
Distribution: unstable
Machine Architecture: amd64
Host Architecture: amd64
Build Architecture: amd64
Build Type: binary

I hope you’ll find this useful.

04 February, 2019 06:08PM

dput usability changes

dput-ng ≥ 1.16 contains two usability changes which make uploading easier:

  1. When no arguments are specified, dput-ng auto-selects the most recent .changes file (with confirmation).
  2. Instead of erroring out when detecting an unsigned .changes file, debsign(1) is invoked to sign the .changes file before proceeding.

With these changes, after building a package, you just need to type dput (in the correct directory of course) to sign and upload it.

04 February, 2019 06:08PM

hackergotchi for Julien Danjou

Julien Danjou

How to Log Properly in Python

How to Log Properly in Python

Logging is one of the most underrated features. Often ignored by software engineers, it can save you time when your application's running in production.

Most teams don't think about it until it's too late in their development process. It's only when things start to go wrong in deployments that somebody realizes logging is missing.


The Twelve-Factor App defines logs as a stream of aggregated, time-ordered events collected from the output streams of all running processes. It also describes how applications should handle their logging. We can summarize those guidelines as:

  • Logs have no fixed beginning or end.
  • Print logs to stdout.
  • Print logs unbuffered.
  • The environment is responsible for capturing the stream.

From my experience, this set of rules is a good trade-off. Logs have to be kept pretty simple to be efficient and reliable. Building complex logging systems might make it harder to get insight into a running application.

There's also no point in duplicating effort on log management (e.g., log file rotation, archival policy, etc.) in your different applications. Having an external workflow that can be shared across different programs seems more efficient.

In Python

Python provides a logging subsystem with its logging module. This module provides a Logger object that allows you to emit messages with different levels of criticality. Those messages can then be filtered and sent to different handlers.

Let's have an example:

import logging

logger = logging.getLogger("myapp")
logger.error("something wrong")

Depending on the version of Python you're running, you'll either see:

No handlers could be found for logger "myapp"

or:

something wrong

Python 2 used to have no logging setup by default, so it would print an error message about no handler being found. Since Python 3, a default handler outputting to stdout is now installed — matching the requirements from the 12factor App.

However, this default setup is far from being perfect.


The default format that Python uses does not embed any contextual information. There is no way to know the name of the logger — myapp in the previous example — nor the date and time of the logged message.

You must configure the Python logging subsystem to enhance its output format.
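With the standard library alone, such a setup might look like this (a sketch; the format string below is my choice, not something the logging module ships):

```python
import logging

# Stdlib-only sketch: add timestamp, PID, level and logger name to each line.
logging.basicConfig(
    format="%(asctime)s [%(process)d] %(levelname)-8s %(name)s: %(message)s",
    level=logging.INFO,
)

logging.getLogger("myapp").error("something wrong")
```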

To do that, I advise using the daiquiri module. It provides an excellent default configuration and a simple API to configure logging, plus some exciting features.

How to Log Properly in Python

Logging Setup

When using daiquiri, the first thing to do is to set up your logging correctly. This can be done with the daiquiri.setup function, like this:

import daiquiri

daiquiri.setup()

As simple as that. You can tweak the setup further by asking it to log to file, to change the default string formats, etc, but just calling daiquiri.setup is enough to get a proper logging default.


Then emitting a log message is as simple as:

import daiquiri

daiquiri.getLogger("myapp").error("something wrong")

This prints:

2018-12-13 10:24:04,373 [38550] ERROR    myapp: something wrong

If your terminal supports writing text in colors, the line will be printed in red since it's an error. The format provided by daiquiri is better than Python's default: this one includes a timestamp, the process ID, the criticality level and the logger's name. Needless to say, this format can also be customized.

Passing Contextual Information

Logging strings are boring. Most of the time, engineers end up writing code such as:

logger.error("Something wrong happened with %s when writing data at %d", myobject.myfield, myobject.mynumber)

The issue with this approach is that you have to think about each field that you want to log about your object, and to make sure that they are inserted correctly in your sentence. If you forget an essential field to describe your object and the problem, you're screwed.

A reliable alternative to this manual crafting of log strings is to pass interesting objects as keyword arguments. Daiquiri supports it, and it works that way:

import attr
import daiquiri
import requests

logger = daiquiri.getLogger("myapp")

@attr.s
class Request:
    url = attr.ib()
    status_code = attr.ib(init=False, default=None)

    def get(self):
        r = requests.get(self.url)
        self.status_code = r.status_code
        r.raise_for_status()
        return r

user = "jd"
req = Request("https://google.com/not-this-page")
try:
    req.get()
except Exception:
    logger.error("Something wrong happened during the request",
                 request=req, user=user)

If anything goes wrong with the request, it will be logged with the stack trace, like this:

2018-12-14 10:37:24,586 [43644] ERROR    myapp [request: Request(url='https://google.com/not-this-page', status_code=404)] [user: jd]: Something wrong happened during the request

As you can see, the call to logger.error is pretty straightforward: a line that explains what's wrong, and then the different interesting objects are passed as keyword arguments.

Daiquiri logs those keyword arguments with a default format of [key: value] that is included as a prefix to the log string. The value is printed using its __format__ method — that's why I'm using the attr module here: it automatically generates this method for me and includes all fields by default. You can also customize daiquiri to use any other format.

Following those guidelines should be a perfect start for logging correctly with Python!

04 February, 2019 10:15AM by Julien Danjou

February 03, 2019

Iustin Pop

HGST HUH721212ALN600

Due to life, I didn’t manage to do any more investigation on backup solutions, but after a long saga I finally have 4 working new HDDs - model HUH721212ALN600.

These are 12TB, 4K-sector, “ISE” (instant secure erase) SATA hard-drives. This combination seemed the best, especially the ISE part: apparently the HDD controller always encrypts data to/from the platters, and the “instant erase” part is about wiping the encryption key. Between this and cryptsetup, dealing with either a failed HDD or one that is to be removed from service should be a breeze.
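The cryptsetup side of that plan can be sketched as follows; the device and mapper names are examples, and luksFormat/luksErase are destructive:

```shell
# DESTRUCTIVE: initialise LUKS on a partition of the new drive (example device)
cryptsetup luksFormat /dev/sdX2
# Open it under a mapper name; the filesystem or raid member goes on top
cryptsetup open /dev/sdX2 data0
# ... use /dev/mapper/data0 ...
cryptsetup close data0
# When retiring the drive, wipe all LUKS keyslots so the data is unrecoverable
cryptsetup luksErase /dev/sdX2
```

Combined with the drive's own ISE, that gives two independent ways of rendering the platters unreadable.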

Hardware is hardware

The long saga of getting the hard-drives is that this particular model, and in general any 4K hard-drives, seem to be very hard to acquire in Switzerland. I ordered and cancelled many times due to “available in 3 days”, then “in 5 days”, then “unknown delivery”, before finally being able to order and receive 4 of them. And one turned out to be dead on arrival, in the form of failing a short SMART test and failing any writes. Sent back, and due to low availability, wait and wait… I’m glad I finally got the replacement and can proceed with the project. A spare wouldn’t be bad though :)

Performance stats

Fun aside, things went through the burn-in procedure successfully, and now the raid array is being built. It’s the first time I’ve seen mechanical HDDs be faster than the default sync_speed_max. After bumping up that value for this particular raid array:

md4 : active raid5 sdd2[4] sdc2[2] sdb2[1] sda2[0]
      35149965312 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [=>...................]  recovery =  5.2% (619963868/11716655104) finish=769.4min speed=240361K/sec
      bitmap: 0/88 pages [0KB], 65536KB chunk
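Raising that limit is a sysfs one-liner; md4 and the 500000 KiB/s value below are examples, not a recommendation:

```shell
# System-wide resync speed limits, in KiB/s
cat /proc/sys/dev/raid/speed_limit_max
# Per-array override for md4
echo 500000 > /sys/block/md4/md/sync_speed_max
# Writing "system" reverts the array to the system-wide limit
echo system > /sys/block/md4/md/sync_speed_max
```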

Still, it’ll take longer than that estimate suggests, as the speed decreases significantly towards the end. Not really looking forward to 24-hour rebuild times, but that’s life:

[Sat Feb  2 09:03:21 2019] md4: detected capacity change from 0 to 35993564479488
[Sat Feb  2 09:03:21 2019] md: recovery of RAID array md4
[Sun Feb  3 03:05:51 2019] md: md4: recovery done.

Eighteen hours to the minute almost.

IOPS graph:

[IOPS graph]

I was not aware that 7’200 RPM hard-drives can now peak at close to 200 IO/s (196, to be precise). For the same RPM, 8 years ago the peak was about 175, so something smarter (controller, cache, actuator, whatever) results in about 20 extra IOPS. And yes, the SATA limitation of 31 in-flight I/Os is clearly visible…

While trying to get a bandwidth graph, I was surprised that fio did not end. Apparently there’s a bug in fio since 2014, reported back in 2015, but still not fixed. I filed issue 738 on github; hopefully it gets confirmed and can be fixed (I’m not sure what the intent of the loop was at all). And the numbers I see if I let it run for long are a bit weird as well: while running, I see numbers between 260MB/s (max) and around 125MB/s (min). Some quick numbers for max performance:

# dd if=/dev/sda of=/dev/null iflag=direct bs=8192k count=1000
1000+0 records in
1000+0 records out
8388608000 bytes (8.4 GB, 7.8 GiB) copied, 31.9695 s, 262 MB/s
# dd if=/dev/md4 of=/dev/null iflag=direct bs=8192k count=2000
2000+0 records in
2000+0 records out
16777216000 bytes (17 GB, 16 GiB) copied, 23.1841 s, 724 MB/s

This and the very large storage space are about the only things mechanical HDDs are still good at.

And now, to migrate from the old array…

Raw info

As other people might be interested (I was, and couldn’t find the information beforehand), here is the smartctl and hdparm output from the broken hard-drive (before I knew it was broken):


    smartctl 6.6 2017-11-05 r4594 [x86_64-linux] (local build)
    Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org
    Device Model:     HGST HUH721212ALN600
    Serial Number:    8HHUE36H
    LU WWN Device Id: 5 000cca 270d9a5f8
    Firmware Version: LEGNT3D0
    User Capacity:    12,000,138,625,024 bytes [12.0 TB]
    Sector Size:      4096 bytes logical/physical
    Rotation Rate:    7200 rpm
    Form Factor:      3.5 inches
    Device is:        Not in smartctl database [for details use: -P showall]
    ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4
    SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
    Local Time is:    Thu Jan 10 13:58:03 2019 CET
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    SMART overall-health self-assessment test result: PASSED
    General SMART Values:
    Offline data collection status:  (0x80)	Offline data collection activity
    					was never started.
    					Auto Offline Data Collection: Enabled.
    Self-test execution status:      (   0)	The previous self-test routine completed
    					without error or no self-test has ever 
    					been run.
    Total time to complete Offline 
    data collection: 		(   87) seconds.
    Offline data collection
    capabilities: 			 (0x5b) SMART execute Offline immediate.
    					Auto Offline data collection on/off support.
    					Suspend Offline collection upon new
    					Offline surface scan supported.
    					Self-test supported.
    					No Conveyance Self-test supported.
    					Selective Self-test supported.
    SMART capabilities:            (0x0003)	Saves SMART data before entering
    					power-saving mode.
    					Supports SMART auto save timer.
    Error logging capability:        (0x01)	Error logging supported.
    					General Purpose Logging supported.
    Short self-test routine 
    recommended polling time: 	 (   2) minutes.
    Extended self-test routine
    recommended polling time: 	 (1257) minutes.
    SCT capabilities: 	       (0x003d)	SCT Status supported.
    					SCT Error Recovery Control supported.
    					SCT Feature Control supported.
    					SCT Data Table supported.
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
      1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
      2 Throughput_Performance  0x0005   100   100   054    Pre-fail  Offline      -       0
      3 Spin_Up_Time            0x0007   100   100   024    Pre-fail  Always       -       0
      4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       1
      5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
      7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
      8 Seek_Time_Performance   0x0005   100   100   020    Pre-fail  Offline      -       0
      9 Power_On_Hours          0x0012   100   100   000    Old_age   Always       -       0
     10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
     12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       1
     22 Unknown_Attribute       0x0023   100   100   025    Pre-fail  Always       -       100
    192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       1
    193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       1
    194 Temperature_Celsius     0x0002   200   200   000    Old_age   Always       -       30 (Min/Max 25/30)
    196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
    197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
    198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
    199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0
    SMART Error Log Version: 1
    No Errors Logged
    SMART Self-test log structure revision number 1
    No self-tests have been logged.  [To run self-tests, use: smartctl -t]
    SMART Selective self-test log data structure revision number 1
        1        0        0  Not_testing
        2        0        0  Not_testing
        3        0        0  Not_testing
        4        0        0  Not_testing
        5        0        0  Not_testing
    Selective self-test flags (0x0):
      After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.



    ATA device, with non-removable media
    	Model Number:       HGST HUH721212ALN600
    	Serial Number:      8HHUE36H
    	Firmware Revision:  LEGNT3D0
    	Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, 
                            SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0; 
                            Revision: ATA8-AST T13 Project D1697 Revision 0b
    	Used: unknown (minor revision code 0x0029) 
    	Supported: 9 8 7 6 5 
    	Likely used: 9
    	Logical		max	current
    	cylinders	16383	16383
    	heads		16	16
    	sectors/track	63	63
    	CHS current addressable sectors:    16514064
    	LBA    user addressable sectors:   268435455
    	LBA48  user addressable sectors:  2929721344
    	Logical  Sector size:                  4096 bytes
    	Physical Sector size:                  4096 bytes
    	device size with M = 1024*1024:    11444224 MBytes
    	device size with M = 1000*1000:    12000138 MBytes (12000 GB)
    	cache/buffer size  = unknown
    	Form Factor: 3.5 inch
    	Nominal Media Rotation Rate: 7200
    	LBA, IORDY(can be disabled)
    	Queue depth: 32
    	Standby timer values: spec'd by Standard, no device specific minimum
    	R/W multiple sector transfer: Max = 2	Current = 0
    	Advanced power management level: 254
    	DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6 
    	     Cycle time: min=120ns recommended=120ns
    	PIO: pio0 pio1 pio2 pio3 pio4 
    	     Cycle time: no flow control=120ns  IORDY flow control=120ns
    	Enabled	Supported:
    	   *	SMART feature set
    	    	Security Mode feature set
    	   *	Power Management feature set
    	   *	Write cache
    	   *	Look-ahead
    	   *	Host Protected Area feature set
    	   *	WRITE_BUFFER command
    	   *	READ_BUFFER command
    	   *	NOP cmd
    	   *	Advanced Power Management feature set
    	    	Power-Up In Standby feature set
    	   *	SET_FEATURES required to spinup after power up
    	    	SET_MAX security extension
    	   *	48-bit Address feature set
    	   *	Device Configuration Overlay feature set
    	   *	Mandatory FLUSH_CACHE
    	   *	SMART error logging
    	   *	SMART self-test
    	   *	Media Card Pass-Through
    	   *	General Purpose Logging feature set
    	   *	64-bit World wide name
    	   *	URG for READ_STREAM[_DMA]_EXT
    	   *	URG for WRITE_STREAM[_DMA]_EXT
    	   *	{READ,WRITE}_DMA_EXT_GPL commands
    	   *	Segmented DOWNLOAD_MICROCODE
    	    	unknown 119[6]
    	    	unknown 119[7]
    	   *	Gen1 signaling speed (1.5Gb/s)
    	   *	Gen2 signaling speed (3.0Gb/s)
    	   *	Gen3 signaling speed (6.0Gb/s)
    	   *	Native Command Queueing (NCQ)
    	   *	Host-initiated interface power management
    	   *	Phy event counters
    	   *	NCQ priority information
    	   *	READ_LOG_DMA_EXT equivalent to READ_LOG_EXT
    	    	Non-Zero buffer offsets in DMA Setup FIS
    	   *	DMA Setup Auto-Activate optimization
    	    	Device-initiated interface power management
    	    	In-order data delivery
    	   *	Software settings preservation
    	    	unknown 78[7]
    	    	unknown 78[10]
    	    	unknown 78[11]
    	   *	SMART Command Transport (SCT) feature set
    	   *	SCT Write Same (AC2)
    	   *	SCT Error Recovery Control (AC3)
    	   *	SCT Features Control (AC4)
    	   *	SCT Data Tables (AC5)
    	   *	SANITIZE feature set
    	   *	CRYPTO_SCRAMBLE_EXT command
    	   *	OVERWRITE_EXT command
    	   *	reserved 69[3]
    	   *	reserved 69[4]
    	   *	WRITE BUFFER DMA command
    	   *	READ BUFFER DMA command
    	Master password revision code = 65534
    	not	enabled
    	not	locked
    	not	frozen
    	not	expired: security count
    	not	supported: enhanced erase
    	1114min for SECURITY ERASE UNIT.
    Logical Unit WWN Device Identifier: 5000cca270d9a5f8
    	NAA		: 5
    	IEEE OUI	: 000cca
    	Unique ID	: 270d9a5f8
    Checksum: correct

03 February, 2019 08:32PM

Jamie McClelland

I didn't know what ibus was one day ago, now I love it

[See update below.]

After over a decade using mutt as my email client, I finally gave up pretending I didn't want to see pretty pictures in my email and switched to Thunderbird.

Since I don't write email in Spanish often, I didn't immediately notice that my old dead-key trick for typing Spanish accented characters didn't work in Thunderbird like it does in vim or any terminal program.

I learned many years ago that I could set a special key via my openbox autostart script with xmodmap -e 'keysym Alt_R = Multi_key'. Then, if I wanted to type é, I would press and hold my right alt key while I press the apostrophe key, let go, and press the e key. I could get an ñ using the same trick but press the tilde key instead of the apostrophe key. Pretty easy.

When I tried that trick in Thunderbird I got an upside down e. WTF.

I spent about 30 minutes clawing my way through search results on several occasions over the course of many months before I finally found someone say: "I installed the ibus package, rebooted and it all worked." (Sorry Internet, I can't find that page now!)

ibus? apt-cache show ibus states:

IBus is an Intelligent Input Bus. It is a new input framework for the Linux
OS. It provides full featured and user friendly input method user
interface. It also may help developers to develop input method easily.

Well, that's succinct. I still had no idea what ibus was, but it sounded like it might work. I followed those directions and suddenly a new icon appeared in my system tray area. If I clicked on it, it listed my English keyboard by default.

I could right click, hit preferences and add a new keyboard:

English - English (US, International with dead keys)

Now, when I select that new option, I simply press my right alt key and e (no need for the apostrophe) and I get my é. Same with ñ. Hooray!

My only complaint is that while using this keyboard, I can't use regular apostrophes or ~'s. Not sure why, but it's not that hard to switch.

As far as I can tell, ibus tries to abstract some of the difficulties around input methods so it's easier on GUI developers.

Update 2019-02-11

Thanks, Internet, particularly for the comment from Alex about how I was choosing the wrong International keyboard. Of course, my keyboard does not have dead keys, so I need to choose the one called "English (intl., with AltGr dead keys)."

Now everything works perfectly. No need for ibus at all. I can get é with my right alt key followed by e. It works in my unicode terminal, thunderbird, and everywhere else that I've tried.
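For anyone who prefers scripting this over clicking through a GUI, the same layout can also be selected from a shell in an X session with setxkbmap (layout and variant names as shipped in xkeyboard-config):

```shell
# Switch the current X session to the US layout with AltGr dead keys
setxkbmap us -variant altgr-intl
# Show the resulting configuration
setxkbmap -query
```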

03 February, 2019 07:07PM

hackergotchi for David Moreno

David Moreno


Spent the Saturday at FOSDEM with my friend jackdoe.

It was great to both see some and avoid other familiar faces. It was great to meet some unfamiliar faces as well.

Until next year! Maybe.

03 February, 2019 01:28PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Found myself charged for cloud build last month.

Found myself charged for cloud build last month. Noticed that I have some remains in cloud storage and GCR.

03 February, 2019 10:28AM by Junichi Uekawa

hackergotchi for Bits from Debian

Bits from Debian

Projects and mentors for Debian's Google Summer of Code 2019 and Outreachy

GSoC logo Outreachy logo

Debian is applying as a mentoring organization for the Google Summer of Code 2019, an internship program open to university students aged 18 and up, and will apply soon for the next round of Outreachy, an internship program for people from groups traditionally underrepresented in tech.

Please join us and help expand Debian and mentor new free software contributors!

If you have a project idea related to Debian and can mentor (or can coordinate the mentorship with some other Debian Developer or contributor, or within a Debian team), please add the details to the Debian GSoC2019 Projects wiki page by Tuesday, February 5 2019.

Participating in these programs has many benefits for Debian and the wider free software community. If you have questions, please come and ask us on IRC #debian-outreach or the debian-outreach mailing list.

03 February, 2019 09:40AM by Alexander Wirt

Russ Allbery

Another new year haul

The last haul I posted under this title was technically not a new year haul, since it went up in December, so I'll use the same title again. This is a relatively small collection of random stuff, mostly picking up recommendations and award nominees that I plan on reading soon.

Kate Elliott — Cold Fire (sff)
Kate Elliott — Cold Steel (sff)
Mik Everett — Self-Published Kindling (non-fiction)
Michael D. Gordin — The Pseudoscience Wars (non-fiction)
Yoon Ha Lee — Dragon Pearl (sff)
Ruth Ozeki — A Tale for the Time Being (sff)
Marge Piercy — Woman on the Edge of Time (sff)
Kim Stanley Robinson — New York 2140 (sff)

I've already reviewed New York 2140. I have several more pre-orders that will be delivered this month, so still safely acquiring books faster than I'm reading them. It's all to support authors!

03 February, 2019 02:38AM

February 02, 2019

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

FOSDEM 2019, Saturday

Got lost in the wet slush of Brussels. Got to ULB. Watched seven talks in whole or in part, some good and some not so good. (Six more that I wanted to see, but couldn't due to overfilled rooms, scheduling conflicts or cancellations.) Marvelled at the Wi-Fi as usual (although n-m is slow to connect to v6-only networks, it seems). Had my own talk. Met people in the hallway track. Charged the laptop a bit in the cafeteria; I should get a new internal battery for next year so that it lasts all day. Insane amount of people in the hallways as always. Tired. Going back tomorrow.

FOSDEM continues to be among the top free software conferences. But I would love some better way of finding talks than “read through this list of 750+ talks linearly, except ignore the blockchain devroom”.

02 February, 2019 10:22PM

Futatabi: Multi-camera instant replay with slow motion

I've launched Futatabi, my slow motion software! Actually, the source code has been out as part of Nageru for a few weeks, so it's in Debian buster and all, but there's been a dash the last few days to get all the documentation and such in place.

The FOSDEM talk went well—the turnout wasn't huge (about fifty people in person; I don't know the stream numbers), but I guess it's a fairly narrow topic. Feedback was overall good, and the questions were well thought-out. Thanks to everyone who came, and especially those who asked questions! I had planned for 20 minutes (with demo, but without questions) and ended up in 18, so that's fairly good. I forgot only minor details, and reached my goal of zero bullet points. The recording will be out as soon as I can get my hands on it, although I do suspect it's been downconverted from 60 to 50 and then further to 25 fps, which will naturally kill the smoothness of the interpolated video.

Relevant stuff:

02 February, 2019 10:14PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

glitched Amiga video

This is the fifth part in a series of blog posts. The previous post was Amiga/Gotek boot test.

Glitchy component-video out

Glitchy component-video out

As I was planning out my next Gotek-floppy-adaptor experiment, disaster struck: the video out from my Amiga had become terribly distorted, in a delightfully Rob Sheridan fashion, sufficiently so that it was impossible to operate the machine.

Reading around, the most likely explanation seemed to be a blown capacitor. These devices are nearly 30 years old, and blown capacitors are a common problem. If it were in the Amiga, the advice would be to replace all the capacitors on the mainboard. This is something that can be done by an amateur enthusiast with some soldering skills, but I'm too much of a beginner with soldering to attempt it. I was recommended a company in Aberystwyth called Mutant Caterpillar who do a full recap and repair service for £60, which seems very reasonable.

Philips CRT

Philips CRT

Luckily, the blown capacitor (if that's what it was) wasn't in the Amiga, but in the A520 video adaptor. I dug my old Philips CRT monitor out of the loft and connected it directly to the Amiga and the picture was perfect. I had been hoping to avoid fetching it down, as I don't have enough space on my desk to leave it in situ, and instead must lug it over whenever I've found a spare minute to play with the Amiga. But it's probably not worth repairing the A520 (or sourcing a replacement) and the upshot is the picture via the RGB out is much clearer.

As I write this, I'm in a hotel room recovering after my first day at FOSDEM 2019, my first FOSDEM conference. There was a Retrocomputing devroom this year that looked really interesting but I was fully booked into the Java room all day today. (And I don't see mention of Amigas in any of the abstracts)

02 February, 2019 07:30PM

hackergotchi for Bits from Debian

Bits from Debian

Help test initial support for Secure Boot

The Debian Installer team is happy to report that the Buster Alpha 5 release of the installer includes some initial support for UEFI Secure Boot (SB) in Debian's installation media.

This support is not yet complete, and we would like to request some help! Please read on for more context and instructions to help us get better coverage and support.

On amd64 machines, by default the Debian installer will now boot (and install) a signed version of the shim package as the first stage boot loader. Shim is the core package in a signed Linux boot chain on Intel-compatible PCs. It is responsible for validating signatures on further pieces of the boot process (Grub and the Linux kernel), allowing for verification of those pieces. Each of those pieces will be signed by a Debian production signing key that is baked into the shim binary itself.

However, for safety during the development phase of Debian's SB support, we have only been using a temporary test key to sign our Grub and Linux packages. If we made a mistake with key management or trust path verification during this development, this would save us from having to revoke the production key. We plan on switching to the production key soon.

Due to the use of the test key so far, out of the box Debian will not yet install or run with SB enabled; Shim will not validate signatures with the test key and will stop, reporting the problem. This is correct and useful behaviour!

Thus far, Debian users have needed to disable SB before installation to make things work. From now on, with SB disabled, installation and use should work just the same as previously. Shim simply chain-loads grub and continues through the boot chain without checking signatures.

It is possible to enrol more keys on a SB system so that shim will recognise and allow other signatures, and this is how we have been able to test the rest of the boot chain. We now invite more users to give us valuable test coverage on a wider variety of hardware by enrolling our Debian test key and running with SB enabled.

If you want to help us test our Secure Boot support, please follow the instructions in the Debian wiki and provide feedback.

With help from users, we expect to be able to ship fully-working and tested UEFI Secure Boot in an upcoming Debian Installer release and in the main Buster release itself.

02 February, 2019 10:00AM by Steve McIntyre, Cyril Brulebois

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

The Incomplete Book of Running: A Short Review

The Incomplete Book of Running

Peter Sagal’s The Incomplete Book of Running has been my enigma for several weeks now. As connections go, Peter and I have at most one degree of separation: a common fellow runner, friend and neighbor who, sadly, long ago departed for Colorado (hi Russ!). So we’re quasi-neighbors. But he is famous and I am not, though I follow him on social media.

So as “just another runner”, I had been treated to a constant trickle of content about the book. I had (in vain) hoped my family would get me the book for Xmas, but no such luck. Hence I ordered a copy. And then Amazon, mankind’s paragon of inventory management and shipment, was seemingly out of it for weeks – so my copy finally came today all the way from England (!!) even though Sagal and I live a few miles apart, run similar neighbourhood routes, and run (or ran) the same track for Tuesday morning speedwork. As I noticed while devouring the book, we also share the same obsession with FIRST, which I tried to instill in my running pals a decade ago. We also ran the same initial Boston Marathon in 2007, and many similar marathons (Boston, NY, Philly), even at the same time. But bastard that he is, not only does he own both my PRs at the half (by about two minutes) and the full (by about four minutes) marathon – he also knows how to write!

This is a great book about running, life, and living around Oak Park. As its focus, the reflections about running are good, sometimes even profound, often funny, and show a writer’s genuine talent in putting words around something that is otherwise hard to describe. Particularly for caustic people such as long-distance runners.

The book was a great pleasure to read—and possibly only the second book in a decade or longer that I “inhaled” cover to cover in one sitting, this evening, as it was just the right content on a Friday night after a long work week. This was a fun and entertaining yet profound read. I really enjoyed his meditation on the process and journey that got him to his PR – when it was time for mine, now over ten years ago, it came after a (now surreal-seeming) sequence of running Boston, Chicago and New York in one year and London and Berlin the next. And somehow by the time I got to Berlin I was both well trained and in a good and relaxed mental shape, so that things came together for me that day. (I also got lucky as circumstances were favourable: that was one of the many recent years in which a marathon record was broken in Berlin.) And as Sagal describes really well throughout the book, running is a process, a practical philosophy, an out, and an occasional meditation. But there is much more in the book, so go and read it.

One minor correction: It is Pfeiffer with a P before the f for Michelle’s family name as every viewer of the Baker Boys should know.

Great book. Recommended to runners and non-runners alike.

02 February, 2019 05:48AM

February 01, 2019

Petter Reinholdtsen

Websocket from Kraken in Valutakrambod

Yesterday, the Kraken virtual currency exchange announced their Websocket service, providing a stream of exchange updates to its clients. Getting updated rates quickly is a good idea, so I used their API documentation and added Websocket support to the Kraken service in Valutakrambod today. The python library can now get updates from Kraken several times per second, instead of every time the information is polled from the REST API.
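As a rough illustration of what such a client looks like (a hedged sketch, not valutakrambod's actual code; it assumes the third-party websockets package and the message shape documented by Kraken's websocket API):

```python
import json

KRAKEN_WS_URL = "wss://ws.kraken.com"

def subscribe_message(pairs, name="ticker"):
    # Payload shape per Kraken's websocket API documentation
    return json.dumps({
        "event": "subscribe",
        "pair": list(pairs),
        "subscription": {"name": name},
    })

async def watch(pairs):
    # Requires the third-party 'websockets' package (pip install websockets)
    import websockets
    async with websockets.connect(KRAKEN_WS_URL) as ws:
        await ws.send(subscribe_message(pairs))
        async for raw in ws:
            msg = json.loads(raw)
            # Ticker updates arrive as JSON arrays; heartbeats and
            # subscription status events arrive as objects
            if isinstance(msg, list):
                print(msg)

if __name__ == "__main__":
    import asyncio
    asyncio.run(watch(["XBT/EUR"]))
```

Keeping subscribe_message as a separate helper means the payload can be unit-tested without a network connection.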

If this sounds interesting to you, the code for valutakrambod is available from github. Here is example output from the example client, displaying rates in a curses view:

           Name Pair   Bid         Ask         Spr    Ftcd    Age
 BitcoinsNorway BTCEUR   2959.2800   3021.0500   2.0%   36    nan    nan
       Bitfinex BTCEUR   3087.9000   3088.0000   0.0%   36     37    nan
        Bitmynt BTCEUR   3001.8700   3135.4600   4.3%   36     52    nan
         Bitpay BTCEUR   3003.8659         nan   nan%   35    nan    nan
       Bitstamp BTCEUR   3008.0000   3010.2300   0.1%    0      1      1
           Bl3p BTCEUR   3000.6700   3010.9300   0.3%    1    nan    nan
       Coinbase BTCEUR   2992.1800   3023.2500   1.0%   34    nan    nan
         Kraken+BTCEUR   3005.7000   3006.6000   0.0%    0      1      0
        Paymium BTCEUR   2940.0100   2993.4400   1.8%    0   2688    nan
 BitcoinsNorway BTCNOK  29000.0000  29360.7400   1.2%   36    nan    nan
        Bitmynt BTCNOK  29115.6400  29720.7500   2.0%   36     52    nan
         Bitpay BTCNOK  29029.2512         nan   nan%   36    nan    nan
       Coinbase BTCNOK  28927.6000  29218.5900   1.0%   35    nan    nan
        MiraiEx BTCNOK  29097.7000  29741.4200   2.2%   36    nan    nan
 BitcoinsNorway BTCUSD   3385.4200   3456.0900   2.0%   36    nan    nan
       Bitfinex BTCUSD   3538.5000   3538.6000   0.0%   36     45    nan
         Bitpay BTCUSD   3443.4600         nan   nan%   34    nan    nan
       Bitstamp BTCUSD   3443.0100   3445.0500   0.1%    0      2      1
       Coinbase BTCUSD   3428.1600   3462.6300   1.0%   33    nan    nan
         Gemini BTCUSD   3445.8800   3445.8900   0.0%   36    326    nan
         Hitbtc BTCUSD   3473.4700   3473.0700  -0.0%    0      0      0
         Kraken+BTCUSD   3444.4000   3445.6000   0.0%    0      1      0
  Exchangerates EURNOK      9.6685      9.6685   0.0%   36  22226    nan
     Norgesbank EURNOK      9.6685      9.6685   0.0%   36  22226    nan
       Bitstamp EURUSD      1.1440      1.1462   0.2%    0      1      2
  Exchangerates EURUSD      1.1471      1.1471   0.0%   36  22226    nan
 BitcoinsNorway LTCEUR      1.0009     22.6538  95.6%   35    nan    nan
 BitcoinsNorway LTCNOK    259.0900    264.9300   2.2%   35    nan    nan
 BitcoinsNorway LTCUSD      0.0000     29.0000 100.0%   35    nan    nan
     Norgesbank USDNOK      8.4286      8.4286   0.0%   36  22226    nan

Yes, I notice the strange negative spread on Hitbtc; I've seen the same on Kraken. Another strange observation is that Kraken sometimes announces trade orders a fraction of a second in the future. I really wonder what is going on there.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

01 February, 2019 09:25PM

hackergotchi for Wouter Verhelst

Wouter Verhelst


A year and a half ago, we made a decision: I was going to move.

About a year ago, I decided that getting this done without professional help was not a very good idea and would take forever, so I got set up with a lawyer and had her guide me through the process.

After lots of juggling with bureaucracies, some unfortunate delays, and some repeats of things I had already done before, I dropped off a 1 cm thick file of paperwork at the consulate a few weeks ago.

Today, I went back there to fetch my passport, containing the visa.

Tomorrow, FOSDEM starts. After that, I will be moving to a different continent!

01 February, 2019 09:22PM

Paul Wise

FLOSS Activities January 2019





  • Debian: merge patch, install dependencies, fix LDAP server, answer keys question, check build hardware differences, remove references to dead systems
  • Debian wiki: re-enable old account, unblacklist IP addresses, whitelist email addresses, ping accounts with bouncing email, investigate password reset issue
  • Debian derivatives census: rerun patch generation to fix broken files


  • Initiate discussion about the status of hLinux
  • Respond to queries from the Debian derivatives census Outreachy project intern
  • Respond to queries from Debian users and developers on the mailing lists and IRC


The purple-discord work was sponsored by my employer. All other work was done on a volunteer basis.

01 February, 2019 05:23AM