April 30, 2016

Daniel Stender

My work for Debian in April

This month I've worked on the following things for Debian:

To begin with, I've set up a Debhelper sequencer script for dh-buildinfo1; the add-on can now be used with dh $@ --with buildinfo in debian/rules instead of having to call the tool explicitly somewhere in an override.
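
With the sequencer add-on in place, a minimal debian/rules can look roughly like this (a sketch; it assumes dh-buildinfo is listed in Build-Depends):

#!/usr/bin/make -f
# The --with buildinfo add-on runs dh_buildinfo at the right point
# of the dh sequence, so no explicit override target is needed.
%:
	dh $@ --with buildinfo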

Debops

I've set up initial Debian packages of Debops2, a collection of finely crafted Ansible roles and playbooks especially for Debian servers, which are shipped with a couple of helper and wrapper scripts in Python3. There are two binary packages, one for the toolset (debops) and the other for the playbooks and roles of the project (debops-playbooks).

The application is easy to use: just initialize a new project with debops-init foo and add your server(s) to foo/ansible/inventory/hosts, assigned to groups representing the services and things you want to employ on them. For example, the group [debops_gitlab] automatically installs a complete running Gitlab setup on one or a multitude of servers in the same run with the debops command4. Other groups like [debops_mariadb_server] can be used accordingly in the same host inventory. Ansible works agentless, so there is nothing special to prepare on freshly set up servers before you can use the tool on them (it even works on localhost). The list of things you can deploy with Debops is quite amazing, and dozens of services are at hand.
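
As a rough sketch, a minimal session could look like this (the host names are placeholders; the SSH options are the ones from footnote 4 and are only needed for remote machines):

$ debops-init foo
$ cat >> foo/ansible/inventory/hosts << EOF
[debops_gitlab]
gitlab.example.com

[debops_mariadb_server]
db1.example.com
EOF
$ cd foo
$ debops -u root --private-key=~/.ssh/id_digitalocean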

The new Debian packages are currently in experimental because they need some more fine tuning; e.g. there are a couple of minor error messages which have recently started to occur while using it, but on the whole it works well. The (early stage) documentation unfortunately couldn't be packaged because of the distributed nature of the project (all parts have their own Github repositories)5, and how to generate the upstream tarball also remains a bit of a challenge (currently, it's the outcome of debops-init)6. I'll have this package in unstable soon. More info on Debops is coming up then.

HashiCorp's Packer

I'm very glad to announce that Packer7 is now available in unstable, and the RFP bug could finally be closed after I had taken it over8. It's another great and very convenient devops tool which does a lot of different things in an automated fashion, using only a single "one-argument" CLI tool in combination with a couple of lines in a configuration file (thanks to Yaroslav Halchenko for the tip).

Packer helps with creating machine images for different platforms. Think of using Debian installations in a Qemu box for testing or development purposes: instead of setting up a new virtual machine manually, the same way as installing Debian on another computer, this process can be completely automated with Packer, as I've written about in this blog entry9. You just need a template which contains instructions for the included Qemu builder and a preseeding file for the Debian installer, and there you go drinking your coffee while Packer does all the work: it downloads the ISO image for the installation, creates the new virtual harddrive, boots the emulator, runs the whole installation process automatically (answering questions, selecting things, rebooting without the ISO image to complete the installation, etc.). A couple of minutes later you have a new pre-baked virtual machine image as if from a vending machine, and another fresh one can be created anytime.
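
For illustration, the skeleton of such a template might look roughly like this (only a sketch: the option names follow the documentation of Packer's Qemu builder, while the ISO URL, checksum and credentials are placeholders, and a preseed.cfg is expected in the preseed/ directory):

{
  "builders": [{
    "type": "qemu",
    "iso_url": "http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-netinst.iso",
    "iso_checksum_type": "sha256",
    "iso_checksum": "<checksum of the ISO>",
    "http_directory": "preseed",
    "boot_command": ["<esc> auto url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg <enter>"],
    "ssh_username": "root",
    "ssh_password": "insecure",
    "shutdown_command": "poweroff"
  }]
}

With that saved as e.g. debian.json, a plain packer build debian.json kicks off the whole unattended installation.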

Packer10 supports a number of builders for different target platforms (desktop virtualization solutions as well as public cloud providers and private cloud software), can build in parallel, and the full range of common provisioners can be employed in the process to equip the newly installed OSs with services and programs. Vagrant boxes can be generated by one of the included postprocessors. I'll write more on Packer here on this blog soon.

More than two dozen packages were missing to complete Packer11, and getting them all in is the achievement of combined forces within the pkg-go group. Many thanks especially to Alexandre Viau, who worked on most of the needed new packages. Thanks also to the FTP masters, who were always very quick in reviewing the Go packages, so that the new packages depending on them could be built and packaged consecutively without delay.

Squirrel3

I didn't do the major work on this one and just sponsored it for Fabian Wolff, but I want to highlight here that there's a new package of Squirrel12 now available in Debian13.

Squirrel is a lightweight scripting language, somewhat comparable to Lua. It's fully object-oriented and highly embeddable; it's used under the hood in a lot of commercial computer games for implementing bot intelligence, among other things14, but also for the Internet of Things (it's embedded in hardware from Electric Imp). Squirrel functions can be called from C++15.

I had already filed an ITP bug for Squirrel in 2011 (#651195), but something else always got in the way, and it ended up being converted to an RFP. I'm really glad that it got picked up and completed so quickly afterwards.

misc

There were a couple of uploads of updated upstream tarballs and bug fixes, namely afl/2.10b-1 and 2.11b-1, python-afl/0.5.3-1, pyutilib/5.3.2-1, pyomo/4.3.11327-1, libvigraimpex/1.10.0+git20160211.167be93dfsg-2 (fix of #820429, thanks to Tobias Frost), and gamera/3.4.2+svn1454-1.

For the pkg-go group, I've set up a new package of github-mitchellh-ioprogress (which is needed by the official DigitalOcean CLI tool doctl, now RFP #807956 instead of ITP due to lack of time; again, a lot of packages are still missing for that), and provided a little patch for dh-make-golang updating some standards16.

For Packer I've also updated azure-go-autorest and azure-sdk as team uploads (#821938, #821832), but it turned out that the project, which is currently under heavy development towards a new official release, broke a lot in the past weeks (no Git branching has been used), so that Packer in fact needed a vendored snapshot, although there have only been a couple of commits in between. Docker-registry has the same problem with the new package of azure-sdk/2.1.1~beta1, so that it needed to be fixed, too (#822146).

By the way, the tool ratt17 comes in very handy for automatically test-building all reverse dependencies, not only for Go packages (thanks to Tianon Gravi for the tip).
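
Usage is as simple as pointing it at the .changes file of a freshly built package and letting it rebuild the reverse dependencies against it (a sketch; the file name below is just an example):

$ ratt foo_1.0-1_amd64.changes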

Finally, I've filed the missing dependencies as RFP bugs for Terraform18 (again quite a lot), Vuls19, and cve-dictionary20, which is needed for Vuls. I'll let them rest a while to see whether they get picked up before working through them myself.


  1. #570933: dh-buildinfo: should provide a addon file for dh command 

  2. https://tracker.debian.org/pkg/debops 

  3. http://debops.org/ 

  4. The servers have to be accessible by SSH. E.g. you could run debops like: $ debops -u root --private-key=~/.ssh/id_digitalocean 

  5. https://github.com/debops/docs/issues/132 

  6. #819816: ITP: debops -- Ansible based server management utility 

  7. https://www.packer.io/ 

  8. #740753: ITP: packer -- create vm images for multiple platforms 

  9. http://www.danielstender.com/blog/packer-qemu.html 

  10. https://packages.debian.org/unstable/packer 

  11. I've worked on the missing packages this month, namely github-klauspost-pgzip, github-masterzen-xmlpath, github-masterzen-winrm, dylanmei-winrmtest, packer-community-winrmcp (Packer uses WinRM if Windows machines images are created), github-hpcloud-tail, and updated github-rackspace-gophercloud (#822163) and google-api (#822164) to complete it. 

  12. http://squirrel-lang.org/ 

  13. https://tracker.debian.org/pkg/squirrel3 

  14. http://www.linux-magazin.de/layout/set/print/content/view/full/62184 

  15. http://www.linux-magazin.de/Ausgaben/2011/10/plus/Fremdkoerper-Squirrel-Interpreter-und-Skripte-fuer-C 

  16. https://github.com/Debian/dh-make-golang/pull/39 

  17. https://packages.debian.org/unstable/ratt 

  18. #808940: ITP: terraform -- tool for managing cloud infrastructure 

  19. #820614: ITP: vuls -- package inventory scanner for CVE vulnerabilities 

  20. #820615: ITP: go-cve-dictionary -- builds a local copy of the NVD/JVN (vulnerability databases) 

30 April, 2016 11:42AM by Daniel Stender

April 29, 2016

Iain R. Learmonth

MiniDebCamp Vienna 2016

I'm currently in Vienna for MiniDebCamp and MiniDebConf at FH Technikum Wien, hosted as a part of Linuxwochen Wien. Today and yesterday have been spent hacking on Debian, and I've produced a few package updates and closed a few bugs.

Scapy

The last update to Scapy in Debian was in August 2011. Bug #773554 was filed in 2014 to request a new upstream version be packaged and in a few days this bug should be closed. As this package is maintained by someone else and I'm performing a non-maintainer upload, the upload will sit in the delayed queue for 3 days.

There has also been a Python 3 port of Scapy developed, and I've also packaged this (bug #822096). You will be able to install this version as python3-scapy and run it as /usr/bin/scapy3, which means it can fully co-exist with an installation of the original Python 2 version on the same system.

Hamradio Blend

  • Carles Fernandez had produced an updated package for gnss-sdr and this has now been uploaded to unstable.
  • Ana Custura produced an updated package for chirp and this has now been uploaded to unstable.
  • I've made a couple of updates to the hamradio-maintguide and released version 0.2 of that documentation. These were simple changes: secure URIs for the Vcs-* fields, and options for updating existing packages to new upstream versions using uscan, by tarball URL or by local tarball.

qtile

qtile is a "full-featured, hackable tiling window manager written and configured in Python". I've been a big fan of tiling window managers for a few years now, starting with XMonad in 2013, moving to i3 and now I plan to move to qtile as I think its customisability and complete removal of window decorations will work well for me. I have now packaged qtile (bug #762637), and updated one of its dependencies (xcffib), and it will appear in unstable after passing ftp-master scrutiny.

29 April, 2016 04:34PM by Iain R. Learmonth

Sylvain Le Gall

Release of OASIS 0.4.6

I am happy to announce the release of OASIS v0.4.6.

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

The main purpose of this release is to make it possible to install OASIS with OPAM on OCaml 4.03.0. In order to do so, I had to disable some tests and use the new set of String.*_ascii functions. The OPAM release is pending upload and should soon be available.

29 April, 2016 07:38AM by gildor

April 28, 2016

Michal Čihař

Weblate 2.6

Going back to a faster release cycle, Weblate 2.6 has just been released. It brings improved support for Python 3 and a brand new HTTP REST API.
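
For a first taste of the new REST API, listing the projects of an instance should be as simple as this (a sketch; the endpoint path is taken from the new API documentation, and the hosted instance is only used as an example):

curl -s https://hosted.weblate.org/api/projects/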

Full list of changes for 2.6:

  • Fixed validation of subprojects with language filter.
  • Improved support for XLIFF files.
  • Fixed machine translation for non English sources.
  • Added REST API.
  • Django 1.10 compatibility.
  • Added categories to whiteboard messages.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used at https://hosted.weblate.org/ as the official translation service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: English phpMyAdmin SUSE Weblate

28 April, 2016 04:00PM by Michal Čihař (michal@cihar.com)

Holger Levsen

Voctomix available in Debian sid

Yesterday evening CarlFK prodded me to package Voctomix, which is a live video mixer written by the Chaos Communication Congress' Video Operation Crew. It's written in Python using GStreamer and was started when they realised dvswitch was no longer suitable for them. The DebConf16 video team plans to test it in Cape Town (for covering the BoF room), so I figured I'd help now with packaging the software.

Less than 24h after I started, voctomix made it through NEW and is now available in sid and hopefully will be available in stretch soon too! And by DebConf16 it should also finally be available in jessie-backports. Wheeehooo!

Thanks to Stefano Rivera who helped me with some dh_python3 detail and the Debian ftpmasters for letting it through NEW so quickly (and btw, for their generally awesome work on NEW processing in the last years too!) - may the winkekatze be with you! ;-)

28 April, 2016 03:33PM

Dirk Eddelbuettel

RcppRedis 0.1.7

A new release of RcppRedis arrived on CRAN today. And just like for the previous release, Russell Pierce contributed a lot of changes via several pull requests which make for more robust operations. In addition, we have started to add support for MessagePack by relying on our recently-announced RcppMsgPack package.

Changes in version 0.1.7 (2016-04-27)

  • Added support for timeout constructor argument (PR #14 by Russell Pierce)

  • Added new commands exists, ltrim, expire and pexpire along with unit tests (PR #16 by Russell Pierce)

  • Return NULL for empty keys in serialized get for consistency with lpop and rpop (also PR #16 by Russell Pierce)

  • Minor corrections to get code and hget and hset documentation (also PR #16 by Russell Pierce)

  • Error conditions are now properly forwarded as R errors (PR #22 by Russell Pierce)

  • Results from Redis commands are now checked for NULL (PR #23 by Russell Pierce)

  • MessagePack encoding can now be used which requires MessagePack headers of version 1.0 or later; the (optional) RcppMsgPack package can be used.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppRedis page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 April, 2016 02:16AM

April 27, 2016

Mike Hommey

Announcing git-cinnabar 0.3.2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.
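
In practice this means plain git commands with an hg:: remote URL, roughly like this (a minimal sketch; the repository URL is just an example):

git clone hg::https://www.mercurial-scm.org/repo/hg
cd hg
git pull          # fetches new mercurial changesets
git push          # pushes local commits back as mercurial changesets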

Get it on github.

These release notes are also available on the git-cinnabar wiki.

This is mostly a bug and regression-fixing release.

What’s new since 0.3.1?

  • Fixed a performance regression when cloning big repositories on OSX.
  • git configuration items with line breaks are now supported.
  • Fixed a number of issues with corner cases in mercurial data (such as, but not limited to nodes with no first parent, malformed .hgtags, etc.)
  • Fixed a stack overflow, a buffer overflow and a use-after-free in cinnabar-helper.
  • Better work with git worktrees, or when called from subdirectories.
  • Updated git to 2.7.4 for cinnabar-helper.
  • Properly remove all refs meant to be removed when using git version lower than 2.1.

27 April, 2016 10:42PM by glandium

Stig Sandbeck Mathisen

Using LLDP on Linux. What's on the other side?

On any given server, or workstation, knowing what is at the other end of the network cable is often very useful.

There’s a protocol for that: LLDP. This is a link layer protocol, so it is not routed. Each end transmits information about itself periodically.

You can typically see the type of equipment, the server or switch name, and the network port name of the other end, although there are lots of other bits of information available, too.

This is often used between switches and routers in a server centre, but it is useful to enable on server hardware as well.

There are a few different packages available. I’ve looked at a few of them available for the RedHat OS family (Red Hat Enterprise Linux, CentOS, …) as well as the Debian OS family (Debian, Ubuntu, …)

(Updated 2016-04-29, added more recent information about lldpd, and gathered the switch output at the end.)

ladvd

A simple daemon, with no configuration needed. This runs as a privilege-separated daemon, and has a command line control utility. You invoke it with a list of interfaces as command line arguments to restrict the interfaces it should use.

“ladvd” is not available on RedHat, but is available on Debian.

Install the “ladvd” package, and run “ladvdc” to query the daemon for information.

root@turbotape:~# ladvdc
Capability Codes:
    r - Repeater, B - Bridge, H - Host, R - Router, S - Switch,
    W - WLAN Access Point, C - DOCSIS Device, T - Telephone, O - Other

Device ID        Local Intf Proto Hold-time Capability Port ID
office1-switch23 eno1       LLDP  98        B          42

Even better, it has output that can be parsed for scripting:

root@turbotape:~# ladvdc -b eno1
INTERFACE_0='eno1'
HOSTNAME_0='office1-switch23'
PORTNAME_0='42'
PORTDESCR_0='42'
PROTOCOL_0='LLDP'
ADDR_INET4_0=''
ADDR_INET6_0=''
ADDR_802_0='00:11:22:33:44:55'
VLAN_ID_0=''
CAPABILITIES_0='B'
TTL_0='120'
HOLDTIME_0='103'

…my new favourite :)
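
Since the -b output is plain key=value shell assignments, a wrapper script can simply eval it and use the variables shown above (a small sketch):

#!/bin/sh
# Print which switch and port the given interface is patched into
eval "$(ladvdc -b eno1)"
printf '%s is connected to %s port %s\n' \
    "$INTERFACE_0" "$HOSTNAME_0" "$PORTNAME_0"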

lldpd

Another package is “lldpd”, which is also simple to configure and use.

“lldpd” is not available on RedHat, but it is present on Debian.

It features a command line interface, “lldpcli”, which can show output with different level of detail, and on different formats, as well as configure the running daemon.

root@turbotape:~# lldpcli show neighbors
-------------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------------
Interface:    eno1, via: LLDP, RID: 1, Time: 0 day, 00:00:59
  Chassis:
    ChassisID:    mac 00:11:22:33:44:55
    SysName:      office1-switch23
    SysDescr:     ProCurve J9280A Switch 2510G-48, revision Y.11.12, ROM N.10.02 (/sw/code/build/cod(cod11))
    Capability:   Bridge, on
  Port:
    PortID:       local 42
    PortDescr:    42
-------------------------------------------------------------------------------

Among the output formats are “json”, which is easy to re-use elsewhere.

root@turbotape:~# lldpcli -f json show neighbors
{
  "lldp": {
    "interface": {
      "eno1": {
        "chassis": {
          "office1-switch23": {
            "descr": "ProCurve J9280A Switch 2510G-48, revision Y.11.12, ROM N.10.02 (/sw/code/build/cod(cod11))",
            "id": {
              "type": "mac",
              "value": "00:11:22:33:44:55"
            },
            "capability": {
              "type": "Bridge",
              "enabled": true
            }
          }
        },
        "via": "LLDP",
        "rid": "1",
        "age": "0 day, 00:53:23",
        "port": {
          "descr": "42",
          "id": {
            "type": "local",
            "value": "42"
          }
        }
      }
    }
  }
}
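
Since that is plain JSON, pulling out the neighbour name per interface is a one-liner with jq (a sketch based on the structure above; jq needs to be installed):

root@turbotape:~# lldpcli -f json show neighbors | \
    jq -r '.lldp.interface | to_entries[] | "\(.key) -> \(.value.chassis | keys[0])"'
eno1 -> office1-switch23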

lldpad

A much more featureful LLDP daemon, available for both the Debian and RedHat OS families. This has lots of features, but is less trivial to set up.

Configure lldp for each interface

#!/bin/sh

find /sys/class/net/ -maxdepth 1 -name 'en*' |
    while read device; do
        basename "$device"
    done |
    while read interface; do
        {
            lldptool set-lldp -i "$interface" adminStatus=rxtx
            for item in sysName portDesc sysDesc sysCap mngAddr; do
                lldptool set-tlv -i "$interface" -V "$item" enableTx=yes |
                    sed -e "s/^/$item /"
            done
        } |
            sed -e "s/^/$interface /"
    done

Show LLDP neighbor information

#!/bin/sh

find /sys/class/net/ -maxdepth 1 -name 'en*' |
    while read device; do
        basename "$device"
    done |
    while read interface; do
        printf "%s\n" "$interface"
        ethtool $interface | grep -q 'Link detected: yes' || {
            echo "  down"
            echo
            continue
        }
        lldptool get-tlv -n -i "$interface" | sed -e "s/^/  /"
        echo
    done
[...]
enp3s0f0
  Chassis ID TLV
    MAC: 01:23:45:67:89:ab
  Port ID TLV
    Local: 588
  Time to Live TLV
    120
  System Name TLV
    site3-row2-rack1
  System Description TLV
    Juniper Networks, Inc. ex2200-48t-4g , version 12.3R12.4 Build date: 2016-01-20 05:03:06 UTC
  System Capabilities TLV
    System capabilities:  Bridge, Router
    Enabled capabilities: Bridge, Router
  Management Address TLV
    IPv4: 10.21.0.40
    Ifindex: 36
    OID: $
  Port Description TLV
    some important server, port 4
  MAC/PHY Configuration Status TLV
    Auto-negotiation supported and enabled
    PMD auto-negotiation capabilities: 0x0001
    MAU type: Unknown [0x0000]
  Link Aggregation TLV
    Aggregation capable
    Currently aggregated
    Aggregated Port ID: 600
  Maximum Frame Size TLV
    9216
  Port VLAN ID TLV
    PVID: 2000
  VLAN Name TLV
    VID 2000: Name bumblebee
  VLAN Name TLV
    VID 2001: Name stumblebee
  VLAN Name TLV
    VID 2002: Name fumblebee
  LLDP-MED Capabilities TLV
    Device Type:  netcon
    Capabilities: LLDP-MED, Network Policy, Location Identification, Extended Power via MDI-PSE
  End of LLDPDU TLV

enp3s0f1
[...]

on the switch side

On the switch, it is a bit easier to see what’s connected to each interface:

office switch

On the switch side, this system looks like:

office1-switch23# show lldp info remote-device

 LLDP Remote Devices Information

  LocalPort | ChassisId                 PortId PortDescr SysName
  --------- + ------------------------- ------ --------- ----------------------
  [...]
  42        | 22 33 44 55 66 77         eno1   Intel ... turbotape.example.com
  [...]

office1-switch23# show lldp info remote-device 42

 LLDP Remote Device Information Detail

  Local Port   : 42
  ChassisType  : mac-address
  ChassisId    : 00 11 22 33 33 55
  PortType     : interface-name
  PortId       : eno1
  SysName      : turbotape.example.com
  System Descr : Debian GNU/Linux testing (stretch) Linux 4.5.0-1-amd64 #1...
  PortDescr    : Intel Corporation Ethernet Connection I217-LM

  System Capabilities Supported  : bridge, router
  System Capabilities Enabled    : bridge, router

  Remote Management Address
     Type    : ipv4
     Address : 192.0.2.93
     Type    : ipv6
     Address : 20 01 0d b8 00 00 00 00 00 00 00 00 00 00 00 01
     Type    : all802
     Address : 22 33 44 55 66 77

datacenter switch

ssm@site3-row2-rack1> show lldp neighbors
Local Interface    Parent Interface    Chassis Id          Port info          System Name
[...]
ge-0/0/38.0        ae1.0               01:23:45:67:89:58   Interface   2 as enp3s0f0 server.example.com
ge-1/0/38.0        ae1.0               01:23:45:67:89:58   Interface   3 as enp3s0f1 server.example.com
[...]

ssm@site3-row2-rack1> show lldp neighbors interface ge-0/0/38
LLDP Neighbor Information:
Local Information:
Index: 157 Time to live: 120 Time mark: Fri Apr 29 13:00:19 2016 Age: 24 secs
Local Interface    : ge-0/0/38.0
Parent Interface   : ae1.0
Local Port ID      : 588
Ageout Count       : 0

Neighbour Information:
Chassis type       : Mac address
Chassis ID         : 01:23:45:67:89:58
Port type          : Mac address
Port ID            : 01:23:45:67:89:58
Port description   : Interface   2 as enp3s0f0
System name        : server.example.com

System Description : Linux server.example.com 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 4

System capabilities
        Supported  : Station Only
        Enabled    : Station Only

Management Info
        Type              : IPv6
        Address           : 2001:0db8:0000:0000:0000:dead:beef:cafe
        Port ID           : 2
        Subtype           : 2
        Interface Subtype : ifIndex(2)
        OID               : 1.3.6.1.2.1.31.1.1.1.1.2


27 April, 2016 10:00PM

Joey Hess

my Shuttleworth Foundation flash grant

Six months ago I received a small grant from the Shuttleworth Foundation with no strings attached other than I should write this blog post about it. That was a nice surprise.

The main thing that ended up being supported by the grant was work on Propellor, my configuration management system that is configured by writing Haskell code. I made 11 releases of Propellor in the grant period, with some improvements from me, and lots more from other contributors. The biggest feature that I added to Propellor was LetsEncrypt support.

More important than features is making Propellor prevent more classes of mistakes, by creative use of the type system. The biggest improvement in this area was type checking the OSes of Propellor properties, so Propellor can reject host configurations that combine eg, Linux-only and FreeBSD-only properties.

Turns out that the same groundwork needed for that is also what's needed to get Propellor to do type-level port conflict detection. I have a branch underway that does that, although it's not quite done yet.

The grant also funded some of my work on git-annex. My main funding for git-annex doesn't cover development of the git-annex assistant, so the grant filled in that gap, particularly in updating the assistant to support the git-annex v6 repo format.

I'm very happy to have received this grant, and with the things it enabled me to work on.

27 April, 2016 09:32PM

Niels Thykier

auto-decrufter in top 5 after 10 months

About 10 months ago, we enabled an auto-decrufter in dak.  Then after 3 months it had become the 11th biggest “remover”.  Today, there are only 3 humans left that have removed more packages than the auto-decrufter… impressively enough, one of them is not even an active FTP-master (anymore).  The current scoreboard:

 5371 Luca Falavigna
 5121 Alexander Reichle-Schmehl
 4401 Ansgar Burchardt
 3928 DAK's auto-decrufter
 3257 Scott Kitterman
 2225 Joerg Jaspert
 1983 James Troup
 1793 Torsten Werner
 1025 Jeroen van Wolffelaar
  763 Ryan Murray

For comparison, here is the number removals by year for the past 6 years:

 5103 2011
 2765 2012
 3342 2013
 3394 2014
 3766 2015  (1842 removed by auto-decrufter)
 2845 2016  (2086 removed by auto-decrufter)

Which tells us that in 2015, the FTP masters and the decrufter performed on average over 10 removals a day.  And by the looks of it, 2016 will surpass that.  Of course, the auto-decrufter has a tendency to increase the number of removed items since it is an advocate of “remove early, remove often!”. :)

 

Data is from https://ftp-master.debian.org/removals-full.txt.  Scoreboard computed as:

  grep ftpmaster: removals-full.txt | \
   perl -pe 's/.*ftpmaster:\s+//; s/\]$//;' | \
   sort | uniq -c | sort --numeric --reverse | head -n10

Removals by year computed as:

 grep ftpmaster: removals-full.txt | \
   perl -pe 's/.* (\d{4}) \d{2}:\d{2}:\d{2}.*/$1/' | uniq -c | tail -n6

(yes, both could be done with fewer commands)


Filed under: Debian

27 April, 2016 06:44PM by Niels Thykier

Michal Čihař

motranslator 1.0

Two months after its announcement, I think it's a good time to release version 1.0 of motranslator. This release doesn't bring any major changes; it's more to indicate that the library is stable :-).

Motranslator is a translation library used in the current phpMyAdmin master (the upcoming 4.7.0) with a focus on speed and memory usage. It uses Gettext MO files to load the translations. It also comes with a test suite (100% coverage) and basic documentation.

The recommended way to install it is using composer from the Packagist repository:

composer require phpmyadmin/motranslator

The Debian package will probably be available around the time phpMyAdmin 4.7.0 is out, but if you need it earlier, just let me know.

Filed under: English phpMyAdmin

27 April, 2016 10:00AM by Michal Čihař (michal@cihar.com)

April 26, 2016

Jonathan McDowell

Notes on Kodi + IR remotes

This post is largely to remind myself of the details next time I hit something similar; I found bits of relevant information all over the place, but not in one single location.

I love Kodi. These days the Debian packages give me a nice out of the box experience that is easy to use. The problem comes in dealing with remote controls and making best use of the available buttons. In particular I want to upgrade the VDR setup my parents have to a more modern machine that’s capable of running Kodi. In this instance an AMD E350 nettop, which isn’t recent but does have sufficient hardware acceleration of video decoding to do the job. Plus it has a built in fintek CIR setup.

First step was finding a decent remote. The fintek is a proper IR receiver supported by the in-kernel decoding options, so I had a lot of flexibility. As it happened I ended up with a surplus to requirements Virgin V Box HD remote (URC174000-04R01). This has the advantage of looking exactly like a STB remote, because it is one.

Pointed it at the box, saw that the fintek_cir module was already installed and fired up irrecord. Failed to get it to actually record properly. Googled lots. Found ir-keytable. Fired up ir-keytable -t and managed to get sensible output with the RC-5 decoder. Used irrecord -l to get a list of valid button names and proceeded to construct a vboxhd file (sketched below) which I dropped in /etc/rc_keymaps/. I then added a

fintek-cir * vboxhd

line to /etc/rc_maps.cfg to force my new keymap to be loaded on boot.
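
For reference, such a keymap file is just a header naming the table and protocol followed by scancode/keycode pairs, roughly like this (a sketch; the scancodes and key names here are made-up placeholders, the real ones come from ir-keytable -t and irrecord -l):

# table vboxhd, type: RC5
0x1e01 KEY_1
0x1e02 KEY_2
0x1e0c KEY_POWER
0x1e3b KEY_OK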

That got my remote working, but then came the issue of dealing with the fact that some keys worked fine in Kodi and others didn’t. This seems to be an issue with scancodes above 0xff. I could have remapped the remote not to use any of these, but instead I went down the inputlirc approach (which is already in use on the existing VDR box).

For this I needed a stable device file to point it at; the /dev/input/eventN file wasn’t stable and as a platform device it didn’t end up with a useful entry in /dev/input/by-id. A ‘quick’

udevadm info -a -p $(udevadm info -q path -n /dev/input/eventN)

provided me with the PNP id (FIT0002) allowing me to create /etc/udev/rules.d/70-remote-control.rules containing

KERNEL=="event*",ATTRS{id}=="FIT0002",SYMLINK="input/remote"

Bingo, a /dev/input/remote symlink. /etc/defaults/inputlirc ended up containing:

EVENTS="/dev/input/remote"
OPTIONS="-g -m 0"

The options tell it to grab the device for its own exclusive use, and to take all scancodes rather than letting the keyboard ones through to the normal keyboard layer. I didn’t want anything other than things specifically configured to use the remote to get the key presses.

At this point Kodi refused to actually do anything with the key presses. Looking at ~kodi/.kodi/temp/kodi.log I could see them getting seen, but not understood. Further searching led me to construct an Lircmap.xml - in particular the piece I needed was the <remote device="/dev/input/remote"> bit. The existing /usr/share/kodi/system/Lircmap.xml provided a good starting point for what I wanted and I dropped my generated file in ~kodi/.kodi/userdata/.
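
A stripped-down skeleton of that file looks roughly like this (a sketch; the button elements shown are just examples, and the full list of supported elements can be taken from the shipped /usr/share/kodi/system/Lircmap.xml):

<lircmap>
  <remote device="/dev/input/remote">
    <play>KEY_PLAY</play>
    <pause>KEY_PAUSE</pause>
    <stop>KEY_STOP</stop>
    <select>KEY_OK</select>
    <back>KEY_EXIT</back>
  </remote>
</lircmap>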

(Sadly it turns out I got lucky with the remote; it seems to be using the RC-5x variant which was broken in 3.17; works fine with the 3.16 kernel in Debian 8 (jessie) but nothing later. I’ve narrowed down the offending commit and raised #117221.)

Helpful pages included:

26 April, 2016 08:32PM

Pau Garcia i Quiles

Is KDE the right place for Thunderbird?

For years, Mozilla has been saying they are no longer focused on Thunderbird and its place is outside of Mozilla. Now it seems they are going to act on what they said: Mozilla seeks new home for e-mail client Thunderbird.

The candidates they are exploring are the Software Freedom Conservancy and The Document Foundation; I expect at least the Apache Software Foundation to be a serious candidate as well, and Gnome to propose itself.

Some voices in KDE say we should also propose the KDE eV as a candidate hosting organization.

What follows is my opinion, not the official opinion of the eV or the board’s, or the KDE Community’s opinion. Take it with a grain (or more) of salt.

I am not so sure. I am trying to think what the KDE eV can offer to Mozilla to be appealing to them and if my analysis is correct, we are too far and Thunderbird would pose many risks to the other projects in KDE.

(I am blurring the lines between “KDE eV”, “KDE community”, “KDE Frameworks”, etc as it has no relevance for the discussion)

Thunderbird is an open source project/product with a lot of commercial users and has (still has?) many paid contributors.

IMHO what Mozilla is looking for is an organization with a well-oiled funding machine, able to campaign for money (even if in a tight circle, something like our Patron program), and to accept and process funds in a way that directly benefits Thunderbird, i.e. hiring developers to implement X or Y, or to work on some area full-time, or at least half-time.

KDE does not work like that.

KDE has few commercial users (other than distros, if you want to count them as commercial users). Other than Blue Systems, I don’t think we have any developer working for KDE.

Also, the eV is not exactly a well-oiled funding machine. We have been talking about that for years. And we do not hire developers directly to work on X or Y (at most, we pay for part of the expenses of sprints).

All of that makes me think we are not the right host for Thunderbird.

But it does not stop there!

Let’s say Thunderbird comes to KDE and suddenly we are offered USD 1 M from several organizations who want to be “Patrons of Thunderbird”, or influence Thunderbird, or whatever.

First problem: do we allow funds to go to a specific project rather than the eV deciding how to distribute them? AFAIK we do not allow that and at least one KDE sub-project has had trouble with that in the past.

Then there is the thing about “Patrons of Thunderbird”: no such thing. Either you are a Patron of KDE, including Plasma Mobile, OwnCloud, and whatnot, or you are nothing. You cannot be a “Patron of Partial KDE, namely Thunderbird”.

Influencing, did I say? The eV is by its own rules not an influencer on KDE’s direction, just an entity to provide legal and economic support. Quite the opposite from what Mozilla does today for Thunderbird.

Even if funders would not mind all that, there is the huge risk this poses for all the other projects. With as little as USD 200K donated towards Thunderbird (and USD 200K is not much for a product with so many commercial users, which means a healthy ecosystem of companies making money on support, development, etc, and thus donating to somehow influence or be perceived as important players), Thunderbird becomes the most important project in KDE. How would we manage this? In any sensible organization, Thunderbird would become the main focus and all the other KDE projects would be relegated. Even if we decide not to, external PR would make that look like it happened.

For all those reasons, I think KDE is not the right place for Thunderbird at the moment. It would require a big change in what the eV can do and how it operates. And that change may be for good but it’s not there now and it will not be by the time Mozilla has to decide if KDE is the right place.

All that, and I have not even talked about technology and what any sensible Thunderbird “customer” would think today: what is the medium and long-term roadmap? Migrate Thunderbird users to Kontact/KDE PIM? Port Thunderbird to Qt + KF5, maybe including moving to QtWebEngine? Will Windows support be deteriorated by that change? Or maybe the plan is to cancel KMail and Akregator? Those are second-thoughts, unimportant right now.

Update If you want to contribute to the discussion, please join the KDE-Community mailing list.

 

26 April, 2016 07:45PM by pgquiles

Niels Thykier

Putting Debian packages in labelled boxes

Lintian 2.5.44 was released the other day and (to most people) the most significant bug fix was probably that Lintian learned about Policy 3.9.8.  I would like to thank Axel Beckert for doing that.  Notably it also made me update the test suite so as to make future policy releases less painful.

For others, it might be the fact that Lintian now accepts (valid) versioned provides (which seemed prudent now that Britney accepts them as well).  Newcomers might appreciate that we are giving a much more sensible warning when they have extra spaces in their changelog “sign off” line (rather than pretending it is an improper NMU).  But I digress…

 

What I am here to talk about is that Lintian 2.5.44 started classifying packages based on various “facts” or “properties” we can determine.  Therefore:

  • Every package will have at least one tag now!
  • These labels are known as something called “classification tags”.
  • The tags are not issues to be fixed!  (I will repeat this later to ensure you get this point!)

Here are some of the “labelled boxes” your packages will be put into[0]:

The tags themselves are (as mentioned) mere classifications and their primary purpose is to classify or measure certain properties.  With them anybody can download the data set and come up with some bold statement about Debian packages (hopefully without relying too much on “lies, damned lies and statistics”).  Let's try that immediately!

  • Almost 75% of all Debian packages do not need to run arbitrary code during installation[2]!
  • The “dh-sequencer” with cdbs is the future![3]

In the next release, we will also add tracking of auto-generated snippets from dh_*-tools.  Currently unversioned, but I hope to add versioning to that so we can find and rebuild packages that have been built with buggy autoscripts (like #788098)

If you want to see the classification tags for your package, please run lintian like this:

# Add classification tags
$ lintian -L +classification <pkg-or-changes>
# Or if you want only classification tags
$ lintian -L =classification <pkg-or-changes>

Please keep in mind that classification tags (“C”) are not issues in themselves. Lintian is simply attempting to add a visible indicator about a given “fact” or “property” in the package – nothing more, nothing less.

 

Future work – help (read: patches) welcome:

 

[0] Mind you, the reporting framework’s handling of these tags could certainly be improved.

[1] Please note how it distinguishes 1.0 into native and non-native based on whether the package has a diff.gz.  Presumably that can be exploited somehow …

[2] Disclaimer: At the time of writing, only ~80% of the archive have been processed.  This is computed as: NS / (NS + WS), where NS and WS are the number of unique packages with the tags “no-ctrl-scripts” and “ctrl-script” respectively.

[3] … or maybe not, but we got two packages classified as using both CDBS and the dh-sequencer.  I have not looked at it in detail. For the curious: libmecab-java and ctioga2.


Filed under: Debian, Lintian

26 April, 2016 07:17PM by Niels Thykier

Matthias Klumpp

Why are AppStream metainfo files XML data?

This is a question raised quite often, the last time in a blog post by Thomas, so I thought it would be a good idea to give a slightly longer explanation (and also create an article to link to…).

There are basically three reasons for using XML as the default format for metainfo files:

1. XML is easily forward/backward compatible, while YAML is not

This is a matter of extending the AppStream metainfo files with new entries, or adapt existing entries to new needs.

Take this example XML line for defining an icon for an application:

<icon type="cached">foobar.png</icon>

and now the equivalent YAML:

Icons:
  cached: foobar.png

Now consider we want to add a width and height property to the icons, because we started to allow more than one icon size. Easy for the XML:

<icon type="cached" width="128" height="128">foobar.png</icon>

This line of XML can be read correctly by both old parsers, which will just see the icon as before without reading the size information, and new parsers, which can make use of the additional information if they want. The change is both forward and backward compatible.

This looks differently with the YAML file. The “foobar.png” is a string-type, and parsers will expect a string as value for the cached key, while we would need a dictionary there to include the additional width/height information:

Icons:
  cached: name: foobar.png
          width: 128
          height: 128

The change shown above will break existing parsers though. Of course, we could add a cached2 key, but that would require people to write two entries, to keep compatibility with older parsers:

Icons:
  cached: foobar.png
  cached2: name: foobar.png
          width: 128
          height: 128

Less than ideal.

While there are ways to break compatibility in XML documents too, as well as ways to design YAML documents in a way which minimizes the risk of breaking compatibility later, keeping the format future-proof is far easier with XML compared to YAML (and sometimes simply not possible with YAML documents). This makes XML a good choice for this usecase, since we can not do transitions with thousands of independent upstream projects easily, and need to care about backwards compatibility.

2. Translating YAML is not much fun

A property of AppStream metainfo files is that they can be easily translated into multiple languages. For that, tools like intltool and itstool exist to aid with translating XML using Gettext files. This can be done at project build-time, keeping a clean, minimal XML file, or before, storing the translated strings directly in the XML document. Generally, YAML files can be translated too. Take the following example (shamelessly copied from Dolphin):

<summary>File Manager</summary>
<summary xml:lang="bs">Upravitelj datoteka</summary>
<summary xml:lang="cs">Správce souborů</summary>
<summary xml:lang="da">Filhåndtering</summary>

This would become something like this in YAML:

Summary:
  C: File Manager
  bs: Upravitelj datoteka
  cs: Správce souborů
  da: Filhåndtering

Looks manageable, right? Now, AppStream also covers long descriptions, where individual paragraphs can be translated by the translators. This looks like this in XML:

<description>
  <p>Dolphin is a lightweight file manager. It has been designed with ease of use and simplicity in mind, while still allowing flexibility and customisation. This means that you can do your file management exactly the way you want to do it.</p>
  <p xml:lang="de">Dolphin ist ein schlankes Programm zur Dateiverwaltung. Es wurde mit dem Ziel entwickelt, einfach in der Anwendung, dabei aber auch flexibel und anpassungsfähig zu sein. Sie können daher Ihre Dateiverwaltungsaufgaben genau nach Ihren Bedürfnissen ausführen.</p>
  <p>Features:</p>
  <p xml:lang="de">Funktionen:</p>
  <p xml:lang="es">Características:</p>
  <ul>
    <li>Navigation (or breadcrumb) bar for URLs, allowing you to quickly navigate through the hierarchy of files and folders.</li>
    <li xml:lang="de">Navigationsleiste für Adressen (auch editierbar), mit der Sie schnell durch die Hierarchie der Dateien und Ordner navigieren können.</li>
    <li xml:lang="es">barra de navegación (o de ruta completa) para URL que permite navegar rápidamente a través de la jerarquía de archivos y carpetas.</li>
    <li>Supports several different kinds of view styles and properties and allows you to configure the view exactly how you want it.</li>
    ....
  </ul>
</description>

Now, how would you represent this in YAML? Since we need to preserve the paragraph and enumeration markup somehow, and creating a large chain of YAML dictionaries is not really a sane option, the only choices would be:

  • Embed the HTML markup in the file, and risk non-careful translators breaking the markup by e.g. not closing tags.
  • Use Markdown, and risk people not writing the markup correctly when translating a really long string in Gettext.

In both cases, we would lose the ability to translate individual paragraphs, which also means that as soon as the developer changes the original text in YAML, translators would need to translate the whole bunch again, which is inconvenient.

On top of that, there are no tools to translate YAML properly that I am aware of, so we would need to write those too.

3. Allowing XML and YAML makes a confusing story and adds complexity

While adding YAML as a format would not be too hard, given that we already support it for DEP-11 distro metadata (Debian uses this), it would make the business of creating metainfo files more confusing. At the moment, we have a clear story: write the XML, store it in /usr/share/metainfo, use standard tools to translate the translatable entries. Adding YAML to the mix adds an additional choice that needs to be supported for eternity and also has the problems mentioned above.

I wanted to add YAML as format for AppStream, and we discussed this at the hackfest as well, but in the end I think it isn’t worth the pain of supporting it for upstream projects (remember, someone needs to maintain the parsers and specification too and keep XML and YAML in sync and updated). Don’t get me wrong, I love YAML, but for translated metadata which needs a guarantee on format stability it is not the ideal choice.

So yeah, XML isn’t fun to write by hand. But for this case, XML is a good choice.

26 April, 2016 04:20PM by Matthias

Michal Čihař

Weekly phpMyAdmin contributions 2016-W16

Last week was again focused on bug fixing due to the increased amount of bug reports received on the 4.6.0 release. Fortunately most of the annoying bugs are already fixed in git and will soon be released as 4.6.1.

Another bigger task started last week was the wiki migration. So far we've been using our own wiki running MediaWiki, and we're migrating it to the GitHub wiki. The wiki on GitHub is way simpler, but it seems like the better choice for us. During the migration all user documentation will be merged into our documentation, so that it's all in one place and the wiki will be targeted at developers.

Handled issues:

Filed under: English phpMyAdmin

26 April, 2016 04:00PM by Michal Čihař (michal@cihar.com)

Reproducible builds folks

Reproducible builds: week 52 in Stretch cycle

What happened in the Reproducible Builds effort between April 17th and April 23rd 2016:

Toolchain fixes

Thomas Weber uploaded lcms2/2.7-1 which will not write uninitialized memory when writing color names. Original patch by Lunar.

The GCC 7 development phase has just begun, so Dhole reworked his patch to make gcc use SOURCE_DATE_EPOCH if set which prompted interesting feedback, but it has not been merged yet.

Alexis Bienvenüe submitted a patch for sphinx to strip Python object memory addresses from the generated documentation.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: cobertura, commons-pool, easymock, eclipselink, excalibur-logkit, gap-radiroot, gluegen2, jabref, java3d, jcifs, jline, jmock2, josql, jtharness, libfann, libgroboutils-java, libjemmy2-java, libjgoodies-binding-java, libjgrapht0.8-java, libjtds-java, liboptions-java, libpal-java, libzeus-jscl-java, node-transformers, octave-msh, octave-secs2d, openmama, rkward.

The following packages have become reproducible after being fixed:

Patches submitted that have not made their way to the archive yet:

  • #821356 against emoslib by boyska: use echo in a portable manner across shells.
  • #822268 against transdecoder by Dhole: set PERL_HASH_SEED=0 when calling the scripts that generate samples.

tests.reproducible-builds.org

  • Steven Chamberlain investigated the performance of our armhf boards which also provided a nice overview of our armhf build network.
  • As i386 has now been almost completely tested, the order of the architectures displayed has been changed to reflect the fact that i386 is now the 2nd most popular architecture in Debian. (h01ger)
  • In order to decrease the number of blacklisted packages, the first build is now run with a timeout of 18h (previously: 12h) and the 2nd with a timeout of 24h (previously: 18h). (h01ger)
  • We now also vary the CPU model on amd64 (and soon on i386 too) so that one build is performed using an "AMD Opteron 62xx class CPU" while the other is done using an "Intel Core Processor (Haswell)". This is now possible as profitbricks.com offers VMs running both types of CPU and has generously increased their sponsorship once more. (h01ger)
  • Profitbricks increased our storage space by 400 GB, which will be used to set up a 2nd build node for the coreboot/OpenWrt/NetBSD/Arch Linux/Fedora tests. This 2nd build node will run with its clock set 398 days in the future, for testing reproducibility on a different date.

diffoscope development

diffoscope 52 was released with changes from Mattia Rizzolo, h01ger, Satyam Zode and Reiner Herrmann, who also did the release. Notable changes included:

  • Drop transitional debbindiff package.
  • Let objdump demangle symbols for better readability.
  • Install bin/diffoscope instead of auto-generated script. (Closes: #821777)

As usual, diffoscope 52 is available in Debian, Arch Linux and on PyPI; other distributions will hopefully update soon.

Package reviews

28 reviews have been added, 11 have been updated and 94 have been removed this week.

14 FTBFS bugs were reported by Chris Lamb (one of which was a duplicate of a bug filed by Sebastian Ramacher an hour earlier).

Misc.

This week's edition was written by Lunar, Holger 'h01ger' Levsen and Chris Lamb and reviewed by a bunch of Reproducible builds folks on IRC.

26 April, 2016 03:34PM


Rhonda D'Vine

Prince

Last week we lost another great musician, songwriter, artist. It's painful to realise that more and more of the people you grew up with aren't there anymore. We lost Prince, TAFKAP, Symbol, Prince. He wrote a lot of great music, even some you wouldn't attribute to him, like Sinead O'Connor's Nothing Compares To You, Bangles' Manic Monday or Chaka Khan's I Feel For You. But I actually would like to share some songs that he performed himself, so without further ado here are the songs:

Rest in peace, Prince. And you, enjoy.


26 April, 2016 12:32PM by Rhonda


Steinar H. Gunderson

Full stack

As I'm nearing the point where Nageru, my live video mixer, can produce a stream that is actually suitable for streaming directly to clients (without a transcoding layer in the chain), it struck me the other day how much of the chain I've actually had to touch:

In my test setup, the signal comes into a Blackmagic Intensity Shuttle. At some point, I found what I believe is a bug in the card's firmware; I couldn't fix it, but a workaround was applied in the Linux kernel. (I also have some of their PCI cards, in which I haven't found any bugs, but I have found bugs in their drivers.)

From there, it goes into bmusb, a driver I wrote myself. bmusb uses libusb-1.0 to drive the USB card from userspace—but for performance and stability reasons, I patched libusb to use the new usbfs zerocopy support in the Linux kernel. (The patch is still pending review.) Said zerocopy support wasn't written by me, but I did the work to clean up the support and push it upstream (it's in the 4.6-rc* series).

Once safely through bmusb, it goes of course into Nageru, which I wrote myself. Nageru uses Movit for pixel processing, which I also wrote myself. Movit in turn uses OpenGL; I've found bugs in all three major driver implementations, and fixed a Nageru-related one in Mesa (and in the process of debugging that, found bugs in apitrace, a most useful OpenGL debugger). Sound goes through zita-resampler to stretch it ever so gently (in case audio and video clocks are out of sync), which I didn't write, but patched to get SSE support (patch pending upstream).

So now Nageru chews a bit on it, and then encodes the video using x264 (that's the new part in 1.3.0—of course, you need a fast CPU to do that as opposed to using Quick Sync). I didn't write x264, but I had to redo parts of the “speedcontrol” patch (not part of upstream; awaiting review semi-upstream) because of bugs and outdated timings, but I also found a bug in x264 proper (fixed by upstream, pending inclusion). Muxing is done through ffmpeg, where I actually found multiple bugs in the muxer (some of which are still pending fixes).

Once the stream is safely encoded and hopefully reasonably standards-conforming (that took me quite a while), it goes to Cubemap, which I wrote, for reflection to clients. For low-bitrate clients, it takes a detour through VLC to get a re-encode on a faster machine to lower bitrate—I've found multiple bugs in VLC's streaming support in the past (and also fixed some of them, plus written the code that interacts with Cubemap).

From there it goes to any of several clients, usually a browser. I didn't write any browsers (thank goodness!), but I wrote the client-side JavaScript that picks the closest relay, and the code for sending it to a Chromecast. I also found a bug in Chrome for Android (will be fixed in version 50 or 51, although the fix was just about turning on something that was already in the works), and one in Firefox for Linux (fixed by patching GStreamer's MP4 demuxer, although they've since switched away from that to something less crappy). IE/Edge also broke at some point, but unfortunately I don't have a way to report bugs to Microsoft. There's also at least one VLC bug involved on the client side (it starts decoding frames too late if they come with certain irregular timestamps, which causes them to drop), but I want to verify that they still persist even after the muxer is fixed before I go deep on that.

Moral of the story: If anyone wants to write a multimedia application and says “I'll just use <framework, language or library XYZ>, and I'll get everything for free; I just need to click things together!”, they simply don't know what they're talking about and are in for a rude awakening. Multimedia is hard, an amazing amount of things can go wrong, complex systems have subtle bugs, and there is no silver bullet.

26 April, 2016 12:11PM

Matthias Klumpp

A GNOME Software Hackfest report

Two weeks ago was the GNOME Software hackfest in London, and I was there! I only now found the time to blog about it – better late than never 😉 .

Arriving in London and finding the Red Hat offices

After being stuck in trains for the weekend, but fortunately arriving at the airport in time, I finally made it to London with quite some delay due to the slow bus transfer from Stansted Airport. After finding the hotel, the next issue was to get food and a place which accepted my credit card, which was surprisingly hard – in defence of London I must say, though, that it was a Sunday, 7 p.m. and my card is somewhat special (in Canada, it managed to crash some card readers, so they needed a hard reset). While searching for food, I also found, by accident, the Red Hat offices where the hackfest was starting the next day. My hotel, the office and Tower Bridge were really close together, which was awesome! I had last been to London in 2008, and only for a day, so being that close to the city center was great. The hackfest didn’t leave any time to visit the city much, but by being close to the center, one could hardly avoid the “London experience” 😉 .

Cool people working on great stuff

That’s basically the summary for the hackfest 😉 . It was awesome to meet with Richard Hughes again, since we haven’t seen each other in person since 2011, but have worked on lots of stuff together. This was especially important, since we managed to resolve quite a few disagreements we had over stuff – Richard even almost managed to make me give in to adding <kudos/> to the AppStream spec, something which I was pretty against supporting (it didn’t make it yet, but I am no longer against the idea of having that – the remaining issues are solvable).

Meeting Iain Lane again (after FOSDEM) was also very nice, and seeing other people I've previously only worked with over IRC or bug reports (e.g. William, Kalev, …) was great. Lots of "new" people were there too, like the guys from Endless, who build their low-budget computer for developing/emerging countries on top of GNOME and Linux technologies. It's pretty cool stuff they do – you should check out their website! (they also build their distribution on top of Debian, which is even more awesome, and something I didn't know before (because many Endless people I met before were associated with GNOME or Fedora, I kind of implicitly assumed the system was based on Fedora 😛 )).

The incarnation of GNOME Software used by Endless looks pretty different from what the normal GNOME user sees, since it’s adjusted for a different audience and input method. But it looks great, and is a good example of how versatile GS already is! And for upstream GNOME, we’ve seen some pretty great mockups done by Endless too – I hope those will make it into production somehow.

Ironically, a “snapstore” was close to the office ;-)

XdgApp and sandboxing of apps was also a big topic, aside from Ubuntu and Endless integration. Fortunately, Alexander Larsson was also there to answer all the sandboxing and XdgApp questions.

I used the time to follow up on a conversation with Alexander we started at FOSDEM this year, about the Limba vs. XdgApp bundling issue. While we are in line on the sandboxing approach, the way software is distributed is implemented differently in Limba and XdgApp, and it is bad to have too many bundling systems around (it doesn’t make for a good story where we can just tell developers “ship as this bundling format, and it will be supported everywhere”). Talking with Alex about this was very nice, and I think there is a way out of the too-many-solutions dilemma, at least for Limba and XdgApp – I will blog about that separately soon.

On the Ubuntu side, a lot of bugs and issues were squashed and changes upstreamed to GNOME, and people were generally doing their best to reduce Richard’s bus-factor on the project a little 😉 .

I mainly worked on AppStream issues, finishing up the last pieces of appstream-generator and running it against some sample package sets (and later that week against the whole Debian archive). I also started to implement support for showing AppStream issues in the Debian PTS (this work is not finished yet). I also managed to solve a few bugs in the old DEP-11 generator and prepare another release for Ubuntu.

We also enjoyed some good Japanese food, and some incredibly great, but also suddenly very expensive Indian food (but that’s a different story 😉 ).

The most important thing for me though was to get together with people actually using AppStream metadata in software centers and also more specialized places. This yielded some useful findings, e.g. that localized screenshots are not something weird, but actually a wanted feature of Endless for their curated AppStore. So localized screenshots will be part of the next AppStream spec. Also, there seems to be a general need to ship curation information for software centers somehow (which apps are featured? how are they styled? added special banners for some featured apps, “app of the day” features, etc.). This problem hasn’t been solved, since it’s highly implementation-specific, and AppStream should be distro-agnostic. But it is something we might be able to address in a generic way sooner or later (I need to talk to people at KDE and Elementary about it).

In summary…

It was a great event! Going to conferences and hackfests always makes me feel like they move projects leaps ahead, even if you do little coding. Sorting out issues together with people you see in person (rather than communicating with them via text messages or video chat) is IMHO always the most productive way to move forward (yeah, unless you do this every week, but I think you get my point 😀 ).

For me, being the only (and youngest ^^) developer at the hackfest who was not employed by any company in the FLOSS business, the hackfest was also motivation to continue investing spare time into working on these projects.

So, the only thing left to do is a huge shout out of “THANK YOU” to the Ubuntu Community Fund – and therefore the Ubuntu community – for sponsoring me! You rock! Also huge thanks to Canonical for organizing the sponsoring really quickly, so I didn’t get into trouble with paying my flights.

Laney and attente on the Millennium Bridge after we walked the distance between Red Hat and Canonical’s offices.

To worried KDE people: No, I didn’t leave the blue side – I just generally work on cross-desktop stuff, and would like all desktops to work as well as possible 😉

26 April, 2016 07:50AM by Matthias


Francois Marier

Using DNSSEC and DNSCrypt in Debian

While there is real progress being made towards eliminating insecure HTTP traffic, DNS is a fundamental Internet service that still usually relies on unauthenticated cleartext. There are however a few efforts to try and fix this problem. Here is the setup I use on my Debian laptop to make use of both DNSSEC and DNSCrypt.

DNSCrypt

DNSCrypt was created to enable end-users to encrypt the traffic between themselves and their chosen DNS resolver.

To switch away from your ISP's default DNS resolver to a DNSCrypt resolver, simply install the dnscrypt-proxy package and then set it as the default resolver either in /etc/resolv.conf:

nameserver 127.0.2.1

if you are using a static network configuration or in /etc/dhcp/dhclient.conf:

supersede domain-name-servers 127.0.2.1;

if you rely on dynamic network configuration via DHCP.

There are two things you might want to keep in mind when choosing your DNSCrypt resolver:

  • whether or not they keep any logs of the DNS traffic
  • whether or not they support DNSSEC

I have personally selected a resolver located in Iceland by setting the following in /etc/default/dnscrypt-proxy:

DNSCRYPT_PROXY_RESOLVER_NAME=ns0.dnscrypt.is

DNSSEC

While DNSCrypt protects the confidentiality of our DNS queries, it doesn't give us any assurance that the results of such queries are the right ones. In order to authenticate results in that way and prevent DNS poisoning, a hierarchical cryptographic system was created: DNSSEC.

In order to enable it, I have setup a local unbound DNSSEC resolver on my machine and pointed /etc/resolv.conf (or /etc/dhcp/dhclient.conf) to my unbound installation at 127.0.0.1.

Then I put the following in /etc/unbound/unbound.conf.d/dnscrypt.conf:

server:
    # Remove localhost from the donotquery list
    do-not-query-localhost: no

forward-zone:
    name: "."
    forward-addr: 127.0.2.1@53

to stop unbound from resolving DNS directly and to instead go through the encrypted DNSCrypt proxy.
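To check that the local unbound instance is actually validating, one option (beyond watching its logs) is to query it directly and look at the AD ("authenticated data") flag in the response. A small sketch, assuming the dnspython package is installed; this is just a verification aid, not part of the setup above:

import dns.flags
import dns.resolver

# Point directly at the local unbound instance configured above.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ['127.0.0.1']
resolver.use_edns(0, dns.flags.DO, 4096)  # request DNSSEC records

answer = resolver.query('debian.org', 'A')
if answer.response.flags & dns.flags.AD:
    print("Response validated by the resolver (AD flag set)")
else:
    print("No validation -- check the unbound configuration")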

Reliability

In my experience, unbound and dnscrypt-proxy are fairly reliable but they eventually get confused (presumably) by network changes and start returning errors.

The ugly but dependable work-around I have found is to create a cronjob at /etc/cron.d/restart-dns.conf that restarts both services once a day:

0 3 * * *    root    /usr/sbin/service dnscrypt-proxy restart
1 3 * * *    root    /usr/sbin/service unbound restart

Captive portals

The one remaining problem I need to solve has to do with captive portals. This can be quite annoying when travelling because it requires me to use the portal's DNS resolver in order to connect to the splash screen that unlocks the wifi connection.

The dnssec-trigger package looked promising but when I tried it on my jessie laptop, it wasn't particularly reliable.

My temporary work-around is to comment out this line in /etc/dhcp/dhclient.conf whenever I need to connect to such annoying wifi networks:

#supersede domain-name-servers 127.0.0.1;

If you've found a better solution to this problem, please leave a comment!

26 April, 2016 03:00AM


Dirk Eddelbuettel

RcppMsgPack 0.1.0

Over the last few months, I have been working casually on a new package to integrate MessagePack with R. What is MessagePack, you ask? To quote its website, "It's like JSON, but fast and small."

Or in more extended terms:

MessagePack is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it's faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves.

Now, serialization formats are a dime a dozen: JSON, BSON, Protocol Buffers, Cap'n Proto, FlatBuffers. The list goes on and on. So why another? In a nutshell: "software ecosystems".

I happen to like working with Redis, and within the world of Redis, MessagePack is a first-class citizen supported by things close to the core like the embedded Lua interpreter, as well as fancy external add-ons such as the Redis Desktop Manager GUI. So nothing overly fundamentalist here, but a fairly pragmatic choice based on what happens to fit my needs. Plus, having worked on and off with Protocol Buffers for close to a decade, the chance of working with something not requiring a friggin' schema compiler seemed appealing for a change.

So far, we have been encoding a bunch of data streams at work via MessagePack into Redis (and of course back). It works really well---header-only C++11 libraries for the win. I'll provide an updated RcppRedis which uses this (if present) in due course.
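For readers who do not use R, the same round trip is just as short in Python; a minimal sketch, assuming the msgpack and redis Python packages are installed and a local Redis server is running (the key name is made up for the example):

import msgpack
import redis

r = redis.Redis(host='localhost', port=6379)

record = {"id": 42, "values": [1.5, 2.5, 3.5], "label": "demo"}

# Serialize to a compact binary blob and store it under a key.
r.set("example:record", msgpack.packb(record, use_bin_type=True))

# Fetch it back and decode; raw=False turns bytes back into str keys.
restored = msgpack.unpackb(r.get("example:record"), raw=False)
assert restored == record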

For now and the foreseeable future, this RcppMsgPack package will live only on the ghrr drat repository. To make RcppMsgPack work, I currently have to include the MessagePack 1.4.0 headers. A matching package for this version of the headers is in Debian but so far only in experimental. Once this hits the mainline repository I can depend on it, and upload a (lighter, smaller) RcppMsgPack to CRAN.

Until then, please just do

## install drat if not present
if (!require(drat)) install.packages("drat")

## use drat to select ghrr repo
drat::addRepo("ghrr")

## install RcppMsgPack
install.packages("RcppMsgPack")

More details, issue tickets etc are at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 April, 2016 02:08AM

April 25, 2016


Gunnar Wolf

Passover / Pesaj, a secular viewpoint, a different viewpoint... And slowly becoming history!

As many of you know (where "you" is "people reading this who actually know who I am"), I come from a secular Jewish family. Although we have some religious (even very religious) relatives, neither my parents nor my grandparents were ever religious. Not that spirituality wasn't important to them — My grandparents both went deep into understanding by and for themselves the different spiritual issues that came to their mind, and that's one of the traits I most remember about them while I was growing up. But formal, organized religion was never much welcome in the family; again, each of us had their own ways to reconcile our needs and fears with what we thought, read and understood.

This week is the Jewish celebration of Passover, or Pesaj as we call it (for which Passover is a direct translation, as Pesaj refers to the act of the angel of death passing over the houses of the sons of Israel during the tenth plague in Egypt; in Spanish, the name would be Pascua, which rather refers to the ritual sacrifice of a lamb that was done in the days of the great temple)... Anyway, I like giving context to what I write, but it always takes me off the main topic I want to share. Back to my family.

I am a third-generation member of the Hashomer Hatzair zionist socialist youth movement; my grandmother was among the early Hashomer Hatzair members in Poland in the 1920s, both my parents were active in the Mexico ken in the 1950s-1960s (in fact, they met and first interacted there), and I was a member from 1984 until 1996. It was also thanks to Hashomer that my wife and I met, and if my children get to have any kind of Jewish contact in their lives, I hope it will be through Hashomer as well.

Hashomer is a secular, nationalist movement. A youth movement with over a century of history might seem like a contradiction. Over the years, of course, it has changed many details, but as far as I know, the essence is still there, and I hope it will continue to be so for good: Helping shape integral people, with identification with Judaism as a nation and not as a religion; keeping our cultural traits, but interpreting them liberally, and aligned with a view towards the common good — Socialism, no matter how the concept seems passé nowadays. Collectivism. Inclusion. Peaceful coexistence with our neighbours. Acceptance of the different. I could write pages on how I learnt about each of them during my years in Hashomer, how such concepts struck me as completely different from what the broader Jewish community I grew up in understood and related to them... But again, I am steering off the topic I want to pursue.

Every year, we used to have a third Seder (that is, a third Passover ceremony) at Hashomer. A third one because, as tradition mandates two ceremonies to be held outside Israel, and a movement comprised of people aged between 7 and 21 would not be too successful holding a seder competing with the family one, we held a celebration on a following day. But it would never be the same as the "formal" Pesaj: For the Seder, the Jewish tradition mandates following the Hagada — The Seder always follows a predetermined order (literally, Seder means order), and the Hagadá (which means both legend and a story that is spoken; you can find full Hagadot online if you want to see what rites are followed; I found a seemingly well done, modern, Hebrew and English version, a more traditional one, in Hebrew and Spanish, and Wikipedia has a description including its parts and rites) is, quite understandably, full of religious words, praises for God, and... Well, many things that are not in line with Hashomer's values. How could we be a secular movement and have a big celebration full of praises for God? How could we, yearning for life in the kibbutz, distance ourselves from the true agricultural meaning of the celebration?

The members of Hashomer Hatzair repeatedly took on the task (or, as many would see it, the heresy) of adapting the Hagada to follow their worldview, updating it for the twentieth century, making it more palatable for our peculiarities. Yesterday, when we had our Seder, I saw my father still has –together with the other, more traditional Hagadot we use– two copies of the Hagadá he used at Hashomer Hatzair's third Seder. And they are not only beautiful works showing what they, as very young activists, thought and made solemn, but over time, they are becoming historic items by themselves (one from when my parents were still young janijim, in 1956, and one from when they were starting to have responsibilities and were non-formal teachers or path-showers, madrijim, in 1959). He also had a copy of the Hagadá we used in the 1980s when I was at Hashomer; this last one was (sadly?) not done by us as members of Hashomer, but prepared by a larger group between Hashomer Hatzair and the Mexican friends of Israel's associated left wing party, Mapam. This last one, I don't know which year it was prepared and published in, but I remember following it in our ceremony.

So, I asked him to lend me the three little books, almost leaflets, and scanned them to be put online. Of course, there is no formal licensing information in them, much less explicit authorship information, but they are meant to be shared — So I took the liberty of uploading them to the Internet Archive, tagging them as CC-0 licensed. And if you are interested in them, flowing back and forth between Spanish and Hebrew, with many beautiful texts adapted for them from various sources, illustrated by our own with the usual heroic, socialist-inspired style, and lovingly hand-reproduced using the adequate technology for their day... Here they are:

I really enjoyed the time I took scanning and forming them, reading some passages, imagining ourselves and my parents as youngsters, remembering the beautiful work we did at such a great organization. I hope this brings this joy to others like it did to me.

פעם שומר, תמיד שומר. Once shomer, always shomer.

25 April, 2016 04:51PM by gwolf



Ricardo Mones

Maximum number of clients reached Error: Can't open display: :0

Today it happened again: you try to open some program and nothing happens. You go to an open terminal, try again, and it answers with the above message. On other days I used to restart the session, but that's something I don't really think should be necessary.

My first thought was that X had gone mad, but this one seems pretty well behaved:

$ lsof -p `pidof Xorg` | wc -l
5

Then I noticed I had a long-running chromium process (a jQuery page monitoring a remote service), so I tried this one as well:

$ for a in `pidof chromium`; do echo "$a "`lsof -p $a | wc -l`; done
27914 5
26462 5
25350 5
24693 5
23378 5
22723 5
22165 5
21476 222
21474 1176
21443 5
21441 204
21435 546
11644 5
11626 5
11587 5
11461 5
11361 5
9833 5
9726 5

Wow, I'd bet you can guess the next command ;-)

$ kill -9 21435 21441 21474 21476

This of course wiped out all chromium processes, but it also fixed the problem. Suggestions for selective chromium killing welcome! But I'd rather like to know why those files are not properly closed. Just relaunching chromium to write this post yields:

$ for a in `pidof chromium`; do echo "$a "`lsof -p $a | wc -l`; done
11919 5
11848 222
11841 432
11815 5
11813 204
11807 398

Which looks a bit exaggerated to me :-(

25 April, 2016 08:20AM


Norbert Preining

Gödel and Daemons – an excursion into literature

Explaining Gödel’s theorems to students is a pain. Period. How can those poor creatures crank their mind around a Completeness and an Incompleteness Proof… I understand that. But then, there are brave souls using Gödel’s theorems to explain the world of demons to writers, in particular to answer the question:

You can control a Demon by knowing its True Name, but why?


Very impressive.

Found at worldbuilding.stackexchange.com, pointed out to me by a good friend. I dare to quote author Cort Ammon (nothing more is known) in full, to preserve this masterpiece. Thanks!!!!


Use of their name forces them to be aware of the one truth they can never know.

Tl/Dr: If demons seek permanent power but trust no one, they put themselves in a strange position where mathematical truisms paint them into a corner which leaves their soul small and frail holding all the strings. Use of their name suggests you might know how to tug at those strings and unravel them wholesale, from the inside out!

Being a demon is tough work. If you think facing down a 4000lb Glabrezu without their name is difficult, try keeping that much muscle in shape in the gym! Never mind how many manicurists you go through keeping the claws in shape!

I don’t know how creative such demons truly are, but the easy route towards the perfect French tip that can withstand the rigors of going to the gym and benching ten thousand pounds is magic. Such a demon might learn a manicure spell from the nearby resident succubi. However, such spells are often temporary. No demon worth their salt is going to admit in front of a hero that they need a moment to refresh their mani before they can fight. The hero would just laugh at them. No, if a demon is going to do something, they’re going to do it right, and permanently. Not just nice french tips with a clear lacquer over the top, but razor sharp claws that resharpen themselves if they are blunted and can extend or retract at will!

In fact, come to think of it, why even go to the gym to maintain one’s physique? Why not just cast a magic spell which permanently makes you into the glorious Hanz (or Franz) that the trainer keeps telling you is inside you, just waiting to break free. Just get the spell right once, and think of the savings you could have on gym memberships.

Demons that wish to become more powerful, permanently, must be careful. If fairy tales have anything to teach us, it’s that one of the most dangerous things you can do is wish for something forever, and have it granted. Forever is a very long time, and every spell has its price. The demon is going to have to make sure the price is not greater than the perks. It would be a real waste to have a manicure spell create the perfect claws, only to find that they come with a peculiar penchant to curve towards one’s own heart in an attempt to free themselves from the demon that cast them.

So we need proofs. We need proofs that each spell is a good idea, before we cast it. Then, once we cast it, we need proof that the spell actually worked as intended. Otherwise, who knows if the next spell will layer on top perfectly or not. Mathematics to the rescue! The world of First Order Logic (FOL, or hereafter simply “logic”) is designed to offer these guarantees. With a few strokes of a pen, pencil, or even brush, it can write down a set of symbols which prove, without a shadow of a doubt, that not only will the spell work as intended, but that the side effects are manageable. How? So long as the demon can prove that they can cast a negation spell to undo their previous spell, the permanency can be reverted by the demon. With a few more fancy symbols, the demon can also prove that nobody else outside of the demon can undo their permanency. It’s a simple thing for mathematics really. Mathematics has an amazing spell called reductio ad infinitum which does unbelievable things.

However, there is a catch. There is always a catch with magic, even when that magic is being done through mathematics. In 1931, Kurt Gödel published his Incompleteness Theorems. These are 3 fascinating works of mathematical art which invoke the true names of First Order Logic and Set Theory. Gödel was able to prove that any system which is powerful enough to prove out all of algebra (1 + 1 = 2, 2 + 1 = 3, 3 * 5 = 15, etc.), could not prove its own validity. The self referential nature of proving itself crossed a line that First Order Logic simply could not return from. He proved that any system which tries must pick up one of these five traits:

  • Incomplete – they missed a detail when trying to prove everything
  • Incorrect – They got everything, but at least one point is wrong
  • Unprovable – They might be right, but they can never prove it
  • Intractable – If you’re willing to sit down and write down a proof that takes longer than eternity, you can prove a lot. Proofs that fit into eternity have limits.
  • Illogical – Throw logic to the wind, and you can prove anything!

If the demon wants itself to be able to cancel the spell, his proof is going to have to include his own abilities, creating just the kind of self referential effects needed to invoke Gödel’s incompleteness theorems. After a few thousand years, the demon may realize that this is folly.

A fascinating solution the demon might choose is to explore the “incomplete” solution to Gödel’s challenge. What if the demon permits the spell to change itself slightly, but in an unpredictable way. If the demon was a harddrive, perhaps he lets a single byte get changed by the spell in a way he cannot expect. This is actually enough to sidestep Gödel’s work, by introducing incompleteness. However, now we have to deal with pesky laws of physic and magics. We can’t just create something out of nothing, so if we’re going to let the spell change a single byte of us, there must be a single byte of information, its dual, that is unleashed into the world. Trying to break such conservation laws opens up a whole can of worms. Better to let that little bit go free into the world.

Well, almost. If you repeat this process a whole bunch of times, layering spells like a Matryoshka doll, you’re eventually left with a “soul” that is nothing but the leftover bits of your spells that you simply don’t know enough about to use. If someone were collecting those bits and pieces, they might have the undoing of your entire self. You can’t prove it, of course, but it’s possible that those pieces that you sent out into the world have the keys to undo your many layers of armor, and then you know they are the bits that can nullify your soul if they get there. So what do you do? You hide them. You cast your spells only on the darkest of nights, deep in a cave where no one can see you. If you need assistants, you make sure to ritualistically slaughter them all, lest one of them know your secret and whisper it to a bundle of reeds, “The king has horns,” if you are familiar with the old fairy tale. Make it as hard as possible for the secret to escape, and hope that it withers away to nothingness before someone discovers it, leaving you invincible.

Now we come back to the name. The demon is going to have a name it uses to describe its whole self, including all of the layers of spellcraft it has acquired. This will be a great name like Abraxis, the Unbegotten Father or “Satan, lord of the underworld.” However, they also need to keep track of their smaller self, their soul. Failure to keep track of this might leave them open to an attack if they had missed a detail when casting their spells, and someone uncovered something to destroy them. This would be their true name, potentially something less pompous, like Gaylord Focker or Slartybartfarst. They would never use this name in company. Why draw attention to the only part of them that has the potential to be weak.

So when the hero calls out for Slartybartfarst, the demon truly must pay attention. If they know the name the demon has given over the remains of their tattered soul, might they know how to undo the demon entirely? Fear would grip their inner self, like a child, having to once again consider that they might be mortal. Surely they would wish to destroy the hero that spoke the name, but any attempt runs the risk of falling into a trap and exposing a weakness (surely their mind is racing, trying to enumerate all possible weaknesses they have). It is surely better for them to play along with you, once you use their true name, until they understand you well enough to confidently destroy you without destroying themselves.

So you ask for answers which are plausible. This one needs no magic at all. None of the rules are invalid in our world today. Granted finding a spell of perfect manicures might be difficult (believe me, some women have spent their whole life searching), but the rules are simply those of math. We can see this math in non-demonic parts of society as well. Consider encryption. An AES-256 key is so hard to brute force that it is currently believed it is impossible to break it without consuming 3/4 of the energy in the Milky Way Galaxy (no joke!). However, know the key, and decryption is easy. Worse, early implementations of AES took shortcuts. They actually left the signature of the path they took through the encryption in their accesses to memory. The caches on the CPU were like the reeds from the old fable. Merely observing how long it took to read data was sufficient to gather those reeds, make a flute, and play a song that unveils the encryption key (which is clearly either “The king has horns” or “1-2-3-4-5” depending on how secure you think your luggage combination is). Observing the true inner self of the AES encryption implementations was enough to completely dismantle them. Of course, not every implementation fell victim to this. You had to know the name of the implementation to determine which vulnerabilities it had, and how to strike at them.

Or, more literally, consider the work of Alfred Whitehead, Principia Mathematica. Principia Mathematica was to be a proof that you could prove all of the truths in arithmetic using purely procedural means. In Principia Mathematica, there was no manipulation based on semantics, everything he did was based on syntax — manipulating the actual symbols on the paper. Gödel’s Incompleteness Theorem caught Principia Mathematica by the tail, proving that its own rules were sufficient to demonstrate that it could never accomplish its goals. Principia Mathematica went down as the greatest Tower of Babel of modern mathematical history. Whitehead is no longer remembered for his mathematical work. He actually left the field of mathematics shortly afterwards, and became a philosopher and peace advocate, making a new name for himself there.

(by Cort Ammon)

25 April, 2016 12:31AM by Norbert Preining

April 24, 2016

Bits from Debian

Debian welcomes its 2016 summer interns


We're excited to announce that Debian has selected 29 interns to work with us this summer: 4 in Outreachy, and 25 in the Google Summer of Code.

Here is the list of projects and the interns who will work on them:

Android SDK tools in Debian:

APT - dpkg communications rework:

Continuous Integration for Debian-Med packages:

Extending the Debian Developer Horizon:

Improving and extending AppRecommender:

Improving the debsources frontend:

Improving voice, video and chat communication with Free Software:

MIPS and MIPSEL ports improvements:

Reproducible Builds for Debian and Free Software:

Support for KLEE in Debile:

The Google Summer of Code and Outreachy programs are possible in Debian thanks to the effort of Debian developers and contributors who dedicate part of their free time to mentoring students and to outreach tasks.

Join us and help extend Debian! You can follow the students' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or on each project's team mailing lists.

Congratulations to all of them!

24 April, 2016 07:00PM by Nicolas Dandrimont

Dominique Dumont

Automount usb devices with systemd

Hello

Ever since udisk-glue was obsoleted with udisk (the first generation), I’ve been struggling to find a solution to automatically mount a USB drive when such a device is connected to a Kodi-based home cinema PC. I wanted to avoid writing dedicated scripts or udev rules. Systemd is quite powerful and I thought that a simple solution should be possible using systemd configuration.

Actually, the auto-mount notion covers 2 scenarios:

  1. A device is mounted after being plugged in
  2. An already available device is mounted when a process accesses its mount point

The first case is the one that is needed with Kodi. The second may be useful, so it is also documented in this post.

For the first case, add a line like the following in /etc/fstab:

/dev/sr0 /mnt/br auto defaults,noatime,auto,nofail 0 2

and reload systemd configuration:

sudo systemctl daemon-reload

The important parameters are “auto” and “nofail”: with “auto”, systemd mounts the filesystem as soon as the device is available. This behavior is different from sysvinit where “auto” is taken into account only when “mount -a” is run by init scripts. “nofail” ensures that boot does not fail when the device is not available.

The second case is handled by a line like the following one (even if the line is split here to improve readability):

/dev/sr0 /mnt/br auto defaults,x-systemd.automount,\
   x-systemd.device-timeout=5,noatime,noauto 0 2

With the line above in /etc/fstab, the file system is mounted when a user does (for instance) “ls /mnt/br” (actually, the first “ls” fails and triggers the mount; a second “ls” gives the expected result. There’s probably a way to improve this behavior, but I’ve not found it…)

“x-systemd.*” parameters are documented in systemd.mount(5).

Last but not least, using a plain device file (like /dev/sr0) works fine to automount optical devices. But it is difficult to predict the name of the device file created for a USB drive, so a LABEL or a UUID should be used in /etc/fstab instead of a plain device file, i.e. something like:

LABEL=my_usb_drive /mnt/my-drive auto defaults,auto,nofail 0 2

All the best

 



24 April, 2016 05:34PM by dod


Dirk Eddelbuettel

Brad Mehldau at the CSO, again

Almost seven years since the last time we saw him here, Brad Mehldau returned to the CSO for a concert on Friday eve in his standard trio setup with Larry Grenadier on bass and Jeff Ballard on drums.

The material was mostly (all?) new and drawn from the upcoming album Blues and Ballads. The morning of the concert---which happened to be the final one of their tour---he retweeted a bit from this review in the Boston Globe:

[Brad Mehldau] flashed facets of his renowned pianism: crystalline touch, deep lyricism, harmonic sophistication, adroit use of space, and the otherworldly independence of his right and left hands.

I cannot really describe his style any better than this. If you get a chance to see him, go!

24 April, 2016 05:23PM


Norbert Preining

Armenia and Turkey – Erdoğan did it again

It is 101 years to the day since Turkey started the first genocide of the 20th century, the Armenian Genocide. And Recep Tayyip Erdoğan, the populist and seemingly maniacal president of Turkey, does not pass up any chance to continue the shame of Turkey.


After having sued a German comedian for making fun of him – followed promptly by an equally shameful kowtow by Merkel, who allowed the judiciary to start prosecuting Jan Böhmermann – he is continuing to sue other journalists, and above all putting pressure on the European Community not to support a concert tour of the Dresdner Sinfoniker in memoriam of the genocide.

European Values have disappeared, and politicians pay stupid tribute to a dictator-like Erdoğan who is destroying free speech and free media, not only in his country but all around the world. He must be a good friend of Abe; both are installing anti-freedom laws.

Shame on Europe for this. And Turkey, either vote Erdoğan out of office, or you should not (and hopefully will never) be allowed into the EC, because you don’t belong there.

24 April, 2016 11:19AM by Norbert Preining


Daniel Pocock

LinuxWochen, MiniDebConf Vienna and Linux Presentation Day

Over the coming week, there are a vast number of free software events taking place around the world.

I'll be at the LinuxWochen Vienna and MiniDebConf Vienna; the events run over four days, from Thursday, 28 April to Sunday, 1 May.

At MiniDebConf Vienna, I'll be giving a talk on Saturday (schedule not finalized yet) about our progress with free Real-Time Communications (RTC) and welcoming 13 new GSoC students (and their mentors) working on this topic under the Debian umbrella.

On Sunday, Iain Learmonth and I will be collaborating on a workshop/demonstration on Software Defined Radio from the perspective of ham radio and the Debian Ham Radio Pure Blend. If you want to be an active participant, an easy way to get involved is to bring an RTL-SDR dongle. It is highly recommended that instead of buying any cheap generic dongle, you buy one with a high quality temperature compensated crystal oscillator (TXCO), such as those promoted by RTL-SDR.com.

Saturday, 30 April is also Linux Presentation Day in many places. There is an event in Switzerland organized by the local FSFE group in Basel.

DebConf16 is only a couple of months away now. Registration is still open and the team are keenly looking for additional sponsors. Sponsors are a vital part of such a large event; if your employer or any other organization you know benefits from Debian, please encourage them to contribute.

24 April, 2016 06:23AM by Daniel.Pocock

April 23, 2016

Scott Kitterman

Computer System Security Policy Debate (Follow-up)

As a follow-up to my recent post on the debate in the US over new encryption restrictions, I thought a short addition might be relevant.  This continues.

There was a recent Congressional hearing on the topic that featured mostly what you would expect.  Police always want access to any possible source of evidence and the tech industry tries to explain that the risks associated with mandates to do so are excessive, with grandstanding legislators sprinkled throughout.   What I found interesting (and I use that word with some trepidation as it is still a multi-hour video of a Congressional hearing) is that there was rather less grandstanding and less absolutism from some parties than I was expecting.

There is overwhelming consensus that these requirements [for exceptional access] are incompatible with good security engineering practice

Dr. Matthew Blaze

The challenge is that political people see everything as a political/policy issue, but this isn’t that kind of issue.  I get particularly frustrated when I read ignorant ramblings like this that dismiss the overwhelming consensus of the people that actually understand what needs to be done as emotional, hysterical obstructionism.  Contrary to what seems to be that author’s point, constructive dialogue and understanding values does nothing to change the technical risks of mandating exceptional access.  Of course the opponents of Feinstein-Burr decry it as technologically illiterate, it is technologically illiterate.

This doesn’t quite rise to the level of that time the Indiana state legislature considered legislating a new value (or in fact multiple values) for the mathematical constant Pi, but it is in the same legislative domain.

23 April, 2016 10:12PM by skitterman

April 22, 2016


Gergely Nagy

ErgoDox: Day 0

Today my ErgoDox EZ arrived, I flashed a Dvorak firmware a couple of times, and am typing this on the new keyboard. It's slow and painful, but the possibilities are going to be worth it in the end.

That is all. Writing even this much took ages.

22 April, 2016 07:30PM by Gergely Nagy


Matthew Garrett

Circumventing Ubuntu Snap confinement

Ubuntu 16.04 was released today, with one of the highlights being the new Snap package format. Snaps are intended to make it easier to distribute applications for Ubuntu - they include their dependencies rather than relying on the archive, they can be updated on a schedule that's separate from the distribution itself and they're confined by a strong security policy that makes it impossible for an app to steal your data.

At least, that's what Canonical assert. It's true in a sense - if you're using Snap packages on Mir (ie, Ubuntu mobile) then there's a genuine improvement in security. But if you're using X11 (ie, Ubuntu desktop) it's horribly, awfully misleading. Any Snap package you install is completely capable of copying all your private data to wherever it wants with very little difficulty.

The problem here is the X11 windowing system. X has no real concept of different levels of application trust. Any application can register to receive keystrokes from any other application. Any application can inject fake key events into the input stream. An application that is otherwise confined by strong security policies can simply type into another window. An application that has no access to any of your private data can wait until your session is idle, open an unconfined terminal and then use curl to send your data to a remote site. As long as Ubuntu desktop still uses X11, the Snap format provides you with very little meaningful security. Mir and Wayland both fix this, which is why Wayland is a prerequisite for the sandboxed xdg-app design.

I've produced a quick proof of concept of this. Grab XEvilTeddy from git, install Snapcraft (it's in 16.04), snapcraft snap, sudo snap install xevilteddy*.snap, /snap/bin/xevilteddy.xteddy . An adorable teddy bear! How cute. Now open Firefox and start typing, then check back in your terminal window. Oh no! All my secrets. Open another terminal window and give it focus. Oh no! An injected command that could instead have been a curl session that uploaded your private SSH keys to somewhere that's not going to respect your privacy.

The Snap format provides a lot of underlying technology that is a great step towards being able to protect systems against untrustworthy third-party applications, and once Ubuntu shifts to using Mir by default it'll be much better than the status quo. But right now the protections it provides are easily circumvented, and it's disingenuous to claim that it currently gives desktop users any real security.


22 April, 2016 01:51AM

April 21, 2016

John Goerzen

Count me as a systemd convert

Back in 2014, I wrote about some negative first impressions of systemd. I also had a plea to debian-project to end all the flaming, pointing out that “jessie will still boot”, noting that my preference was for sysvinit but things are what they are and it wasn’t that big of a deal.

Although I still have serious misgivings about the systemd upstream’s attitude, I’ve got to say I find the system rather refreshing and useful in practice.

Here’s an example. I was debugging the boot on a server recently. It mounts a bunch of NFS filesystems and runs a third-party daemon that is started from an old-style /etc/init.d script.

We had a situation where the NFS filesystems the daemon required didn’t mount on boot. The daemon was then started, and unfortunately it basically does a mkdir -p on startup. So it started running and processing requests, with negative results.

So there were two questions: why did the NFS filesystems fail to start, and how could we make sure the daemon wouldn’t start without them mounted? For the first, journalctl -xb was immensely helpful. It logged the status of each individual mount, and it turned out that it looked like a modprobe or kernel race condition when a bunch of NFS mounts were kicked off in parallel and all tried to load the nfsv4 module at the same time. That was easy enough to work around by adding nfsv4 to /etc/modules. Now for the other question: refusing to start the daemon if the filesystems weren’t there.

With systemd, this was actually trivial. I created /etc/systemd/system/mydaemon.service.requires (I’ll call the service “mydaemon” here), and in it I created a symlink to /lib/systemd/system/remote-fs.target. Then systemctl daemon-reload, and boom, done. systemctl list-dependencies mydaemon will even show the dependency tree, color-coded status of each item on it, and will actually show every single filesystem that remote-fs requires and the status of it in one command. Super handy.
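For the record, the whole procedure boils down to one directory, one symlink and one reload. Here is the same thing scripted in Python, purely to spell the steps out (normally you would just run the two commands by hand; "mydaemon" is the placeholder service name used above):

import os
import subprocess

unit = "mydaemon.service"
requires_dir = "/etc/systemd/system/%s.requires" % unit

# Declare that mydaemon requires all remote filesystems to be mounted.
os.makedirs(requires_dir, exist_ok=True)
link = os.path.join(requires_dir, "remote-fs.target")
if not os.path.islink(link):
    os.symlink("/lib/systemd/system/remote-fs.target", link)

# Pick up the new dependency and inspect the resulting tree.
subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "list-dependencies", "mydaemon"], check=True)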

In a non-systemd environment, I’d probably be modifying the init script and doing a bunch of manual scripting to check the filesystems. Here, one symlink and one command did it, and I get tools to inspect the status of the mydaemon prerequisites for free.

I’ve got to say, as someone that has occasionally had to troubleshoot boot ordering and update-rc.d symlink hell, troubleshooting this stuff in systemd is considerably easier and the toolset is more powerful. Yes, it has its set of poorly-documented complexity, but then so did sysvinit.

I never thought the “world is falling” folks were right, but by now I can be counted among those who feel that systemd has matured to the point where it truly is superior to sysvinit. Yes, in 2014 it had some bugs, but here in 2016 it looks pretty darn good and I feel like Debian’s decision has been validated through my actual experience with it.

21 April, 2016 01:45PM by John Goerzen


Alessio Treglia

Corporate Culture in the Transformative Enterprise

 

The “accelerated” world of the Western or “Westernized” countries seems to be fed by an insidious food, one which generates a kind of psychological dependence: anxiety. The economy of global markets cannot help it; it has a structural need for it to feed its iron logic of survival. The anxiety generated in the masses of consumers and in market competitors is crucial for companies fighting each other, and now they can only live if people are projected towards objective targets that keep moving forward, without ever being allowed to reach a stable destination.

The consumer is thus constantly maintained in a state of perpetual breathlessness, always looking for the fresh air of liberation that could eventually reduce his tension. It is a state of anxiety caused by false needs generated by advertising campaigns whose primary purpose is to create a need, to interpret to their advantage a still confused psychological demand leading to the destination decided by the market…

<Read More…[by Fabio Marzocca]>

21 April, 2016 08:40AM by Fabio Marzocca


Mario Lang

Scraping the web with Python and XQuery

During a JAWS for Windows training, I was introduced to the Research It feature of that screen reader. Research It is a quick way to utilize web scraping to make working with complex web pages easier. It is about extracting specific information from a website that does not offer an API. For instance, look up a word in an online dictionary, or quickly check the status of a delivery. Strictly speaking, this feature does not belong in a screen reader, but it is a very helpful tool to have at your fingertips.

Research It uses XQuery (actually, XQilla) to do all the heavy lifting. This also means that the Research It Rulesets are theoretically usable on other platforms as well. I was immediately hooked, because I always had a love for XPath. Looking at XQuery code is totally self-explanatory for me. I just like the syntax and semantics.

So I immediately checked out XQilla on Debian, and found #821329 and #821330, which were promptly fixed by Tommi Vainikainen, thanks to him for the really quick response!

Unfortunately, making xqilla:parse-html available and upgrading to the latest upstream version is not enough to use XQilla on Linux with the typical webpages out there. Xerces-C++, which is what XQilla uses to fetch web resources, does not support HTTPS URLs at the moment. I filed #821380 to ask for HTTPS support in Xerces-C to be enabled by default.

And even with HTTPS support enabled in Xerces-C, the xqilla:parse-html function (which is based on HTML Tidy) fails for a lot of real-world webpages I tried. Manually upgrading the six year old version of HTML Tidy in Debian to the latest from GitHub (tidy-html5, #810951) did not help a lot either.

Python to the rescue

XQuery is still a very nice language for extracting information from markup documents. XQilla just has a bit of a hard time dealing with the typical HTML documents out there. After all, it was designed to deal with well-formed XML documents.

So I decided to build myself a little wrapper around XQilla which fetches the web resources with the Python Requests package, and cleans the HTML document with BeautifulSoup (which uses lxml to do HTML parsing). The output of BeautifulSoup can apparently be passed to XQilla as the context document. This is a fairly crazy hack, but it works quite reliably so far.

Here is what one of my web scraping rules looks like:

from click import argument, group

@group()
def xq():
  """Web scraping for command-line users."""
  pass

@xq.group('github.com')
def github():
  """Quick access to github.com."""
  pass

@github.command('code_search')
@argument('language')
@argument('query')
def github_code_search(language, query):
  """Search for source code."""
  scrape(get='https://github.com/search',
         params={'l': language, 'q': query, 'type': 'code'})

The function scrape automatically determines the XQuery filename according to the caller's function name. Here is what github_code_search.xq looks like:

declare function local:source-lines($table as node()*) as xs:string*
{
  for $tr in $table/tr return normalize-space(data($tr))
};

let $results := html//div[@id="code_search_results"]/div[@class="code-list"]
for $div in $results/div
let $repo := data($div/p/a[1])
let $file := data($div/p/a[2])
let $link := resolve-uri(data($div/p/a[2]/@href))
return (concat($repo, ": ", $file), $link, local:source-lines($div//table),
        "---------------------------------------------------------------")

That is all I need to implement a custom web scraping rule: a few lines of Python to specify how and where to fetch the website from, and an XQuery file that specifies how to mangle the document content.

And thanks to the Python click package, the various entry points of my web scraping script can easily be called from the command-line.

Here is a sample invocation:

fx:~/xq% ./xq.py github.com
Usage: xq.py github.com [OPTIONS] COMMAND [ARGS]...

  Quick access to github.com.

Options:
  --help  Show this message and exit.

Commands:
  code_search  Search for source code.

fx:~/xq% ./xq.py github.com code_search Pascal '"debian/rules"'
prof7bit/LazPackager: frmlazpackageroptionsdeb.pas
https://github.com/prof7bit/LazPackager/blob/cc3e35e9bae0c5a582b0b301dcbb38047fba2ad9/frmlazpackageroptionsdeb.pas
230 procedure TFDebianOptions.BtnPreviewRulesClick(Sender: TObject);
231 begin
232 ShowPreview('debian/rules', EdRules.Text);
233 end;
234
235 procedure TFDebianOptions.BtnPreviewChangelogClick(Sender: TObject);
---------------------------------------------------------------
prof7bit/LazPackager: lazpackagerdebian.pas
https://github.com/prof7bit/LazPackager/blob/cc3e35e9bae0c5a582b0b301dcbb38047fba2ad9/lazpackagerdebian.pas
205 + 'mv ../rules debian/' + LF
206 + 'chmod +x debian/rules' + LF
207 + 'mv ../changelog debian/' + LF
208 + 'mv ../copyright debian/' + LF
---------------------------------------------------------------

For the impatient, here is the implementation of scrape:

from bs4 import BeautifulSoup
from bs4.element import Doctype, ResultSet
from inspect import currentframe
from itertools import chain
from os import path
from os.path import abspath, dirname
from subprocess import PIPE, run
from tempfile import NamedTemporaryFile

import requests

def scrape(get=None, post=None, find_all=None,
           xquery_name=None, xquery_vars={}, **kwargs):
  """Execute a XQuery file.
  When either get or post is specified, fetch the resource and run it through
  BeautifulSoup, passing it as context to the XQuery.
  If find_all is given, wrap the result of executing find_all on
  the BeautifulSoup in an artificial HTML body.
  If xquery_name is not specified, the callers function name is used.
  xquery_name combined with extension ".xq" is searched in the directory
  where this Python script resides and executed with XQilla.
  kwargs are passed to get or post calls.  Typical extra keywords would be:
  params -- To pass extra parameters to the URL.
  data -- For HTTP POST.
  """

  response = None
  url = None
  context = None

  if get is not None:
    response = requests.get(get, **kwargs)
  elif post is not None:
    response = requests.post(post, **kwargs)

  if response is not None:
    response.raise_for_status()
    context = BeautifulSoup(response.text, 'lxml')
    dtd = next(context.descendants)
    if type(dtd) is Doctype:
      dtd.extract()
    if find_all is not None:
      context = context.find_all(find_all)
    url = response.url

  if xquery_name is None:
    xquery_name = currentframe().f_back.f_code.co_name
  cmd = ['xqilla']
  if context is not None:
    if type(context) is BeautifulSoup:
      soup = context
      context = NamedTemporaryFile(mode='w')
      print(soup, file=context)
      cmd.extend(['-i', context.name])
    elif isinstance(context, list) or isinstance(context, ResultSet):
      tags = context
      context = NamedTemporaryFile(mode='w')
      print('<html><body>', file=context)
      for item in tags: print(item, file=context)
      print('</body></html>', file=context)
      context.flush()
      cmd.extend(['-i', context.name])
  cmd.extend(chain.from_iterable(['-v', k, v] for k, v in xquery_vars.items()))
  if url is not None:
    cmd.extend(['-b', url])
  cmd.append(abspath(path.join(dirname(__file__), xquery_name + ".xq")))

  output = run(cmd, stdout=PIPE).stdout.decode('utf-8')
  if type(context) is NamedTemporaryFile: context.close()

  print(output, end='')

The full source for xq can be found on GitHub. The project is just two days old, so I have only implemented three scraping rules as of now. However, adding new rules has been made deliberately easy (see the sketch below), so that I can just write up a few lines of code whenever I find something on the web which I'd like to scrape on the command-line. If you find this "framework" useful, make sure to share your insights with me. And if you implement your own scraping rules for a public service, consider sharing that as well.
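To illustrate, a hypothetical extra rule might look like the sketch below. The site, URL and command names are made up, and the corresponding wiktionary_define.xq file would of course still have to be written:

@xq.group('wiktionary.org')
def wiktionary():
  """Quick access to wiktionary.org."""
  pass

@wiktionary.command('define')
@argument('word')
def wiktionary_define(word):
  """Look up a word."""
  # find_all='ol' wraps all ordered lists (the definition lists) in an
  # artificial HTML body before they are handed to wiktionary_define.xq
  scrape(get='https://en.wiktionary.org/wiki/' + word,
         find_all='ol')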

If you have any comments or questions, send me mail. Oh, and by the way, I am now also on Twitter as @blindbird23.

21 April, 2016 08:30AM by Mario Lang

April 20, 2016


Jonathan Dowland

mount-on-demand backups

Last week, someone posted a request for help on the popular Server Fault Q&A site: they had apparently accidentally deleted their entire web hosting business, and all their backups. The post (now itself deleted) was a reasonably obvious fake, but mainstream media reported on it anyway, and then life imitated art and 123-reg went and did actually delete all their hosted VMs, and their backups.

I was chatting to some friends from $job-2 and we had a brief smug moment that we had never done anything this bad, before moving on to incredulity that we had never done anything this bad in the 5 years or so we were running the University web servers. Some time later I realised that my personal backups were at risk from something like this because I have a permanently mounted /backup partition on my home NAS. I decided to fix it.

I already use Systemd to manage mounting the /backup partition (via a backup.mount file) and its dependencies. I'll skip the finer details of that for now.

I planned to define new Systemd units for each backup job that was previously scheduled via Cron, so that I could mark them as depending on the /backup mount. I also needed to adjust that mount definition by adding StopWhenUnneeded=true. This ensures that /backup is unmounted whenever no job is using it, and so is not at risk of a stray rm -rf.
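For reference, a minimal sketch of such a mount unit is shown below; the device in What= and the file system type are placeholders, not my actual setup.

backup.mount:

[Unit]
Description=Backup partition
# unmount again as soon as no other unit needs it
StopWhenUnneeded=true

[Mount]
What=/dev/disk/by-label/backup
Where=/backup
Type=ext4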

The backup jobs are all simple shell scripts that convert quite easily into services. An example:

backup-home.service:

[Unit]
Requires=backup.mount
After=backup.mount

[Service]
User=backupuser
Group=backupuser
ExecStart=/home/backupuser/bin/phobos-backup-home

To schedule this, I also need to create a timer:

backup-home.timer:

[Timer]
OnCalendar=*-*-* 04:01:00

[Install]
WantedBy=timers.target

To enable the timer, you have to both enable and start it:

systemctl enable backup-home.timer
systemctl start backup-home.timer

I created service and timer units for each of my cron jobs.

The other big difference from driving these via Cron is that, by default, I won't get any emails if the jobs generate output - in particular, if they fail. I definitely do want mail if things fail. The Arch Wiki has an interesting proposed solution to this which I took a look at (sketched below). It's a bit clunky, and my initial experiments with a derivation of it (using mail(1) not sendmail(1)) have not yet generated any mail.
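For the record, the general shape of that approach is an OnFailure= hook pointing at a templated mail unit. The unit name, command and recipient below are placeholders for what I am experimenting with, not a finished solution:

status-email@.service:

[Unit]
Description=Send a status mail for %i

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'systemctl status --full "%i" | mail -s "%i failed" root'

Each backup service would then gain OnFailure=status-email@%n.service in its [Unit] section.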

Pros and Cons

The Systemd timespec is more intuitive than Cron's. It's a shame you need a minimum of three more lines of boilerplate for the simplest of timers. I think WantedBy=timers.target should probably be an implicit default for all .timer type units. Here I think clarity suffers in the name of consistency.

With timers, start doesn't kick off the job; it really means "enable" in the context of timers, which is clumsy considering the existing enable verb. That verb seems almost superfluous, but is necessary for consistency, since Systemd units need to be enabled before they can be started. As Simon points out in the comments, this is not true. Rather, "enable" is needed for the timer to be active upon subsequent boots, but won't enable it in the current boot. "Start" will enable it for the current boot, but not for subsequent ones.

Since I need a .service and a .timer file for each active line in my crontab, that's a lot of small files (twice as many as the number of jobs being defined), and they're all stored in a system-wide folder because of the dependency on the necessarily system-level units defining the mount.

It's easy to forget the After= line for the backup services. On the one hand, it's a shame that After= doesn't imply Requires=, so that you wouldn't need both, or that there isn't a convenience option that does both. On the other hand, there are already too many Systemd options, and adding more conjoined ones would just make it even more complicated.

It's a shame I couldn't use user-level units to achieve this, but they could not depend on the system-level ones, nor activate /backup. This is a sensible default, since you don't want any user to be able to start any service on-demand, but some way of enabling it for these situations would be good. I ruled out systemd.automount because a stray rm -rf would trigger the mount which defeats the whole exercise. Apparently this might be something you solve with Polkit, as the Arch Wiki explains, which looks like it has XML disease.

I need to get mail-on-error working reliably.

20 April, 2016 08:49PM


Ben Hutchings

Experiments with signed kernels and modules in Debian

I've lately been working on support for Secure Boot in Debian, mostly in the packages maintained by the kernel team.

My instructions for setting up UEFI Secure Boot are based on OVMF running on KVM/QEMU; they also assume that the firmware includes an EFI shell. All 'Designed for Windows' PCs should allow reconfiguration of SB, but it may not be easy to do so.

Updated: Robert Edmonds pointed out that the 'Designed for Windows' requirements changed with Windows 10:

The ability to reconfigure SB is indeed now optional for devices which are designed to always boot with a specific Secure Boot configuration. I also noticed that the requirements say that OEMs should not sign an EFI shell binary. Therefore I've revised the instructions to use efibootmgr instead.

Background

UEFI Secure Boot, when configured and enabled (which it is on most new PCs), requires that whatever it loads is signed with a trusted key. The one common trusted key for PCs is held by Microsoft, and while they will sign other people's code for a nominal fee, they require that it also validates the code it loads, i.e. the kernel or next stage boot loader. The kernel in turn is responsible for validating any code that could compromise its integrity (kernel modules, kexec images).

Currently there are no such signed boot loaders in Debian, though the shim and grub-signed packages included in many other distributions should be usable. However it's possible to load an appropriately configured Linux kernel directly from the UEFI firmware (typically through the shell) which is what I'm doing at the moment.

Packaging signed kernels

Signing keys obviously need to be protected against disclosure; the private keys can't be included in a source package. We also won't install them on buildds separately, and generating signatures at build time would of course be unreproducible. So I've created a new source package, linux-signed, which contains detached signatures prepared offline.

Currently the binary packages built from linux-signed also contain only detached signatures, which are applied as necessary at installation time. The signed kernel image (only on x86 for now) is named /boot/vmlinuz-kversion.efi.signed. However, since packages must not modify files owned by another package and I didn't want to dpkg-divert thousands of modules, the module signatures remain detached. Detached module signatures are a new invention of mine, and require changes in kmod and various other packages to support them. (An alternative might be to put signed modules under a different directory and drop a configuration file in /lib/depmod.d to make them higher priority. But then we end up with two copies of every module installed, which can be a substantial waste of space.)

Preparation

The packages you need to repeat the experiment:

  • linux-image-4.5.0-1-flavour version 4.5.1-1 from unstable (only 686, 686-pae or amd64 flavours have signed kernels; most flavours have signed modules)
  • linux-image-4.5.0-1-flavour-signed version 1~exp3 from experimental
  • initramfs-tools version 0.125 from unstable
  • kmod and libkmod2 unofficial version 22-1.2 from people.debian.org

For Secure Boot, you'll then need to copy the signed kernel and the initrd onto the EFI system partition, normally mounted at /boot/efi.
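For example, assuming the amd64 flavour and the file names used in the efibootmgr invocation further down:

cp /boot/vmlinuz-4.5.0-1-amd64.efi.signed /boot/efi/vmlinuz.efi
cp /boot/initrd.img-4.5.0-1-amd64 /boot/efi/initrd.img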

SB requires a Platform Key (PK) which will already be installed on a real PC. You can replace it but you don't need to. If you're using OVMF, there are no persistent keys so you do need to generate your own:

openssl req -new -x509 -newkey rsa:2048 -keyout pk.key -out pk.crt \
    -outform der -nodes

You'll also need to install the certificate for my kernel image signing key, which is under debian/certs in the linux-signed package. OVMF requires this in DER format:

openssl x509 -in linux-signed-1~exp3/debian/certs/linux-image-benh@debian.org.cert.pem \
    -out linux.crt -outform der 

You'll need to copy the certificate(s) to a FAT-formatted partition such as the EFI system partition, so that the firmware can read it.
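If you use the EFI system partition for this, it can be as simple as:

cp pk.crt linux.crt /boot/efi/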

Use efibootmgr to add a boot entry for the kernel, for example:

efibootmgr -c -d /dev/sda -L linux-signed -l '\vmlinuz.efi' -u 'initrd=initrd.img root=/dev/sda2 ro quiet'

You should use the same kernel parameters as usual, except that you also need to specify the initrd filename using the initrd= parameter. The EFI stub code at the beginning of the kernel will load the initrd using EFI boot services.

Enabling Secure Boot

  1. Reboot the system and enter UEFI setup
  2. Find the menu entry for Secure Boot customisation (in OVMF, it's under 'Device Manager' for some reason)
  3. In OVMF, enrol the PK from pk.crt
  4. Add linux.crt to the DB (whitelist database)
  5. Ensure that Secure Boot is enabled and in 'User Mode'

Booting the kernel in Secure Boot

If all went well, Linux will boot as normal. You can confirm that Secure Boot was enabled by reading /sys/kernel/security/securelevel, which will contain 1 if it was.

Module signature validation

Module signatures are now always checked and unsigned modules will be given the 'E' taint flag. If Secure Boot is used or you add the kernel parameter module.sig_enforce=1, unsigned modules will be rejected. You can also turn on signature enforcement and turn off various other methods of modifying kernel code (such as kexec) by writing 1 to /sys/kernel/security/securelevel.
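As a quick sanity check (and to raise the level manually, which cannot be undone without a reboot):

# 1 means signature enforcement and the other restrictions are active
cat /sys/kernel/security/securelevel
# raise it by hand; requires root
echo 1 > /sys/kernel/security/securelevel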

20 April, 2016 06:53PM

Reproducible builds folks

Reproducible builds: week 51 in Stretch cycle

What happened in the reproducible builds effort between April 10th and April 16th 2016:

Toolchain fixes

  • Roland Rosenfeld uploaded transfig/1:3.2.5.e-6 which honors SOURCE_DATE_EPOCH. Original patch by Alexis Bienvenüe.
  • Bill Allombert uploaded gap/4r8p3-2 which makes convert.pl honor SOURCE_DATE_EPOCH. Original patch by Jerome Benoit, duplicate patch by Dhole.
  • Emmanuel Bourg uploaded ant/1.9.7-1 which makes the Javadoc task use UTF-8 as the default encoding if none was specified and SOURCE_DATE_EPOCH is set.

Antoine Beaupré suggested that gitpkg stop recording timestamps when creating upstream archives. Antoine Beaupré also pointed out that git-buildpackage diverges from the default gzip settings, which is a problem for reproducibly recreating released tarballs that were made using the defaults.

Alexis Bienvenüe submitted a patch extending sphinx SOURCE_DATE_EPOCH support to copyright year.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: atinject-jsr330, avis, brailleutils, charactermanaj, classycle, commons-io, commons-javaflow, commons-jci, gap-radiroot, jebl2, jetty, libcommons-el-java, libcommons-jxpath-java, libjackson-json-java, libjogl2-java, libmicroba-java, libproxool-java, libregexp-java, mobile-atlas-creator, octave-econometrics, octave-linear-algebra, octave-odepkg, octave-optiminterp, rapidsvn, remotetea, ruby-rinku, tachyon, xhtmlrenderer.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #820603 on viking by Alexis Bienvenüe: fix icon headers inclusion order.
  • #820661 on nullmailer by Alexis Bienvenüe: fix the order in which files are included in the static archive.
  • #820668 on sawfish by Alexis Bienvenüe: fix file ordering in theme archives, strip hostname and username from the config.h file, and honour SOURCE_DATE_EPOCH when creating the config.h file.
  • #820740 on bless by Alexis Bienvenüe: always use /bin/sh as shell.
  • #820742 on gmic by Alexis Bienvenüe: strip the build date from help messages.
  • #820809 on wsdl4j by Alexis Bienvenüe: use a plain text representation of the copyright character.
  • #820815 on freefem++ by Alexis Bienvenüe: fix the order in which files are included in the .edp files, and honour SOURCE_DATE_EPOCH when using the build date.
  • #820869 on pyexiv2 by Alexis Bienvenüe: honour the SOURCE_DATE_EPOCH environment variable through the ustrftime function, to get a reproducible copyright year.
  • #820932 on fim by Alexis Bienvenüe: fix the order in which files are joined in header files, strip the build date from the fim binary, make the embedded vim2html script honour the SOURCE_DATE_EPOCH variable when building the documentation, and force the language to be English when using bison, so that the generated grammar is parsed using English keywords.
  • #820990 on grib-api by Santiago Vila: always call dh-buildinfo.

diffoscope development

Zbigniew Jędrzejewski-Szmek noted in #820631 that diffoscope doesn't work properly when a file contains several cpio archives.

Package reviews

21 reviews have been added, 14 updated and 22 removed this week.

New issue found: timestamps_in_htm_by_gap.

Chris Lamb reported 10 new FTBFS issues.

Misc.

The video and the slides from the talk "Reproducible builds ecosystem" at LibrePlanet 2016 have been published now.

This week's edition was written by Lunar and Holger Levsen. h01ger automated the maintenance and publishing of this weekly newsletter via git.

20 April, 2016 06:47PM


Michal Čihař

Testing Sphinx documentation with Jenkins

While reviewing comments on the phpMyAdmin wiki (which we're shrinking down to developer documentation, moving end user documentation into the proper documentation) I noticed that people complained there about broken links in our documentation. Indeed there were quite a few of them, as this is something nobody really checks. It seems like an obvious task to automate.

It seemed obvious enough that somebody must have done this already. Unfortunately I did not find much, but at least there was Using Jenkins to parse sphinx warnings. This helps with the build warnings, but unfortunately I found no integration for the linkcheck builder. Fortunately, with the Jenkins Warnings plugin it's quite easy to write custom parsers, so the linkcheck output can be parsed as well.

The Sphinx output parser based on the above link can be configured like this:

Regular Expression:

^(.*):(\d+): \((.*)\) (.*)

Mapping Script:

import hudson.plugins.warnings.parser.Warning

String fileName = matcher.group(1)
String lineNumber = matcher.group(2)
String category = matcher.group(3)
String message = matcher.group(4)

return new Warning(fileName, Integer.parseInt(lineNumber), "sphinx", category, message);

Example Log Message:

Percona-Server-1.0.2-3.rst:67: (WARNING/2) Inline literal start-string without end-string.

The Sphinx linkcheck output is quite similar:

Regular Expression:

^(.*):(\d+): \[([^\]]*)\] (.*)

Mapping Script:

import hudson.plugins.warnings.parser.Warning

String fileName = matcher.group(1)
String lineNumber = matcher.group(2)
String category = matcher.group(3)
String message = matcher.group(4)

return new Warning(fileName, Integer.parseInt(lineNumber), "sphinx-linkcheck", category, message);

Example Log Message:

faq.rst:793: [broken] http://www.hardened-php.net/: <urlopen error [Errno -3] Temporary failure in name resolution>

All you need to do now is enable these parsers in your Jenkins project: let the Sphinx parser scan the build output, and let the Sphinx linkcheck parser scan the file generated by linkcheck (usually _build/linkcheck/output.txt). The result can be found on the phpMyAdmin CI server.
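For completeness, the build steps that produce what these parsers scan could look like this; the source directory name is an assumption about your layout:

# warnings end up in the console log, which the Sphinx output parser scans
sphinx-build -b html doc _build/html
# writes _build/linkcheck/output.txt, which the linkcheck parser scans
sphinx-build -b linkcheck doc _build/linkcheck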


20 April, 2016 10:00AM by Michal Čihař (michal@cihar.com)


Norbert Preining

GnuPG notes: subkeys, yubikey, gpg1 vs gpg2

Switching from one GnuPG master key to the use of subkeys had long been on my list of things I wanted to do, but I never got around to it. With the advent of a YubiKey NEO in my pocket I finally took the plunge: reading through lots of web pages (and adding one here for confusion), trying to understand the procedures, and above all, understanding my own requirements!


To make a long story short, it was worth the plunge, and overall the security level of my working environment has improved considerably.

While the advantages of subkeys are well documented (e.g., Debian Wiki), at the end of the day I was – like probably many Debian Developers – using one master key for every action: mail decryption and signing, signing of uploads, etc. Traveling a lot, I always felt uncomfortable about this. Despite a lengthy passphrase, I didn't want my master key to get into the wrong hands in case the laptop got stolen. Furthermore, I had the master key on several computers (work, laptop, mail server), which didn't help either. With all this in mind, I compiled a list of requirements/objectives:

  • master key is only available on offline medium (USB sticks)
  • subkeys for signing, encryption, authentication
  • possibility to sign and decrypt my emails on the server where I read emails (ssh/mutt)
  • laptop does not contain any keys, instead use Yubikey
  • all keys with expiry date (1y)
  • mixture of gpg versions: local laptop: gpg2.1, mail server: gpg1

Warning: Before we start, a word of caution – make backups, ideally at every stage. You don't want an erroneous operation to wipe out your precious keys without a backup!

Preparation

In the following I will assume that the MASTERKEY environment variable contains the id of the master key to be converted. Furthermore, I have followed some of the advice here, so key ids will be shown in long format.
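Concretely, something along these lines would do; keyid-format 0xlong is one way to get the long key id display (the key id is of course mine):

$ export MASTERKEY=0x6CACA448860CDC13
$ echo "keyid-format 0xlong" >> ~/.gnupg/gpg.conf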

Let us start with the current situation:

$ gpg -K $MASTERKEY
sec   4096R/0x6CACA448860CDC13 2010-09-14
      Key fingerprint = F7D8 A928 26E3 16A1 9FA0  ACF0 6CAC A448 860C DC13
uid                            Norbert Preining <norbert@preining.info>
uid                            Norbert Preining <preining@logic.at>
uid                            Norbert Preining <preining@debian.org>
uid                            Norbert Preining <preining@jaist.ac.jp>
ssb   4096R/0xD1D2BD14810F62B3 2010-09-14

In the following we will go through the following steps:

  • Prepare the Yubikey NEO (forthcoming blog post)
  • Edit the current key: add expiry, add photo, and above all add subkeys
  • Create revocation certificate
  • Create gpg2.1 structure
  • Backup to USB media
  • Move subkeys to Yubikey NEO
  • Remove master keys
  • Separate gpg1 (for mail server) and gpg2 (for laptop)
  • Upload to key servers

Yubikey SmartCard setup

There are several guides out there, but I will in the very near future write one about using the NEO for various usage scenarios, including GPG keys.

Edit the current key

The following can be done in one session or spread over several; the screen logs below assume that each session starts with:

$ gpg --expert --edit-key $MASTERKEY

Add expiry date

Having an expiry date on your key serves two purposes: if you lose it, the problem will solve itself automatically, and furthermore you are forced to deal with the key – and refresh your gpg knowledge – at least once a year. Those are two perfect reasons to set the expiry to one year.

The following log selects each key in turn and sets its expiry date.

$ gpg --expert --edit-key $MASTERKEY
gpg (GnuPG) 1.4.20; Copyright (C) 2015 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
 
Secret key is available.
 
pub  4096R/0x6CACA448860CDC13  created: 2010-09-14  expires: never       usage: SC  
                               trust: ultimate      validity: ultimate
sub  4096R/0xD1D2BD14810F62B3  created: 2010-09-14  expires: never       usage: E   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
 
gpg> expire
Changing expiration time for the primary key.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Mon 06 Feb 2017 08:09:16 PM JST
Is this correct? (y/N) y
 
You need a passphrase to unlock the secret key for
user: "Norbert Preining <norbert@preining.info>"
4096-bit RSA key, ID 0x6CACA448860CDC13, created 2010-09-14
 
Enter passphrase:
 
pub  4096R/0x6CACA448860CDC13  created: 2010-09-14  expires: 2017-02-06  usage: SC  
                               trust: ultimate      validity: ultimate
sub  4096R/0xD1D2BD14810F62B3  created: 2010-09-14  expires: never       usage: E   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
 
gpg> key 1
 
pub  4096R/0x6CACA448860CDC13  created: 2010-09-14  expires: 2017-02-06  usage: SC  
                               trust: ultimate      validity: ultimate
sub* 4096R/0xD1D2BD14810F62B3  created: 2010-09-14  expires: never       usage: E   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
 
gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Mon 06 Feb 2017 08:09:27 PM JST
Is this correct? (y/N) y
 
You need a passphrase to unlock the secret key for
user: "Norbert Preining <norbert@preining.info>"
4096-bit RSA key, ID 0x6CACA448860CDC13, created 2010-09-14
 
Enter passphrase:
 
pub  4096R/0x6CACA448860CDC13  created: 2010-09-14  expires: 2017-02-06  usage: SC  
                               trust: ultimate      validity: ultimate
sub* 4096R/0xD1D2BD14810F62B3  created: 2010-09-14  expires: 2017-02-06  usage: E   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>

Add a photo

Not strictly necessary, but an interesting feature. gpg suggests 240×288, so I resized a photo of my head, greyscaled it, and optimized it with jpegoptim -s -m40 my-photo.jpg. The parameter 40 is the quality; I played around a bit to find the best balance between size and quality. The file should not be too big, as the photo will become part of the key!
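A sketch of that preparation, using ImageMagick for the resizing and greyscaling (the input filename is an example):

$ convert photo.jpg -resize 240x288 -colorspace Gray GPG/norbert-head.jpg
$ jpegoptim -s -m40 GPG/norbert-head.jpg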

gpg> addphoto
 
Pick an image to use for your photo ID.  The image must be a JPEG file.
Remember that the image is stored within your public key.  If you use a
very large picture, your key will become very large as well!
Keeping the image close to 240x288 is a good size to use.
 
Enter JPEG filename for photo ID: GPG/norbert-head.jpg
Is this photo correct (y/N/q)? y
 
You need a passphrase to unlock the secret key for
user: "Norbert Preining <norbert@preining.info>"
4096-bit RSA key, ID 0x6CACA448860CDC13, created 2010-09-14
 
Enter passphrase:
 
pub  4096R/0x6CACA448860CDC13  created: 2010-09-14  expires: 2017-02-06  usage: SC  
                               trust: ultimate      validity: ultimate
sub* 4096R/0xD1D2BD14810F62B3  created: 2010-09-14  expires: 2017-02-06  usage: E   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ unknown] (5)  [jpeg image of size 4185]

Add 2048-bit subkeys for signing/encryption/authentication

Now comes the interesting part: adding three subkeys, one for signing, one for encryption, and one for authentication. The signing key is the one you will use for signing your uploads to Debian as well as your emails. The authentication key will later be used for ssh authentication. Note that you have to pass the --expert option to edit-key (as shown above), otherwise gpg does not offer all of these options.

As I want to move the subkeys to the Yubikey NEO, a key size of 2048 bits is necessary (the NEO does not support larger RSA keys).

First for the signing:

gpg> addkey
Key is protected.
 
You need a passphrase to unlock the secret key for
user: "Norbert Preining <norbert@preining.info>"
4096-bit RSA key, ID 0x6CACA448860CDC13, created 2010-09-14
 
Enter passphrase:
 
Please select what kind of key you want:
   (3) DSA (sign only)
   (4) RSA (sign only)
   (5) Elgamal (encrypt only)
   (6) RSA (encrypt only)
   (7) DSA (set your own capabilities)
   (8) RSA (set your own capabilities)
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Mon 06 Feb 2017 08:10:06 PM JST
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
....+++++
..........+++++
 
pub  4096R/0x6CACA448860CDC13  created: 2010-09-14  expires: 2017-02-06  usage: SC  
                               trust: ultimate      validity: ultimate
sub* 4096R/0xD1D2BD14810F62B3  created: 2010-09-14  expires: 2017-02-06  usage: E   
sub  2048R/0xEC00B8DAD32266AA  created: 2016-02-07  expires: 2017-02-06  usage: S   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ unknown] (5)  [jpeg image of size 4185]

Now the same for encryption key:

gpg> addkey
Key is protected.
 
You need a passphrase to unlock the secret key for
user: "Norbert Preining <norbert@preining.info>"
4096-bit RSA key, ID 0x6CACA448860CDC13, created 2010-09-14
 
Enter passphrase:
 
Please select what kind of key you want:
   (3) DSA (sign only)
   (4) RSA (sign only)
   (5) Elgamal (encrypt only)
   (6) RSA (encrypt only)
   (7) DSA (set your own capabilities)
   (8) RSA (set your own capabilities)
Your selection? 6
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Mon 06 Feb 2017 08:10:20 PM JST
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
..+++++
........+++++
 
pub  4096R/0x6CACA448860CDC13  created: 2010-09-14  expires: 2017-02-06  usage: SC  
                               trust: ultimate      validity: ultimate
sub* 4096R/0xD1D2BD14810F62B3  created: 2010-09-14  expires: 2017-02-06  usage: E   
sub  2048R/0xEC00B8DAD32266AA  created: 2016-02-07  expires: 2017-02-06  usage: S   
sub  2048R/0xBF361ED434425B4C  created: 2016-02-07  expires: 2017-02-06  usage: E   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ unknown] (5)  [jpeg image of size 4185]

Finally, the authentication key. Note that the --expert option is strictly necessary only here! We use ‘(8) RSA (set your own capabilities)’ and then toggle the sign and encrypt capabilities off, and authentication on.

gpg> addkey
Key is protected.
 
You need a passphrase to unlock the secret key for
user: "Norbert Preining <norbert@preining.info>"
4096-bit RSA key, ID 0x6CACA448860CDC13, created 2010-09-14
 
Enter passphrase:
 
Please select what kind of key you want:
   (3) DSA (sign only)
   (4) RSA (sign only)
   (5) Elgamal (encrypt only)
   (6) RSA (encrypt only)
   (7) DSA (set your own capabilities)
   (8) RSA (set your own capabilities)
Your selection? 8
 
Possible actions for a RSA key: Sign Encrypt Authenticate 
Current allowed actions: Sign Encrypt 
 
   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished
 
Your selection? s
 
Possible actions for a RSA key: Sign Encrypt Authenticate 
Current allowed actions: Encrypt 
 
   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished
 
Your selection? e
 
Possible actions for a RSA key: Sign Encrypt Authenticate 
Current allowed actions: 
 
   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished
 
Your selection? a
 
Possible actions for a RSA key: Sign Encrypt Authenticate 
Current allowed actions: Authenticate 
 
   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished
 
Your selection? q
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Mon 06 Feb 2017 08:10:34 PM JST
Is this correct? (y/N) y
Really create? (y/N) y
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
......+++++
+++++
 
pub  4096R/0x6CACA448860CDC13  created: 2010-09-14  expires: 2017-02-06  usage: SC  
                               trust: ultimate      validity: ultimate
sub* 4096R/0xD1D2BD14810F62B3  created: 2010-09-14  expires: 2017-02-06  usage: E   
sub  2048R/0xEC00B8DAD32266AA  created: 2016-02-07  expires: 2017-02-06  usage: S   
sub  2048R/0xBF361ED434425B4C  created: 2016-02-07  expires: 2017-02-06  usage: E   
sub  2048R/0x9C7CA4E294F04D49  created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ unknown] (5)  [jpeg image of size 4185]
 
gpg> save

Check the current status

A good point to take a break and inspect the current status. We should now have the master key, the original encryption subkey, and the three new subkeys, all with expiry dates one year ahead, and the photo attached to the key:

$ gpg --expert --edit-key $MASTERKEY
gpg (GnuPG) 1.4.20; Copyright (C) 2015 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
 
Secret key is available.
 
gpg: checking the trustdb
gpg: public key 0x0FC3EC02FBBB8AB1 is 58138 seconds newer than the signature
gpg: 3 marginal(s) needed, 1 complete(s) needed, classic trust model
gpg: depth: 0  valid:   2  signed:  28  trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: depth: 1  valid:  28  signed:  41  trust: 28-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2016-11-02
pub  4096R/0x6CACA448860CDC13  created: 2010-09-14  expires: 2017-02-06  usage: SC  
                               trust: ultimate      validity: ultimate
sub  4096R/0xD1D2BD14810F62B3  created: 2010-09-14  expires: 2017-02-06  usage: E   
sub  2048R/0xEC00B8DAD32266AA  created: 2016-02-07  expires: 2017-02-06  usage: S   
sub  2048R/0xBF361ED434425B4C  created: 2016-02-07  expires: 2017-02-06  usage: E   
sub  2048R/0x9C7CA4E294F04D49  created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg>

Create revocation certificate

In case something happens (all your backups are burned, your computers are destroyed, or all your data is stolen by the NSA), it is a good idea to have an old-fashioned paper printout of a revocation certificate, which allows you to revoke the key even if you are no longer in possession of it.

This should be printed out and kept in a safe place.

$ gpg --gen-revoke $MASTERKEY > GPG/revoke-certificate-$MASTERKEY.txt
 
sec  4096R/0x6CACA448860CDC13 2010-09-14 Norbert Preining <norbert@preining.info>
 
Create a revocation certificate for this key? (y/N) y
Please select the reason for the revocation:
  0 = No reason specified
  1 = Key has been compromised
  2 = Key is superseded
  3 = Key is no longer used
  Q = Cancel
(Probably you want to select 1 here)
Your decision? 1
Enter an optional description; end it with an empty line:
> 
Reason for revocation: Key has been compromised
(No description given)
Is this okay? (y/N) y
 
You need a passphrase to unlock the secret key for
user: "Norbert Preining <norbert@preining.info>"
4096-bit RSA key, ID 0x6CACA448860CDC13, created 2010-09-14
 
Enter passphrase:
 
ASCII armored output forced.
Revocation certificate created.

Please move it to a medium which you can hide away; if the NSA or KGB or Mossad gets access to this certificate, they can use it to make your key unusable. It is smart to print this certificate and store it away, just in case your media become unreadable.

Create gpg 2.1 structure

There are currently three versions of gpg available: ‘classic’ (version 1) which is one static binary, perfect for servers or scripting tasks; ‘stable’ (version 2.0) which is the modularized version supporting OpenPGP, S/MIME, and Secure Shell; and finally ‘modern’ (version 2.1 and up) with enhanced features like support for Elliptic Curve cryptography. Debian currently ships version 1 as standard, and also the modern version (but there are traces in experimental of a pending transition).

The newer versions of GnuPG are modularized and use an agent. For the following we need to kill any running instance of gpg-agent.

$ killall gpg-agent

After that a simple call to gpg2 to list the secret keys will convert the layout to the new standard:

$ gpg2 -K $MASTERKEY
gpg: keyserver option 'ca-cert-file' is obsolete; please use 'hkp-cacert' in dirmngr.conf
gpg: starting migration from earlier GnuPG versions
gpg: porting secret keys from '/home/norbert/.gnupg/secring.gpg' to gpg-agent
gpg: key 0xD2BF4AA309C5B094: secret key imported
gpg: key 0x6CACA448860CDC13: secret key imported
gpg: migration succeeded
sec   rsa4096/0x6CACA448860CDC13 2010-09-14 [SC] [expires: 2017-02-06]
      Key fingerprint = F7D8 A928 26E3 16A1 9FA0  ACF0 6CAC A448 860C DC13
uid                   [ultimate] Norbert Preining <norbert@preining.info>
uid                   [ultimate] Norbert Preining <preining@logic.at>
uid                   [ultimate] Norbert Preining <preining@debian.org>
uid                   [ultimate] Norbert Preining <preining@jaist.ac.jp>
uid                   [ultimate] [jpeg image of size 4185]
ssb   rsa4096/0xD1D2BD14810F62B3 2010-09-14 [E] [expires: 2017-02-06]
ssb   rsa2048/0xEC00B8DAD32266AA 2016-02-07 [S] [expires: 2017-02-06]
ssb   rsa2048/0xBF361ED434425B4C 2016-02-07 [E] [expires: 2017-02-06]
ssb   rsa2048/0x9C7CA4E294F04D49 2016-02-07 [A] [expires: 2017-02-06]

After this there will be new files/directories in the .gnupg directory, in particular: .gnupg/private-keys-v1.d/ which contains the private keys.

Creating backup

Now your .gnupg directory still contains all the keys, available for both gpg1 and gpg2.1.

You MUST MAKE A BACKUP NOW!!! Put it on at least 3 USB sticks and maybe some other offline media. Keep them in safe places, preferably different ones; you will need them for extending the expiry date, signing other keys, etc.
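A minimal sketch of such a backup; /media/usb0 is an example mount point (for verifying with gpg2 on a vfat stick, see the warning below):

$ rsync -a ~/.gnupg/ /media/usb0/gnupghome/
$ gpg --homedir /media/usb0/gnupghome -K    # check that the secret keys are really there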

Warning concerning USB and vfat file systems

gpg >= 2.1 requires gpg-agent, which in turn needs a socket. If you have the backup on a USB drive (most often with a vfat file system), you need to redirect the socket, as vfat does not support sockets!

Edit /USBSTICK/gnupghome/S.gpg-agent and enter there

%Assuan%
socket=/dev/shm/S.gpg-agent

After that the socket will be created in /dev/shm/ instead and invoking gpg with gpg2 --homedir /USBSTICK/gnupghome will work.

You have done your backups, right?

Move sub keys to card

As I mentioned, I don't want any keys on the laptop which I carry around to strange countries; instead, I want to have them all on a Yubikey NEO. I will describe the setup and usage in detail soon, and here only mention how to move the keys to the card. This requires a finished card setup, including changing the PINs.

Note that when using gpg2 to move the keys to the card, the local copies are actually deleted, but only for the gpg2(.1) files. The gpg1 secret keys are still all in place.

$ gpg2 --edit-key $MASTERKEY
gpg (GnuPG) 2.1.11; Copyright (C) 2016 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
 
Secret key is available.
 
sec  rsa4096/0x6CACA448860CDC13
     created: 2010-09-14  expires: 2017-02-06  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/0xD1D2BD14810F62B3
     created: 2010-09-14  expires: 2017-02-06  usage: E   
ssb  rsa2048/0xEC00B8DAD32266AA
     created: 2016-02-07  expires: 2017-02-06  usage: S   
ssb  rsa2048/0xBF361ED434425B4C
     created: 2016-02-07  expires: 2017-02-06  usage: E   
ssb  rsa2048/0x9C7CA4E294F04D49
     created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg> key 2
 
sec  rsa4096/0x6CACA448860CDC13
     created: 2010-09-14  expires: 2017-02-06  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/0xD1D2BD14810F62B3
     created: 2010-09-14  expires: 2017-02-06  usage: E   
ssb* rsa2048/0xEC00B8DAD32266AA
     created: 2016-02-07  expires: 2017-02-06  usage: S   
ssb  rsa2048/0xBF361ED434425B4C
     created: 2016-02-07  expires: 2017-02-06  usage: E   
ssb  rsa2048/0x9C7CA4E294F04D49
     created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg> keytocard
Please select where to store the key:
   (1) Signature key
   (3) Authentication key
Your selection? 1
 
sec  rsa4096/0x6CACA448860CDC13
     created: 2010-09-14  expires: 2017-02-06  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/0xD1D2BD14810F62B3
     created: 2010-09-14  expires: 2017-02-06  usage: E   
ssb* rsa2048/0xEC00B8DAD32266AA
     created: 2016-02-07  expires: 2017-02-06  usage: S   
ssb  rsa2048/0xBF361ED434425B4C
     created: 2016-02-07  expires: 2017-02-06  usage: E   
ssb  rsa2048/0x9C7CA4E294F04D49
     created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg> key 2
 
sec  rsa4096/0x6CACA448860CDC13
     created: 2010-09-14  expires: 2017-02-06  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/0xD1D2BD14810F62B3
     created: 2010-09-14  expires: 2017-02-06  usage: E   
ssb  rsa2048/0xEC00B8DAD32266AA
     created: 2016-02-07  expires: 2017-02-06  usage: S   
ssb  rsa2048/0xBF361ED434425B4C
     created: 2016-02-07  expires: 2017-02-06  usage: E   
ssb  rsa2048/0x9C7CA4E294F04D49
     created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg> key 3
 
sec  rsa4096/0x6CACA448860CDC13
     created: 2010-09-14  expires: 2017-02-06  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/0xD1D2BD14810F62B3
     created: 2010-09-14  expires: 2017-02-06  usage: E   
ssb  rsa2048/0xEC00B8DAD32266AA
     created: 2016-02-07  expires: 2017-02-06  usage: S   
ssb* rsa2048/0xBF361ED434425B4C
     created: 2016-02-07  expires: 2017-02-06  usage: E   
ssb  rsa2048/0x9C7CA4E294F04D49
     created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg> keytocard
Please select where to store the key:
   (2) Encryption key
Your selection? 2
 
sec  rsa4096/0x6CACA448860CDC13
     created: 2010-09-14  expires: 2017-02-06  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/0xD1D2BD14810F62B3
     created: 2010-09-14  expires: 2017-02-06  usage: E   
ssb  rsa2048/0xEC00B8DAD32266AA
     created: 2016-02-07  expires: 2017-02-06  usage: S   
ssb* rsa2048/0xBF361ED434425B4C
     created: 2016-02-07  expires: 2017-02-06  usage: E   
ssb  rsa2048/0x9C7CA4E294F04D49
     created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg> key 3
 
sec  rsa4096/0x6CACA448860CDC13
     created: 2010-09-14  expires: 2017-02-06  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/0xD1D2BD14810F62B3
     created: 2010-09-14  expires: 2017-02-06  usage: E   
ssb  rsa2048/0xEC00B8DAD32266AA
     created: 2016-02-07  expires: 2017-02-06  usage: S   
ssb  rsa2048/0xBF361ED434425B4C
     created: 2016-02-07  expires: 2017-02-06  usage: E   
ssb  rsa2048/0x9C7CA4E294F04D49
     created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg> key 4
 
sec  rsa4096/0x6CACA448860CDC13
     created: 2010-09-14  expires: 2017-02-06  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/0xD1D2BD14810F62B3
     created: 2010-09-14  expires: 2017-02-06  usage: E   
ssb  rsa2048/0xEC00B8DAD32266AA
     created: 2016-02-07  expires: 2017-02-06  usage: S   
ssb  rsa2048/0xBF361ED434425B4C
     created: 2016-02-07  expires: 2017-02-06  usage: E   
ssb* rsa2048/0x9C7CA4E294F04D49
     created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg> keytocard
Please select where to store the key:
   (3) Authentication key
Your selection? 3
 
sec  rsa4096/0x6CACA448860CDC13
     created: 2010-09-14  expires: 2017-02-06  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/0xD1D2BD14810F62B3
     created: 2010-09-14  expires: 2017-02-06  usage: E   
ssb  rsa2048/0xEC00B8DAD32266AA
     created: 2016-02-07  expires: 2017-02-06  usage: S   
ssb  rsa2048/0xBF361ED434425B4C
     created: 2016-02-07  expires: 2017-02-06  usage: E   
ssb* rsa2048/0x9C7CA4E294F04D49
     created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg> key 4
 
sec  rsa4096/0x6CACA448860CDC13
     created: 2010-09-14  expires: 2017-02-06  usage: SC  
     trust: ultimate      validity: ultimate
ssb  rsa4096/0xD1D2BD14810F62B3
     created: 2010-09-14  expires: 2017-02-06  usage: E   
ssb  rsa2048/0xEC00B8DAD32266AA
     created: 2016-02-07  expires: 2017-02-06  usage: S   
ssb  rsa2048/0xBF361ED434425B4C
     created: 2016-02-07  expires: 2017-02-06  usage: E   
ssb  rsa2048/0x9C7CA4E294F04D49
     created: 2016-02-07  expires: 2017-02-06  usage: A   
[ultimate] (1). Norbert Preining <norbert@preining.info>
[ultimate] (2)  Norbert Preining <preining@logic.at>
[ultimate] (3)  Norbert Preining <preining@debian.org>
[ultimate] (4)  Norbert Preining <preining@jaist.ac.jp>
[ultimate] (5)  [jpeg image of size 4185]
 
gpg> save

Note the repetition of selecting and deselecting keys.

Current status

After this procedure we are now in the following situation:

  • gpg1: all keys are still available
  • gpg2: sub keys are moved to yubikey (indicated below by ssb>), and master key is still available

In gpg terms it looks like this:

$ gpg2 -K $MASTERKEY
gpg: keyserver option 'ca-cert-file' is obsolete; please use 'hkp-cacert' in dirmngr.conf
sec   rsa4096/0x6CACA448860CDC13 2010-09-14 [SC] [expires: 2017-02-06]
      Key fingerprint = F7D8 A928 26E3 16A1 9FA0  ACF0 6CAC A448 860C DC13
uid                   [ultimate] Norbert Preining <norbert@preining.info>
uid                   [ultimate] Norbert Preining <preining@logic.at>
uid                   [ultimate] Norbert Preining <preining@debian.org>
uid                   [ultimate] Norbert Preining <preining@jaist.ac.jp>
uid                   [ultimate] [jpeg image of size 4185]
ssb   rsa4096/0xD1D2BD14810F62B3 2010-09-14 [E] [expires: 2017-02-06]
ssb>  rsa2048/0xEC00B8DAD32266AA 2016-02-07 [S] [expires: 2017-02-06]
ssb>  rsa2048/0xBF361ED434425B4C 2016-02-07 [E] [expires: 2017-02-06]
ssb>  rsa2048/0x9C7CA4E294F04D49 2016-02-07 [A] [expires: 2017-02-06]
 
$ gpg -K $MASTERKEY
sec   4096R/0x6CACA448860CDC13 2010-09-14 [expires: 2017-02-06]
      Key fingerprint = F7D8 A928 26E3 16A1 9FA0  ACF0 6CAC A448 860C DC13
uid                            Norbert Preining <norbert@preining.info>
uid                            Norbert Preining <preining@logic.at>
uid                            Norbert Preining <preining@debian.org>
uid                            Norbert Preining <preining@jaist.ac.jp>
uid                            [jpeg image of size 4185]
ssb   4096R/0xD1D2BD14810F62B3 2010-09-14 [expires: 2017-02-06]
ssb   2048R/0xEC00B8DAD32266AA 2016-02-07 [expires: 2017-02-06]
ssb   2048R/0xBF361ED434425B4C 2016-02-07 [expires: 2017-02-06]
ssb   2048R/0x9C7CA4E294F04D49 2016-02-07 [expires: 2017-02-06]
 
$ gpg2 --card-status
 
....
Name of cardholder: Norbert Preining
....
PIN retry counter : 3 3 3
Signature counter : 0
Signature key ....: 5871 F824 2DCC 3660 2362  BE7D EC00 B8DA D322 66AA
      created ....: 2016-02-07 11:10:06
Encryption key....: 2501 195C 90AB F4D2 3DEA  A303 BF36 1ED4 3442 5B4C
      created ....: 2016-02-07 11:10:20
Authentication key: 9CFB 3775 C164 0E99 F0C8  014C 9C7C A4E2 94F0 4D49
      created ....: 2016-02-07 11:10:34
General key info..: sub  rsa2048/0xEC00B8DAD32266AA 2016-02-07 Norbert Preining <norbert@preining.info>
sec   rsa4096/0x6CACA448860CDC13  created: 2010-09-14  expires: 2017-02-06
ssb   rsa4096/0xD1D2BD14810F62B3  created: 2010-09-14  expires: 2017-02-06
ssb>  rsa2048/0xEC00B8DAD32266AA  created: 2016-02-07  expires: 2017-02-06
                                  card-no: 0006 03645719
ssb>  rsa2048/0xBF361ED434425B4C  created: 2016-02-07  expires: 2017-02-06
                                  card-no: 0006 03645719
ssb>  rsa2048/0x9C7CA4E294F04D49  created: 2016-02-07  expires: 2017-02-06
                                  card-no: 0006 03645719
$

Remove private master keys

Are you sure that you have a working backup? Did you try it with gpg --homedir ...? Only continue if you are really sure.

We are now removing the master key from both the gpg2 and gpg1 setup.

removal for gpg2

gpg2 keeps the private keys in ~/.gnupg/private-keys-v1.d/KEYGRIP.key and the KEYGRIP can be found by adding --with-keygrip to the key listing. Be sure to delete the correct file, the one related to the master key.

$ gpg2 --with-keygrip --list-key $MASTERKEY
pub   rsa4096/0x6CACA448860CDC13 2010-09-14 [SC] [expires: 2017-02-06]
      Key fingerprint = F7D8 A928 26E3 16A1 9FA0  ACF0 6CAC A448 860C DC13
      Keygrip = 9DC1E90703856C1DE0EAC970CED7ABF5EE5EF79D
uid                   [ultimate] Norbert Preining <norbert@preining.info>
uid                   [ultimate] Norbert Preining <preining@logic.at>
uid                   [ultimate] Norbert Preining <preining@debian.org>
uid                   [ultimate] Norbert Preining <preining@jaist.ac.jp>
uid                   [ultimate] [jpeg image of size 4185]
sub   rsa4096/0xD1D2BD14810F62B3 2010-09-14 [E] [expires: 2017-02-06]
      Keygrip = 4B8FF57434DD989243666377376903281D861596
sub   rsa2048/0xEC00B8DAD32266AA 2016-02-07 [S] [expires: 2017-02-06]
      Keygrip = 39B14EF1392F2F251863A87AE4D44CE502755C39
sub   rsa2048/0xBF361ED434425B4C 2016-02-07 [E] [expires: 2017-02-06]
      Keygrip = E41C8DDB2A22976AE0DA8D7D11F586EA793203EA
sub   rsa2048/0x9C7CA4E294F04D49 2016-02-07 [A] [expires: 2017-02-06]
      Keygrip = A337DE390143074C6DBFEA64224359B9859B02FC
 
$ rm ~/.gnupg/private-keys-v1.d/9DC1E90703856C1DE0EAC970CED7ABF5EE5EF79D.key
$

After that the missing key is shown in gpg2 -K with an additional # meaning that the key is not available:

$ gpg2 -K $MASTERKEY
sec#  rsa4096/0x6CACA448860CDC13 2010-09-14 [SC] [expires: 2017-02-06]
...

removal for gpg1

Up to gpg v2.0 there is no simple way to delete only one part of the key. We export the subkeys, delete the private key, and reimport the subkeys:

$ gpg --output secret-subkeys --export-secret-subkeys $MASTERKEY
 
$ gpg --delete-secret-keys $MASTERKEY
 
sec  4096R/0x6CACA448860CDC13 2010-09-14 Norbert Preining <norbert@preining.info>
 
Delete this key from the keyring? (y/N) y
This is a secret key! - really delete? (y/N) y
 
$ gpg --import secret-subkeys
gpg: key 0x6CACA448860CDC13: secret key imported
gpg: key 0x6CACA448860CDC13: "Norbert Preining <norbert@preining.info>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
gpg:       secret keys read: 1
gpg:   secret keys imported: 1
 
$

Current status

We are basically at the stage we wanted to achieve:

For gpg2.1, only the old encryption subkey is still available locally; the master key is not, and the new subkeys have been moved to the yubikey:

$ gpg2 -K $MASTERKEY
sec#  rsa4096/0x6CACA448860CDC13 2010-09-14 [SC] [expires: 2017-02-06]
      Key fingerprint = F7D8 A928 26E3 16A1 9FA0  ACF0 6CAC A448 860C DC13
uid                   [ultimate] Norbert Preining <norbert@preining.info>
uid                   [ultimate] Norbert Preining <preining@logic.at>
uid                   [ultimate] Norbert Preining <preining@debian.org>
uid                   [ultimate] Norbert Preining <preining@jaist.ac.jp>
uid                   [ultimate] [jpeg image of size 4185]
ssb   rsa4096/0xD1D2BD14810F62B3 2010-09-14 [E] [expires: 2017-02-06]
ssb>  rsa2048/0xEC00B8DAD32266AA 2016-02-07 [S] [expires: 2017-02-06]
ssb>  rsa2048/0xBF361ED434425B4C 2016-02-07 [E] [expires: 2017-02-06]
ssb>  rsa2048/0x9C7CA4E294F04D49 2016-02-07 [A] [expires: 2017-02-06]
$

And for gpg <= 2.0 the old encryption key and the sub keys are available, but the master key is not:

$ gpg -K $MASTERKEY
sec#  4096R/0x6CACA448860CDC13 2010-09-14 [expires: 2017-02-06]
      Key fingerprint = F7D8 A928 26E3 16A1 9FA0  ACF0 6CAC A448 860C DC13
uid                            Norbert Preining <norbert@preining.info>
uid                            Norbert Preining <preining@logic.at>
uid                            Norbert Preining <preining@debian.org>
uid                            Norbert Preining <preining@jaist.ac.jp>
uid                            [jpeg image of size 4185]
ssb   4096R/0xD1D2BD14810F62B3 2010-09-14 [expires: 2017-02-06]
ssb   2048R/0xEC00B8DAD32266AA 2016-02-07 [expires: 2017-02-06]
ssb   2048R/0xBF361ED434425B4C 2016-02-07 [expires: 2017-02-06]
ssb   2048R/0x9C7CA4E294F04D49 2016-02-07 [expires: 2017-02-06]
 
$

Split the .gnupg directory for mail server and laptop

As mentioned, I want to have a gpg1 version available at the server where I read my emails, and be able to sign/encrypt emails there, while on my laptop no secret key is available. Thus I prepare two gnupg directories.

For the mailserver the gpg2 specific files are removed:

$ cp -a .gnupg .gnupg-mail
$ cd .gnupg-mail
$ rm -rf private-keys-v1.d/ pubring.gpg~ reader_0.status
$ rm -rf S.gpg-agent* S.scdaemon .gpg-v21-migrated

On my laptop, where I did all these operations, I remove the gpg1 file, namely the now outdated secring.gpg:

$ cd $HOME/.gnupg
$ rm secring.gpg

As a last step I move the .gnupg-mail directory to my mail server.
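
A minimal sketch of that last step, assuming SSH access to the mail server (host name and target path are placeholders here; GnuPG wants restrictive permissions on its home directory):

$ rsync -a ~/.gnupg-mail/ mailserver:.gnupg/
$ ssh mailserver chmod 700 .gnupg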

One could *expire* the old encryption key, but for now I leave it as is.
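
For completeness, a rough sketch of how that would go. Changing expiration dates requires the secret master key, so it has to be done with the offline backup as GnuPG home directory (the path below is a placeholder):

$ gpg2 --homedir /path/to/offline-backup --edit-key $MASTERKEY
gpg> key 1
gpg> expire
gpg> save

Here key 1 selects the old encryption subkey (0xD1D2BD14810F62B3 in the listings above, but check the numbering in your own listing), and expire asks for the new expiration date.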

Upload keys to keyservers

If you are a Debian Developer, a simple update of your master key will suffice:

gpg --keyserver hkp://keyring.debian.org --send-key YOURMASTERKEYID

Note that the update from the keyring server to the actual Debian keyring takes up to one month. Until that time either do not upload anything, or use the (offline) master key for signing. After your key has been updated in the Debian keyring, signatures made with the signing subkey will be accepted for uploading to Debian.

It might also be a good idea to upload your new keys to some keyservers, for example:

gpg --keyserver hkp://pool.sks-keyservers.net --send-key $MASTERKEY

Now you can also fix the configuration file skew between gpg1 and gpg2.
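
One quick way to spot such skew (just a sketch) is to run both binaries once on each machine and watch for complaints about options the respective version does not understand:

$ gpg -K >/dev/null
$ gpg2 -K >/dev/null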

Further remark

I am currently trying to use the authentication key from my Yubikey NEO as an ssh key, but bugs (see #795368 and #818969) prohibit it at the moment. Raphaël Hertzog suggested a possible fix, namely killing the gpg-agent and restarting it with gpg-agent --daemon from an X terminal, and I can confirm that this worked.

In one year, before the key expires, I will need to extend the key validity for another year. For this the offline master key is needed. I will describe the process when it becomes necessary.

Reading list

The following web sites have been useful in collecting the necessary information:

  1. https://iain.learmonth.me/yubikey-neo-gpg/
  2. https://iain.learmonth.me/yubikey-udev/
  3. http://blog.josefsson.org/2014/06/23/offline-gnupg-master-key-and-subkeys-on-yubikey-neo-smartcard/
  4. https://wiki.debian.org/Subkeys
  5. https://jclement.ca/articles/2015/gpg-smartcard/ as modernized version of (3)
  6. https://www.esev.com/blog/post/2015-01-pgp-ssh-key-on-yubikey-neo/ similar style, with ssh and gnome-keyring infos
  7. http://karlgrz.com/2fa-gpg-ssh-keys-with-pass-and-yubikey-neo/ also good reading
  8. https://help.riseup.net/en/security/message-security/openpgp/best-practices good and concise advice on gpg practices

My writing is mostly based on (5) with additions from (4).

Please let me know of any errors, improvements, and fixes. I hope this walk-through might help others in the same situation.

20 April, 2016 05:42AM by Norbert Preining

April 19, 2016

Dariusz Dwornikowski

HAProxy and 503 HTTP errors with AWS ELB as a backend

Although, AWS provides load balancer service in the form of Elastic Load Balancer (ELB), a common trick is to use HAProxy in the middle to provide SSL offloading, complex routing and better logging.
In this scenario, a public ELB is the frontier of all the traffic, HAProxy farm in the middle is managed by an Auto Scaling Group, and one (or more) internal backend ELBs stay in front of Web farm.

haproxy

I think that HAProxy does not need any introduction here. It is a highly scalable and reliable piece of software. There is, however, a small caveat when you use it with domain names rather than IP addresses. To speed things up, HAProxy resolves all the domain names during startup (during config file parsing, in fact). Hence, when the IP behind a domain changes, you end up with a lot of 503s (Service Unavailable).

Why is this important? In AWS, an ELB's IP can change over time, so it is recommended to use the ELB's domain name. Now, when you use this domain name in HAProxy's backend, you can end up with 503s. ELB IPs do not change that often, but you still would not want any downtime.

The solution is to configure runtime resolvers in HAProxy and use them in the backend (unfortunately this only works in HAProxy 1.6):

 resolvers myresolver
      nameserver dns1 10.10.10.10:53
      resolve_retries       30
      timeout retry         1s
      hold valid           10s

  backend mybackend
      server myelb myelb-internal.123456.eu-west-1.elb.amazonaws.com:80 check resolvers myresolver

Now HAProxy will resolve the domain at runtime; no more 503s.

19 April, 2016 12:26PM by Dariusz Dwornikowski

Rémi Vanicat

LudumDare35

Ludumdare 35

For the third time, I've submitted a compo entry to Ludum Dare. So I have a new game (source available on github).

Some notes about the technology used:

  • this is a javascript/html5 game,
  • using the phaser framework.
  • Code has been written using Emacs and js2-mode,
  • tested with the python -m SimpleHTTPServer HTTP server.
  • Sound:

    • sfx are mostly recordings of real-life sounds, edited with Audacity, but I've also used labChirp (inside Wine...).
    • Music is done using Bosca Ceoil.

      Next time I will try lmms.

  • Graphics were made using aseprite (mostly) and gimp (a little)
  • The level has been created using Tiled

Most of those tools are free software; exceptions are labChirp (no source is available) and Adobe Air/Flash, which is used by Bosca Ceoil (though Bosca Ceoil itself is free software).

19 April, 2016 11:39AM

hackergotchi for Michal Čihař

Michal Čihař

Weekly phpMyAdmin contributions 2016-W15

After weeks of bugfixing my focus has again shifted to refactoring and code cleanups.

One big area was charsets and collations, which were cached in the session data so far. This had the bad effect of making the session data quite huge, leading to a performance loss on every page, while the cached information is needed only on a few pages. I've removed this caching, cleaned up the code, and everything seems to behave faster, even the pages which used the cached content in the past.

The second area was the handling of file uploads. Historically we had two copies of code doing almost the same thing. I've tried to merge them and use the File class for all the operations. However, this code was built to handle a lot of corner cases, so I'm a bit afraid of breaking some special setups.

Handled issues:

19 April, 2016 10:00AM by Michal Čihař (michal@cihar.com)

hackergotchi for Norbert Preining

Norbert Preining

Gaming: Monument Valley

With a small baby invading your lifestyle, not much time is left over for other activities, especially for gaming. Most of the time even using my computer is a one-handed action. In these times mobile games that can be played one-handed are greatly appreciated. And if it is one like Monument Valley, full of atmosphere and incredibly stimulating game play, then the level of gratitude is near infinite.

monument-valley1

Set in an Escherian universe where space and distance is often an illusion, the player is guiding a small princess through several levels (10+1 in the basic game, 8 more in the In-App-Purchase) of astonishing simplicity and beauty at the same time. Carefully crafted graphics, atmospheric music, calm game play (no action, don’t worry), and the lovely crows sitting around and craaaahing at the little princess.

My favorite level was the first of the expansion pack “Forgotten Shores”, called “The Chasm” – a wonderful homage to the Lord of the Rings and the descent to the Bridge of Khazad-dûm. Not one of the difficult levels, but definitely one of the most fun. The middle image in the above collage is from that level.

I guess my only complaint about this game is that it doesn't last long. I remember only one really difficult level where I had to play around for quite some time. Most of the others are quite straightforward, but despite that, you cannot stop playing until you are all the way through.

Simply a wonderful game that was well worth every yen. Thanks to the developers for doing innovative things in a perfect setting.

monument-valley2

Post scriptum: After finishing this game I also tried Evo Explores, a clone of Monument Valley. The difference could not be more stunning: Evo Explores is mostly repetitive, with a focus on an irrelevant story, simple graphics, and riddles that lack the ingenuity of the original.

19 April, 2016 04:09AM by Norbert Preining

Craig Sanders

Book Review: Trader’s World by Charles Sheffield

One line review:

Boys Own Dale Carnegie Post-Apocalyptic Adventures with casual racism and misogyny.

That tells you everything you need to know about this book. I wasn’t expecting much from it, but it was much worse than I anticipated. I’m about half-way through it at the moment, and can’t decide whether to just give up in disgust or keep reading in horrified fascination to see if it gets even worse (which is all that’s kept me going with it so far).

Book Review: Trader’s World by Charles Sheffield is a post from: Errata

19 April, 2016 03:05AM by cas

April 18, 2016

Reproducible builds folks

Reproducible builds: week 50 in Stretch cycle

What happened in the reproducible builds effort between April 3rd and April 9th 2016:

Media coverage

Emily Ratliff wrote an article for SecurityWeek called Establishing Correspondence Between an Application and its Source Code - How Combining Two Completely Separate Open Source Projects Can Make Us All More Secure.

Tails has started work on a design for freezable APT repositories to make it easier and more practical to reproduce an entire distribution at a given point in time, which will be needed to create reproducible installation or live media.

Toolchain fixes

Alexis Bienvenüe submitted patches adding support for SOURCE_DATE_EPOCH in several tools: transfig, imagemagick, rdtool, and asciidoctor. boyska submitted one for python-reportlab.
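
As a rough illustration of what such support amounts to (a generic sketch following the SOURCE_DATE_EPOCH specification, not the patches themselves): a tool that embeds a build date prefers SOURCE_DATE_EPOCH over the current time, and Debian packages typically derive the variable from the latest debian/changelog entry:

# a tool honouring SOURCE_DATE_EPOCH, falling back to the current time
BUILD_DATE=$(date -u -d "@${SOURCE_DATE_EPOCH:-$(date +%s)}" '+%Y-%m-%d')

# exporting SOURCE_DATE_EPOCH from the latest debian/changelog entry
SOURCE_DATE_EPOCH=$(date -ud "$(dpkg-parsechangelog -SDate)" +%s)
export SOURCE_DATE_EPOCH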

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: atinject-jsr330 brailleutils cglib3 gnugo libcobra-java libgnumail-java libjchart2d-java libjcommon-java libjfreechart-java libjide-oss-java liblaf-widget-java liblastfm-java liboptions-java octave-control octave-mpi octave-nan octave-parallel octave-stk octave-struct octave-tsa oar

The following packages became reproducible after getting fixed:

Several uploads fixed some reproducibility issues, but not all of them:

  • rkward/0.6.5-1 uploaded by Thomas Friedrichsmeier, original patch by Philip Rinn.
  • mailfilter/0.8.4-1 uploaded by Elimar Riesebieter, original patch by Chris Lamb.
  • bind9/1:9.10.3.dfsg.P4-6 uploaded by Michael Gilbert, original patch by Reiner Herrmann.
  • bzr/2.7.0-{3,4} by Jelmer Vernooij.
  • samba/2:4.3.6+dfsg-2 uploaded by Mathieu Parent, fix by Jelmer Vernooij.
  • fwupdate/0.5-3 by Mario Limonciello.
  • paraview/5.0.1+dfsg1-1 by Anton Gladky.

Patches submitted which have not made their way to the archive yet:

  • #819883 on debootstrap by Reiner Herrmann: tell tar to sort the archive members.
  • #819885 on chktex by Sascha Steinbiss: use the time of latest debian/changelog entry as documentation timestamp.
  • #819915 on kannel by Alexis Bienvenüe: use the time of latest debian/changelog entry as documentation timestamp.
  • #819921 on basket by Alexis Bienvenüe: remove build date from debug info.
  • #819965 on openarena-data by Alexandre Detiste: normalize file permissions before creating .pk3 archive.
  • #820016 on gabedit by Alexis Bienvenüe: sort object files used to build the executable.
  • #820032 on bibledit-gtk by Alexis Bienvenüe: remove useless included Makefile.
  • #820072 on synfig by Alexis Bienvenüe: remove build date from info output.
  • #820148 on autopkgtest by Alexis Bienvenüe: fix install order to cope with locales with case insensitive globbing.
  • #820152 on anope by Alexis Bienvenüe: remove build date from the version string.
  • #820179 on aodh by Alexis Bienvenüe: remove build date from the documentation.
  • #820183 on cython by Alexis Bienvenüe: add support for SOURCE_DATE_EPOCH.
  • #820194 on nasm by rain1: sort keys when traversing hash tables used to build the documentation.
  • #820226 on chrony by Alexis Bienvenüe: add support for SOURCE_DATE_EPOCH to preset the ntp_era_split parameter.
  • #820457 on recode by Alexis Bienvenüe: use system help2man.
  • #820522 on gtkspell by Alexis Bienvenüe: force shell to /bin/sh in example Makefile.

Other upstream fixes

Alexander Batischev made a commit to make newsbeuter reproducible.

tests.reproducible-builds.org

  • An architecture agnostic summary has been added to the reproducible-tracker.json by Valerie Young to make it easy to parse whether a package is unreproducible anywhere.
  • To find more reproducibility issues a new variation was added to the i386 builders, so that one build is done using a 32 bit kernel (686-PAE) and the other build is using a 64 bit kernel (amd64). (h01ger)
  • The 2nd builds are now done in fr_CH on amd64, de_CH on i386 and it_CH on armhf. (h01ger)
  • The variation table has been updated to reflect the recent changes and various small bugs have been fixed. (h01ger)

Package reviews

93 reviews have been removed, 66 added and 21 updated in the previous week.

12 new FTBFS bugs have been reported by Chris Lamb and Niko Tyni.

Misc.

This week's edition was written by Lunar, Holger Levsen, Reiner Herrmann, Mattia Rizzolo and Ximin Luo.

With the departure of Lunar as a full-time contributor, Reproducible Builds Weekly News (this thing you're reading) has moved from his personal Debian blog on Debian People to the Reproducible Builds team web site on Debian Alioth. You may want to update your RSS or Atom feeds.

Very many thanks to Lunar for writing and publishing this weekly news for so long, so well, and so continuously!

18 April, 2016 04:19PM

hackergotchi for Michael Prokop

Michael Prokop

Event: DebConf 16

Yes, I’m going to DebConf 16! This year DebConf – the Debian Developer Conference – will take place in Cape Town, South Africa.

Outbound:

2016-06-26 15:40 VIE -> 17:10 LHR BA0703
2016-06-26 21:30 LHR -> 09:55 CPT BA0059

Inbound:

2016-07-09 19:30 CPT -> 06:15 LHR BA0058
2016-07-10 07:55 LHR -> 11:05 VIE BA0696

18 April, 2016 03:18PM by mika

hackergotchi for Jonathan McDowell

Jonathan McDowell

Going to DebConf 16

Whoop! Looking forward to it already (though will probably spend it feeling I should be finishing my dissertation).

Outbound:

2016-07-01 15:20 DUB -> 16:45 LHR BA0837
2016-07-01 21:35 LHR -> 10:00 CPT BA0059

Inbound:

2016-07-10 19:20 CPT -> 06:15 LHR BA0058
2016-07-11 09:20 LHR -> 10:45 DUB BA0828

(image stolen from Gunnar)

18 April, 2016 01:12PM

Dariusz Dwornikowski

XWiki and slashes in URI

XWiki is a great open source Atlassian Confluence replacement (some argue it is better; I leave that to your assessment). We use XWiki a lot at Tenesys to document internal projects and to create documentation of clients' platforms. We run XWiki in the Tomcat application server, behind an nginx proxy.

We use a great XWiki plugin called FAQ, which can be used to create, well, FAQs. The problem we had was that sometimes people (me especially) created FAQ entries with a / in the name, which resulted in XWiki creating a slug with a / character, which is used to delimit the page hierarchy in XWiki. Basically, you wanted to write How to install Debian/Ubuntu package and you ended up with two pages: How to install Debian and a subpage Ubuntu package. You can't easily delete such a 'slashed' FAQ page, because by default only the last page in the hierarchy is deleted.

The solution to this problem is twofold. First of all, you need to tell Tomcat to allow passing an encoded slash (%2F) through to XWiki. Add -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true to CATALINA_OPTS. You can do that either via catalina.sh or catalina.opts.
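
A minimal sketch of how that could look, assuming a setenv.sh-style override that Tomcat sources on startup (adapt this to wherever your installation defines CATALINA_OPTS):

# e.g. in $CATALINA_BASE/bin/setenv.sh
CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true"
export CATALINA_OPTS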

Second of all, you need to make sure that your nginx proxy_pass directive is bare, i.e. does not contain a URI part (see the relevant Stack Overflow question). Basically you want your proxy_pass to look like this:

location / {
  proxy_pass http://backend;
}

... and not like this:

location / {
  proxy_pass http://backend/xwiki;
}

I spent quite a lot of time before I discovered that nginx caveat. I hope it helps somebody else too.

18 April, 2016 11:54AM by Dariusz Dwornikowski

Petter Reinholdtsen

NUUG contests Norwegian police DNS seizure of popcorn-time.no

It is days like today that make me really happy to be a member of the Norwegian Unix User Group, a member association for those of us who believe in free software, open standards and unix-like operating systems. NUUG announced today that it will challenge the seizure of the DNS domain popcorn-time.no as unlawful, to stand up for the principle that writing about a controversial topic does not infringe copyright, and that censoring web pages by hijacking a DNS domain should be decided by the courts, not the police. The DNS domain was seized by the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime a month ago. I hope this brings more paying members to NUUG, giving the association the financial muscle needed to take this case as far as it must go to stop this kind of DNS hijacking.

18 April, 2016 08:00AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCCTZ 0.0.4

A few days ago a new upstream version "2.0" of CCTZ was released. See here for the corresponding post on the Google OpenSource Blog.

CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. It requires only a proper C++11 compiler and the standard IANA time zone database which standard Unix, Linux, OS X, ... computers tend to have in /usr/share/zoneinfo. RcppCCTZ connects this library to R by relying on Rcpp. This new version adds more support for the notion of civil time representation -- i.e. independent of time zones -- which can then be mapped to different time zone representations.

Changes in this version are summarized here:

Changes in version 0.0.4 (2016-04-17)

  • Synchronized with CCTZ v2 upstream.

  • Updated examples.cpp accordingly

A quick example is provided here where we look at the time when Neil Armstrong first stepped on the moon as an absolute ("civil") time and two local representations:

// from examples/hello.cc
// 
// [[Rcpp::export]]
int helloMoon() {
    cctz::time_zone syd;
    if (!cctz::load_time_zone("Australia/Sydney", &syd)) return -1;

    // Neil Armstrong first walks on the moon
    const auto tp1 = cctz::convert(cctz::civil_second(1969, 7, 21, 12, 56, 0), syd);

    const std::string s = cctz::format("%F %T %z", tp1, syd);
    Rcpp::Rcout << s << "\n";

    cctz::time_zone nyc;
    cctz::load_time_zone("America/New_York", &nyc);

    const auto tp2 = cctz::convert(cctz::civil_second(1969, 7, 20, 22, 56, 0), nyc);
    return tp2 == tp1 ? 0 : 1;
}

We can call this from R, and get the expected result (of equivalence between the dates):

R> library(RcppCCTZ)
R> helloMoon()
1969-07-21 12:56:00 +1000
[1] 0
R>

We also have a diff to the previous version thanks to CRANberries.

More details, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 April, 2016 03:16AM

hackergotchi for Norbert Preining

Norbert Preining

TeX Live 2016 pretest and Debian packages

Preparations for the release of TeX Live 2016 started some time ago with the freeze of updates in TeX Live 2015. Yesterday we announced the official start of the pretest period. That means that we invite people to test the new release and help fix bugs. At the same time I have uploaded the first set of TeX Live 2016 packages for Debian to the experimental suite.

Concerning the binaries we do expect a few further changes, but hopefully nothing drastic. The most invasive change on the tlmgr side is that cryptographic signatures are now verified to guarantee authenticity of the packages downloaded, but this is rather irrelevant for Debian users (though I will look into how that works in user mode).

Other than that, many packages have been updated or added since the last Debian packages, here is the unified list:

acro, animate, appendixnumberbeamer, arabluatex, asapsym, asciilist, babel-belarusian, bibarts, biblatex-bookinarticle, biblatex-bookinother, biblatex-caspervector, biblatex-chicago, biblatex-gost, biblatex-ieee, biblatex-morenames, biblatex-opcit-booktitle, bibtexperllibs, bxdvidriver, bxenclose, bxjscls, bxnewfont, bxpapersize, chemnum, cjk-ko, cochineal, csplain, cstex, datetime2-finnish, denisbdoc, dtx, dvipdfmx-def, ejpecp, emisa, fithesis, fnpct, font-change-xetex, forest, formation-latex-ul, gregoriotex, gzt, hausarbeit-jura, hyperxmp, imakeidx, jacow, l3, l3kernel, l3packages, latex2e, latex2e-help-texinfo-fr, latex-bib2-ex, libertinust1math, lollipop, lt3graph, lua-check-hyphen, lualibs, luamplib, luatexja, mathalfa, mathastext, mcf2graph, media9, metrix, nameauth, ndsu-thesis, newtx, normalcolor, noto, nucleardata, nwejm, ocgx2, pdfcomment, pdfpages, pkuthss, polyglossia, proposal, qcircuit, reledmac, rmathbr, savetrees, scanpages, stex, suftesi, svrsymbols, teubner, tex4ebook, tex-ini-files, tikzmark, tikzsymbols, titlesec, tudscr, typed-checklist, ulthese, visualtikz, xespotcolor, xetex-def, xetexko, ycbook, yinit-otf.

Enjoy.

18 April, 2016 02:23AM by Norbert Preining

hackergotchi for Matthew Garrett

Matthew Garrett

One more attempt at SATA power management

Around a year ago I wrote some patches in an attempt to improve power management on Haswell and Broadwell systems by configuring Serial ATA power management appropriately. I got a couple of reports of them triggering SATA errors for some users, couldn't reproduce them myself and so didn't have a lot of confidence in them. Time passed.

I've been working on power management stuff again this week, so it seemed like a good opportunity to revisit these. I've made a few changes and pushed a couple of trees - one against master and one against 4.5.

First, these probably only have relevance to users of mobile Intel parts in the U or S range (/proc/cpuinfo will tell you - you're looking for a four-digit number that starts with 4 (Haswell), 5 (Broadwell) or 6 (Skylake) and ends with U or S), and won't do anything unless you have SATA drives (including PCI-based SATA). To test them, first disable anything like TLP that might alter your SATA link power management policy. Then check powertop - you should only be getting to PC3 at best. Build a kernel with these patches and boot it. /sys/class/scsi_host/*/link_power_management_policy should read "firmware". Check powertop and see whether you're getting into deeper PC states. Now run your system for a while and check the kernel log for any SATA errors that you didn't see before.
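
For reference, the checks described above boil down to something like this sketch (the procfs/sysfs paths are the standard ones mentioned above; the TLP service name is an assumption based on typical installations):

# look for a four-digit model number ending in U or S
grep -m1 'model name' /proc/cpuinfo

# make sure nothing like TLP overrides the SATA link power policy
sudo systemctl stop tlp.service

# with the patched kernel this should read "firmware"
cat /sys/class/scsi_host/*/link_power_management_policy

# afterwards, keep an eye on the kernel log for new SATA errors
dmesg | grep -i 'ata.*error'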

Let me know if you see SATA errors and are willing to help debug this, and leave a comment if you don't see any improvement in PC states.

18 April, 2016 02:15AM

hackergotchi for Clint Adams

Clint Adams

doctest

The most prevalent Debian architecture in my home is armel and yet I still want armel and powerpc to be removed as release architectures.

18 April, 2016 01:11AM

April 17, 2016

Bits from Debian

DPL elections 2016, congratulations Mehdi Dogguy!

The Debian Project Leader elections finished yesterday and the winner is Mehdi Dogguy! Of a total of 1023 developers, 282 developers voted using the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2016 page.

The new term for the project leader starts today, April 17th, and expires on April 17th, 2017.

17 April, 2016 04:40PM by Ana Guerrero Lopez

Andreas Metzler

balance sheet snowboarding season 2015/16

A very weak season, mainly due to two reasons:

  • weather: It was just too warm. Pre-season in November things looked hopeful with early snow, but December 2015 was absurdly warm. According to the weather station Damüls Hertehof (1580m above sea level) there were only three days in December when temperatures fell below 0° Celsius. I had my first snow day on December 6, followed by 3 more days in December (12, 19 and 20) with continuously worsening conditions. I gave it another chance on January 6th, but things had not improved, and I paused until January 21, which was the first good day.
  • On February 7th 2016 I had a small accident while luging, resulting in a (partially) torn ligament in my left ankle joint. Joy. No snowboarding until March 13. My season ended on April 10, when Warth/Schröcken closed.

Here is the balance sheet:

season                     2005/06 2006/07 2007/08 2008/09 2009/10 2010/11 2011/12 2012/13 2013/14 2014/15 2015/16
number of (partial) days        25      17      29      37      30      30      25      23      30      24      17
Damüls                          10      10       5      10      16      23      10       4      29       9       4
Diedamskopf                     15       4      24      23      13       4      14      19       1      13      12
Warth/Schröcken                  0       3       0       4       1       3       1       0       0       2       1
total meters of altitude    124634   74096  219936  226774  202089  203918  228588  203562  274706  224909  138037
highscore (m)                10247    8321   12108   11272   11888   10976   13076   13885   12848   13278   11015
# of runs                      309     189     503     551     462     449     516     468     597     530     354

17 April, 2016 01:46PM by Andreas Metzler

hackergotchi for Sean Whitton

Sean Whitton

Mail expiring out of the postfix mail queue

On my Debian machines I run stunnel to create a secure connection to my e-mail provider's SMTP gateway. Postfix sends mail through that TLS tunnel. Recently I stopped receiving e-mail from rss2email, and I discovered tonight that the reason was that the tunnel had caved in on the machine rss2email was running on. Unfortunately, some mail was permanently discarded from the postfix queue, because it turns out that postfix will by default keep mail in the queue for a maximum of only 5 days. Since the connection to the gateway was down, postfix couldn't return the mail to its sender (i.e. me).

Fortunately, I’m not smart enough to have any log rotation going on, so I could easily find the message that were lost:

grep "status=expired, returned to sender" /var/log/mail.log \
    | awk '{print $6}' \
    | while read id; do grep "$id" -m1 /var/log/mail.log; done

The first grep determines the queue ids of the messages that expired, and the second grep finds the first entry in the mail log for each message, which provides the time the message was sent. Replacing -m1 with -m4 gave me the message-ids and the intended recipients of the messages. This allowed me to restore them from backups, or to bounce them from my sent mail folder for those that I had tried to send myself.

To prevent this from happening again, I’ve extended the maximum lifetime of messages in the queue from 5 days to 10:

postconf -e maximal_queue_lifetime=10d

I’ve incorporated a check for clogged mail queues on my machines into my weekly backup routine.

17 April, 2016 05:19AM