January 22, 2018

Thomas Lange

FAI.me build service now supports backports

The FAI.me build service now supports packages from the backports repository. When selecting the stable distribution, you can also enable backports packages. The customized installation image will then use the kernel from backports (currently 4.14) and you can add additional packages by appending /stretch-backports to the package name, e.g. notmuch/stretch-backports.
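
The /stretch-backports suffix mirrors apt's own package/release syntax, so as a rough illustration of what the resulting image ends up selecting (the package name is just an example, and this assumes backports is enabled in the image):

# pick notmuch from stretch-backports instead of stretch
apt-get install notmuch/stretch-backports

# equivalent, using an explicit target release
apt-get install -t stretch-backports notmuch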

Currently, the FAIme service offers images built with Debian stable, stable with backports and Debian testing.

If you have any ideas for extensions or any feedback, send an email to FAI.me =at= fai-project.org

FAI.me

22 January, 2018 01:00PM

Dirk Eddelbuettel

Rblpapi 0.3.8: Strictly maintenance

Another Rblpapi release, now at version 0.3.8, arrived on CRAN yesterday. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the eighth release since the package first appeared on CRAN in 2016. This release wraps up a few smaller documentation and setup changes, but also includes an improvement to the (less frequently used) subscription mode which Whit cooked up over the weekend. Details below:

Changes in Rblpapi version 0.3.8 (2018-01-20)

  • The 140 day limit for intra-day data histories is now mentioned in the getTicks help (Dirk in #226 addressing #215 and #225).

  • The Travis CI script was updated to use run.sh (Dirk in #226).

  • The install_name_tool invocation under macOS was corrected (@spennihana in #232)

  • The blpAuthenticate help page has additional examples (@randomee in #252).

  • The blpAuthenticate code was updated and improved (Whit in #258 addressing #257)

  • The jump in version number was an oversight; this should have been 0.3.7.

And only while typing up these notes do I realize that I fat-fingered the version number. This should have been 0.3.7. Oh well.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 January, 2018 12:47PM

Daniel Pocock

Keeping an Irish home warm and free in winter

The Irish Government's Better Energy Homes Scheme gives people grants from public funds to replace their boiler and install a zoned heating control system.

Having grown up in Australia, I think it is always cold in Ireland and would be satisfied with a simple control switch with a key to make sure nobody ever turns it off but that isn't what they had in mind for these energy efficiency grants.

Having recently stripped everything out of the house, right down to the brickwork and floorboards in some places, I'm cautious about letting any technologies back in without checking whether they are free and trustworthy.

bare home

This issue would also appear to fall under the scope of FSFE's Public Money Public Code campaign.

Looking at the last set of heating controls in the house, they have been there for decades. Therefore, I can't help wondering, if I buy some proprietary black box today, will the company behind it still be around when it needs a software upgrade in future? How many of these black boxes have wireless transceivers inside them that will be compromised by security flaws within the next 5-10 years, making another replacement essential?

With free and open technologies, anybody who is using it can potentially make improvements whenever they want. Every time a better algorithm is developed, if all the homes in the country start using it immediately, we will always be at the cutting edge of energy efficiency.

Are you aware of free and open solutions that qualify for this grant funding? Can a solution built with devices like Raspberry Pi and Arduino qualify for the grant?

Please come and share any feedback you have on the FSFE discussion list (join, reply to the thread).

22 January, 2018 09:20AM by Daniel.Pocock

Norbert Preining

Continuous integration testing of TeX Live sources

The TeX Live sources consist in total of around 15000 files and 8.7M lines (see git stats). They integrate several upstream projects, including big libraries like FreeType, Cairo, and Poppler. Changes come in from a variety of sources: external libraries, TeX-specific projects (LuaTeX, pdfTeX etc.), as well as our own adaptations and changes/patches to upstream sources. For quite some time I have wanted continuous integration (CI) testing, but since our main repository is based on Subversion, the usual (easy, or at least the one I know) route via Github and one of the CI testing providers didn’t come to my mind – until last week.

Over the weekend I have set up CI testing for our TeX Live sources by using the following ingredients: git-svn for checkout, Github for hosting, Travis-CI for testing, and a cron job that does the connection. To be more specific:

  • git-svn I use git-svn to check out only the source part of the (otherwise far too big) subversion repository onto my server. This is similar to the git-svn checkout of the whole of TeX Live as I reported here, but contains only the source part (a sketch of such a checkout follows below this list).
  • Github The git-svn checkout is pushed to the project TeX-Live/texlive-source on Github.
  • Travis-CI The CI testing is done in the TeX-Live/texlive-source project on Travis-CI (who are offering free services for open source projects, thanks!)
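
For reference, a git-svn checkout restricted to the source tree can look roughly like this; the repository URL and path are from memory and may need adjusting:

# one-time clone of only the Build/source part of the TeX Live subversion repository
git svn clone svn://tug.org/texlive/trunk/Build/source texlive-source.git

# later updates
cd texlive-source.git
git svn rebase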

Although this sounds easy, there are a few stumbling blocks: First of all, the .travis.yml file is not contained in the main subversion repository. So adding it to the master tree that is managed via git-svn does not work, because the history is rewritten on each git svn rebase. My solution was to create a separate branch travis-ci which adds only the .travis.yml file and into which master is merged.

Travis-CI by default tests all branches, and does not test those not containing a .travis.yml, but to be sure I added an except clause stating that the master branch should not be tested. This way other developers can try different branches, too. The full .travis.yml can be checked on Github, here is the current status:

# .travis.yml for texlive-source CI building
# Norbert Preining
# Public Domain

language: c

branches:
  except:
  - master

before_script:
  - find . -name \*.info -exec touch '{}' \;

before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y libfontconfig-dev libx11-dev libxmu-dev libxaw7-dev

script: ./Build

What remains is stitching these things together by adding a cron job that regularly does git svn rebase on the master branch, merges the master branch into travis-ci branch, and pushes everything to Github. The current cron job is here:

#!/bin/bash
# cron job for updating texlive-source and pushing it to github for ci
set -e

TLSOURCE=/home/norbert/texlive-source.git
GIT="git --no-pager"

quiet_git() {
    stdout=$(tempfile)
    stderr=$(tempfile)

    if ! $GIT "$@" >$stdout 2>$stderr; then
        echo "STDOUT of git command:"
        cat $stdout
        echo "************"
        cat $stderr >&2
        rm -f $stdout $stderr
        exit 1
    fi

    rm -f $stdout $stderr
}

cd $TLSOURCE
quiet_git checkout master
quiet_git svn rebase
quiet_git checkout travis-ci
# don't use [skip ci] here: Travis only builds the last
# commit, so skipping it would stop building entirely
quiet_git merge master -m "merging master"
quiet_git push --all

With this setup we get CI testing of our changes in the TeX Live sources, and in the future maybe some developers will use separate branches to get testing there, too.

Enjoy.

22 January, 2018 09:15AM by Norbert Preining

Shirish Agarwal

PrimeZ270-p, Intel i7400 review and Debian – 1

This is going to be a biggish one as well.

This is a continuation from my last blog post.

Before diving into installation, I had been reading Matthew Garrett’s work for quite a while. Thankfully most of his blog posts do get mirrored on planet.debian.org, hence it is easy to get some idea of what needs to be done, although I have told him (I think even shared here) that he should somehow make his site more easily navigable. Trying to find posts on either ‘GPT’ or ‘UEFI’ and to have those posts sorted by date, ascending or descending, is not possible, at least I couldn’t find a way to do it.

The closest I could come to is using ‘$keyword’ site:https://mjg59.dreamwidth.org/ via a search-engine and going through the entries shared therein. This doesn’t mean I don’t value his contribution. It is in fact the opposite. AFAIK he was one of the first people who drew the community’s attention when UEFI came in and only Microsoft Windows could be booted on such machines, nothing else.

I may be wrong but AFAIK he was the first one to talk about having a shim and was part of getting people to be part of the shim process.

While I’m sure Matthew’s understanding may have evolved significantly from what he had shared before, it was two specific blog posts that I had to re-read before trying to install MS-Windows and then a Debian GNU/Linux system on the machine.

I went over to a friend’s house who had Windows 7 running, used diskpart and did the change to GPT following a Microsoft Technet article.

I had to go the GPT way as I understood that MS-Windows takes all four primary partitions for itself, leaving nothing for any other operating system to use.

I did the conversion to GPT and went with MS-Windows 10, as my current motherboard (and all Intel motherboards from Gen7/Gen8 onwards) does not support anything less than Windows 10. I did see an unofficial patch floating on github somewhere but have now lost the reference to it. I had read some of the bug-reports of the repo, which seemed to suggest it was still a work in progress.

Now this is where it starts becoming a bit… let’s say interesting.

Now a friend/client of mine offered me a job to review MS-Windows 10, with his product keys of course. I was a bit hesitant as it had been a long time since I had worked with MS-Windows and didn’t know if I could do it or not, the other was a suspicion that I might like it too much. While I did review it, I found –

a. It is one heck of a bloatware – I had thought MS-Windows would have learned by now but no, they still have to learn that adware and bloatware aren’t solutions. I still can’t get my head wrapped around how a 4.1 GB MS-Windows ISO gets extracted to 20 GB and you still have to install shit-loads of third-party tools to actually get anything done. Just amazed (and not in a good way).

Just to share as an example I still had to get something like Revo Uninstaller as MS-Windows even till date hasn’t learned to uninstall programs cleanly and needs a tool like that to clean the registry and other places to remove the titbits left along the way.

Edit/Update – It still doesn’t have the Fall Creators Update, which is supposed to be another 4 GB+ ISO, and god only knows how much space that will take.

b. It’s still not gold – With all the hoopla and ads around MS-Windows 10 that I had been hearing and seeing, I was under the impression that MS-Windows had gone gold, i.e. that it had had a final release, the way Debian will have ‘buster’ sometime next year, probably around or after DebConf 2019 is held. The Windows 10 release Microsoft considers final is apparently due around July 2018, so it’s still a few months off.

c. I had read an insightful article a few years ago by a junior Microsoft employee sharing/emphasizing why MS cannot do GNU/Linux-style volunteer/bazaar development. To put it in not so many words, it came down to the cultural differences in the way the two communities operate. While in GNU/Linux one more patch or one more pull request will be encouraged, and it may be integrated in that point release, or if it can’t be it would go into the next point release (unless it changes something much more core/fundamental which needs more in-depth review), MS-Windows on the other hand actively discourages that sort of behavior, as it means more time for integration and testing, and from the sound of it MS still doesn’t do Continuous Integration (CI), regression testing etc. as is increasingly common in many GNU/Linux projects.

I wish I could have shared the article but don’t have the link anymore. @Lazyweb, if you would be so kind as to help find that article. The developer had shared some sort of ssh credentials or something to prove who he was, which he later removed (probably) because the consequences to him of sharing that insight were not worth it, although the writing seemed to be valid.

There were many more quibbles but I have shared the above ones. For example, copying files from hdd to usb disks doesn’t tell you how much time it will take, while in Debian I’ve come to take a time estimate for any operation as a given.

Before starting on the main issue, some info beforehand, although I don’t know how relevant that info might be –

Prime Z270-P uses EFI 2.60 by American Megatrends –

/home/shirish> sudo dmesg | grep -i efi
[sudo] password for shirish:
[ 0.000000] efi: EFI v2.60 by American Megatrends

I can share more info. if needed later.

Now, as I understood/interpreted info found on the web and from experience, Microsoft makes quite a few more partitions than necessary to get MS-Windows installed.

This is how it stacks up/shows up –

> sudo fdisk -l
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: xxxxxxxxxxxxxxxxxxxxxxxxxxx

Device          Start        End    Sectors   Size Type
/dev/sda1          34     262177     262144   128M Microsoft reserved
/dev/sda2      264192    1185791     921600   450M Windows recovery environment
/dev/sda3     1185792    1390591     204800   100M EFI System
/dev/sda4     1390592 3718037503 3716646912   1.7T Microsoft basic data
/dev/sda5  3718037504 3718232063     194560    95M Linux filesystem
/dev/sda6  3718232064 5280731135 1562499072 745.1G Linux filesystem
/dev/sda7  5280731136 7761199103 2480467968   1.2T Linux filesystem
/dev/sda8  7761199104 7814035455   52836352  25.2G Linux swap

I had made a 2 GB /boot in the MS-Windows installer as I had thought it would take only some space and leave the rest for Debian GNU/Linux’s /boot to put its kernel entries, tools to check memory and whatever else I wanted to have on /boot/debian, but for some reason I have not yet understood, that didn’t work out as I expected.

Device         Start        End    Sectors  Size Type
/dev/sda1         34     262177     262144  128M Microsoft reserved
/dev/sda2     264192    1185791     921600  450M Windows recovery environment
/dev/sda3    1185792    1390591     204800  100M EFI System
/dev/sda4    1390592 3718037503 3716646912  1.7T Microsoft basic data

As seen in the above, the first four partitions are taken by MS-Windows itself. I just wish I had understood how to use GPT disklabels properly so I could figure things out better, but for reasons not fully understood the EFI partition is a lowly 100 MB, which I suspect is where /boot ended up even though I asked for it to be 2 GB. Is that UEFI’s doing, Microsoft’s doing or just some default, dunno. Having the EFI partition this small hampers the way I want to do things, as will become clear in a short while.
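
One quick way to see which partition actually ended up where (and whether the small one is the EFI System Partition rather than /boot) is something like the following; the device name matches my layout but is otherwise only illustrative:

# what is mounted on /boot and on the EFI System Partition
findmnt /boot
findmnt /boot/efi

# all partitions with size, filesystem type and mount point
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sda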

After I installed MS-Windows, I installed Debian GNU/Linux using the net install method.

The following is what I had put on piece of paper as what partitions would be for GNU/Linux –

/boot – 512 MB (should be enough to accommodate a couple of kernel versions, memory checking and any other tools I might need in the future).

/ – 700 GB – well admittedly that looks a bit insane, but I do like to play with new programs/binaries as and when possible and don’t want to run out of space when I forget to clean it up.

[off-topic, wishlist] One tool I would like to have (and dunno if it’s there) is the ability to know when I installed a package, how many times I have used it, how frequently, and the ability to add small notes or a description to the package. Many a time I have seen that the package description is either too vague or doesn’t focus on the practical usefulness of a package to me.

An easy example to share what I mean would be the apt package –

aptitude show apt
Package: apt
Version: 1.6~alpha6
Essential: yes
State: installed
Automatically installed: no
Priority: required
Section: admin
Maintainer: APT Development Team
Architecture: amd64
Uncompressed Size: 3,840 k
Depends: adduser, gpgv | gpgv2 | gpgv1, debian-archive-keyring, libapt-pkg5.0 (>= 1.6~alpha6), libc6 (>= 2.15), libgcc1 (>= 1:3.0), libgnutls30 (>= 3.5.6), libseccomp2 (>=1.0.1), libstdc++6 (>= 5.2)
Recommends: ca-certificates
Suggests: apt-doc, aptitude | synaptic | wajig, dpkg-dev (>= 1.17.2), gnupg | gnupg2 | gnupg1, powermgmt-base, python-apt
Breaks: apt-transport-https (< 1.5~alpha4~), apt-utils (< 1.3~exp2~), aptitude (< 0.8.10)
Replaces: apt-transport-https (< 1.5~alpha4~), apt-utils (< 1.3~exp2~)
Provides: apt-transport-https (= 1.6~alpha6)
Description: commandline package manager
This package provides commandline tools for searching and managing as well as querying information about packages as a low-level access to all features of the libapt-pkg library.

These include:
* apt-get for retrieval of packages and information about them from authenticated sources and for installation, upgrade and removal of packages together with their dependencies
* apt-cache for querying available information about installed as well as installable packages
* apt-cdrom to use removable media as a source for packages
* apt-config as an interface to the configuration settings
* apt-key as an interface to manage authentication keys

Now while I love all the various tools that the apt package has, I do have a special fondness for apt-cache rdepends $package

as it gives another overview of a package or library or shared library that I may be interested in and which other packages are in its orbit.
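
As a quick illustration of the shape of its output (package picked from the Depends line above, list trimmed):

$ apt-cache rdepends libapt-pkg5.0
libapt-pkg5.0
Reverse Depends:
  apt
  apt-utils
  aptitude
  ...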

Over a period of time it becomes easy to forget packages that you don’t use day-to-day, hence something like such a tool would be a god-send where you can put personal notes about packages. Another could be reminders of tickets posted upstream or something along those lines. I don’t know of any tool/package which does something along those lines. [/off-topic, wishlist]

/home – 1.2 TB

swap – 25.2 GB

I admit I went a bit overboard on swap space, but as and when I get more memory I should at least have swap at 1:1, right? I am not sure if the old rules still apply or not.

Then I used Debian buster alpha 2 netinstall iso

https://cdimage.debian.org/cdimage/buster_di_alpha2/amd64/iso-cd/debian-buster-DI-alpha2-amd64-netinst.iso and put it on a usb stick. I did use sha1sum to ensure that the netinstall iso was the same as the original one https://cdimage.debian.org/cdimage/buster_di_alpha2/amd64/iso-cd/SHA1SUMS

After that, simply doing a dd if=... of=... was enough to copy the netinstall image to the usb stick; a sketch follows below.
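
A rough sketch of the verify-and-copy steps (the usb device /dev/sdX is a placeholder; double-check it before writing, as dd overwrites the target):

# verify the downloaded image against the published checksum file
sha1sum -c --ignore-missing SHA1SUMS

# write the image to the usb stick
dd if=debian-buster-DI-alpha2-amd64-netinst.iso of=/dev/sdX bs=4M status=progress
sync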

I did have some issues with the installation which I’ll share in the next post, but the most critical issue was that I had to again make a /boot, and even though I made /boot a separate partition and gave 1 GB to it during the partitioning step, I got only 100 MB and I have no idea why it is like that.

/dev/sda5 3718037504 3718232063 194560 95M Linux filesystem

> df -h /boot
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 88M 68M 14M 84% /boot

/home/shirish> ls -lh /boot
total 55M
-rw-r--r-- 1 root root 193K Dec 22 19:42 config-4.14.0-2-amd64
-rw-r--r-- 1 root root 193K Jan 15 01:15 config-4.14.0-3-amd64
drwx------ 3 root root 1.0K Jan 1 1970 efi
drwxr-xr-x 5 root root 1.0K Jan 20 10:40 grub
-rw-r--r-- 1 root root 19M Jan 17 10:40 initrd.img-4.14.0-2-amd64
-rw-r--r-- 1 root root 21M Jan 20 10:40 initrd.img-4.14.0-3-amd64
drwx------ 2 root root 12K Jan 1 17:49 lost+found
-rw-r--r-- 1 root root 2.9M Dec 22 19:42 System.map-4.14.0-2-amd64
-rw-r--r-- 1 root root 2.9M Jan 15 01:15 System.map-4.14.0-3-amd64
-rw-r--r-- 1 root root 4.4M Dec 22 19:42 vmlinuz-4.14.0-2-amd64
-rw-r--r-- 1 root root 4.7M Jan 15 01:15 vmlinuz-4.14.0-3-amd64

root@debian:/boot/efi/EFI# ls -lh
total 3.0K
drwx------ 2 root root 1.0K Dec 31 21:38 Boot
drwx------ 2 root root 1.0K Dec 31 19:23 debian
drwx------ 4 root root 1.0K Dec 31 21:32 Microsoft

I would be the first to say I don’t really understand this EFI business.

The only thing I do understand is that it’s good that, even without an OS, it becomes easier to see the components and whether something you change/add would or would not work; with a legacy BIOS, getting info on components was iffy at best.

There have been other issues with EFI which I may take in another blog post but for now I would be happy if somebody can share –

how to have a big /boot so it’s not a small partition for the Debian boot files. I don’t see any value in having a bigger /boot for MS-Windows unless there is a way to also get a grub2 pointer/header added to the MS-Windows bootloader. Will share the reasons for it in the next blog post.

I am open to reinstalling both MS-Windows and Debian from scratch, although that would happen when debian-buster-alpha3 arrives. Any answer to the above would give me something to try, and I will share if I get the desired result.

Looking forward to answers.

22 January, 2018 05:23AM by shirishag75

Louis-Philippe Véronneau

French Gender-Neutral Translation for Roundcube

Here's a quick blog post to tell the world I'm now doing a French gender-neutral translation for Roundcube.

A while ago, someone wrote on the Riseup translation list to complain about the current fr_FR translation. French is indeed a very gendered language and it is commonplace in radical spaces to use gender-neutral terminology.

So yeah, here it is: https://github.com/baldurmen/roundcube_fr_FEM

I haven't tested the UI integration yet, but I'll do that once the Riseup folks integrate it to their Roundcube instance.

22 January, 2018 05:00AM by Louis-Philippe Véronneau

January 21, 2018

Dirk Eddelbuettel

#15: Tidyverse and data.table, sitting side by side ... (Part 1)

Welcome to the fifteenth post in the rarely rational R rambling series, or R4 for short. There are two posts I have been meaning to get out for a bit, and hope to get to shortly---but in the meantime we are going to start something else.

Another longer-running idea I had was to present some simple application cases with (one or more) side-by-side code comparisons. Why? Well at times it feels like R, and the R community, are being split. You're either with one (increasingly "religious" in their defense of their deemed-superior approach) side, or the other. And that is of course utter nonsense. It's all R after all.

Programming, just like other fields using engineering methods and thinking, is about making choices, and trading off between certain aspects. A simple example is the fairly well-known trade-off between memory use and speed: think e.g. of a hash map allowing for faster lookup at the cost of some more memory. Generally speaking, solutions are rarely limited to just one way, or just one approach. So it pays off to know your tools, and to choose wisely among all available options. Having choices is having options, and those tend to have non-negative premiums to take advantage of. Locking yourself into one and just one paradigm can never be better.

In that spirit, I want to (eventually) show a few simple comparisons of code being done two distinct ways.

One obvious first candidate for this is the gunsales repository with some R code which backs an earlier NY Times article. I got involved for a similar reason, and updated the code from its initial form. Then again, this project also helped motivate what we did later with the x13binary package which permits automated installation of the X13-ARIMA-SEATS binary to support Christoph's excellent seasonal CRAN package (and website) for which we now have a forthcoming JSS paper. But the actual code example is not that interesting / a bit further off the mainstream because of the more specialised seasonal ARIMA modeling.

But then this week I found a much simpler and shorter example, and quickly converted its code. The code comes from the inaugural datascience 1 lesson at the Crosstab, a fabulous site by G. Elliot Morris (who may be the highest-energy undergrad I have come across lately) focussed on political polling, forecasts, and election outcomes. Lesson 1 is a simple introduction, and averages some polls of the 2016 US Presidential Election.

Complete Code using Approach "TV"

Elliot does a fine job walking the reader through his code so I will be brief and simply quote it in one piece:


## Getting the polls

library(readr)
polls_2016 <- read_tsv(url("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"))

## Wrangling the polls

library(dplyr)
polls_2016 <- polls_2016 %>%
    filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
library(lubridate)
polls_2016 <- polls_2016 %>%
    mutate(end_date = ymd(end_date))
polls_2016 <- polls_2016 %>%
    right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date),
                                              max(polls_2016$end_date), by="days")))

## Average the polls

polls_2016 <- polls_2016 %>%
    group_by(end_date) %>%
    summarise(Clinton = mean(Clinton),
              Trump = mean(Trump))

library(zoo)
rolling_average <- polls_2016 %>%
    mutate(Clinton.Margin = Clinton-Trump,
           Clinton.Avg =  rollapply(Clinton.Margin,width=14,
                                    FUN=function(x){mean(x, na.rm=TRUE)},
                                    by=1, partial=TRUE, fill=NA, align="right"))

library(ggplot2)
ggplot(rolling_average)+
  geom_line(aes(x=end_date,y=Clinton.Avg),col="blue") +
  geom_point(aes(x=end_date,y=Clinton.Margin))

It uses five packages to i) read some data off them interwebs, ii) then filter / subset / modify it, iii) leading to a right (outer) join with itself, before iv) averaging per-day polls and creating rolling averages over 14 days, and v) plotting. Several standard verbs are used: filter(), mutate(), right_join(), group_by(), and summarise(). One non-verse function is rollapply() which comes from zoo, a popular package for time-series data.

Complete Code using Approach "DT"

As I will show below, we can do the same with fewer packages as data.table covers the reading, slicing/dicing and time conversion. We still need zoo for its rollapply() and of course the same plotting code:


## Getting the polls

library(data.table)
pollsDT <- fread("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv")

## Wrangling the polls

pollsDT <- pollsDT[sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"), ]
pollsDT[, end_date := as.IDate(end_date)]
pollsDT <- pollsDT[ data.table(end_date = seq(min(pollsDT[,end_date]),
                                              max(pollsDT[,end_date]), by="days")), on="end_date"]

## Average the polls

library(zoo)
pollsDT <- pollsDT[, .(Clinton=mean(Clinton), Trump=mean(Trump)), by=end_date]
pollsDT[, Clinton.Margin := Clinton-Trump]
pollsDT[, Clinton.Avg := rollapply(Clinton.Margin, width=14,
                                   FUN=function(x){mean(x, na.rm=TRUE)},
                                   by=1, partial=TRUE, fill=NA, align="right")]

library(ggplot2)
ggplot(pollsDT) +
    geom_line(aes(x=end_date,y=Clinton.Avg),col="blue") +
    geom_point(aes(x=end_date,y=Clinton.Margin))

This uses several of the components of data.table which are often called [i, j, by=...]. Rows are selected (i), columns are either modified (via := assignment) or summarised (via =), and grouping is undertaken by by=.... The outer join is done by having a data.table object indexed by another, and is pretty standard too. That allows us to do all transformations in three lines. We then create the per-day average by grouping by day, compute the margin and construct its rolling average as before. The resulting chart is, unsurprisingly, the same.

Benchmark Reading

We can look at how the two approaches do on getting data read into our session. For simplicity, we will read a local file to keep the (fixed) download aspect out of it:

R> library(microbenchmark)
R> url <- "http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"
R> file <- "/tmp/poll-responses-clean.tsv"
R> download.file(url, destfile=file, quiet=TRUE)
R> res <- microbenchmark(tidy=suppressMessages(readr::read_tsv(file)),
+                       dt=data.table::fread(file, showProgress=FALSE))
R> res
Unit: milliseconds
 expr     min      lq    mean  median      uq      max neval
 tidy 6.67777 6.83458 7.13434 6.98484 7.25831  9.27452   100
   dt 1.98890 2.04457 2.37916 2.08261 2.14040 28.86885   100
R> 

That is a clear relative difference, though the absolute amount of time is not that relevant for such a small (demo) dataset.

Benchmark Processing

We can also look at the processing part:

R> rdin <- suppressMessages(readr::read_tsv(file))
R> dtin <- data.table::fread(file, showProgress=FALSE)
R> 
R> library(dplyr)
R> library(lubridate)
R> library(zoo)
R> 
R> transformTV <- function(polls_2016=rdin) {
+     polls_2016 <- polls_2016 %>%
+         filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
+     polls_2016 <- polls_2016 %>%
+         mutate(end_date = ymd(end_date))
+     polls_2016 <- polls_2016 %>%
+         right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date), 
+                                                   max(polls_2016$end_date), by="days")))
+     polls_2016 <- polls_2016 %>%
+         group_by(end_date) %>%
+         summarise(Clinton = mean(Clinton),
+                   Trump = mean(Trump))
+ 
+     rolling_average <- polls_2016 %>%
+         mutate(Clinton.Margin = Clinton-Trump,
+                Clinton.Avg =  rollapply(Clinton.Margin,width=14,
+                                         FUN=function(x){mean(x, na.rm=TRUE)}, 
+                                         by=1, partial=TRUE, fill=NA, align="right"))
+ }
R> 
R> transformDT <- function(dtin) {
+     pollsDT <- copy(dtin) ## extra work to protect from reference semantics for benchmark
+     pollsDT <- pollsDT[sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"), ]
+     pollsDT[, end_date := as.IDate(end_date)]
+     pollsDT <- pollsDT[ data.table(end_date = seq(min(pollsDT[,end_date]), 
+                                                   max(pollsDT[,end_date]), by="days")), on="end_date"]
+     pollsDT <- pollsDT[, .(Clinton=mean(Clinton), Trump=mean(Trump)), 
+                        by=end_date][, Clinton.Margin := Clinton-Trump]
+     pollsDT[, Clinton.Avg := rollapply(Clinton.Margin, width=14,
+                                        FUN=function(x){mean(x, na.rm=TRUE)}, 
+                                        by=1, partial=TRUE, fill=NA, align="right")]
+ }
R> 
R> res <- microbenchmark(tidy=suppressMessages(transformTV(rdin)),
+                       dt=transformDT(dtin))
R> res
Unit: milliseconds
 expr      min       lq     mean   median       uq      max neval
 tidy 12.54723 13.18643 15.29676 13.73418 14.71008 104.5754   100
   dt  7.66842  8.02404  8.60915  8.29984  8.72071  17.7818   100
R> 

Not quite a factor of two on the small data set, but again a clear advantage. data.table has a reputation for doing really well for large datasets; here we see that it is also faster for small datasets.

Side-by-side

Stripping out the reading as well as the plotting, both of which are about the same, we can compare the essential data operations.

Summary

We found a simple task solved using code and packages from an increasingly popular sub-culture within R, and contrasted it with a second approach. We find the second approach to i) have fewer dependencies, ii) use less code, and iii) run faster.

Now, undoubtedly the former approach will have its staunch defenders (and that is all good and well, after all choice is good and even thirty years later some still debate vi versus emacs endlessly) but I thought it instructive to at least be able to make an informed comparison.

Acknowledgements

My thanks to G. Elliot Morris for a fine example, and of course a fine blog and (if somewhat hyperactive) Twitter account.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 January, 2018 10:40PM

January 20, 2018

Russ Allbery

New year haul

Some newly acquired books. This is a pretty wide variety of impulse purchases, filled with the optimism of a new year with more reading time.

Libba Bray — Beauty Queens (sff)
Sarah Gailey — River of Teeth (sff)
Seanan McGuire — Down Among the Sticks and Bones (sff)
Alexandra Pierce & Mimi Mondal (ed.) — Luminescent Threads (nonfiction anthology)
Karen Marie Moning — Darkfever (sff)
Nnedi Okorafor — Binti (sff)
Malka Older — Infomocracy (sff)
Brett Slatkin — Effective Python (nonfiction)
Zeynep Tufekci — Twitter and Tear Gas (nonfiction)
Martha Wells — All Systems Red (sff)
Helen S. Wright — A Matter of Oaths (sff)
J.Y. Yang — Waiting on a Bright Moon (sff)

Several of these are novellas that were on sale over the holidays; the rest came from a combination of reviews and random on-line book discussions.

The year hasn't been great for reading time so far, but I do have a couple of things ready to review and a third that I'm nearly done with, which is not a horrible start.

20 January, 2018 11:08PM

Shirish Agarwal

PC desktop build, Intel, spectre issues etc.

This is and would be a longish one.

I have been using desktop computers for around a couple of decades now. My first two systems were an Intel Pentium III and then a Pentium Dual-core, the first one on a Kobian/Mercury motherboard. The motherboards were actually called Mercury, a brand which was later sold to Kobian, which kept the brand-name. The motherboards and the CPU/processor used to be cheap. One could set up a decentish low-end system with display for around INR 40k/-, which seemed decent as a country we had just come out of the non-alignment movement and had also chosen to come out of isolationist tendencies (technological and otherwise). Most middle-class income families got their first taste of computers after y2k. There were quite a few y2k incomes which prompted the Government to lower duties further.

One of the highlights during 1991, when satellite TV came, shown by CNN (probably CNN International), was the coming down of the Berlin Wall. There were many of us who were completely ignorant of world politics or of what was happening in other parts of the world.

Computer systems at that time were considered a luxury item and duties were sky-high (between 1992-2001). The launch of Mars Pathfinder and its subsequent successful landing on the Martian surface also catapulted people’s imagination about PCs and micro-processors.

I can still recall the excitement among young people of my age at first seeing the liftoff from Cape Canaveral and then later the processed images from Spirit’s cameras showing a desolate desert-type land. We also witnessed the beginnings of the ‘International Space Station‘ (ISS).

Me and a few of my friends had drunk a lot of Carl Sagan and many other sci-fi coolaids/stories. Star Trek, the movies and the universal values held/shared by them were a major influence on all our lives.

People came to know about citizen-based science projects/distributed science projects, the y2k fear appeared to be unfounded; all these factors and probably a few more prompted the Government of India to reduce duties on motherboards, processors and components, as well as to take computers off the restricted list, which led to competition and finally the common man being able to dream of a system sooner rather than later. Y2K also kick-started the beginnings of the Indian software industry, which is the bread and butter of many a middle-class man and woman in the service industry using technology directly or indirectly.

In 2002 I bought my first system, an Intel Pentium III, i810 chipset (integrated graphics) with 256 MB of SDRAM, which was supposed to be sufficient for the tasks it was being used for (some light gaming, some web-mail, watching movies etc.), running on a Mercury board. I don’t remember the code-name, partly because the code-names are/were really weird and partly because it is just too long ago. I remember using Windows ’98 and trying to install one of the early GNU/Linux variants on that machine. If memory serves right, you had to flick a jumper (like a switch) to use the extended memory.

I do not know/remember what happened, but I think somewhere within a year or two of that time-frame Mercury India filed for bankruptcy and the name and manufacturing were sold to Kobian. After Kobian took over ownership, it said it would neither honor the 3/5 year warranty nor even do repairs on the motherboards Mercury had sold; this created a lot of bad will against the company and relegated it to the bottom of the pile for both experienced and new system-builders. Also, Mercury motherboards weren’t reputed/known to have a long life, although the one I had gave me quite a decent run.

The next machine I purchased was a Pentium Dual-core (around 2009/2010), an LGA Willamette-derived part which had out-of-order execution; the Meltdown bug which is making news nowadays has history going this far back. I think I bought it in 45nm, which was a huge jump from the previous version, although still in the mATX package. Again the board was from Mercury (Intel 845 chipset, DDR2 2 GB RAM, and SATA came to stay).

So Meltdown has been in existence for 10-12 odd years and is in everything which uses either Intel or ARM processors.

As you can probably make out, most systems arrived here 2-3 years later than when they were launched in American and/or European markets. Also, business or tourism travel was neither as easy, smooth nor transparent as it is today. All of which added to the delay in getting new products in India.

Sadly, the Indian market is similar to other countries in that Intel is used in more than 90% of machines. I know of a few institutions (though they are pretty rare) who insisted on and got AMD solutions.

That was the time when Gigabyte came onto the scene, which formed the basis of my Wolfdale-3M 45nm system, which was in the same price range as the earlier models and offered a weeny tiny bit of additional graphics performance. To the best of my knowledge, it was perhaps the first budget motherboard to have solid state capacitors. The mobo-processor bundle used to be in the range of INR 7/8k excluding RAM, cabinet etc. I had a Philips 17″ CRT display which ran a good decade or so, so I just had to get the new cabinet, motherboard, CPU and RAM and was good to go.

A few months later, at a hardware exhibition held in the city, I was invited to an Asus party; Asus was just putting a toe-hold in the Indian market. I went to the do and enjoyed myself. They had a small competition where they asked some questions and asked if people had queries. To my surprise, I found that most people who were there were hardware vendors, and for one reason or another they chose to remain silent. Hence I got an Asus AMD board. This is separate from another Gigabyte motherboard which I won in the same year in another competition in the same time-frame. Both were mid-range motherboards (ATX build).

As I had just bought a Gigabyte (mATX) motherboard and had made the build, I had to give both the motherboards away, one to a friend and one to my uncle and both were pleased with the AMD-based mobos which they somehow paired with AMD processors. At that time AMD had one-upped Intel in both graphics and even bare computing especially at the middle level and they were striving to push into new markets.

Apart from the initial system bought, most of my systems when being changed were in the INR 20-25k/- budget including all and any accessories I bought later.

The only really expensive parts I purchased have been an external hdd (1 TB WD Passport) and then a Viewsonic 17″ LCD, which together set me back around INR 10k/-, but both have given me adequate performance (both have outlived their warranty years), with the monitor being used almost 24×7 over 6 years or so, of course under GNU/Linux, specifically Debian. Both have been extremely good value for the money.

As I had been exposed to both brands’ motherboards, I kept following those and other motherboards as well. What was and has been interesting to observe is that Asus later chose to focus more on the high-end gaming market, while Gigabyte continued to dilute its energy across both the mid- and high-end motherboards.

Cut to 2017 and I had seen quite a few reports –

http://www.pcstats.com/NewsView.cfm?NewsID=131618

http://www.digitimes.com/news/a20170904PD207.html

http://www.guru3d.com/news-story/asus-has-the-largest-high-end-intel-motherboard-share.html

All of which point to the fact that Asus had cornered a large percentage of the market, specifically the gaming market. There are no formal numbers though, as both Asus and Gigabyte choose to release only APAC numbers rather than a country-wide split, which would have made for some interesting reading.

Just so that people do not presume anything, there are about 4-5 motherboard vendors in the Indian market. There is Asus at the top (I believe), followed by Gigabyte, with Intel at a distant 3rd place (because it’s too expensive). There are also pockets of Asrock and MSI, and I know of people who follow them religiously, although their mobos are supposed to be somewhat less expensive than the two above. Asus and Gigabyte do try to fight it out with each other, but each has its core competency I believe, with Asus being used by heavy gamers and overclockers more than Gigabyte.

Anyway, come October 2017 my main desktop died and I was left, as they say, up the creek without a paddle. I didn’t even have Net access for about 3 weeks due to BSNL or PMC’s foolishness, and then later small riots broke out due to the Koregaon Bhima conflict.

This led to a situation where I had to buy/build a system with oldish/half knowledge. I was open to having an AMD system, but both Datacare and Rashi Peripherals, Pune, both of whom used to deal in AMD systems, shared that they had stopped dealing in AMD stuff some time back. While Datacare had AMD mobos, getting processors was an issue. Both vendors are near my home, so if I buy from them getting support becomes a non-issue. I could have gone out of my way to get an AMD processor, but support could have been an issue as I would have had to travel and I do not know those vendors well enough. Hence I fell back to the Intel platform.

I asked around quite a few PC retailers and distributors and found the Asus Prime Z270-P was the only mid-range motherboard available at that time. I did come to know a bit later of other motherboards in the Z270 series, but most vendors didn’t/don’t stock them as there is capital, interest and stocking cost involved.

History – Historically, there has also been a huge time lag between worldwide announcements of motherboards, processors etc., the announcements of sale in India, and actually getting hands on the newest motherboards and processors, as seen above. This has led to quite a bit of frustration for many users. I have known many a soul visiting Lamington Road, Mumbai to get the latest motherboard or processor. Even to date this system flourishes, as Mumbai has an International Airport and there is always demand and people willing to pay a premium for the newest processor/motherboard even before any reviews are in.

I was highly surprised to learn recently that Prime Z370-P motherboards are already selling (just 3 months late) along with Intel 8th generation processors, although these are still more of a trickle of samples rather than the torrent some of the other motherboard-CPU combos might be.

In the end I bought an Intel i7400 chip and an Asus Prime Z270-P motherboard with 2400 MHz Corsair 8 GB RAM and a 4 TB WD Green (5400 rpm) HDD, with a Circle 545 cabinet (and its almost criminal 400 Watt SMPS). I later came to know that it’s not really even 400 Watts, but around 20-25% less. The whole package cost me north of INR 50k/-, and I still need to spend on a better SMPS (probably a Corsair or Coolermaster 80+ 600/650 SMPS) plus a few accessories to complete the system.

I will be changing the PSU most probably next week.

Circle SMPS Picture

Asus motherboard, i7400 and RAM

Disclosure – The neatness you see is not mine. I was unsure if I would be able to put the heatsink on the CPU properly, as that is the most sensitive part while building a system. A bent pin could play havoc as well as void the warranty on the CPU or motherboard or both. The knobs that can be seen on the heatsink fan are something I hadn’t seen before. The vendor did the fixing of the processor on the mobo for me as well as tying up the remaining power cables without my asking, for which I am and was grateful, and I would definitely give him more business as and when I need components.

Future – While it’s ok for now, I’m still using a pretty old 2-speaker setup which I hope to upgrade to a 2.1/3.1 speaker setup; I also hope to get a full 64 GB of 2400 MHz Kingston Razor/G.Skill/Corsair memory and an M.2 512 GB SSD.

If I do get the Taiwan DebConf bursary I hope to buy some or all of the above, plus a Samsung or some other Android/Replicant/Librem smartphone. I have also been looking for a vastly simplified smartphone for my mum, with big letters and everything, but I have failed to find one in the Indian market. Of course this all depends on whether I get the bursary, and even then on whether global warranty and currency exchange work out in my favor vis-a-vis what I would have to pay in India.

Apart from the above, Taiwan is supposed to be a pretty good source of graphic novels, manga comics and lots of RPG games at very cheap prices, with covers and hand-drawn material etc. All of this is based upon a few friends’ anecdotal experiences, so I dunno if all of that would still hold true if I manage to be there.

There are also quite a few chip foundries there, and maybe during DebConf we could have a visit to one of them if possible. It would be rewarding if the visit were to a 45nm or lower chip foundry, as India is still stuck at the 65nm range till date.

I will share my experience with the board, the CPU, the expectations I had from the Intel chip, and the somewhat disappointing experience of using Debian on the new board in the next post; not necessarily Debian’s fault, but rather the free software ecosystem being at fault here.

Feel free to point out any mistakes you find, grammatical or otherwise. The blog post has been in the works for over a couple of weeks, so it’s possible for mistakes to creep in.

20 January, 2018 10:05PM by shirishag75

Dirk Eddelbuettel

Rcpp 0.12.15: Numerous tweaks and enhancements

The fifteenth release in the 0.12.* series of Rcpp landed on CRAN today after just a few days of gestation in incoming/.

This release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, and the 0.12.14 release in November 2017, making it the nineteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1288 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a pretty large number of pull requests by a wide variety of authors. Most of these pull requests are very focused on a particular issue at hand. One was larger and ambitious with some forward-looking code for R 3.5.0; however this backfired a little on Windows and is currently "parked" behind a #define. Full details are below.

Changes in Rcpp version 0.12.15 (2018-01-16)

  • Changes in Rcpp API:

    • Calls from exception handling to Rf_warning() now correctly set an initial format string (Dirk in #777 fixing #776).

    • The 'new' Date and Datetime vectors now have is_na methods too. (Dirk in #783 fixing #781).

    • Protect more temporary SEXP objects produced by wrap (Kevin in #784).

    • Use public R APIs for new_env (Kevin in #785).

    • Evaluation of R code is now safer when compiled against R 3.5 (you also need to explicitly define RCPP_PROTECTED_EVAL before including Rcpp.h). Longjumps of all kinds (condition catching, returns, restarts, debugger exit) are appropriately detected and handled, e.g. the C++ stack unwinds correctly (Lionel in #789). [ Committed but subsequently disabled in release 0.12.15 ]

    • The new function Rcpp_fast_eval() can be used for performance-sensitive evaluation of R code. Unlike Rcpp_eval(), it does not try to catch errors with tryEval in order to avoid the catching overhead. While this is safe thanks to the stack unwinding protection, this also means that R errors are not transformed to an Rcpp::exception. If you are relying on error rethrowing, you have to use the slower Rcpp_eval(). On old R versions Rcpp_fast_eval() falls back to Rcpp_eval() so it is safe to use against any versions of R (Lionel in #789). [ Committed but subsequently disabled in release 0.12.15 ]

    • Overly-clever checks for NA have been removed (Kevin in #790).

    • The included tinyformat has been updated to the current version, Rcpp-specific changes are now more isolated (Kirill in #791).

    • Overly picky fall-through warnings by gcc-7 regarding switch statements are now pre-empted (Kirill in #792).

    • Permit compilation on ANDROID (Kenny Bell in #796).

    • Improve support for NVCC, the CUDA compiler (Iñaki Ucar in #798 addressing #797).

    • Speed up tests for NA and NaN (Kirill and Dirk in #799 and #800).

    • Rearrange stack unwind test code, keep test disabled for now (Lionel in #801).

    • Further condition away protect unwind behind #define (Dirk in #802).

  • Changes in Rcpp Attributes:

    • Addressed a missing Rcpp namespace prefix when generating a C++ interface (James Balamuta in #779).

  • Changes in Rcpp Documentation:

    • The Rcpp FAQ now shows Rcpp::Rcpp.plugin.maker() and not the outdated ::: use applicable to non-exported functions.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 January, 2018 09:53PM

Norbert Preining

TLCockpit v0.8

Today I released v0.8 of TLCockpit, the GUI front-end for the TeX Live Manager tlmgr. I spent the winter holidays updating and polishing it, and also debugging problems that users have reported. Hopefully the new version works better for all.

If you are looking for a general introduction to TLCockpit, please see the blog introducing it. Here I only want to introduce the changes made since the last release:

  • add debug facility: It is now possible to pass -d to tlcockpit to activate debugging. There is also -dd for more verbose debugging (a usage sketch follows below this list).
  • select mirror facility: The edit screen for the repository setting now allows selecting from the current list of mirrors, see the following screenshot:
  • initial loading speedup: Till now we used to parse the json output of tlmgr, which included everything the whole database contains. We now load the initial minimal information via tlmgr info --data and load additional data on demand when details for a package are shown. This should especially make a difference on systems without a compiled json Perl library available.
  • fixed self update: In the previous version, updating the TeX Live Manager itself was not properly working – it was updated but the application itself became unresponsive afterwards. This is hopefully fixed (although this is really tricky).
  • status indicator: The status indicator has moved from the menu bar (where it was somehow a stranger) to below the package listing, and now also includes the currently running command, see screenshot after the next item.
  • nice spinner: Only an eye-candy, but I added a rotating spinner while loading the database, updates, backups, or doing postactions. See the attached screenshot, which also shows the new location of the status indicator and the additional information provided.
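
As a quick usage sketch of the new debug switches (only the -d/-dd flags come from the list above; everything else is just an example invocation):

# start TLCockpit normally
tlcockpit

# with debugging output, or with more verbose debugging
tlcockpit -d
tlcockpit -dd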

I hope that this version is more reliable, stable, and easier to use. As usual, please use the issue page of the github project to report problems.

TeX Live should contain the new version starting from tomorrow.

Enjoy.

20 January, 2018 02:32PM by Norbert Preining

January 19, 2018

Eddy Petrișor

Suppressing color output of the Google Repo tool

On Windows, in the cmd shell, the color control characters generated by the Google Repo tool (or its Windows port made by ESRLabs) or git appear as garbage. Unfortunately, the Google Repo tool, besides having a non-google-able name, lacks documentation regarding its options, so sometimes the only way to find the option I want is to look in the code.
To avoid repeatedly looking over the code to dig this up, future self, here is how you disable color output in the repo tool with the info subcommand:
repo --color=never info
Other options are 'auto' and 'always', but for some reason 'auto' does not do the right thing (tm) on Windows and garbage is still shown with auto.
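
Since git's own colored output shows the same garbage in cmd, the matching knob on the git side (standard git configuration, nothing repo-specific) would be:

git config --global color.ui false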

19 January, 2018 06:51AM by eddyp (noreply@blogger.com)

January 18, 2018

Mike Gabriel

Building packages with Meson and Debhelper version level 11 for Debian stretch-backports

More a reminder for myself, than a blog post...

If you want to backport a project from unstable based on the meson build system and your package uses debhelper to invoke the meson build process, then you need to modify the backported package's debian/control file slightly:

diff --git a/debian/control b/debian/control
index 43e24a2..d33e76b 100644
--- a/debian/control
+++ b/debian/control
@@ -14,7 +14,7 @@ Build-Depends: debhelper (>= 11~),
                libmate-menu-dev (>= 1.16.0),
                libmate-panel-applet-dev (>= 1.16.0),
                libnotify-dev,
-               meson,
+               meson (>= 0.40.0),
                ninja-build,
                pkg-config,
 Standards-Version: 4.1.3

This forces the build to pull in meson from stretch-backports, i.e. a meson version that is at least 0.40.0.

Reasoning: if you build your package against debhelper (>= 11~) from stretch-backports, it will use the --wrap-mode option when invoking meson. However, this option only got added in meson 0.40.0. So you need to make sure that the meson version from stretch-backports gets pulled in for your build, too. The build will fail when using the meson version that we find in Debian stretch.
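
To double-check which meson and debhelper versions the build environment would pick up (plain apt tooling, nothing backport-specific assumed), something like this helps:

# candidate versions as seen from stretch plus stretch-backports
apt-cache policy meson debhelper

# or ask the Debian archive directly (rmadison is in devscripts)
rmadison meson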

18 January, 2018 07:30PM by sunweaver

Joey Hess

cubietruck temperature sensor

I wanted to use 1-wire temperature sensors (DS18B20) with my Cubietruck board, running Debian. The only page I could find documenting this is for the sunxi kernel, not the mainline kernel Debian uses. After a couple of hours of research I got it working, so here goes.

wiring

First you need to pick a GPIO pin to use for the 1-wire signal. The Cubietruck's GPIO pins are documented here, and I chose to use pin PG8. Other pins should work as well, although I originally tried to use PB17 and could not get it to work for an unknown reason. I also tried to use PB18 but there was a conflict with something else trying to use that same pin. To find a free pin, cat /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins and look for a line like: "pin 200 (PG8): (MUX UNCLAIMED) (GPIO UNCLAIMED)"

Now wire the DS18B20 sensor up. With its flat side facing you, the left pin goes to ground, the center pin to PG8 (or whatever GPIO pin you selected), and the right pin goes to 3.3V. Don't forget to connect the necessary 4.7K ohm resistor between the center and right pins.

You can find plenty of videos showing how to wire up the DS18B20 on youtube, which typically also involve a quick config change to a Raspberry Pi running Raspbian to get it to see the sensor. With Debian it's unfortunately quite a lot more complicated, and so this blog post got kind of long.

configuration

We need to get the kernel to enable the GPIO pin. This seems like a really easy thing, but this is where it gets really annoying and painful.

You have to edit the Cubietruck's device tree. So apt-get source linux and in there edit arch/arm/boot/dts/sun7i-a20-cubietruck.dts

In the root section ('/'), near the top, add this:

    onewire_device {
       compatible = "w1-gpio";
       gpios = <&pio 6 8 GPIO_ACTIVE_HIGH>; /* PG8 */
       pinctrl-names = "default";
       pinctrl-0 = <&my_w1_pin>;
    };

In the `&pio` section, add this:

    my_w1_pin: my_w1_pin@0 {
         allwinner,pins = "PG8";
         allwinner,function = "gpio_in";
    };

Note that if you used a different pin than PG8 you'll need to change that. The "pio 6 8" means letter G, pin 8. The 6 is because G is the 7th letter of the alphabet. I don't know where this is documented; I reverse engineered it from another example. Why this can't be hex, or octal, or symbolic names or anything sane, I don't know.

Now you'll need to compile the dts file into a dtb file. One way is to configure the kernel and use its Makefile; I avoided that by first sudo apt-get install device-tree-compiler and then running, in the top of the linux source tree:

cpp -nostdinc -I include -undef -x assembler-with-cpp \
    ./arch/arm/boot/dts/sun7i-a20-cubietruck.dts | \
    dtc -O dtb -b 0 -o sun7i-a20-cubietruck.dtb -

You'll need to install that into /etc/flash-kernel/dtbs/sun7i-a20-cubietruck.dtb on the cubietruck. Then run flash-kernel to finish installing it.
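
Spelled out, the install step might look roughly like this (sudo and the mkdir are my additions; the dtb path is the one mentioned above):

    # Put the compiled dtb where flash-kernel looks for local device trees,
    # then let flash-kernel regenerate the boot files.
    sudo mkdir -p /etc/flash-kernel/dtbs
    sudo cp sun7i-a20-cubietruck.dtb /etc/flash-kernel/dtbs/
    sudo flash-kernel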

use

Now reboot, and if all went well, it'll come up and the GPIO pin will finally be turned on:

# grep PG8 /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins
pin 200 (PG8): onewire_device 1c20800.pinctrl:200 function gpio_in group PG8

And if you picked a GPIO pin that works and got the sensor wired up correctly, in /sys/bus/w1/devices/ there should be a subdirectory for the sensor, using its unique ID. Here I have two sensors connected, which 1-wire makes easy to do, just hang them all off the same wire.. er wires.

root@honeybee:/sys/bus/w1/devices> ls
28-000008290227@  28-000008645973@  w1_bus_master1@
root@honeybee:/sys/bus/w1/devices> cat *-*/w1_slave
f6 00 4b 46 7f ff 0a 10 d6 : crc=d6 YES
f6 00 4b 46 7f ff 0a 10 d6 t=15375
f6 00 4b 46 7f ff 0a 10 d6 : crc=d6 YES
f6 00 4b 46 7f ff 0a 10 d6 t=15375

So, it's 15.37 Celsius in my house. I need to go feed the fire, this took too long to get set up.
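
If you want to read the sensors from a script, a minimal sketch like this should do (it just parses the t= value, which the w1_therm driver reports in thousandths of a degree Celsius):

    # Print each attached DS18B20's reading in degrees Celsius.
    awk -F 't=' '/t=/ { printf "%s: %.3f C\n", FILENAME, $2 / 1000 }' \
        /sys/bus/w1/devices/28-*/w1_slave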

future work

Are you done at this point? I fear not entirely, because what happens when there's a kernel upgrade? If the device tree has changed in some way in the new kernel, you might need to update the modified device tree file. Or it might not boot properly or not work in some way.

With Raspbian, you don't need to modify the device tree. Instead it has support for device tree overlay files, which add some entries to the main device tree. The distribution includes a bunch of useful overlays, including one that enables GPIO pins. The Raspberry Pi's bootloader takes care of merging the main device tree and the selected overlays.

There are u-boot patches to do such merging, or the merging could be done before reboot (by flash-kernel perhaps), but apparently Debian's device tree files are built without phandle based referencing needed for that to work. (See http://elektranox.org/2017/05/0020-dt-overlays/)

There's also a kernel patch to let overlays be loaded on the fly using configfs. It seems to have been around for several years without being merged, for whatever reason, but would avoid this problem nicely if it ever did get merged.

18 January, 2018 03:47AM

January 17, 2018

Thorsten Alteholz

First steps with arm64

As it was Christmas time recently, I wanted to allow myself something special. So I ordered a Macchiatobin from SolidRun. Unfortunately they don't exaggerate with their delivery times and I had to wait about two months for my device. I couldn't celebrate Christmas with it, but fortunately New Year's.

Anyway, first I tried to use the included U-Boot to start the Debian installer from a USB stick. Oh boy, that was a bad idea and in retrospect just a waste of time. But there is debian-arm@l.d.o and Steve McIntyre was so kind as to help me out of my vale of tears.

First I put the EDK2 flash image from Leif on an SD card, set the jumper on the board to boot from it (for SD card boot, the rightmost jumper has to be set!) and off we go. Afterwards I put the debian-testing-arm64-netinst.iso on a USB stick and tried to start this. Unfortunately I was hit by #887110 and had to use a mini installer from here. Installation went smoothly and as a last step I had to start the rescue mode and install grub to the removable media path. It is an extra point in the installer, so no need to enter cryptic commands :-).

Voila, rebooted and my Macchiatobin is up and running.

17 January, 2018 09:52PM by alteholz

hackergotchi for Matthew Garrett

Matthew Garrett

Privacy expectations and the connected home

Traditionally, devices that were tied to logins tended to indicate that in some way - turn on someone's xbox and it'll show you their account name, run Netflix and it'll ask which profile you want to use. The increasing prevalence of smart devices in the home changes that, in ways that may not be immediately obvious to the majority of people. You can configure a Philips Hue with wall-mounted dimmers, meaning that someone unfamiliar with the system may not recognise that it's a smart lighting system at all. Without any actively malicious intent, you end up with a situation where the account holder is able to infer whether someone is home without that person necessarily having any idea that that's possible. A visitor who uses an Amazon Echo is not necessarily going to know that it's tied to somebody's Amazon account, and even if they do they may not know that the log (and recorded audio!) of all interactions is available to the account holder. And someone grabbing an egg out of your fridge is almost certainly not going to think that your smart egg tray will trigger an immediate notification on the account owner's phone that they need to buy new eggs.

Things get even more complicated when there's multiple account support. Google Home supports multiple users on a single device, using voice recognition to determine which queries should be associated with which account. But the account that was used to initially configure the device remains as the fallback, with unrecognised voices ending up being logged to it. If a voice is misidentified, the query may end up being logged to an unexpected account.

There's some interesting questions about consent and expectations of privacy here. If someone sets up a smart device in their home then at some point they'll agree to the manufacturer's privacy policy. But if someone else makes use of the system (by pressing a lightswitch, making a spoken query or, uh, picking up an egg), have they consented? Who has the social obligation to explain to them that the information they're producing may be stored elsewhere and visible to someone else? If I use an Echo in a hotel room, who has access to the Amazon account it's associated with? How do you explain to a teenager that there's a chance that when they asked their Home for contact details for an abortion clinic, it ended up in their parent's activity log? Who's going to be the first person divorced for claiming that they were vegan but having been the only person home when an egg was taken out of the fridge?

To be clear, I'm not arguing against the design choices involved in the implementation of these devices. In many cases it's hard to see how the desired functionality could be implemented without this sort of issue arising. But we're gradually shifting to a place where the data we generate is not only available to corporations who probably don't care about us as individuals, it's also becoming available to people who own the more private spaces we inhabit. We have social norms against bugging our houseguests, but we have no social norms that require us to explain to them that there'll be a record of every light that they turn on or off. This feels like it's going to end badly.

(Thanks to Nikki Everett for conversations that inspired this post)

(Disclaimer: while I work for Google, I am not involved in any of the products or teams described in this post and my opinions are my own rather than those of my employer)

17 January, 2018 09:45PM

Renata D'Avila

Not being perfect

I know I am very late on this update (and also very late on emailing back my mentors). I am sorry. It took me a long time to figure out how to put into words everything that has been going on for the past few weeks.

Let's begin with this: yes, I am so very aware there is an evaluation coming up (in two days) and that it is important "to have at least one piece of work that is visible in the week of evaluation" to show what I have been doing since the beginning of the internship.

But the truth is: as of now, I don't have any code to show. And what that screams to me is that it means that I have failed. I didn't know what to say either to my mentors or in here to explain that I didn't meet everyone's expectations. That I had not been perfect.

So I had to ask what could I learn from this and how could I keep going and working on this project?

Coincidence or not, I was wondering that when I crossed paths (again) with one of the most amazing TED Talks there is:

Reshma Saujani's "Teach girls bravery, not perfection"

And yes, that could be me. Even though I had written down almost every step I had taken trying to solve the problem I got stuck on, I wasn't ready to share all that, not even with my mentors (yes, I can see now how that isn't very helpful). I would rather let them go thinking I am lazy and didn't do anything all this time than to send all those notes about my failure and have them realize I didn't know what they expected me to know or... well, that they'd picked the wrong intern.

What was I trying to do?

As I talked about in my previous post, the EventCalendar macro seemed like a good place to start doing some work. I wanted to add a piece of code to it that would allow exporting the events data to the iCalendar format. Because this is sort of what I did in my contribution for github-icalendar and because the mentor Daniel had suggested something like that, I thought that it would be a good way of getting myself familiarized with how macro development is done for the MoinMoin wiki.

How far did I go?

As I had planned to do, I started by studying the EventMacro.py, to understand how it works, and taking notes.

EventMacro fetches events from MoinMoin pages and uses Python's Pickle module to serialize and to de-serialize the data. This should be okay if you trust the people editing the wiki (and, therefore, creating the events) enough, but this might not be a good option if we start using external sources (such as third-party websites) for event data - at least, not directly on the data gathered. See the warning below, from the Pickle module docs:

Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.

From the code and from the inputs from the mentors, I understand that EventMacro is more about displaying the events, putting them on a wiki page. Indeed, this could be helpful later on, but not exactly for the purpose we want now, which is to have some standalone application to gather data about the events, model this data in the way that we want it to be organized and maybe make it accessible through an API and/or export it as JSON. Then, either MoinMoin or any other FOSS community project could choose how to display and make use of it.

What did go wrong?

But the thing is... even if I had studied the code, I couldn't see it running on my MoinMoin instance. I have tried and tried, but, generally speaking, I got stuck on trying to get macros to work. Standard macros, that come with MoinMoin, work perfectly. But macros from MacroMarket, I couldn't find a way to make them work.

For the EventCalendar macro, I tried my best to follow the instructions on the Installation Guide but I simply couldn't find a way for it to be processed.

Things I did:

  • I downloaded the macro file and renamed it to EventCalendar.py
  • I put it in the local macro directory (yourwiki/data/plugins/macro) and proceeded with the rest of the instructions.
  • When that didn't work, I copied the file to the global macro directory (MoinMoin/macro), it wasn't enough.
  • I made sure to add the .css to all styles, both for common.css and screen.css, still didn't work.
  • I thought that maybe it was the arguments on the macro, so I tried to add it to the wiki page in the following ways:
<<EventCalendar>>

<<EventCalendar(category=CategoryEventCalendar)>>

<<EventCalendar(,category=CategoryEventCalendar)>>

<<EventCalendar(,,category=CategoryEventCalendar)>>

Still, the macro wasn't processed and appeared just like that on the page, even though I had already created pages with that category and added event info to them.

To investigate, I tried using other macros:

These all came with the MoinMoin core and they all worked.

I tried other ones:

That, just like EventCalendar, didn't work.

Going through these macros also made me realize how poorly documented most of them usually are, in particular about the installation and about making them work with the whole system, even if the code is clear. (And to think that at the beginning of this whole thing I had to search and read up on what DocStrings are because the MoinMoin Coding Style says: "That does NOT mean that there should be no docstrings.". Now it seems like some developers didn't know what DocStrings were either.)

I checked permissions, but it couldn't be that, because the downloaded macros have the same permissions as the other macros and they all belong to the same user.

I thought that maybe it was a problem with Python versions or even with the way the MoinMoin installation was done. So I tried some alternatives. First, I tried to install it again on a new CodeAnywhere Ubuntu container, but I still had the same problem.

I tried with a local Debian installation... same problem. Even though Ubuntu is based on Debian, the fact that macros didn't work on either was telling me that the problem wasn't necessarily the distribution, that it didn't matter which packages or libraries each of them came with. The problem seemed to be somewhere else.

Then, I proceeded to analyze the Apache error log to see if I could figure it out.

[Thu Jan 11 00:33:28.230387 2018] [wsgi:error] [pid 5845:tid 139862907651840] [remote ::1:43998] 2018-01-11 00:33:28,229 WARNING MoinMoin.log:112 /usr/local/lib/python2.7/dist-packages/MoinMoin/support/werkzeug/filesystem.py:63: BrokenFilesystemWarning: Detected a misconfigured UNIX filesystem: Will use UTF-8 as filesystem encoding instead of 'ANSI_X3.4-1968'

[Thu Jan 11 00:34:11.089031 2018] [wsgi:error] [pid 5840:tid 139862941255424] [remote ::1:44010] 2018-01-11 00:34:11,088 INFO MoinMoin.config.multiconfig:127 using wiki config: /usr/local/share/moin/wikiconfig.pyc

Alright, the wikiconfig.py wasn't actually set to utf-8, my bad. I fixed it and re-read it again to make sure I hadn't missed anything this time. I restarted the server and... nope, macros still don't work.

So, misconfigured UNIX filesystem? Not quite sure what that was, but I searched for it and it seemed to be easily solved by generating an en_US.UTF-8 locale and/or setting it, right?

Well, these errors really did go away... but even after restarting the apache server, those macros still wouldn't work.

So this is how things went up until today. It ends up with me not having a clue where else to look to try and fix the macros and make them work so I could start coding and having some results... or does it?

This was a post about a failure, but...

Whoever wrote that "often times writing a blog post will help you find the solution you're working on" in the e-mail we received when we were accepted for Outreachy... damn, you were right.

I opened the command history to get my MoinMoin instance running again, so I could verify that the names of the macros that worked and which ones didn't were correct for this post, when...

I cannot believe I couldn't figure it out.

What had been happening all this time? Yes, the .py macro file should go to moin/data/plugin/macro, but not in the directories I was putting it. I didn't realize that all this time, the wiki wasn't actually installed in the directory yourwiki/data/plugins/macro where the extracted source code is. It is installed in /usr/local/share/, so the files should be put in /usr/local/share/moin/data/plugin/macro and of course I should've realized this sooner, after all, I was the one who installed it, but... it happens.

I copied the files there, set the appropriate owner and... IT-- WORKED!
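
In other words, the fix boiled down to something like this (the /usr/local/share/moin prefix is from my installation and the web server user is an assumption, so adjust both as needed):

    # Copy the macro into the wiki instance that is actually being served,
    # not into the extracted source tree, then restart Apache.
    sudo cp EventCalendar.py /usr/local/share/moin/data/plugin/macro/
    sudo chown www-data:www-data /usr/local/share/moin/data/plugin/macro/EventCalendar.py
    sudo service apache2 restart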

Mozilla Firefox screenshot showing MoinMoin wiki with the EventCalendar plugin working and displaying a calendar for January 2018

17 January, 2018 07:49PM by Renata

hackergotchi for Jonathan Dowland

Jonathan Dowland

Announcing "Just TODO It"

just TODO it UI

Recently, I wished to use a trivially-simple TODO-list application whilst working on a project. I had a look through what was available to me in the "GNOME Software" application and was surprised to find nothing suitable. In particular I just wanted to capture a list of actions that I could tick off; I didn't want anything more sophisticated than that (and indeed, more sophistication would mean a learning curve I couldn't afford at the time). I then remembered that I'd written one myself, twelve years ago. So I found the old code, dusted it off, made some small adjustments so it would work on modern systems and published it.

At the time that I wrote it, I found (at least) one other similar piece of software called "Tasks" which used Evolution's TODO-list as the back-end data store. I can no longer find any trace of this software, and the old web host (projects.o-hand.com) has disappeared.

My tool is called Just TODO It and it does very little. If that's what you want, great! You can reach the source via that prior link or jump straight to GitHub: https://github.com/jmtd/todo

17 January, 2018 05:20PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppMsgPack 0.2.1

An update of RcppMsgPack arrived on CRAN today. It contains a number of enhancements Travers had been working on, as well as one thing CRAN asked us to do in making a suggested package optional.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves. RcppMsgPack brings both the C++ headers of MessagePack as well as clever code (in both R and C++) Travers wrote to access MsgPack-encoded objects directly from R.

Changes in version 0.2.1 (2018-01-15)

  • Some corrections and update to DESCRIPTION, README.md, msgpack.org.md and vignette (#6).

  • Update to c_pack.cpp and tests (#7).

  • More efficient packing of vectors (#8).

  • Support for timestamps and NAs (#9).

  • Conditional use of microbenchmark in tests/ as required for Suggests: package [CRAN request] (#10).

  • Minor polish to tests relaxing comparison of timestamp, and avoiding a few g++ warnings (#12 addressing #11).

Courtesy of CRANberries, there is also a diffstat report for this release.

More information is on the RcppMsgPack page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 January, 2018 02:00AM

January 16, 2018

Jamie McClelland

Procrastinating by tweaking my desktop with devilspie2

Tweaking my desktop seems to be my preferred form of procrastination. So, a blog like this is a sure sign I have too much work on my plate.

I have a laptop. I carry it to work and plug it into a large monitor - where I like to keep all my instant or near-instant communications displayed at all times while I switch between workspaces on my smaller laptop screen as I move from email (workspace one), to shell (workspace two), to web (workspace three), etc.

When I'm not at the office, I only have my laptop screen - which has to accommodate everything.

I soon got tired of dragging things around every time I plugged or unplugged the monitor and started accumulating a mess of bash scripts running wmctrl and even calling my own python-wnck script. (At first I couldn't get wmctrl to pin a window but I lived with it. But when gajim switched to gtk3 and my openbox window decorations disappeared, I couldn't even pin my window manually.)

Now I have the following simpler setup.

Manage hot plugging of my monitor.

Symlink to my monitor status device:

0 jamie@turkey:~$ ls -l ~/.config/turkey/monitor.status 
lrwxrwxrwx 1 jamie jamie 64 Jan 15 15:26 /home/jamie/.config/turkey/monitor.status -> /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-DP-1/status
0 jamie@turkey:~$ 

Create a udev rule to place my monitor to the right of my LCD every time the monitor is plugged in and every time it is unplugged.

0 jamie@turkey:~$ cat /etc/udev/rules.d/90-vga.rules 
# When a monitor is plugged in, adjust my display to take advantage of it
ACTION=="change", SUBSYSTEM=="drm", ENV{HOTPLUG}=="1", RUN+="/etc/udev/scripts/vga-adjust"
0 jamie@turkey:~$ 

And here is the udev script:

0 jamie@turkey:~$ cat /etc/udev/scripts/vga-adjust 
#!/bin/bash

logger -t "jamie-udev" "Monitor event detected, waiting 1 second for system to detect change."

# We don't know whether the VGA monitor is being plugged in or unplugged so we
# have to autodetect first. And, it takes a few seconds to assess whether the
# monitor is there or not, so sleep for 1 second.
sleep 1 
monitor_status="/home/jamie/.config/turkey/monitor.status"
status=$(cat "$monitor_status")  

XAUTHORITY=/home/jamie/.Xauthority
if [ "$status" = "disconnected" ]; then
  # The monitor is not plugged in   
  logger -t "jamie-udev" "Monitor is being unplugged"
  xrandr --output DP-1 --off
else
  logger -t "jamie-udev" "Monitor is being plugged in"
  xrandr --output DP-1 --right-of eDP-1 --auto
fi  
0 jamie@turkey:~$
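
After editing the rule or the script, I believe the standard way to reload udev and check that the rule fires is something along these lines (udevadm and journalctl are stock tools, not part of the original setup):

    # Reload udev rules without rebooting:
    sudo udevadm control --reload-rules
    # Follow the "jamie-udev" messages while plugging the monitor in and out:
    journalctl -f -t jamie-udev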

Move windows into place.

So far, this handles ensuring the monitor is activated and placed in the right position. But, nothing has changed in my workspace.

Here's where the devilspie2 configuration comes in:

==> /home/jamie/.config/devilspie2/00-globals.lua <==
-- Collect some global variables to be used throughout.
name = get_window_name();
app = get_application_name();
instance = get_class_instance_name();

-- See if the monitor is plugged in or not. If monitor is true, it is
-- plugged in, if it is false, it is not plugged in.
monitor = false;
device = "/home/jamie/.config/turkey/monitor.status"
f = io.open(device, "rb")
if f then
  -- Read the contents, remove the trailing line break.
  content = string.gsub(f:read "*all", "\n", "");
  if content == "connected" then
    monitor = true;
  end
end


==> /home/jamie/.config/devilspie2/gajim.lua <==
-- Look for my gajim message window. Pin it if we have the monitor.
if string.find(name, "Gajim: conversations.im") then
  if monitor then
    set_window_geometry(1931,31,590,1025);
    pin_window();
  else
    set_window_workspace(4);
    set_window_geometry(676,31,676,725);
    unpin_window();
  end
end

==> /home/jamie/.config/devilspie2/grunt.lua <==
-- grunt is the window I use to connect via irc. I typically connect to
-- grunt via a terminal called spade, which is opened using a-terminal-yoohoo
-- so that bell actions cause a notification. The window is called spade if I
-- just opened it but usually changes names to grunt after I connect via autossh
-- to grunt. 
--
-- If no monitor, put spade in workspace 2, if monitor, then pin it to all
-- workspaces and maximize it vertically.

if instance == "urxvt" then
  -- When we launch, the terminal is called spade, after we connect it
  -- seems to get changed to jamie@grunt or something like that.
  if name == "spade" or string.find(name, "grunt:") then
    if monitor then
      set_window_geometry(1365,10,570,1025);
      set_window_workspace(3);
      -- maximize_vertically();
      pin_window();
    else
      set_window_geometry(677,10,676,375);
      set_window_workspace(2);
      unpin_window();
    end
  end
end

==> /home/jamie/.config/devilspie2/terminals.lua <==
-- Note - these will typically only work after I start the terminals
-- for the first time because their names seem to change.
if instance == "urxvt" then
  if name == "heart" then
    set_window_geometry(0,10,676,375);
  elseif name == "spade" then
    set_window_geometry(677,10,676,375);
  elseif name == "diamond" then
    set_window_geometry(0,376,676,375);
  elseif name == "clover" then
    set_window_geometry(677,376,676,375);
  end
end

==> /home/jamie/.config/devilspie2/zimbra.lua <==
-- Look for my zimbra firefox window. Shows support queue.
if string.find(name, "Zimbra") then
  if monitor then
    unmaximize();
    set_window_geometry(2520,10,760,1022);
    pin_window();
  else
    set_window_workspace(5);
    set_window_geometry(0,10,676,375);
    -- Zimbra can take up the whole window on this workspace.
    maximize();
    unpin_window();
  end
end

And lastly, it is started (and restarted) with:

0 jamie@turkey:~$ cat ~/.config/systemd/user/devilspie2.service 
[Unit]
Description=Start devilspie2, program to place windows in the right locations.

[Service]
ExecStart=/usr/bin/devilspie2

[Install]
WantedBy=multi-user.target
0 jamie@turkey:~$ 

Which I have configured via a key combination that I hit every time I plug in or unplug my monitor.
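
For reference, reloading and restarting the user unit by hand looks something like this (presumably close to what the key combination runs; the exact binding isn't shown here):

    # Pick up changes to the unit file, then restart devilspie2:
    systemctl --user daemon-reload
    systemctl --user restart devilspie2.service

If you wanted it to start automatically at login instead, a user unit would normally be wanted by default.target rather than multi-user.target, as far as I know.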

16 January, 2018 02:51PM

Reproducible builds folks

Reproducible Builds: Weekly report #142

Here's what happened in the Reproducible Builds effort between Sunday December 31 and Saturday January 13 2018:

Media coverage

Development and fixes in key packages

Chris Lamb implemented two reproducibility checks in the lintian Debian package quality-assurance tool:

  • Warn about packages that ship Hypothesis example files. (#886101, report)
  • Warn about packages that override dh_fixperms without calling dh_fixperms as this makes the build vary depending on the current umask(2). (#885910, report)

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

60 package reviews have been added, 43 have been updated and 76 have been removed this week, adding to our knowledge about identified issues.

4 new issue types have been added:

The notes of one issue type were updated:

  • build_dir_in_documentation_generated_by_doxygen: 1, 2

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adam Borowski (2)
  • Adrian Bunk (16)
  • Niko Tyni (1)
  • Chris Lamb (6)
  • Jonas Meurer (1)
  • Simon McVittie (1)

diffoscope development

disorderfs development

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

16 January, 2018 12:00PM

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

OpenSym 2017 Program Postmortem

The International Symposium on Open Collaboration (OpenSym, formerly WikiSym) is the premier academic venue exclusively focused on scholarly research into open collaboration. OpenSym is an ACM conference which means that, like conferences in computer science, it’s really more like a journal that gets published once a year than it is like most social science conferences. The “journal”, in this case, is called the Proceedings of the International Symposium on Open Collaboration and it consists of final copies of papers which are typically also presented at the conference. Like journal articles, papers that are published in the proceedings are not typically published elsewhere.

Along with Claudia Müller-Birn from the Freie Universtät Berlin, I served as the Program Chair for OpenSym 2017. For the social scientists reading this, the role of program chair is similar to being an editor for a journal. My job was not to organize keynotes or logistics at the conference—that is the job of the General Chair. Indeed, in the end I didn’t even attend the conference! Along with Claudia, my role as Program Chair was to recruit submissions, recruit reviewers, coordinate and manage the review process, make final decisions on papers, and ensure that everything makes it into the published proceedings in good shape.

In OpenSym 2017, we made several changes to the way the conference has been run:

  • In previous years, OpenSym had tracks on topics like free/open source software, wikis, open innovation, open education, and so on. In 2017, we used a single track model.
  • Because we eliminated tracks, we also eliminated track-level chairs. Instead, we appointed Associate Chairs or ACs.
  • We eliminated page limits and the distinction between full papers and notes.
  • We allowed authors to write rebuttals before reviews were finalized. Reviewers and ACs were allowed to modify their reviews and decisions based on rebuttals.
  • To assist in assigning papers to ACs and reviewers, we made extensive use of bidding. This means we had to recruit the pool of reviewers before papers were submitted.

Although each of these things have been tried in other conferences, or even piloted within individual tracks in OpenSym, all were new to OpenSym in general.

Overview

Statistics
Papers submitted 44
Papers accepted 20
Acceptance rate 45%
Posters submitted 2
Posters presented 9
Associate Chairs 8
PC Members 59
Authors 108
Author countries 20

The program was similar in size to the ones in the last 2-3 years in terms of the number of submissions. OpenSym is a small but mature and stable venue for research on open collaboration. This year was also similar, although slightly more competitive, in terms of the conference acceptance rate (45%—it had been slightly above 50% in previous years).

As in recent years, there were more posters presented than submitted because the PC found that some rejected work, although not ready to be published in the proceedings, was promising and advanced enough to be presented as a poster at the conference. Authors of posters submitted 4-page extended abstracts for their projects which were published in a “Companion to the Proceedings.”

Topics

Over the years, OpenSym has established a clear set of niches. Although we eliminated tracks, we asked authors to choose from a set of categories when submitting their work. These categories are similar to the tracks at OpenSym 2016. Interestingly, a number of authors selected more than one category. This would have led to difficult decisions in the old track-based system.

distribution of papers across topics with breakdown by accept/poster/reject

The figure above shows a breakdown of papers in terms of these categories as well as indicators of how many papers in each group were accepted. Papers in multiple categories are counted multiple times. Research on FLOSS and Wikimedia/Wikipedia continues to make up a sizable chunk of OpenSym’s submissions and publications. That said, these now make up a minority of total submissions. Although Wikipedia and Wikimedia research made up a smaller proportion of the submission pool, it was accepted at a higher rate. Also notable is the fact that 2017 saw an uptick in the number of papers on open innovation. I suspect this was due, at least in part, to General Chair Lorraine Morgan’s involvement (she specializes in that area). Somewhat surprisingly to me, we had a number of submissions about Bitcoin and blockchains. These are natural areas of growth for OpenSym but have never been a big part of work in our community in the past.

Scores and Reviews

As in previous years, review was single blind in that reviewers’ identities were hidden but authors’ identities were not. Each paper received between 3 and 4 reviews plus a metareview by the Associate Chair assigned to the paper. All papers received at least 3 reviews, and ACs were encouraged to call in a 4th reviewer at any point in the process. In addition to the text of the reviews, we used a -3 to +3 scoring system where borderline papers are scored as 0. Reviewers scored papers using full-point increments.

scores for each paper submitted to opensym 2017: average, distribution, etc

The figure above shows scores for each paper submitted. The vertical grey lines reflect the distribution of scores where the minimum and maximum scores for each paper are the ends of the lines. The colored dots show the arithmetic mean for each score (unweighted by reviewer confidence). Colors show whether the papers were accepted, rejected, or presented as a poster. It’s important to keep in mind that two papers were submitted as posters.

Although Associate Chairs made the final decisions on a case-by-case basis, every paper that had an average score of less than 0 (the horizontal orange line) was rejected or presented as a poster and most (but not all) papers with positive average scores were accepted. Although a positive average score seemed to be a requirement for publication, negative individual scores weren't necessarily showstoppers. We accepted 6 papers with at least one negative score. We ultimately accepted 20 papers—45% of those submitted.

Rebuttals

This was the first time that OpenSym used a rebuttal or author response and we are thrilled with how it went. Although it was entirely optional, almost every team of authors used it! Authors of 40 of our 46 submissions (87%!) submitted rebuttals.

Lower 6
Unchanged 24
Higher 10

The table above shows how average scores changed after authors submitted rebuttals. The table shows that rebuttals’ effect was typically neutral or positive. Most average scores stayed the same but nearly two times as many average scores increased as decreased in the post-rebuttal period. We hope that this made the process feel more fair for authors and I feel, having read them all, that it led to improvements in the quality of final papers.

Page Lengths

In previous years, OpenSym followed most other venues in computer science by allowing submission of two kinds of papers: full papers which could be up to 10 pages long and short papers which could be up to 4. Following some other conferences, we eliminated page limits altogether. This is the text we used in the OpenSym 2017 CFP:

There is no minimum or maximum length for submitted papers. Rather, reviewers will be instructed to weigh the contribution of a paper relative to its length. Papers should report research thoroughly but succinctly: brevity is a virtue. A typical length of a “long research paper” is 10 pages (formerly the maximum length limit and the limit on OpenSym tracks), but may be shorter if the contribution can be described and supported in fewer pages— shorter, more focused papers (called “short research papers” previously) are encouraged and will be reviewed like any other paper. While we will review papers longer than 10 pages, the contribution must warrant the extra length. Reviewers will be instructed to reject papers whose length is incommensurate with the size of their contribution.

The following graph shows the distribution of page lengths across papers in our final program.

histogram of paper lengths for final accepted papers

In the end, 3 of 20 published papers (15%) were over 10 pages. More surprisingly, 11 of the accepted papers (55%) were below the old 10-page limit. Fears that some have expressed that page limits are the only thing keeping OpenSym from publishing enormous rambling manuscripts seem to be unwarranted—at least so far.

Bidding

Although, I won’t post any analysis or graphs, bidding worked well. With only two exceptions, every single assigned review was to someone who had bid “yes” or “maybe” for the paper in question and the vast majority went to people that had bid “yes.” However, this comes with one major proviso: people that did not bid at all were marked as “maybe” for every single paper.

Given a reviewer pool whose diversity of expertise matches that in your pool of authors, bidding works fantastically. But everybody needs to bid. The only problems with reviewers we had were with people that had failed to bid. It might be that reviewers who don't bid are less committed to the conference, more overextended, more likely to drop things in general, etc. It might also be that reviewers who fail to bid get poor matches which cause them to become less interested, willing, or able to do their reviews well and on time.

Having used bidding twice as chair or track-chair, my sense is that bidding is a fantastic thing to incorporate into any conference review process. The major limitations are that you need to build a program committee (PC) before the conference (rather than finding the perfect reviewers for specific papers) and you have to find ways to incentivize or communicate the importance of getting your PC members to bid.

Conclusions

The final results were a fantastic collection of published papers. Of course, it wouldn't have been possible without the huge collection of conference chairs, associate chairs, program committee members, external reviewers, and staff supporters.

Although we tried quite a lot of new things, my sense is that nothing we changed made things worse and many changes made things smoother or better. Although I’m not directly involved in organizing OpenSym 2018, I am on the OpenSym steering committee. My sense is that most of the changes we made are going to be carried over this year.

Finally, it’s also been announced that OpenSym 2018 will be in Paris on August 22-24. The call for papers should be out soon and the OpenSym 2018 paper deadline has already been announced as March 15, 2018. You should consider submitting! I hope to see you in Paris!

This Analysis

OpenSym used the gratis version of EasyChair to manage the conference which doesn’t allow chairs to export data. As a result, data used in this postmortem was scraped from EasyChair using two Python scripts. Numbers and graphs were created using a knitr file that combines R visualization and analysis code with markdown to create the HTML directly from the datasets. I’ve made all the code I used to produce this analysis available in this git repository. I hope someone else finds it useful. Because the data contains sensitive information on the review process, I’m not publishing the data.


This blog post was originally posted on the Community Data Science Collective blog.

16 January, 2018 03:38AM by Benjamin Mako Hill

Russell Coker

More About the Thinkpad X301

Last month I blogged about the Thinkpad X301 I got from a rubbish pile [1]. One thing I didn't realise when writing that post is that the X301 doesn't have the keyboard light that the T420 has. With the T420 I could press the bottom left (FN) and top right (PgUp from memory) keys on the keyboard to turn on a light for the keyboard. This is really good for typing at night. While I can touch type, the small keyboard on a laptop makes it a little difficult, so the light is a feature I found useful. I wrote my review of the X301 before having to use it at night.

Another problem I noticed is that it crashes after running Memtest86+ for between 30 minutes and 4 hours. Memtest86+ doesn’t report any memory errors, the system just entirely locks up. I have 2 DIMMs for it (2G and 4G), I tried installing them in both orders, and I tried with each of them in the first slot (the system won’t boot if only the second slot is filled). Nothing changed. Now it is possible that this is something that might not happen in real use. For example it might only happen due to heat when the system is under sustained load which isn’t something I planned for that laptop. I would discard a desktop system that had such a problem because I get lots of free desktop PCs, but I’m prepared to live with a laptop that has such a problem to avoid paying for another laptop.

Last night the laptop battery suddenly stopped working entirely. I had it unplugged for about 5 minutes when it abruptly went off (no flashing light to warn that the battery was low or anything). Now when I plug it in the battery light flashes orange. A quick Google search indicates that this might mean that a fuse inside the battery pack has blown or that there might be a problem with the system board. Replacing the system board would cost much more than the laptop is worth and even replacing the battery will probably cost more than it's worth. I previously bought a Thinkpad T420 at auction because it didn't cost much more than getting a new battery and PSU for a T61 [2] and I expect I can find a similar deal if I poll the auction sites for a while.

Using an X series Thinkpad has been a good experience and I’ll definitely consider an X series for my next laptop. My previous history of laptops involved going from ones with a small screen that were heavy and clunky (what was available with 90’s technology and cost less than a car) to ones that had a large screen and were less clunky but still heavy. I hadn’t tried small and light with technology from the last decade, it’s something I could really get used to!

By today’s standards the X301 is deficient in a number of ways. It has 64G of storage (the same as my most recent phones) which isn’t much for software development, 6G of RAM which isn’t too bad but is small by today’s standards (16G is a common factory option nowadays), a 1440*900 screen which looks bad in any comparison (less than the last 3 phones I’ve owned), and a slow CPU. No two of these limits would be enough to make me consider replacing that laptop. Even with the possibility of crashing under load it was still a useful system. But the lack of a usable battery in combination with all the other issues makes the entire system unsuitable for my needs. I would be very happy to use a fast laptop with a high resolution screen even without a battery, but not with this list of issues.

Next week I’m going to a conference and there’s no possibility of buying a new laptop before then. So for a week when I need to use a laptop a lot I will have a sub-standard laptop.

It really sucks to have a laptop develop a problem that makes me want to replace it so soon after I got it.

16 January, 2018 03:22AM by etbe

hackergotchi for Axel Beckert

Axel Beckert

Tex Yoda II Mechanical Keyboard with Trackpoint

Here’s a short review of the Tex Yoda II Mechanical Keyboard with Trackpoint, a pointer to the next Swiss Mechanical Keyboard Meetup and why I ordered a $300 keyboard with less keys than a normal one.

Short Review of the Tex Yoda II

Pro
  • Trackpoint
  • Cherry MX Switches
  • Compact but heavy aluminium case
  • Backlight (optional)
  • USB C connector and USB A to C cable with angled USB C plug
  • All three types of Thinkpad Trackpoint caps included
  • Configurable layout with nice web-based configurator (might be opensourced in the future)
  • Fn+Trackpoint = scrolling (not further configurable, though)
  • Case not clipped, but screwed
  • Backlight brightness and Trackpoint speed configurable via key bindings (usually Fn and some other key)
  • Default Fn keybindings as side printed and backlit labels
  • Nice packaging
Contra
  • It’s only a 60% Keyboard (I prefer TKL) and the two common top rows are merged into one, switched with the Fn key.
  • Cursor keys by default (and labeled) on the right side (mapped to Fn + WASD) — maybe good for games, but not for me.
  • ~ on Fn-Shift-Esc
  • Occasional backlight flickering (low frequency)
  • Pulsed LED light effect (i.e. high frequency flickering) on all but the lowest brightness level
  • Trackpoint is very sensitive even in the slowest setting — use Fn+Q and Fn+E to adjust the trackpoint speed (“tps”)
  • No manual included or (obviously) downloadable.
  • Only the DIP switches 1-3 and 6 are documented, 4 and 5 are not. (Thanks gismo for the question about them!)
  • No more included USB hub like the Tex Yoda I had or the HHKB Lite 2 (USB 1.1 only) has.
My Modifications So Far
Layout Modifications Via The Web-Based Yoda 2 Configurator
  • Right Control and Menu key are Right and Left cursors keys
  • Fn+Enter and Fn+Shift are Up and Down cursor keys
  • Right Windows key is the Compose key (done in software via xmodmap; see the sketch below)
  • Middle mouse button is of course a middle click (not Fn as with the default layout).
Other Modifications
  • Clear dampening o-rings (clear, 50A) under each key cap for a more silent typing experience
  • Braided USB cable
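
The xmodmap part is a one-liner; something like the following should do it (the keysym name Super_R is an assumption and may differ depending on your keymap):

    # Turn the right Windows/Super key into the Compose (Multi_key) key:
    xmodmap -e 'keysym Super_R = Multi_key'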

Next Swiss Mechanical Keyboard Meetup

On Sunday, the 18th of February 2018, the 4th Swiss Mechanical Keyboard Meetup will happen, this time at ETH Zurich, building CAB, room H52. I’ll be there with at least my Tex Yoda II and my vintage Cherry G80-2100.

Why I ordered a $300 keyboard

(JFTR: It was actually USD $299 plus shipping from the US to Europe and customs fee in Switzerland. Can’t exactly find out how much of shipping and customs fee were actually for that one keyboard, because I ordered several items at once. It’s complicated…)

I always was and still am a big fan of Trackpoints as found on IBM and Lenovo Thinkpads as well as on a few other manufacturers' laptops.

For a while I just used Thinkpads as my private everyday computer, first a Thinkpad T61, later a Thinkpad X240. At some point I also wanted a keyboard with Trackpoint on my workstation at work. So I ordered a Lenovo Thinkpad USB Keyboard with Trackpoint. Then I decided that I want a permanent workstation at home again and ordered two more such keyboards: One for the workstation at home, one for my Debian GNU/kFreeBSD running ASUS EeeBox (not affected by Meltdown or Spectre, yay! :-) which I often took with me to staff Debian booths at events. There, a compact keyboard with a built-in pointing device was perfect.

Then I met the guys from the Swiss Mechanical Keyboard Meetup at their 3rd meetup (pictures) and knew: I need a mechanical keyboard with Trackpoint.

IBM built one Model M with Trackpoint, the M13, but they're hard to get. For example, ClickyKeyboards sells them, but doesn't publish the price tag. :-/ Additionally, back then only two mouse buttons were usual and I really need the third mouse button for unix-style pasting.

Then there’s the Unicomp Endura Pro, the legit successor of the IBM Model M13, but it’s only available with an IMHO very ugly color combination: light grey key caps in a black case. And they want approximately 50% of the price as shipping costs (to Europe). Additionally it didn’t have some other nice keyboard features I started to love: Narrow bezels are nice and keyboards with backlight (like the Thinkpad X240 ff. has) have their advantages, too. So … no.

Soon I found what I was looking for: the Tex Yoda, a nice, modern and quite compact mechanical keyboard with Trackpoint. Unfortunately it has been sold out for quite a few years, and more than 5000 people on Massdrop were waiting for its reintroduction.

And then the unexpected happened: the Tex Yoda II was announced. I knew I had to get one. From then on the main question was when and where it would be available. To my surprise it was not on Massdrop but at a rather normal dealer, MechanicalKeyboards.com.

At that time a friend heard me talking about mechanical keyboards and about being unsure which keyboard switches I should order. He offered to lend me his KBTalking ONI TKL (Ten Key Less) keyboard with Cherry MX Brown switches for a while. That was great, because in theory MX Brown switches were likely the most fitting ones for me. He also gave me two other non-functional keyboards with other Cherry MX switch colors (variants) for comparison. As another keyboard to compare I had my programmable Cherry G80-2100 from the early ’90s with vintage Cherry MX Black switches. Another keyboard to compare with is my Happy Hacking Keyboard (HHKB) Lite 2 (PD-KB200B/U) which I got as a gift a few years ago. While the HHKB once was a status symbol amongst hackers and system administrators, the old models (like this one) only had membrane type keyboard switches. (They nevertheless still seem to be built, but are only sold in Japan.)

I noticed that I was quickly able to type faster with the Cherry MX Brown switches and the TKL layout than with the classic Thinkpad layout and its rubber dome switches or with the HHKB. So two things became clear:

  • At least for now I want Cherry MX Brown switches.
  • I want a TKL (ten key less) layout, i.e. one without the number block but with the cursor block. As with the Lenovo Thinkpad USB Keyboards and the HHKB, I really like the cursor keys being in the easy-to-reach lower right corner. The number pad just gets in the way of that.

Unfortunately the Tex Yoda II was without that cursor block. But since it otherwise fitted perfectly into my wishlist (Trackpoint, Cherry MX Brown switches available, Backlight, narrow bezels, heavy weight), I had to buy one once available.

So in early December 2017, I ordered a Tex Yoda II White Backlit Mechanical Keyboard (Brown Cherry MX) at MechanicalKeyboards.com.

Because I was nevertheless keen on a TKL-sized keyboard I also ordered a Deck Francium Pro White LED Backlit PBT Mechanical Keyboard (Brown Cherry MX) which has an ugly font on the key caps, but was available for a reduced price at that time, and the controller got quite good reviews. And there was that very nice Tai-Hao 104 Key PBT Double Shot Keycap Set - Orange and Black, so the font issue was quickly solved with keycaps in my favourite colour: orange. :-)

The package arrived in early January. The aluminum case of the Tex Yoda II was even nicer than I thought. Unfortunately they’ve sent me a Deck Hassium full-size keyboard instead of the wanted TKL-sized Deck Francium. But the support of MechanicalKeyboards.com was very helpful and I assume I can get the keyboard exchanged at no cost.

16 January, 2018 02:38AM by Axel Beckert (abe+blog@deuxchevaux.org)

January 15, 2018

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Retpoline-enabled GCC

Since I assume there are people out there that want Spectre-hardened kernels as soon as possible, I pieced together a retpoline-enabled build of GCC. It's based on the latest gcc-snapshot package from Debian unstable with H.J.Lu's retpoline patches added, but built for stretch.

Obviously this is really scary prerelease code and will possibly eat babies (and worse, it hasn't taken into account the last-minute change of retpoline ABI, so it will break with future kernels), but it will allow you to compile 4.15.0-rc8 with CONFIG_RETPOLINE=y, and also allow you to assess the cost of retpolines (-mindirect-branch=thunk) in any particularly sensitive performance userspace code.
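
As a rough sketch of the latter, assuming the rebuilt compiler lands in the usual gcc-snapshot location on Debian, you could compile a performance-sensitive translation unit with and without the flag and compare (the file name here is just a placeholder):

/usr/lib/gcc-snapshot/bin/gcc -O2 -c hot_path.c -o hot_path-plain.o
/usr/lib/gcc-snapshot/bin/gcc -O2 -mindirect-branch=thunk -c hot_path.c -o hot_path-retpoline.o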

There will be upstream backports at least to GCC 7, but probably pretty far back (I've seen people talk about all the way to 4.3). So you won't have to run my crappy home-grown build for very long—it's a temporary measure. :-)

Oh, and it made Stockfish 3% faster than with GCC 6.3! Hooray.

15 January, 2018 09:28PM

hackergotchi for Cyril Brulebois

Cyril Brulebois

Quick recap of 2017

I haven’t been posting anything on my personal blog in a long while, let’s fix that!

Partial reason for this is that I’ve been busy documenting progress on the Debian Installer on my company’s blog. So far, the following posts were published there:

After the Stretch release, it was time to attend DebConf’17 in Montreal, Canada. I’ve presented the latest news on the Debian Installer front there as well. This included a quick demo of my little framework which lets me run automatic installation tests. Many attendees mentioned openQA as the current state of the art technology for OS installation testing, and Philip Hands started looking into it. Right now, my little thing is still useful as it is, helping me reproduce regressions quickly, and testing bug fixes… so I haven’t been trying to port that to another tool yet.

I also gave another presentation in two different contexts: once at a local FLOSS meeting in Nantes, France and once during the mini-DebConf in Toulouse, France. Nothing related to Debian Installer this time, as the topic was how I helped a company upgrade thousands of machines from Debian 6 to Debian 8 (and to Debian 9 since then). It was nice to have Evolix people around, since we shared our respective experience around automation tools like Ansible and Puppet.

After the mini-DebConf in Toulouse, another event: the mini-DebConf in Cambridge, UK. I tried to give a lightning talk about “how snapshot.debian.org helped save the release(s)” but clearly speed was lacking, and/or I had too many things to present, so that didn’t work out as well as I hoped. Fortunately, no time constraints when I presented that during a Debian meet-up in Nantes, France. :)

Since Reproducible Tails builds were announced, it seemed like a nice opportunity to document how my company got involved in early work on reproducibility for the Tails project.

On an administrative level, I’m already done with all the paperwork related to the second financial year. \o/

Next things I’ll likely write about: the first two D-I Buster Alpha releases (many blockers kept popping up, it was really hard to release), and a few more recent release critical bug reports.

15 January, 2018 11:00AM

hackergotchi for Daniel Pocock

Daniel Pocock

RHL'18 in Saint-Cergue, Switzerland

RHL'18 was held at the centre du Vallon à St-Cergue, the building in the very center of this photo, at the bottom of the piste:

People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and skiing. This event is a lot of fun and I would highly recommend that people look out for the next edition. (subscribe to rhl-annonces on lists.swisslinux.org for a reminder email)

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:

It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, distributing stickers and inviting people to future events. My previous blog contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018, please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian; several of them need co-mentors, so please contact me if you are interested.

15 January, 2018 08:02AM by Daniel.Pocock

January 14, 2018

hackergotchi for Sean Whitton

Sean Whitton

lastjedi

A few comments on Star Wars: The Last Jedi.

Vice Admiral Holdo’s subplot was a huge success. She had to make a very difficult call over which she knew she might face a mutiny from the likes of Poe Dameron. The core of her challenge was that there was no speech or argument she could have given that would have placated Dameron and restored unity to the crew. Instead, Holdo had to press on in the face of that disunity. This reflects the fact that, sometimes, living as one should demands pressing on in the face of deep disagreement with others.

Not making it clear that Dameron was in the wrong until very late in the film was a key component of the successful portrayal of the unpleasantness of what Holdo had to do. If instead it had become clear to the audience early on that Holdo’s plan was obviously the better one, we would not have been able to observe the strength of Holdo’s character in continuing to pursue her plan despite the mutiny.

One thing that I found weak about Holdo was her dress. You cannot be effective on the frontlines of a hot war in an outfit like that! Presumably the point was to show that women don’t have to give up their femininity in order to take tough tactical decisions under pressure, and that’s indeed something worth showing. But this could have been achieved by much more subtle means. What was needed was to have her be the character with the most feminine outfit, and it would have been possible to fulfill that condition by having her wear something much more practical. Thus, having her wear that dress was crude and implausible overkill in the service of something otherwise worth doing.

I was very disappointed by most of the subplot with Rey and Luke: both the content of that subplot, and its disconnection from the rest of film.

Firstly, the content. There was so much that could have been explored that was not explored. Luke mentions that the Jedi failed to stop Darth Sidious “at the height of their powers”. Well, what did the Jedi get wrong? Was it the Jedi code; the celibacy; the bureaucracy? Is their light side philosophy too absolutist? How are Luke’s beliefs about this connected to his recent rejection of the Force? When he lets down his barrier and reconnects with the Force, Yoda should have had much more to say. The Force is, perhaps, one big metaphor for certain human capacities not emphasised by our contemporary culture. It is at the heart of Star Wars, and it was at the heart of Empire and Rogue One. It ought to have been at the heart of The Last Jedi.

Secondly, the lack of integration with the rest of the film. One of the aspects of Empire that enables its importance as a film, I suggest, is the tight integration and interplay between the two main subplots: the training of Luke under Yoda, and attempting to shake the Empire off the trail of the Millennium Falcon. Luke wants to leave the training unfinished, and Yoda begs him to stay, truly believing that the fate of the galaxy depends on him completing the training. What is illustrated by this is the strengths and weaknesses of both Yoda’s traditional Jedi view and Luke’s desire to get on with fighting the good fight, the latter of which is summed up by the binary sunset scene from A New Hope. Tied up with this desire is Luke’s love for his friends; this is an important strength of his, but Yoda has a point when he says that the Jedi training must be completed if Luke is to be ultimately successful. While the Yoda subplot and what happens at Cloud City could be independently interesting, it is only this integration that enables the film to be great. The heart of the integration is perhaps the Dark Side Cave, where two things are brought together: the challenge of developing the relationship with oneself possessed by a Jedi, and the threat posed by Darth Vader.

In the Last Jedi, Rey just keeps saying that the galaxy needs Luke, and eventually Luke relents when Kylo Ren shows up. There was so much more that could have been done with this! What is it about Rey that enables her to persuade Luke? What character strengths of hers are able to respond adequately to Luke’s fear of the power of the Force, and doubt regarding his abilities as a teacher? Exploring these things would have connected together the rebel evacuation, Rey’s character arc and Luke’s character arc, but these three were basically independent.

(Possibly I need to watch the cave scene from The Last Jedi again, and think harder about it.)

14 January, 2018 11:54PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.14

Another small maintenance release, version 0.6.14, of the digest package arrived on CRAN and in Debian today.

digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'crc32', 'xxhash' and 'murmurhash' algorithms) permitting easy comparison of R language objects.

Just like release 0.6.13 a few weeks ago, this release accommodates another request by Luke and Tomas and changes two uses of NAMED to MAYBE_REFERENCED which helps in the transition to the new reference counting model in R-devel. Thierry also spotted a minor wart in how sha1() tested type for matrices and corrected that, and I converted a few references to https URLs and corrected one now-dead URL.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 January, 2018 10:24PM

hackergotchi for Mario Lang

Mario Lang

I pushed an implementation of myself to GitHub

Roughly 4 years ago, I mentioned that there appears to be an esoteric programming language which shares my full name.

I know, it is really late, but two days ago, I discovered Racket. As a Lisp person, I immediately felt at home. And realizing how the language dispatch mechanism works, I couldn't resist writing a Racket implementation of MarioLANG. A nice play on words and a good toy project to get my feet wet.

Racket programs always start with #lang. How convenient. MarioLANG programs for Racket therefore look something like this:

#lang mario
++++++++++++
===========+:
           ==

So much about abusing coincidences. Phew, this was a fun weekend project! And it has some potential for more challenges. Right now, it is only an interpreter, because it appears to be tricky to compile a 2d instruction "space" to traditional code. MarioLANG not only allows for nested loops as BrainFuck does, it also includes weird concepts like the reversal of the instruction pointer direction. Coupled with the "skip" ([) instruction, this allows creating loops which have two exit conditions and reverse code execution on every pass. Something like this:

@[ some brainfuck [@
====================

And since this is a 2d programming language, this theoretical loop could be entered by jumping onto any of the instructions in between from above. And the heading could be either leftward or rightward when entering.

Discovering these patterns and translating them to compilable code is quite beyond me right now. Let's see what time will bring.

14 January, 2018 09:22PM by Mario Lang

Iustin Pop

SSL migration

This week I managed to finally migrate my personal website to SSL, and on top of that migrate the SMTP/IMAP services to certificates signed by a "proper" CA (instead of my own). This however was more complex than I thought…

Let's encrypt?

I first wanted to do this when Let's Encrypt became available, but the way it works - short-term certificates with automated renewal - put me off at first. The certbot tool needs to make semi-arbitrary outgoing requests to renew the certificates, and on public machines I have a locked-down outgoing traffic policy. So I gave up, temporarily…

I later found out that at least for now (for the current protocol), certbot only needs to talk to a certain API endpoint, and after some more research, I realized that the http-01 protocol is very straight-forward, only needing to allow some specific plain http URLs.

So then:

Issue 1: allowing outgoing access to a given API endpoint, somewhat restricted. I solved this by using a proxy, forcing certbot to go through it via env vars, learning about systemctl edit on the way, and from the proxy, only allowing that hostname. Quite weak, but at least not "open policy".
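
A minimal sketch of that setup, assuming Debian's certbot.service unit and a proxy reachable at proxy.example:3128 (both names are placeholders):

sudo systemctl edit certbot.service
# in the editor that opens, add:
#   [Service]
#   Environment="https_proxy=http://proxy.example:3128"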

Issue 2: due to how http-01 works, it requires to leave some specific paths under http, which means you can't have (in Apache) a "redirect everything to https" config. While fixing this I learned about mod_macro, which is quite interesting (and doesn't need an external pre-processor).

The only remaining problem is that you can't automatically renew certificates for non-externally-accessible systems; the dns protocol also needs changing externally-visible state, so it's more or less the same. So:

Issue 3: For internal websites, still need a solution if own CA (self-signed, needs certificates added to clients) is not acceptable.

How did it go?

It seems that using SSL is about more than just SSLEngine on. I learned quite a few things in this exercise.

CAA

DNS Certification Authority Authorization is pretty nice, and although it's not a strong guarantee (against malicious CAs), it gives some more signals that proper clients could check ("For this domain, only this CA is expected to sign certificates"); also, trivial to configure, with the caveat that one would need DNSSEC as well for end-to-end checks.

OCSP stapling

I was completely unaware of OCSP Stapling, and yay, seems like a good solution to actually verifying that the certs were not revoked. However… there are many issues with it:

  • there needs to be proper configuration on the webserver to not cause more problems than without; Apache, at least, needs the cache lifetime increased, sending error responses disabled (for transient CA issues), etc.
  • but even more, it requires the web server user to be able to make "random" outgoing requests, which IMHO is a big no-no
  • even the command line tools (i.e. openssl ocsp) are somewhat deficient: no proxy support (while s_client can use one)

So the proper way to do this seems to be a separate piece of software, isolated from the webserver, that does proper/eager refresh of certificates while handling errors well.

Issue 4: No OCSP until I find a good way to do it.
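
Until then, a quick way to see what a given server currently sends back (hostname is a placeholder):

openssl s_client -connect www.example.org:443 -servername www.example.org -status </dev/null 2>/dev/null | grep -i -A 2 'OCSP response'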

HSTS, server-side and preloading

HTTP Strict Transport Security represents a commitment to encryption: once published with the recommended lifetime, browsers will remember that the website shouldn't be accessed over plain http, so you can't roll back.

Preloading HSTS is even stronger, and so far I haven't done it. Seems worthwhile, but I'll wait another week or so ☺ It's easily doable online.
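
Either way, checking that the header actually gets served is a one-liner (hostname again a placeholder):

curl -sI https://www.example.org/ | grep -i strict-transport-security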

HPKP

HTTP Public Key Pinning seems dangerous, at least according to some posts. Properly deployed, it would solve a number of problems with the public key infrastructure, but it is still complex and a lot of overhead.

Certificate chains

Something I didn't know before is that the servers are supposed to serve the entire chain; I thought, naïvely, that just the server certificate is enough, since the browsers will have the root CA, but the intermediaries seem to be problematic.

So, one needs to properly serve the full chain (Let's Encrypt makes this trivial, by the way), and also monitor that it is so.

Ciphers and SSL protocols

OpenSSL disabled SSLv2 in recent builds, but at least Debian stable still has SSLv3+ enabled and Apache does not disable it, so if you put your shiny new website through an SSL checker you get many issues (related strictly to ciphers).

I spent a bit of time researching and getting to the conclusion that:

  • every reasonable client (for my small webserver) supports TLSv1.1+, so disabling SSLv3/TLSv1.0 solved a bunch of issues
  • however, even for TLSv1.1+, a number of ciphers are not recommended by US standards, but going into explicit cipher disable is a pain because I don't see a way to make it "cheap" (without needing manual maintenance); so there's that, my website is not HIPAA compliant due to Camellia cipher.

Issue 5: Weak default configs

Issue 6: Getting perfect ciphers not easy.

However, while not perfect, getting a proper config once you did the research is pretty trivial in terms of configuration.

My apache config. Feedback welcome:

SSLCipherSuite HIGH:!aNULL
SSLHonorCipherOrder on
SSLProtocol all -SSLv3 -TLSv1

And similarly for dovecot:

ssl_cipher_list = HIGH:!aNULL
ssl_protocols = !SSLv3 !TLSv1
ssl_prefer_server_ciphers = yes
ssl_dh_parameters_length = 4096

The last line there - the dh_params - I found via nmap, as my previous config had it at 1024 bits, which is weaker than the key, defeating the purpose of a long key. Which leads to the next point:

DH parameters

It seems that DH parameters can be an issue, in the sense that way too many sites/people reuse the same params. Dovecot (in Debian) generates its own, but Apache (AFAIK) does not, and needs explicit configuration added to use your own.

Issue 7: Investigate DH parameters for all software (postfix, dovecot, apache, ssh); see instructions.
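
For the Apache side, a possible sketch (paths are placeholders; the directive needs Apache 2.4.8+ built against OpenSSL 1.0.2+):

openssl dhparam -out /etc/ssl/private/dhparams.pem 4096
# then point the Apache TLS configuration at the file:
#   SSLOpenSSLConfCmd DHParameters /etc/ssl/private/dhparams.pem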

Tools

A number of interesting tools:

  • Online resources to analyse https config: e.g. SSL labs, and htbridge; both give very detailed information.
  • CAA checker (but this is trivial).
  • nmap ciphers report: nmap --script ssl-enum-ciphers, and very useful, although I don't think this works for STARTTLS protocols.
  • Cert Spotter from SSLMate. This seems to be useful as a complement to CAA (CAA being the policy, and Cert Spotter the monitoring for said policy), but it goes beyond it (key sizes, etc.); for the expiration part, I think nagios/icinga is easier if you already have it setup (check_http has options for lifetime checks).
  • Certificate chain checker; trivial, but a useful extra check that the configuration is right.

Summary

Ah, the good old days of plain http. SSL seems to add a lot of complexity; I'm not sure how much is needed and how much could actually be removed by smarter software. But, not too bad, a few evenings of study is enough to get a start; probably the bigger cost is in the ongoing maintenance and keeping up with the changes.

Still, a number of unresolved issues. I think the next goal will be to find a way to properly do OCSP stapling.

14 January, 2018 10:05AM

Daniel Leidert

Make 'bts' (devscripts) accept TLS connection to mail server with self signed certificate

My mail server runs with a self signed certificate. So bts, configured like this ...


BTS_SMTP_HOST=mail.wgdd.de:587
BTS_SMTP_AUTH_USERNAME='user'
BTS_SMTP_AUTH_PASSWORD='pass'

...lately refused to send mails with this error:


bts: failed to open SMTP connection to mail.wgdd.de:587
(SSL connect attempt failed error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed)

After searching a bit, I found a way to fix this locally without turning off the server certificate verification. The fix belongs in the send_mail() function. When calling the Net::SMTPS->new() constructor, it is possible to add the fingerprint of my self-signed certificate like this (the SSL_fingerprint line is the addition):


if (have_smtps) {
    $smtp = Net::SMTPS->new($host, Port => $port,
                            Hello => $smtphelo, doSSL => 'starttls',
                            SSL_fingerprint => 'sha1$hex-fingerprint')
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
} else {
    $smtp = Net::SMTP->new($host, Port => $port, Hello => $smtphelo)
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
}
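
To get the sha1 fingerprint to paste in there, something like this should work (host and port as in the configuration above; the colons in the output may need to be stripped):

openssl s_client -connect mail.wgdd.de:587 -starttls smtp </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha1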

Pretty happy to being able to use the bts command again.

14 January, 2018 02:46AM by Daniel Leidert (noreply@blogger.com)

Andreas Bombe

Fixing a Nintendo Game Boy Screen

Over the holidays my old Nintendo Game Boy (the original DMG-01 model) resurfaced. It works, but the display had a bunch of vertical lines near the left and right borders that stayed blank. Apparently this is a common problem with these older Game Boys, and the solution is to apply heat to the upper side of the connector foil to resolder the contacts hidden underneath. There are lots of tutorials and videos on the subject so I won't go into much detail here.

Just one thing: The easiest way is to use a soldering iron (the foil is pretty heat resistant, it has to be soldered during production after all) and move it along the top at the affected locations. Which I tried at first and it kind of works but takes ages. Some columns reappear, others disappear, reappeared columns disappear again… In someone’s comment I read that they needed over five minutes until it was fully fixed!

So… simply apply a small drop of solder to the tip. That’s what you do for better heat transfer in normal soldering and of course it also works here (since the foil connector back doesn’t take solder this doesn’t make a mess or anything). That way, the missing columns reappeared practically instantly at the touch of the solder iron and stayed fixed. Temperature setting was 250°C, more than sufficient for the task.

This particular Game Boy always had issues with the speaker stopping working, but we never had it replaced, I think because the problem was intermittent. After locating the bad solder joint on the connector and reheating it, this problem was also fixed. Basically this almost 28-year-old device is now in better working condition than it ever was.

14 January, 2018 12:28AM

January 12, 2018

hackergotchi for Norbert Preining

Norbert Preining

Scala: debug logging facility and adjustment of logging level in code

As soon as users start to use your program, you want to implement some debug facilities with logging, and allow them to be turned on via command line switches or GUI elements. I was surprised that doing this in Scala wasn’t as easy as I thought, so I collected the information on how to set it up.

Basic ingredients are the scala-logging library, which wraps up slf4j, the Simple Logging Facade for Java, and a compatible backend; I am using logback, a successor of Log4j.

At the current moment adding the following lines to your build.sbt will include the necessary libraries:

libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.7.2"
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.3"

Next is to set up the default logging by adding a file src/main/resources/logback.xml containing at least the following entry for logging to stdout:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="info">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

Note the default level here is set to info. More detailed information on the format of the logback.xml can be found here.

In the scala code a simple mix-in of the LazyLogging trait is enough to get started:

import com.typesafe.scalalogging.LazyLogging

object ApplicationMain extends App with LazyLogging {
  ...
  logger.trace(...)
  logger.debug(...)
  logger.info(...)
  logger.warn(...)
  logger.error(...)

(The above commands are in order of increasing seriousness.) The messages will only be shown if the logger call is at least as serious as the level configured in logback.xml (INFO by default). That means that, with the default, anything at level trace or debug will not be shown.

But we don’t want to always ship a new program with a different logback.xml, so changing the default log level programmatically is more or less a strict requirement. Fortunately a brave soul posted a solution on stackexchange, namely

import ch.qos.logback.classic.{Level,Logger}
import org.slf4j.LoggerFactory

  LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).
    asInstanceOf[Logger].setLevel(Level.DEBUG)

This can be used to evaluate command line switches and activate debugging on the fly. The way I often do this is to allow flags -q, -qq, -d, and -dd for quiet, extra quiet, debug, extra debug, which would be translated to the logging levels warning, error, debug, and trace, respectively. Multiple invocations select the maximum debug level (so -q -d does turn on debugging).

This can be activated by the following simple code:

val cmdlnlog: Int = args.map( {
    case "-d" => Level.DEBUG_INT
    case "-dd" => Level.TRACE_INT
    case "-q" => Level.WARN_INT
    case "-qq" => Level.ERROR_INT
    case _ => -1
  } ).foldLeft(Level.OFF_INT)(scala.math.min(_,_))
if (cmdlnlog == -1) {
  // Unknown log level has been passed in, error out
  Console.err.println("Unsupported command line argument passed in, terminating.")
  sys.exit(0)
}
// if nothing has been passed on the command line, use INFO
val newloglevel = if (cmdlnlog == Level.OFF_INT) Level.INFO_INT else cmdlnlog
LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).
  asInstanceOf[Logger].setLevel(Level.toLevel(newloglevel))

where args are the command line parameters (in case of ScalaFX that would be parameters.unnamed, in case of a normal Scala application the argument to the main entry function). More complicated command line arguments of course need a more sophisticated approach.

Hope that helps.

12 January, 2018 11:18PM by Norbert Preining

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, December 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 142 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change, staying at 183 hours per month. It would be nice if we could continue to find new sponsors, as the amount of work seems to be slowly growing too.

The security tracker currently lists 21 packages with a known CVE and the dla-needed.txt file 16 (we’re a bit behind in CVE triaging apparently). Both numbers show a significant drop compared to last month. Yet the number of DLAs released was not larger than usual (30); instead it looks like December brought us fewer new security vulnerabilities to handle, and at the same time we used this opportunity to handle lower-priority packages that had been kept on the side for multiple months.

Thanks to our sponsors

New sponsors are in bold (none this month).

12 January, 2018 02:15PM by Raphaël Hertzog

hackergotchi for Jonathan Dowland

Jonathan Dowland

Jason Scott Talks His Way Out Of It

I've been thoroughly enjoying the Jason Scott Talks His Way Out Of It Podcast by Jason Scott (of the Internet Archive and Archive Team, amongst other things) and perhaps you will too.

Scott started this podcast and a corresponding Patreon/LibrePay/Ko-Fi/Paypal/etc funding stream in order to help him get out of debt. He's candid about getting in and out of debt within the podcast itself; but he also talks about his work at The Internet Archive, the history of Bulletin-Board Systems, Archive Team, and many other topics. He's a good speaker and it's well worth your time. Consider supporting him too!

This reminds me that I am overdue writing an update on my own archiving activities over the last few years. Stay tuned…

12 January, 2018 01:59PM

hackergotchi for Norbert Preining

Norbert Preining

Debian/TeX Live 2017.20180110-1 – the big rework

In short succession a new release of TeX Live for Debian – what could that bring? While there are not a lot of new and updated packages, there is a lot of restructuring of the packages in Debian, mostly trying to placate the voices complaining that the TeX Live packages are getting bigger and bigger and bigger (which is true). In this release we have introduced two measures to allow for smaller installations: optional font package dependencies and a downgrade of the -doc packages to suggests.

Let us discuss the two changes, first the one about optional font packages: until the last release the TeX Live package texlive-fonts-extra depended on a long list of font-* packages, which amounted to a considerable download and install size. There was a reason for this: TeX documents using these fonts via file name use kpathsea, so there are links from the texmf-dist tree to the actual font files. To ensure that these links are not dangling, the font packages were an unconditional dependency.

But current LaTeX packages allow looking up fonts not only via file name, but also via font name using the fontconfig library. Although this is a suboptimal solution due to the inconsistencies and bugs of the fontconfig library (OsF and Expert font sets are a typical example of fonts that throw fontconfig into despair), it allows the use of fonts outside the TEXMF trees.

We have made the following changes to allow users to reduce the installation size:

  • texlive-fonts-extra only recommends the various font packages, but does not depend on them;
  • links from the texmf-dist tree are now shipped in a new package texlive-fonts-extra-links;
  • texlive-fonts-extra recommends texlive-fonts-extra-links, but does not strictly depend on it;
  • only texlive-full depends on texlive-fonts-extra-links to provide the same experience as upstream TeX Live

With these changes in place, users can decide to install only the TeX Live packages they need, leave out texlive-fonts-extra-links, and install only those fonts they actually need. This is of particular interest for the build dependencies, which will shrink considerably.
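
For example, on a build machine something like the following (a sketch, not a general recommendation) installs the package without pulling in the recommended font packages or the links package:

apt-get install --no-install-recommends texlive-fonts-extra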

The other change we have implemented in this release is a long-requested one that I had always rejected, namely the demotion of -doc packages to suggestions instead of recommendations. The texlive-*-doc packages are at times rather big, and with the default setup of installing recommendations this induced a sharp rise in disk/download volume when installing TeX Live.

By demoting the -doc packages to suggests, they will no longer be installed automatically. I am still not convinced that this is a good solution, mostly for two reasons: (i) people will cry out about missing documentation, and (ii) it is a gray area in license terms, due to the requirement of several packages that code and docs be distributed together.

Due to the above two reasons I might revert this change in future, but for now let us see how it goes.

Of course, there is also the usual bunch of updates and new packages, see below. The current packages are already in unstable; only src:texlive-extra needs passage through the ftp-masters NEW queue, so most people will need to wait for a trivial package containing only links to be accepted into Debian 😉

Enjoy.

New packages

blowup, sectionbreak.

Updated packages

animate, arabluatex, babel, beamer, beuron, biber, bidi, bookcover, calxxxx-yyyy, comprehensive, csplain, fira, fontools, glossaries-extra, graphics-def, libertine, luamplib, markdown, mathtools, mcf2graph, media9, modernposter, mpostinl, pdftools, poemscol, pst-geo, pstricks, reledmac, scsnowman, sesstime, skak, tex4ht, tikz-kalender, translator, unicode-math, xassoccnt, xcntperchap, xdvi, xepersian, xsavebox, xurl, ycbook, zhlipsum.

12 January, 2018 12:28AM by Norbert Preining

January 11, 2018

Gaming: Monument Valley 2

I recently found out that Monument Valley, one of my favorite games of 2016, has a successor, Monument Valley 2. Short on time as I usually am, I was nervous that the game would destroy my scheduling, but I went ahead and purchased it!

Fortunately, it turned out to be quite a short game, maybe max 2h to complete all the levels. The graphics are similar to the predecessor’s, that is to say beautifully crafted. The game mechanics are also unchanged, with only a few additions (like the grow-rotating tree, see the image in the lower right corner above). What has changed is that there are now two actors, mother and daughter, and sometimes one has to manage both in parallel, which adds a nice twist.

What I didn’t like too much were the pseudo-philosophical teachings in the middle, like that one

but then, they are fast to skip over.

All in all again a great game and not too much of a time killer. This time I played it not on my mobile but on my Fire tablet, and the bigger screen was excellent and made the game more enjoyable.

Very recommendable.

11 January, 2018 01:34AM by Norbert Preining

January 10, 2018

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in December 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I spent some time in December 2017 reviving Hex-a-Hop, a nice (and somewhat cute) logic game, which eventually closed seven bugs. Unfortunately this game was not well maintained, but it should be up-to-date again now.
  • I released a new version of debian-games, a collection of games metapackages. Five packages were removed from Debian but  I could also add eight new games or frontends to compensate for that.
  • I updated a couple of packages to fix minor and normal bugs namely: dopewars (#633392,  #857671), caveexpress, marsshooter, snowballz (#866481), drascula, lure-of-the-temptress, lgeneral-data (#861048) and lordsawar (#885888).
  • I also packaged new upstream versions of renpy and lgeneral.
  • Last but not least: I completed another bullet transition (#885179).

Debian Java

Debian LTS

This was my twenty-second month as a paid contributor and I have been paid to work 14 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1216-1. Issued a security update for wordpress fixing 4 CVE.
  • DLA-1227-1. Issued a security update for imagemagick fixing 4 CVE.
  • DLA-1231-1. Issued a security update for graphicsmagick fixing 8 CVE. I confirmed that two more CVE (CVE-2017-17783 and CVE-2017-17913) did not affect the version in Wheezy.
  • DLA-1236-1. Issued a security update for plexus-utils fixing 1 CVE.
  • DLA-1237-1. Issued a security update for plexus-utils2 fixing 1 CVE.
  • DLA-1208-1. I released an update for Debian’s reportbug tool to fix bug #878088. The LTS and security teams will be informed from now on when users report regressions due to security updates. I have also prepared updates for Jessie/Stretch and unstable but my NMU was eventually canceled by the maintainer of reportbug. He has not made a concrete counterproposal yet.

Misc

  • I reviewed and sponsored mygui and openmw for Bret Curtis.
  • I updated byzanz and fixed #830011.
  • I adopted the imlib2 image library and prepared a new upstream release. I hope to release it soon.

Non-maintainer upload

  • I NMUed lmarbles, prepared a new upstream release and fixed some bugs.

Thanks for reading and see you next time.

10 January, 2018 07:07PM by Apo

hackergotchi for Sean Whitton

Sean Whitton

Are you a DD or DM doing source-only uploads to Debian out of a git repository?

If you are a Debian Maintainer (DM) or Debian Developer (DD) doing source-only uploads to Debian for packages maintained in git, you are probably using some variation of the following:

% # sbuild/pbuilder, install and test the final package
% # everything looks good
% dch -r
% git commit debian/changelog -m "Finalise 1.2.3-1 upload"
% gbp buildpackage -S --git-tag
% debsign -S
% dput ftp-master ../foo_1.2.3-1_source.changes
% git push --follow-tags origin master

where the origin remote is probably salsa.debian.org. Please consider replacing the above with the following:

% # sbuild/pbuilder, install and test the final package
% # everything looks good
% dch -r
% git commit debian/changelog -m "Finalise 1.2.3-1 upload"
% dgit push-source --gbp
% git push --follow-tags origin master

where the dgit push-source call does the following:

  1. Various sanity checks, some of which are not performed by any other tools, such as
    • not accidentally overwriting an NMU.
    • not missing the .orig.tar from your upload
    • ensuring that the Distribution field in your changes is the same as your changelog
  2. Builds a source package from your git HEAD.
  3. Signs the .changes and .dsc.
  4. dputs these to ftp-master.
  5. Pushes your git history to dgit-repos.

Why might you want to do this? Well,

  1. You don’t need to learn how to use dgit for any other parts of your workflow. It’s entirely drop-in.
    • dgit will not make any merge commits on your master branch, or anything surprising like that. (It might make a commit to tweak your .gitignore.)
  2. No-one else in your team is required to use dgit. Nothing about their workflow need change.
  3. Benefit from dgit’s sanity checks.
  4. Provide your git history on dgit-repos in a uniform format that is easier for users, NMUers and downstreams to use (see dgit-user(7) and dgit-simple-nmu(7)).
    • Note that this is independent of the history you push to alioth/salsa. You still need to push to salsa as before, and the format of that history is not changed.
  5. Only a single command is required to perform the source-only upload, instead of three.

Hints

  1. If you’re using git dpm you’ll want --dpm instead of --gbp.
  2. If the last upload of the package was not performed with dgit, you’ll need to pass --overwrite. dgit will tell you if you need this. This is to avoid accidentally excluding the changes in NMUs.

10 January, 2018 05:54PM

hackergotchi for Ben Hutchings

Ben Hutchings

Meltdown and Spectre in Debian

I'll assume everyone's already heard repeatedly about the Meltdown and Spectre security issues that affect many CPUs. If not, see meltdownattack.com. These primarily affect systems that run untrusted code - such as multi-tenant virtual hosting systems. Spectre is also a problem for web browsers with Javascript enabled.

Meltdown

Over the last week the Debian kernel team has worked to mitigate Meltdown in all suites. This mitigation is currently limited to kernels running in 64-bit mode (amd64 architecture), but the issue affects 32-bit mode as well.

You can see where this mitigation is applied on the security tracker. As of today, wheezy, jessie, jessie-backports, stretch and unstable/sid are fixed while stretch-backports, testing/buster and experimental are not.
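
One quick way to check whether a running kernel actually has the mitigation active is to look for the KPTI message in the kernel log (the exact wording may vary between kernel versions):

dmesg | grep -i 'page table isolation'
# a patched amd64 kernel typically logs: Kernel/User page tables isolation: enabled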

Spectre

Spectre needs to be mitigated in the kernel, browsers, and potentially other software. Currently the kernel changes to mitigate it are still under discussion upstream. Mozilla has started mitigating Spectre in Firefox and some of these changes are now in Debian unstable (version 57.0.4-1). Chromium has also started mitigating Spectre but no such changes have landed in Debian yet.

10 January, 2018 03:05AM

Debian LTS work, December 2017

I was assigned 14 hours of work by Freexian's Debian LTS initiative, but only worked 6 hours so I carried over 8 hours to January.

I prepared and uploaded an update to the Linux kernel to fix various security issues. I issued DLA-1200-1 for this update. I also prepared another update on the Linux 3.2 longterm stable branch, though most of that work was done while on holiday so I didn't count the hours. I spent some time following the closed mailing list used to coordinate backports of KPTI/KAISER.

10 January, 2018 02:12AM

January 09, 2018

hackergotchi for Lars Wirzenius

Lars Wirzenius

On using Github and a PR based workflow

In mid-2017, I decided to experiment with using pull-requests (PRs) on Github. I've read that they make development using git much nicer. The end result of my experiment is that I'm not going to adopt a PR based workflow.

The project I chose for my experiment is vmdb2, a tool for generating disk images with Debian. I put it up on Github, and invited people to send pull requests or patches, as they wished. I got a bunch of PRs, mostly from two people. For a little while, there was a flurry of activity. It has now calmed down, I think primarily because the software has reached a state where the two contributors find it useful and don't need it to be fixed or have new features added.

This was my first experience with PRs. I decided to give it until the end of 2017 until I made any conclusions. I've found good things about PRs and a workflow based on them:

  • they reduce some of the friction of contributing, making it easier for people to contribute; from a contributor point of view PRs certainly seem like a better way than sending patches over email or sending a message asking to pull from a remote branch
  • merging a PR in the web UI is very easy

I also found some bad things:

  • I really don't like the Github UI or UX, in general or for PRs in particular
  • especially the emails Github sends about PRs seemed useless beyond a basic "something happened" notification, which prompt me to check the web UI
  • PRs are a centralised feature, which is something I prefer to avoid; further, they're tied to Github, which is something I object to on principle, since it's not free software
    • note that Gitlab provides support for PRs as well, but I've not tried it; it's an "open core" system, which is not fully free software in my opinion, and so I'm wary of Gitlab; it's also a centralised solution
    • a "distributed PR" system would be nice
  • merging a PR is perhaps too easy, and I worry that it leads me to merging without sufficient review (that is of course a personal flaw)

In summary, PRs seem to me to prioritise making life easier for contributors, especially occasional contributors or "drive-by" contributors. I think I prefer to care more about frequent contributors, and myself as the person who merges contributions. For now, I'm not going to adopt a PR based workflow.

(I expect people to mock me for this.)

09 January, 2018 04:11PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

How Virgin Media lost me as a supporter

For a long time I’ve been a supporter of Virgin Media (from a broadband perspective, though their triple play TV/Phone/Broadband offering has seemed decent too). I know they have a bad reputation amongst some people, but I’ve always found their engineers to be capable, their service in general reliable, and they can manage much faster speeds than any UK ADSL/VDSL service at cheaper prices. I’ve used their services everywhere I’ve lived that they were available, starting back in 2001 when I lived in Norwich. The customer support experience with my most recent move has been so bad that I am no longer of the opinion it is a good idea to use their service.

Part of me wonders if the customer support has got worse recently, or if I’ve just been lucky. We had a problem about 6 months ago which was clearly a loss of signal on the line (the modem failed to see anything and I could clearly pinpoint when this had happened as I have collectd monitoring things). Support were insistent they could do a reset and fix things, then said my problem was the modem and I needed a new one (I was on an original v1 hub and the v3 was the current model). I was extremely dubious but they insisted. It didn’t help, and we ended up with an engineer visit - who immediately was able to say they’d been disconnecting noisy lines that should have been unused at the time my signal went down, and then proceeded to confirm my line had been unhooked at the cabinet and then when it was obvious the line was noisy and would have caused problems if hooked back up patched me into the adjacent connection next door. Great service from the engineer, but support should have been aware of work in the area and been able to figure out that might have been a problem rather than me having a 4-day outage and numerous phone calls when the “resets” didn’t fix things.

Anyway. I moved house recently, and got keys before moving out of the old place, so decided to be organised and get broadband setup before moving in - there was no existing BT or Virgin line in the new place so I knew it might take a bit longer than usual to get setup. Also it would have been useful to have a connection while getting things sorted out, so I could work while waiting in for workmen. As stated at the start I’ve been pro Virgin in the past, I had their service at the old place and there was a CableTel (the Belfast cable company NTL acquired) access hatch at the property border so it was clear it had had service in the past. So on October 31st I placed an order on their website and was able to select an installation date of November 11th (earlier dates were available but this was a Saturday and more convenient).

This all seemed fine; Virgin contacted me to let me know there was some external work that needed to be done but told me it would happen in time. This ended up scheduled for November 9th, when I happened to be present. The engineers turned up, had a look around and then told me there was an issue with access to their equipment - they needed to do a new cable pull to the house and although the ducting was all there the access hatch for the other end was blocked by some construction work happening across the road. I’d had a call about this saying they’d be granted access from November 16th, so the November 11th install date was pushed out to November 25th. Unfortunate, but understandable. The engineers also told me that what would happen is the external team would get a cable hooked up to a box on the exterior of the house ready for the internal install, and that I didn’t need to be around for that.

November 25th arrived. There was no external box, so I was dubious things were actually going to go ahead, but I figured there was a chance the external + internal teams would turn up together and get it sorted. No such luck. The guy who was supposed to do the internal setup turned up, noticed the lack of an external box and informed me he couldn’t do anything without that. As I’d expected. I waited a few days to hear from Virgin and heard nothing, so I rang them and was told the installation had moved to December 6th and the external bit would be done before that - I can’t remember the exact date quoted but I rang a couple of times before the 6th and was told it would happen that day “for sure” each time.

December 5th arrives and I get an email telling me the installation has moved to December 21st. This is after the planned move date and dangerously close to Christmas - I’m aware that in the event of any more delays I’m unlikely to get service until the New Year. Lo and behold on December 7th I’m informed my install is on hold and someone will contact me within 5 working days to give me an update.

Suffice to say I do not get called. I ring towards the end of the following week and am told they are continuing to have trouble carrying out work on the access hatch. So I email the housing company doing the work across the road, asking if Virgin have actually been in touch and when the building contractors plan to give them the access they require. I get a polite response saying Virgin have been on-site but did not ask for anything to be moved or make it clear they were trying to connect a customer. And providing an email address for the appropriate person in the construction company to arrange access.

I contact Virgin to ask about this on December 20th. There’s no update but this time I manage to get someone who actually seems to want to help, rather than just telling me it’s being done today or soon. I get email confirmation that the matter is being escalated to the Area Field Manager (I’d been told this by phone on December 16th as well but obviously nothing had happened), and provide the contact details for the construction company.

And then I wait. I’m aware things wind down over the Christmas period, so I’m not expecting an install before the New Year, but I did think I might at least get a call or email with an update. Nothing. My wife rang to finally cancel our old connection last week (it’s been handy to still have my backup host online and be able to go and do updates in the old place) and they were aware of the fact we were trying to get a new connection and that there had been issues, but had no update and then proceeded to charge a disconnection fee, even though Virgin state there is no disconnection fee if you move and continue with Virgin Media.

So last week I rang and cancelled the order. And got the usual story of difficulty with access and asked to give them 48 hours to get back to me. I said no, that the customer service so far had been appalling and to cancel anyway. Which I’m informed has now been done.

Let’s be clear on what I have issue with here. While the long delay is annoying I don’t hold Virgin entirely responsible - there is construction work going on and things slow down over Christmas (though the order was placed long enough beforehand that this really shouldn’t have impacted things). The problem is the customer service and complete lack of any indication Virgin are managing this process well internally - the fact the interior install team turned up when the exterior work hadn’t been completed is shocking! If Virgin had told me at the start (or once they’d done the first actual physical visit to the site and seen the situation) that there was going to be a delay and then been able to provide a firm date, I’d have been much more accepting. Instead, the numerous reschedules, an inability to call back and provide updates when promised and the empty assurances that exterior work will be carried out on certain dates all leave me lacking any faith in what Virgin tell me. Equally, the fact they have charged a disconnection fee when their terms state they wouldn’t is ridiculous (a complaint has been raised but at the time of writing the complaints team has, surprise, surprise, not been in contact). If they’re so poor when I’m trying to sign up as a new customer, why should I have any faith in their ability to provide decent support when I actually have their service?

It’s also useful to contrast my Virgin experience with 2 others. Firstly, EE who I used for 4G MiFi access. Worked out of the box, and when I rang to cancel (because I no longer need it) were quick and efficient about processing the cancellation and understood that I’d been pleased with the service but no longer needed it, so didn’t do a hard retention sell.

Secondly, I’ve ended up with FTTC over a BT Openreach line from a local Gamma reseller, MCL Services. I placed this order on December 8th, after Virgin had put my install on hold. At the point of order I had an install date of December 19th, but within 3 hours it was delayed until January 3rd. At this point I thought I was going to have similar issues, so decided to leave both orders open to see which managed to complete first. I double-checked with MCL on January 2nd that there’d been no updates, and they confirmed it was all still on and very unlikely to change. And, sure enough, on the morning of January 3rd a BT engineer turned up after having called to give a rough ETA. Did a look around, saw the job was bigger than expected and then, instead of fobbing me off, got the job done. Which involved needing a colleague to turn up to help, running a new cable from a pole around the corner to an adjacent property and then along the wall, and installing the master socket exactly where suited me best. In miserable weather.

What did these have in common that Virgin does not? First, communication. EE were able to sort out my cancellation easily, at a time that suited me (after 7pm, when I’d got home from work and put dinner on). MCL provided all the installation details for my FTTC after I’d ordered, and let me know about the change in date as soon as BT had informed them (it helps I can email them and actually get someone who can help, rather than having to phone and wait on hold for someone who can’t). BT turned up and discovered problems and worked out how to solve them, rather than abandoning the work - while I’ve had nothing but good experiences with Virgin’s engineers up to this point there’s something wrong if they can’t sort out access to their network in 2 months. What if I’d been an existing customer with broken service?

This is a longer post than normal, and no one probably cares, but I like to think that someone in Virgin might read it and understand where my frustrations throughout this process have come from. And perhaps improve things, though I suspect that’s expecting too much and the loss of a single customer doesn’t really mean a lot to them.

09 January, 2018 08:39AM

hackergotchi for Don Armstrong

Don Armstrong

Debbugs Versioning: Merging

One of the key features of Debbugs, the bug tracking system Debian uses, is its ability to figure out which bugs apply to which versions of a package by tracking package uploads. This system generally works well, but when a package maintainer's workflow doesn't match the assumptions of Debbugs, unexpected things can happen. In this post, I'm going to:

  1. introduce how Debbugs tracks versions
  2. provide an example of a merge-based workflow which Debbugs doesn't handle well
  3. provide some suggestions on what to do in this case

Debbugs Versioning

Debbugs tracks versions using a set of one or more rooted trees which it builds from the ordering of debian/changelog entries. In the simplest case, every upload of a Debian package has changelogs in the same order, and each upload adds just one version. For example, in the case of dgit, to start with the package has this (abridged) version tree:

the next upload, 3.13, has a changelog with this version ordering: 3.13 3.12 3.11 3.10, which causes the 3.13 version to be added as a descendant of 3.12, and the version tree now looks like this:

dgit is being developed while also being used, so new versions with potentially disruptive changes are uploaded to experimental while production versions are uploaded to unstable. For example, the 4.0 experimental upload was based on the 3.10 version, with the changelog ordering 4.0 3.10. The tree now has two branches, but everything seems as you would expect:

Merge based workflows

Bugfixes made in the maintenance version of dgit are also incorporated into the experimental package by merging changes from the production version using git. In this case, some changes which were present in the 3.12 and 3.11 versions are merged, which corresponds to a git merge flow like this:

If an upload is prepared with changelog ordering 4.1 4.0 3.12 3.11 3.10, Debbugs combines this new changelog ordering with the previously known tree, to produce this version tree:

This looks a bit odd; what happened? Debbugs walks through the new changelog, connecting each of the new versions to the previous version if and only if that version is not already an ancestor of the new version. Because the changelog says that 3.12 is the ancestor of 4.0, that's where the 4.1 4.0 version tree is connected.
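
To make that walk concrete, here is a minimal sketch of the rule as described above (illustrative only, not the actual Debbugs code), using a simple child-to-parent mapping for the version tree:

def ancestors(parent, version):
    # Walk from version up towards the root, yielding each ancestor.
    while version in parent:
        version = parent[version]
        yield version

def add_changelog(parent, changelog):
    # changelog lists versions newest-first, as in debian/changelog.
    for newer, older in zip(changelog, changelog[1:]):
        # Connect newer -> older unless older is already an ancestor of newer.
        if older not in ancestors(parent, newer):
            parent[newer] = older

tree = {}
add_changelog(tree, ['3.12', '3.11', '3.10'])
add_changelog(tree, ['4.0', '3.10'])
add_changelog(tree, ['4.1', '4.0', '3.12', '3.11', '3.10'])
print(tree)   # 4.0 ends up re-parented onto 3.12, giving the odd tree above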

Now, when 4.2 is uploaded, it has the changelog ordering (based on time) 4.2 3.13 4.1 4.0 3.12 3.11 3.10, which corresponds to this git merge flow:

Debbugs adds in 3.13 as an ancestor of 4.2, and because 4.1 was not an ancestor of 3.13 in the previous tree, 4.1 is added as an ancestor of 3.13. This results in the following graph:

Which doesn't seem particularly helpful, because

is probably the tree that more closely resembles reality.

Suggestions on what to do

Why does this even matter? Bugs which are found in 3.11, and fixed in 3.12 now show up as being found in 4.0 after the 4.1 release, though they weren't found in 4.0 before that release. It also means that 3.13 now shows up as having all of the bugs fixed in 4.2, which might not be what is meant.

To avoid this, my suggestion is to order the entries in changelogs in the same order that the version graph should be traversed from the leaf version you are releasing to the root. So if the previous version tree is what is wanted, 3.13 should have a changelog with ordering 3.13 3.12 3.11 3.10, and 4.2 should have a changelog with ordering 4.2 4.1 4.0 3.10.

What about making the BTS support DAGs which are not trees? I think something like this would be useful, but I don't personally have a good idea on how this could be specified using the changelog or how bug fixed/found/absent should be propagated in the DAG. If you have better ideas, email me!

09 January, 2018 05:25AM

January 08, 2018

Michael Stapelberg

Debian buster on the Raspberry Pi 3 (update)

I previously wrote about my Debian buster preview image for the Raspberry Pi 3.

Now, I’m publishing an updated version, containing the following changes:

  • WiFi works out of the box. Use e.g. ip link set dev wlan0 up, and iwlist wlan0 scan.
  • Kernel boot messages are now displayed on an attached monitor (if any), not just on the serial console.
  • Root file system resizing will now not touch the partition table if the user modified it.
  • The image is now compressed using xz, reducing its size to 170M.

As before, the image is built with vmdb2, the successor to vmdebootstrap. The input files are available at https://github.com/Debian/raspi3-image-spec.

Note that Bluetooth is still untested (see wiki:RaspberryPi3 for details).

Given that Bluetooth is the only known issue, I’d like to work towards getting this image built and provided on official Debian infrastructure. If you know how to make this happen, please send me an email. Thanks!

As a preview version (i.e. unofficial, unsupported, etc.) until that’s done, I built and uploaded the resulting image. Find it at https://people.debian.org/~stapelberg/raspberrypi3/2018-01-08/. To install the image, insert the SD card into your computer (I’m assuming it’s available as /dev/sdb) and copy the image onto it:

$ wget https://people.debian.org/~stapelberg/raspberrypi3/2018-01-08/2018-01-08-raspberry-pi-3-buster-PREVIEW.img.xz
$ xzcat 2018-01-08-raspberry-pi-3-buster-PREVIEW.img.xz | dd of=/dev/sdb bs=64k oflag=dsync status=progress

If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:

$ ssh root@rpi3
# Password is “raspberry”

08 January, 2018 09:55PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Announcing BadISO

For a few years now I have been working on-and-off on a personal project to import data from a large collection of home-made CD-Rs and DVD-Rs. I've started writing up my notes, experiences and advice for performing a project like this; but they aren't yet in a particularly legible state.

As part of this work I wrote some software called "BadISO" which takes a possibly-corrupted or incomplete optical disc image (specifically ISO9660) and, combined with a GNU ddrescue map (or log) file, tells you which files within the image are intact, and which are not. The idea is that you have tried to import a disc using ddrescue and some areas of the disc have not read successfully. The ddrescue map file tells you which areas, in byte terms, but not which files those correspond to. BadISO plugs that gap.

Here's some example output:

$ badiso my_files.iso
…
✔ ./joes/allstars.zip
✗ ./joes/ban.gif
✔ ./joes/eur-mgse.zip
✔ ./joes/gold.zip
✗ ./joes/graphhack.txt
…
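
Under the hood the core check is conceptually simple: a file is intact only if its whole byte extent inside the image falls within regions the map marks as rescued ('+'). Here is a minimal sketch of that idea (a simplified sketch, not BadISO's actual code; getting the per-file offsets out of the ISO9660 metadata is the part that needs xorriso):

def read_map(path):
    # Parse a ddrescue map file into (start, size, status) tuples.
    regions = []
    with open(path) as f:
        lines = [l for l in f if not l.startswith('#')]
    for line in lines[1:]:          # first non-comment line is the current pos/status
        start, size, status = line.split()[:3]
        regions.append((int(start, 0), int(size, 0), status))
    return regions

def extent_ok(regions, offset, length):
    # True if every byte of [offset, offset+length) lies in a rescued region.
    end = offset + length
    for start, size, status in regions:
        if status != '+' and start < end and offset < start + size:
            return False            # overlaps an unrecovered region
    return True

# Hypothetical usage; the offset/length of each file come from the ISO metadata.
regions = read_map('my_files.iso.map')
print('✔' if extent_ok(regions, 123456, 2048) else '✗')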

BadISO was (and really, is) a hacky proof of concept written in Python. I have ambitions to re-write it properly (in either Idris or Haskell) but I'm not going to get around to it in the near future, and in the meantime at least one other person has found this useful. So I'm publishing it in its current state.

BadISO currently requires GNU xorriso.

You can grab it from https://github.com/jmtd/badiso.

08 January, 2018 07:00PM

Jelmer Vernooij

Breezy: Forking Bazaar

A couple of months ago, Martin and I announced a friendly fork of Bazaar, named Breezy.

It's been 5 years since I wrote a Bazaar retrospective and around 6 since I seriously contributed to the Bazaar codebase.

Goals

We don't have any grand ambitions for Breezy; the main goal is to keep Bazaar usable going forward. Your open source projects should still be using Git.

The main changes we have made so far come down to fixing a number of bugs and to bundling useful plugins. Bundling plugins makes setting up an environment simpler and eliminates the API compatibility issues that plagued external plugins in the Bazaar world.

Perhaps the biggest effort in Breezy is porting the codebase to Python 3, allowing it to be used once Python 2 goes EOL in 2020.

A fork

Breezy is a fork of Bazaar and not just a new release series.

Bazaar upstream has been dormant for the last couple of years anyway - we don't lose anything by forking.

We're forking because it gives us the independence to make some of the changes we deemed necessary and that are otherwise hard to make for an established project. For example, we're now bundling plugins, taking an axe to a large number of APIs and dropping support for older platforms.

A fork also means independence from Canonical; there is no CLA for Breezy (a hindrance for Bazaar) and we can set up our own infrastructure without having to chase down Canonical staff for web site updates or the installation of new packages on the CI system.

More information

Martin gave a talk about Breezy at PyCon UK this year.

Breezy bugs can be filed on Launchpad. For the moment, we are using the Bazaar mailing list and the #bzr IRC channel for any discussions and status updates around Breezy.

08 January, 2018 04:00PM by Jelmer Vernooij

hackergotchi for Sean Whitton

Sean Whitton

dgit push-source

dgit 4.2, which is now in Debian unstable, has a new subcommand: dgit push-source. This is just like dgit push, except that

  • it forces a source-only upload; and
  • it also takes care of preparing the _source.changes, transparently, without the user needing to run dpkg-buildpackage -S or dgit build-source or whatever.

push-source is useful to ensure you don’t accidentally upload binaries, and that was its original motivation. But there is a deeper significance to this new command: to say

% dgit push-source unstable

is, in one command, basically to say

% git push ftp-master HEAD:unstable

That is: dgit push-source is like doing a single-step git push of your git HEAD straight into the archive! The future is now!

The contrast here is with ordinary dgit push, which is not analogous to a single git push command, because

  • it involves uploading .debs, which make a material change to the archive other than updating the source code of the package; and
  • it must be preceded by a call to dpkg-buildpackage, dgit sbuild or similar to prepare the .changes file.

While dgit push-source also involves uploading files to ftp-master in addition to the git push, because that happens transparently and does not require the user to run a build command, it can be thought of as an implementation detail.

Two remaining points of disanalogy:

  1. git push will push your HEAD no matter the state of the working tree, but dgit push-source has to clean your working tree. I’m thinking about ways to improve this.

  2. For non-native packages, you still need an orig.tar in ... Urgh. At least that’s easy to obtain thanks to git-deborig.

08 January, 2018 07:48AM

January 07, 2018

hackergotchi for Mehdi Dogguy

Mehdi Dogguy

Salsa webhooks and integrated services

For many years now, Debian has been facing an issue with one of its most important services: alioth.debian.org (Debian's forge). It is used by most of the teams and hosts thousands of repositories (of all sorts) and mailing-lists. The service was stable (and still is), but not maintained. So it became increasingly important to find its replacement.

Recently, a team of volunteers organized a sprint to work on the replacement of Alioth. I was very skeptical about the status of this new project until... tada! An announcement was sent out about the beta release of this new service: salsa.debian.org (a GitLab CE instance). Of course, Salsa hosts only Git repositories and doesn't deal with other {D,}VCSes used on Alioth (like Darcs, Svn, CVS, Bazaar and Mercurial) but it is a huge step forward!

I must say that I absolutely love this new service which brings fresh air to Debian developers. We all owe a debt of gratitude to all those who made this possible. Thank you very much for working on this!

Alas, no automatic migration was done between the two services (for good reasons). The migration is left to the maintainers. It might be an easy task for some who maintain a few packages, but it is a depressing task for bigger teams.

To make this easy, Christoph Berg wrote a batch import script to import a Git repository in a single step. Unfortunately, GitLab is what it is... and it is not possible to set team-wide parameters to use in each repository. Salsa's documentation describes how to configure that for each repository (project, in GitLab's jargon) but this click-monkey-work is really not for many of us. Fortunately, GitLab has a nice API and all this is scriptable. So I wrote some scripts to:
  • Import a Git repo (mainly Christoph's script with an enhancement)
  • Set up IRC notifications
  • Configure email pushes on commits
  • Enable webhooks to automatically tag 'pending' or 'close' Debian BTS bugs depending on mentions in commits messages.
I published those scripts here: salsa-scripts. They are not meant to be beautiful, but only to make your life a little less miserable. I hope you find them useful. Personally, this first step was a prerequisite for the migration of my personal and team repositories over to Salsa. If more people want to contribute to those scripts, I can move the repository into the Debian group.
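
To give an idea of what the scripts do, here is a rough sketch of that kind of per-project setup done directly against the GitLab REST API with requests (the token, project path, webhook URL and recipient address below are placeholders, not the values the real scripts use):

import requests
from urllib.parse import quote

SALSA = 'https://salsa.debian.org/api/v4'
HEADERS = {'PRIVATE-TOKEN': 'your-private-token'}       # placeholder token

def project_id(path):
    # Look up a project's numeric id from its 'group/name' path.
    r = requests.get(SALSA + '/projects/' + quote(path, safe=''), headers=HEADERS)
    r.raise_for_status()
    return r.json()['id']

def add_push_webhook(pid, url):
    # Register a webhook that fires on pushes (e.g. for BTS 'pending'/'close' tagging).
    requests.post(SALSA + '/projects/%d/hooks' % pid, headers=HEADERS,
                  data={'url': url, 'push_events': True}).raise_for_status()

def enable_emails_on_push(pid, recipients):
    # Turn on the "emails on push" integration for commit notifications.
    requests.put(SALSA + '/projects/%d/services/emails-on-push' % pid,
                 headers=HEADERS, data={'recipients': recipients}).raise_for_status()

pid = project_id('mygroup/mypackage')                   # made-up project path
add_push_webhook(pid, 'https://webhook.example.org/tag-pending')
enable_emails_on_push(pid, 'commits@example.org')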

07 January, 2018 11:07PM by Mehdi (noreply@blogger.com)

Petter Reinholdtsen

Legal to share more than 11,000 movies listed on IMDB?

I've continued to track down lists of movies that are legal to distribute on the Internet, and identified more than 11,000 title IDs in The Internet Movie Database (IMDB) so far. Most of them (57%) are feature films from USA published before 1923. I've also tracked down more than 24,000 movies I have not yet been able to map to IMDB title ID, so the real number could be a lot higher. According to the front web page for Retro Film Vault, there are 44,000 public domain films, so I guess there are still some left to identify.

The complete data set is available from a public git repository, including the scripts used to create it. Most of the data is collected using web scraping, for example from the "product catalog" of companies selling copies of public domain movies, but any source I find believable is used. I've so far had to throw out three sources because I did not trust the public domain status of the movies listed.

Anyway, this is the summary of the 28 collected data sources so far:

 2352 entries (   66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
 2302 entries (  120 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  195 entries (   63 unique) with and   200 without IMDB title ID in free-movies-cinemovies.json
   89 entries (   52 unique) with and    38 without IMDB title ID in free-movies-creative-commons.json
  344 entries (   28 unique) with and   655 without IMDB title ID in free-movies-fesfilm.json
  668 entries (  209 unique) with and  1064 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (   21 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (   19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 6822 entries ( 6669 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
  137 entries (    0 unique) with and     0 without IMDB title ID in free-movies-imdb-externlist.json
 1205 entries (   57 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
   84 entries (   20 unique) with and   167 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries (  135 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (    4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries (  100 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
  229 entries (   87 unique) with and     1 without IMDB title ID in free-movies-manual.json
   44 entries (    2 unique) with and    64 without IMDB title ID in free-movies-openflix.json
  291 entries (   33 unique) with and   474 without IMDB title ID in free-movies-profilms-pd.json
  211 entries (    7 unique) with and     0 without IMDB title ID in free-movies-publicdomainmovies-info.json
 1232 entries (   57 unique) with and  1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
   46 entries (   13 unique) with and    81 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (   64 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
 1758 entries (  882 unique) with and  3786 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries (    0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
   63 entries (   16 unique) with and   141 without IMDB title ID in free-movies-vodo.json
11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID

I keep finding more data sources. I found the cinemovies source just a few days ago, and as you can see from the summary, it extended my list with 63 movies. Check out the mklist-* scripts in the git repository if you are curious how the lists are created. Many of the titles are extracted using searches on IMDB, where I look for the title and year, and accept search results with only one movie listed if the year matches. This allows me to automatically use many lists of movies without IMDB title ID references, at the cost of increasing the risk of wrongly identifying an IMDB title ID as public domain. So far my random manual checks have indicated that the method is solid, but I really wish all lists of public domain movies would include a unique movie identifier like the IMDB title ID. It would make the job of counting movies in the public domain a lot easier.
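
As a rough illustration of that matching rule (a sketch only, assuming the IMDbPY package; the real lists are produced by the mklist-* scripts mentioned above):

from imdb import IMDb        # IMDbPY

ia = IMDb()

def match_title(title, year):
    # Accept a search hit only when it is unambiguous and the year matches.
    results = ia.search_movie(title)
    if len(results) == 1 and results[0].get('year') == year:
        return 'tt' + results[0].movieID
    return None              # ambiguous or no match: needs manual checking

print(match_title('Night of the Living Dead', 1968))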

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

07 January, 2018 10:30PM

Russ Allbery

Free software log (December 2017)

I finally have enough activity to report that I need to think about how to format these updates. It's a good problem to have.

In upstream work, I updated rra-c-util with an evaluation of the new warning flags for GCC 7, enabling as many warnings as possible. I also finished the work to build with Clang with warnings enabled, which required investigating a lot of conversions between variables of different sizes. Part of that investigation surfaced that the SA_LEN macro was no longer useful, so I removed that from INN and rra-c-util.

I'm still of two minds about whether adding the casts and code to correctly build with -Wsign-conversion is worth it. I started a patch to rra-c-util (currently sitting in a branch), but wasn't very happy about the resulting code quality. I think doing this properly requires some thoughtfulness about macros and a systematic approach.

Releases:

In Debian Policy, I wrote and merged a patch for one bug, merged patches for two other bugs, and merged a bunch of wording improvements to the copyright-format document. I also updated the license-count script to work with current Debian infrastructure and ran it again for various ongoing discussions of what licenses to include in common-licenses.

Debian package uploads:

  • lbcd (now orphaned)
  • libafs-pag-perl (refreshed and donated to pkg-perl)
  • libheimdal-kadm5-perl (refreshed and donated to pkg-perl)
  • libnet-ldapapi-perl
  • puppet-modules-puppetlabs-apt
  • puppet-modules-puppetlabs-firewall (team upload)
  • puppet-modules-puppetlabs-ntp (team upload)
  • puppet-modules-puppetlabs-stdlib (two uploads)
  • rssh (packaging refresh, now using dgit)
  • webauth (now orphaned)
  • xfonts-jmk

There are a few more orphaning uploads and uploads giving Perl packages to the Perl packaging team coming, and then I should be down to the set of packages I'm in a position to actively maintain and I can figure out where I want to focus going forward.

07 January, 2018 08:27PM

January 06, 2018

Sven Hoexter

BIOS update Dell Latitude E7470 and Lenovo TP P50

Maybe some recent events led to BIOS update releases by various vendors around the end of 2017. So I set out to update (for the first time) the BIOS of my laptops. Searching the interwebs for some hints I found a lot of outdated information involving USB thumb drives, CDs, FreeDOS in variants, but also some useful stuff. So here is the short list of what actually worked in case I need to do it again.

Update: Added a Wiki page so it's possible to extend the list. Seems that some of us avoided the update hassle so far, but now with all those Intel ME CVEs and Intel microcode updates it's likely we'll have to do it more often.

Dell Latitude E7470 (UEFI boot setup)

  1. Download the file "Latitude_E7x70_1.18.5.exe" (or whatever is the current release).
  2. Move the file to "/boot/efi/".
  3. Boot into the one time boot menu with F12 during the BIOS/UEFI start.
  4. Select the "Flash BIOS Update" menu option.
  5. Use your mouse to select the update file visually and watch the magic.

So no USB sticks, FreeDOS, SystemrescueCD images or other tricks involved. Whether it's cool that the computer in your computer's computer running Minix (or whatever is involved in this case) updates your firmware is a different topic, but the process is pretty straightforward.

Lenovo ThinkPad P50

  1. Download the BIOS Update bootable CD image from Lenovo "n1eur31w.iso" (Select Windows as OS so it's available for download).
  2. Extract the eltorito boot image from the image "geteltorito -o thinkpad.img Downloads/n1eur31w.iso".
  3. Dump it on a USB thumb drive "dd if=thinkpad.img of=/dev/sdX".
  4. Boot from this thumb drive and follow the instructions of the installer.

I guess the process is similar for almost all ThinkPads.

06 January, 2018 10:34PM

hackergotchi for Rapha&#235;l Hertzog

Raphaël Hertzog

My Free Software Activities in December 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h and had two hours left over, but I only spent 13h. During this time, I managed the LTS frontdesk during one week, reviewing new security issues and classifying the associated CVE (18 commits to the security tracker).

I also released DLA-1205-1 on simplesamlphp fixing 6 CVE. I prepared and released DLA-1207-1 on erlang with the help of the maintainer who tested the patch that I backported. I handled tkabber but it turned out that the CVE report was wrong; I reported this to MITRE, who marked the CVE as DISPUTED (see CVE-2017-17533).

During my CVE triaging work, I decided to mark mp3gain and libnet-ping-external-perl as unsupported (the latter has been removed everywhere already). I re-classified the suricata CVE as not worth an update (following the decision of the security team). I also dropped global from dla-needed as the issue was marked unimportant but I still filed #884912 about it so that it gets tracked in the BTS.

I filed #884911 on ohcount requesting new upstream (fixing CVE) and update of homepage field (that is misleading in current package). I dropped jasperreports from dla-needed.txt as issues are undetermined and upstream is uncooperative, instead I suggested to mark the package as unsupported (see #884907).

Misc Debian Work

Debian Installer. I suggested to switch to isenkram instead of discover for automatic package installation based on recognized hardware. I also filed a bug on isenkram (#883470) and asked debian-cloud for help to complete the missing mappings.

Packaging. I sponsored asciidoc 8.6.10-2 for Joseph Herlant. I uploaded new versions of live-tools and live-build fixing multiple bugs that had been reported (many with patches ready to merge). Only #882769 required a bit more work to track down and fix. I also uploaded dh-linktree 0.5 with a new feature contributed by Paul Gevers. By the way, I no longer use this package so I will happily give it over to anyone who needs it.

QA team. When I got my account on salsa.debian.org (a bit before the announce of the beta phase), I created the group for the QA team and setup a project for distro-tracker.

Bug reports. I filed #884713 on approx, requesting that systemd’s approx.socket be configured to not have any trigger limit.

Package Tracker

Following the switch to Python 3 by default, I updated the packaging provided in the git repository. I’m now also providing a systemd unit to run gunicorn3 for the website.

I merged multiple patches of Pierre-Elliott Bécue fixing bugs and adding a new feature (vcswatch support!). I fixed a bug related to the lack of a link to the experimental build logs and did a bit of bug triaging.

I also filed two bugs against DAK related to bad interactions with the package tracker: #884930 because it does still use packages.qa.debian.org to send emails instead of tracker.debian.org. And #884931 because it sends removal mails to too many email addresses. And I filed a bug against the tracker (#884933) because the last issue also revealed a problem in the way the tracker handles removal mails.

Thanks

See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

06 January, 2018 10:50AM by Raphaël Hertzog

hackergotchi for Shirish Agarwal

Shirish Agarwal

RequestPolicy Continued

Dear Friends,

First up, I saw a news item about a fake Indian e-visa portal. As it is/was Sunday, I decided to see if there indeed is such a mess. I dug out the torbrowser-bundle (tbb), checked the IP it was giving me (some Canadian IP starting with 216.xxx.xx.xx), typed in ‘Indian visa application’ and used duckduckgo.com to see which result cropped up first.

I deliberately used tbb as I wanted to ensure the query wasn't coming from an Indian IP, where the chances of the Indian e-visa portal result being fake should be negligible. Scamsters would surely be knowledgeable enough to distinguish between IPs coming from India and IPs coming from some other country.

The first result duckduckgo.com gave was https://indianvisaonline.gov.in/visa/index.html

I then proceeded to download whois on my new system (more on that in another blog post).

$ sudo aptitude install whois

and proceeded to see if it’s the genuine thing or not and this is the information I got –

$ whois indianvisaonline.gov.in
Access to .IN WHOIS information is provided to assist persons in determining the contents of a domain name registration record in the .IN registry database. The data in this record is provided by .IN Registry for informational purposes only, and .IN does not guarantee its accuracy. This service is intended only for query-based access. You agree that you will use this data only for lawful purposes and that, under no circumstances will you use this data to: (a) allow, enable, or otherwise support the transmission by e-mail, telephone, or facsimile of mass unsolicited, commercial advertising or solicitations to entities other than the data recipient's own existing customers; or (b) enable high volume, automated, electronic processes that send queries or data to the systems of Registry Operator, a Registrar, or Afilias except as reasonably necessary to register domain names or modify existing registrations. All rights reserved. .IN reserves the right to modify these terms at any time. By submitting this query, you agree to abide by this policy.

Domain ID:D4126837-AFIN
Domain Name:INDIANVISAONLINE.GOV.IN
Created On:01-Apr-2010 12:10:51 UTC
Last Updated On:18-Apr-2017 22:32:00 UTC
Expiration Date:01-Apr-2018 12:10:51 UTC
Sponsoring Registrar:National Informatics Centre (R12-AFIN)
Status:OK
Reason:
Registrant ID:dXN4emZQYOGwXU6C
Registrant Name:Director Immigration and Citizenship
Registrant Organization:Ministry of Home Affairs
Registrant Street1:NDCC-II building
Registrant Street2:Jaisingh Road
Registrant Street3:
Registrant City:New Delhi
Registrant State/Province:Delhi
Registrant Postal Code:110001
Registrant Country:IN
Registrant Phone:+91.23438035
Registrant Phone Ext.:
Registrant FAX:+91.23438035
Registrant FAX Ext.:
Registrant Email:dsmmp-mha@nic.in
Admin ID:dXN4emZQYOvxoltA
Admin Name:Director Immigration and Citizenship
Admin Organization:Ministry of Home Affairs
Admin Street1:NDCC-II building
Admin Street2:Jaisingh Road
Admin Street3:
Admin City:New Delhi
Admin State/Province:Delhi
Admin Postal Code:110001
Admin Country:IN
Admin Phone:+91.23438035
Admin Phone Ext.:
Admin FAX:+91.23438035
Admin FAX Ext.:
Admin Email:dsmmp-mha@nic.in
Tech ID:jiqNEMLSJPA8a6wT
Tech Name:Rakesh Kumar
Tech Organization:National Informatics Centre
Tech Street1:National Data Centre
Tech Street2:Shashtri Park
Tech Street3:
Tech City:New Delhi
Tech State/Province:Delhi
Tech Postal Code:110053
Tech Country:IN
Tech Phone:+91.24305154
Tech Phone Ext.:
Tech FAX:
Tech FAX Ext.:
Tech Email:nsrawat@nic.in
Name Server:NS1.NIC.IN
Name Server:NS2.NIC.IN
Name Server:NS7.NIC.IN
Name Server:NS10.NIC.IN
Name Server:
Name Server:
Name Server:
Name Server:
Name Server:
Name Server:
Name Server:
Name Server:
Name Server:
DNSSEC:Unsigned

It seems to be a legitimate site as almost all information seems to be legit. I know for a fact that all, or 99% of, Indian government websites are done by NIC, the National Informatics Centre. The only thing which rankled me was that DNSSEC was unsigned, but then I haven't seen NIC being as pro-active about web security as they should be, given that they handle many sensitive government internal and external websites.

I did send an email to them imploring them to use the new security feature.

To be doubly sure, one could also use an add-on like showip, add it to your Firefox profile, and use any of the web services to obtain the IP address of the website.

For instance, the same website which we are investigating gives 164.100.129.113

Doing a whois of 164.100.129.113 tells us that NICNET has got/purchased a whole range of addresses, i.e. 164.100.0.0 – 164.100.255.255, which is 65,536 addresses that it uses.
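
If you prefer to script that check, resolving the host and testing it against the NICNET block quoted above only takes a few lines (a quick sketch):

import socket
import ipaddress

host = 'indianvisaonline.gov.in'
addr = ipaddress.ip_address(socket.gethostbyname(host))
nicnet = ipaddress.ip_network('164.100.0.0/16')         # the NICNET range above
print(addr, 'is in NICNET' if addr in nicnet else 'is NOT in NICNET')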

One can see NIC’s wikipedia page to understand the scope it works under.

So from both accounts, it is safe to assume that the web-site and page is legit.

Well, that’s about it for the site. While this is (and should be) trivial to most Debian users, it might or might not be to all web users, but it is one way in which you can check whether a site is legitimate.

A few weeks back, I read Colin's blog post about Kitten Block, which was also put on p.d.o.

So let me share RequestPolicy Continued –

Requestpolicy Continued Mozilla Add-on

This is a continuation of RequestPolicy which was abandoned (upstream) by the original developer and resides in the Debian repo.

http://tracker.debian.org/xul-ext-requestpolicy

I did file a ticket (870607) stating both the name change and where the new watch file should point.

What it does is similar to what Adblock/Kitten Block does, plus more. It basically blocks any third-party domain from loading content unless you give it permission. It is very similar to another add-on called uBlock Origin.

I liked RPC, as it's known, because it hardly has any learning curve.

You install the add-on, see which third-party domains you need and just allow them. For instance, fonts.googleapis.com and ajax.googleapis.com are used by many sites nowadays, while pictures or pictographic content is usually served by either Cloudflare or CloudFront.

One of the big third parties that you will of course encounter is google.com, along with gstatic.net. Lots of sites use gstatic and its brethren for spam protection, but they come at the cost of user identifiability and also the controversial crowdsourced image recognition.

It is a good add-on which does remind you of competing offerings elsewhere, but it is also a stark reminder of how much Google has penetrated into sites, and to what levels.

I use tor-browser and RPC so my browsing is distraction-free, as loads of sites have nowadays moved to huge bandwidth-consuming animated ads etc. While I'm on a slow non-metered (eat/surf all you want) kind of service, for those using a metered one (x bytes for y price, including upload and download) the above is also a god-send.

On the upstream side, they do need help both with development and with testing the build. While I’m not sure, I think the maintainer didn’t reply or do anything for my bug as he knew that Web-Extensions are around the corner. Upstream has said he hopes to have a new build compatible with web extensions by the end of February 2018.

On the Debian side of things, I have filed 870607, but I know it probably will be acted on once the port to web-extensions has been completed and some testing done, so it might take time.

06 January, 2018 10:00AM by shirishag75

January 05, 2018

hackergotchi for Steve Kemp

Steve Kemp

More ESP8266 projects, radio and epaper

I finally got the radio-project I've been talking about for the past while working. To recap:

  • I started with an RDA5807M module, but that was too small, and too badly-performing.
  • I moved on to using an Si4703-based integrated "evaluation" board. That was fine for headphones, but little else.
  • I finally got a TEA5767-based integrated "evaluation" board, which works just fine.
    • Although it is missing RDS (the system that lets you pull the name of the station off the transmission).
    • It also has no (digital) volume-control, so you have to adjust the volume physically, like a savage.

The project works well, despite the limitations, so I have a small set of speakers and the radio wired up. I can control the station via my web-browser and have an alarm to make it turn on/off at different times of day - cheating at that by using the software-MUTE facility.

All in all I can say that when it comes to IoT the "S stands for Simplicity" given that I had to buy three different boards to get the damn thing working the way I wanted. That said total cost is in the region of €5, probably about the same price I could pay for a "normal" hand-held radio. Oops.

The writeup is here:

The second project I've been working on recently was controlling a piece of ePaper via an ESP8266 device. This started largely by accident as I discovered you can buy a piece of ePaper (400x300 pixels) for €25 which is just cheap enough that it's worth experimenting with.

I had the intention that I'd display the day's calendar upon it, weather forecast, etc. My initial vision was a dashboard-like view with borders, images, and text. I figured rather than messing around with some fancy code-based grid-layout I should instead just generate a single JPG/PNG on a remote host, then program the board to download and display it.

Unfortunately the ESP8266 device I'm using has so little RAM that decoding and displaying a JPG/PNG from a remote URL is hard. Too hard. In the end I had to drop the use of SSL, and simplify the problem to get a working solution.

I wrote a perl script (what else?) to take an arbitrary JPG/PNG/image of the correct dimensions and process it row-by-row. It would keep track of the number of contiguous white/black pixels and output a series of "draw Lines" statements.

The ESP8266 downloads this simple data-file, and draws each line one at a time, ultimately displaying the image whilst keeping some memory free.
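
For reference, the same row-by-row run-length idea looks roughly like this in Python with Pillow (the real script is perl, and the output statement format here is invented purely for illustration):

from PIL import Image

img = Image.open('dashboard.png').convert('1')   # 1-bit black/white
w, h = img.size                                  # expected 400x300
pixels = img.load()

for y in range(h):
    x = 0
    while x < w:
        colour = pixels[x, y]
        run = x
        while run < w and pixels[run, y] == colour:
            run += 1
        # Emit one horizontal line per run of identical pixels.
        print('LINE %d %d %d %d %d' % (x, y, run - 1, y, 0 if colour else 1))
        x = run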

I documented the hell out of my setup here:

And here is a sample image being displayed:

05 January, 2018 10:00PM

hackergotchi for Joey Hess

Joey Hess

Spectre question

Could ASLR be used to prevent the Spectre attack?

The way Spectre mitigations are shaping up, it's going to require modification of every program that deals with sensitive data, inserting serialization instructions in the right places. Or programs can be compiled with all branch prediction disabled, with more of a speed hit.

Either way, that's going to be piecemeal and error-prone. We'll be stuck with a new class of vulnerabilities for a long time. Perhaps good news for the security industry, but it's going to become as tediously bad as buffer overflows for the rest of us.

Also, so far the mitigations being developed for Spectre only cover branching, but the Spectre paper also suggests the attack can be used in the absence of branches to eg determine the contents of registers, as long as the attacker knows the address of suitable instructions to leverage.

So I wonder if a broader mitigation can be developed, if only to provide some defense in depth from Spectre.

The paper on Spectre has this to say about ASLR:

The algorithm that tracks and matches jump histories appears to use only the low bits of the virtual address (which are further reduced by simple hash function). As a result, an adversary does not need to be able to even execute code at any of the memory addresses containing the victim’s branch instruction. ASLR can also be compensated, since upper bits are ignored and bits 15..0 do not appear to be randomized with ASLR in Win32 or Win64.

I think ASLR on Linux also doesn't currently randomize those bits of the address; cat /proc/self/maps shows the lower 3 bytes always 0. But, could it be made to randomize more of the address, and so guard against Spectre?
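
One crude way to see which bits actually change on a given machine is to exec a short-lived process repeatedly, note where libc gets mapped each time, and OR together the differences (a quick sketch, not a rigorous measurement):

import subprocess

def libc_base():
    # /proc/self/maps, as seen by a freshly exec'd cat process.
    maps = subprocess.run(['cat', '/proc/self/maps'],
                          stdout=subprocess.PIPE).stdout.decode()
    for line in maps.splitlines():
        if 'libc' in line:
            return int(line.split('-')[0], 16)

samples = [libc_base() for _ in range(50)]
varying = 0
for s in samples[1:]:
    varying |= s ^ samples[0]
# Bits that stay zero in this mask were never randomized across the runs.
print('bits that varied: %#x' % varying)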

(Then we'd only have to get all binaries built PIE, which is easily audited for and distributions are well on their way toward already. Indeed, most executables on my laptop are already built PIE, with the notable exception of /bin/bash and all haskell programs.)

I don't have an answer to this question, but am very curious about thoughts from anyone who knows about ASLR and low level CPU details. I have not been able to find any discussion of it aside from the above part of the Spectre paper.

05 January, 2018 08:17PM

hackergotchi for Michal &#268;iha&#345;

Michal Čihař

Gammu release day

I've just released new versions of Gammu, python-gammu and Wammu. These are mostly bugfix releases (see individual changelogs for more details), but they bring back Wammu for Windows.

This is an especially big step for Wammu, as the existing Windows binary was almost five years old. Another problem with that binary was that it was cross-compiled on Linux and did not always behave correctly. The current binaries are automatically produced on AppVeyor during our continuous integration.

Another important change for Windows users is addition of wheel packages to python-gammu, so all you need to use it on Windows is to pip install python-gammu.
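
For anyone trying the new wheels, a minimal smoke test looks something like this (assuming a phone is already set up in your Gammu configuration file):

import gammu

sm = gammu.StateMachine()
sm.ReadConfig()                  # reads the default gammu configuration
sm.Init()
print('Manufacturer:', sm.GetManufacturer())
print('IMEI:', sm.GetIMEI())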

Of course the updated packages are also on their way to Debian and to Ubuntu PPA.

Filed under: Debian English Gammu python-gammu Wammu

05 January, 2018 05:00PM

January 04, 2018

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Loose threads about Spectre mitigation

KPTI patches are out from most vendors now. If you haven't applied them yet, you should; even my phone updated today (the benefits of running a Nexus phone, I guess). This makes Meltdown essentially like any other localroot security hole (ie., easy to mitigate if you just update, although of course a lot won't do that), except for the annoying slowdown of some workloads. Sorry, that's life.

Spectre is more difficult. There are two variants; one abuses indirect jumps and one normal branches. There's no good mitigation for the last one that I know of at this point, so I won't talk about it, but it's also probably the hardest to pull off. But the indirect one is more interesting, as there are mitigations popping up. Here's my understanding of the situation, based on random browsing of LKML (anything in here may be wrong, so draw your own conclusions at the end):

Intel has issued microcode patches that they claim will make most of their newer CPUs (90% of the ones shipped in the last years) “immune from Spectre and Meltdown”. The cornerstone seems to be a new feature called IBRS, which allows you to flush the branch predictor or possibly turn it off entirely (it's not entirely clear to me which one it is). There's also something called IBPB (indirect branch prediction barrier), which seems to be most useful for AMD processors (which don't support IBRS at the moment, except some do sort-of anyway, and also Intel supports it), and it works somewhat differently from IBRS, so I don't know much about it.

Anyway, using IBRS on every entry to the kernel is really, really slow, and if the kernel entry is just to serve a process and then go back to the same process (ie., no context switch), the kernel only needs to protect itself, not all other processes. Thus, it can make use of a new technique called a retpoline, which is essentially an indirect call that's written in a funny way such that the CPU doesn't try to branch-predict it (and thus is immune to this kind of Spectre attack). Similarly, LLVM has a retpoline patch set for general code (the code in the kernel seems to be hand-made, mostly), and GCC is getting a similar feature.

Retpolines work well for the kernel because it has very few indirect calls in hot paths (and in the common case of no context switch, it won't have to enable IBRS), so the overhead is supposedly only about 2%. If you're doing anything with lots of indirect calls (say, C++ without aggressive devirtualization from LTO or PGO), retpolines will be much more expensive—I mean, the predictor is there for a reason. And if you're on Skylake, retpolines are predicted anyway, so it's back to IBRS. Similarly, IBPB can be enabled on context switch or on every kernel entry, depending on your level of paranoia (and different combinations will seemingly perform better or worse on different workloads, except IBPB on every kernel entry will disable IBRS).

None of this seems to affect hyperthreading, but I guess scheduling stuff with different privileges on the same hyperthread pair is dangerous anyway, so don't do that (?).

Finally, I see the word SPEC_CTRL being thrown around, but supposedly it's just the mechanism for enabling or disabling IBRS. I don't know how this interacts with the fact that the kernel typically boots (and thus detects whether SPEC_CTRL is available or not) before the microcode is loaded (which typically happens in initramfs, or otherwise early in the boot sequence).

In any case, the IBRS patch set is here, I believe, and it also ships in RHEL/CentOS kernels (although you don't get it broken out into patches unless you're a paying customer).

04 January, 2018 10:46PM