August 01, 2022

hackergotchi for Bastian Venthur

Bastian Venthur

Keychron keyboards fixed on Linux

Last year, I wrote about how to get my buggy Keychron C1 keyboard working properly on Linux by setting a kernel module parameter. Afterwards, I contacted Hans de Goede since he was the last person to contribute a major patch to the relevant kernel module. After some debugging, it turned out that the Keychron keyboards are indeed misbehaving when set to Windows mode. Almost a year later, Bryan Cain provided a patch fixing the behavior, which has now been merged into the Linux kernel in 5.19.

Thank you, Hans and Bryan!

01 August, 2022 08:00PM by Bastian Venthur

hackergotchi for Sergio Talens-Oliag

Sergio Talens-Oliag

Using Git Server Hooks on GitLab CE to Validate Tags

I’ve been a gitlab-ce user for a long time; in fact, I’ve set it up at three of the last four companies I’ve worked for (initially I installed it using the omnibus packages on a debian server, but at the last two places I moved to the docker based installation, as it is easy to maintain and we don’t need a big installation since the teams using it are small).

At the company I work for now (kyso) we are using it to host all our internal repositories and to do all the CI/CD work (the automatic deployments are triggered by web hooks in some cases, but the rest is all done using gitlab-ci).

The majority of our projects use nodejs as their programming language, and we have automated the publication of npm packages to our gitlab instance’s npm registry and even to the npmjs registry.

To publish the packages we have added rules to the gitlab-ci configuration of the relevant repositories, and we publish them when a tag is created.
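
As a rough illustration (the job name and image here are hypothetical, not our actual configuration), a tag-only publish job in .gitlab-ci.yml can look like this:

```yaml
# Hypothetical minimal publish job: runs only when a tag is created
publish-npm:
  stage: deploy
  image: node:18
  rules:
    - if: $CI_COMMIT_TAG
  script:
    - npm ci
    - npm publish
```

The `rules` entry makes the job exist only on tag pipelines, which is what ties publication to tag creation.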

As we are lazy by definition, I configured the system to use the tag as the package version; I checked whether the contents of package.json were in sync with the expected version and, if they were not, updated the file and did a force push of the tag with the updated file, using the following code in the script that publishes the package:

# Update package version & add it to the .build-args
INITIAL_PACKAGE_VERSION="$(npm pkg get version|tr -d '"')"
npm version --allow-same-version --no-commit-hooks --no-git-tag-version \
  "$CI_COMMIT_TAG"
UPDATED_PACKAGE_VERSION="$(npm pkg get version|tr -d '"')"
echo "UPDATED_PACKAGE_VERSION=$UPDATED_PACKAGE_VERSION" >> .build-args
# Update tag if the version was updated or abort
if [ "$INITIAL_PACKAGE_VERSION" != "$UPDATED_PACKAGE_VERSION" ]; then
  if [ -n "$CI_GIT_USER" ] && [ -n "$CI_GIT_TOKEN" ]; then
    git commit -m "Updated version from tag $CI_COMMIT_TAG" package.json
    git tag -f "$CI_COMMIT_TAG" -m "Updated version from tag"
    git push -f -o ci.skip origin "$CI_COMMIT_TAG"
  else
    echo "!!! ERROR !!!"
    echo "The updated tag could not be uploaded."
    echo "Set CI_GIT_USER and CI_GIT_TOKEN or fix the 'package.json' file"
    echo "!!! ERROR !!!"
    exit 1
  fi
fi

This feels a little dirty (we are leaving commits on the tag but not updating the original branch); I thought about trying to find the branch using the tag and update it, but I dropped the idea pretty soon as there were multiple issues to consider (i.e. we can have tags pointing to commits present in multiple branches, and even if a tag only points to one branch it does not have to be the HEAD of that branch, making the inclusion difficult).

In any case this system was working, so we left it alone until we started to publish to the npmjs registry; as we are using a token to push the packages that we don’t want all developers to have access to (right now it would not matter, but when the team grows it will), I started to use gitlab protected branches on the projects that need it and adjusted the .npmrc file using protected variables.

The problem then was that we could no longer do a standard force push to a protected branch (that is the main point of the protected branches feature) unless we used the gitlab api, so the tags with the wrong version started to fail.

As the way things were being done seemed dirty anyway, I thought that the best way of fixing things was to forbid users from pushing a tag that includes a version that does not match the package.json version.

After thinking about it we decided to use githooks on the gitlab server for the repositories that need it; as we are only interested in tags we use the update hook, which is executed once for each ref to be updated and takes three parameters:

  • the name of the ref being updated,
  • the old object name stored in the ref,
  • and the new object name to be stored in the ref.

To install our hook we found the gitaly relative path of each repo and located it on the server filesystem (as I said we are using docker and gitlab’s data directory is /srv/gitlab/data, so the path to the repo has the form /srv/gitlab/data/git-data/repositories/@hashed/xx/yy/hash.git).

Once we have the directory we need to:

  • create a custom_hooks sub directory inside it,
  • add the update script (as we only need one script we used that instead of creating an update.d directory, the good thing is that this will also work with a standard git server renaming the base directory to hooks instead of custom_hooks),
  • make it executable, and
  • change the directory and file ownership to make sure they can be read and executed from the gitlab container.

On a console session:

$ cd /srv/gitlab/data/git-data/repositories/@hashed/xx/yy/hash.git
$ mkdir custom_hooks
$ edit_or_copy custom_hooks/update
$ chmod 0755 custom_hooks/update
$ chown --reference=. -R custom_hooks

The update script we are using is as follows:

#!/bin/sh

set -e

# kyso update hook
#
# Right now it checks version.txt or package.json versions against the tag name
# (it supports a 'v' prefix on the tag)

# Arguments
ref_name="$1"
old_rev="$2"
new_rev="$3"

# Initial test
if [ -z "$ref_name" ] ||  [ -z "$old_rev" ] || [ -z "$new_rev" ]; then
  echo "usage: $0 <ref> <oldrev> <newrev>" >&2
  exit 1
fi

# Get the tag short name
tag_name="${ref_name##refs/tags/}"

# Exit if the update is not for a tag
if [ "$tag_name" = "$ref_name" ]; then
  exit 0
fi

# Get the null rev value (string of zeros)
zero=$(git hash-object --stdin </dev/null | tr '0-9a-f' '0')

# Get if the tag is new or not
if [ "$old_rev" = "$zero" ]; then
  new_tag="true"
else
  new_tag="false"
fi

# Get the type of revision:
# - delete: if the new_rev is zero
# - tag: annotated tag (new_rev points to a tag object)
# - commit: un-annotated (lightweight) tag (new_rev points to a commit)
if [ "$new_rev" = "$zero" ]; then
  new_rev_type="delete"
else
  new_rev_type="$(git cat-file -t "$new_rev")"
fi

# Exit if we are deleting a tag (nothing to check here)
if [ "$new_rev_type" = "delete" ]; then
  exit 0
fi

# Check the version against the tag (supports version.txt & package.json)
if git cat-file -e "$new_rev:version.txt" >/dev/null 2>&1; then
  version="$(git cat-file -p "$new_rev:version.txt")"
  if [ "$version" = "$tag_name" ] || [ "$version" = "${tag_name#v}" ]; then
    exit 0
  else
    EMSG="tag '$tag_name' and 'version.txt' contents '$version' don't match"
    echo "GL-HOOK-ERR: $EMSG"
    exit 1
  fi
elif git cat-file -e "$new_rev:package.json" >/dev/null 2>&1; then
  version="$(
    git cat-file -p "$new_rev:package.json" | jsonpath version | tr -d '\[\]"'
  )"
  if [ "$version" = "$tag_name" ] || [ "$version" = "${tag_name#v}" ]; then
    exit 0
  else
    EMSG="tag '$tag_name' and 'package.json' version '$version' don't match"
    echo "GL-HOOK-ERR: $EMSG"
    exit 1
  fi
else
  # No version.txt or package.json file found
  exit 0
fi

Some comments about it:

  • we are only looking at tags: if the ref_name does not have the refs/tags/ prefix the script does an exit 0,
  • although we check whether the tag is new or not, we do not use the value (in gitlab that is handled by the protected tag feature),
  • if we are deleting a tag the script does an exit 0, as we don’t need to check anything in that case,
  • we ignore whether the tag is annotated or not (we set new_rev_type to tag or commit, but we don’t use the value),
  • we test the version.txt file first and, if it does not exist, we check the package.json file; if neither exists we do an exit 0, as there is no version to check against and we allow that on a tag,
  • we add the GL-HOOK-ERR: prefix to the messages to show them on the gitlab web interface (this can be tested by creating a tag from it),
  • to get the version from the package.json file we use the jsonpath binary (installed by the jsonpath ruby gem) because it is available on the gitlab container (initially I used sed to get the value, but a real JSON parser is always a better option).

Once the hook is installed, when a user tries to push a tag to a repository that has a version.txt or package.json file and the tag does not match the version (if version.txt is present it takes precedence), the push fails.

If the tag matches or the files are not present, the tag is added if the user has permission to add it in gitlab (our hook is only executed if the user is allowed to create or update the tag).

01 August, 2022 11:00AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

August.

August. I think I finally understood what's going on in io_uring.

01 August, 2022 12:07AM by Junichi Uekawa

July 31, 2022

hackergotchi for Joachim Breitner

Joachim Breitner

The Via Alpina red trail through Slovenia

This July my girlfriend and I hiked the Slovenian part of the Red Trail of the Via Alpina, from the edge of the Julian Alps to Trieste, and I’d like to share some observations and tips that we would have found useful before our trip.

Our most favorite camp spot

Getting there

As we traveled with complete camping gear and wanted to stay in our tent, we avoided the high alpine parts of the trail and started just where the trail came down from the Alps and entered the Karst. A great way to get there is to take the night train from Zurich or Munich towards Ljubljana, get off at Jesenice, have breakfast, take the local train to Podbrdo and you can start your tour at 9:15am. From there you can reach the trail at Pedrovo Brdo within 1½h.

Finding the way

We did not use any paper maps, and instead relied on the OpenStreetMap data, which is very good, as well as the official(?) GPX tracks on Komoot, which are linked from the official route descriptions. We used OsmAnd.

In general, trails are very well marked (red circle with white center, and frequent signs), but the signs rarely tell you which way the Via Alpina goes, so the GPS was needed.

Sometimes the OpenStreetMap trail and the Komoot trail disagreed on short segments. We sometimes followed one and other times the other.

Variants

We diverged from the trail in a few places:

  • We did not care too much about the horses in Lipica and at least on the map it looked like a longish boringish and sun-exposed detour, so we cut the loop and hiked from Prelože pri Lokvi up onto the peak of the Veliko Gradišče (which unfortunately is too overgrown to provide a good view).

  • When we finally reached the top of Mali Kras and had a view across the bay of Trieste, it seemed silly to walk down to Dolina, and instead we followed the ridge through Socerb, essentially the Alpe Adria Trail.

  • Not really a variant, but after arriving in Muggia, if one has to go to Trieste, the ferry is a probably nicer way to finish a trek than the bus.

Pitching a tent

We used our tent almost every night; only in Idrija did we get a room (and a shower…). It was not trivial to find good camp spots, because most of the trail is on hills with slopes, and the flat spots tend to have houses built on them, but it is certainly possible. Sometimes we hid in the forest, other times we found nice small and freshly mowed meadows within the forest.

Water

Since this is Karst land, there is very little in terms of streams or lakes along the way, which is a pity.

The Idrijca river just south of Idrija was very tempting for a plunge. Unfortunately we passed there early in the day and we wanted to cover some ground first, so we refrained.

As for drinking water, we used the taps at the bathrooms of the various touristic sites, a few (but rare) public fountains, and finally resorted to just ringing random doorbells and asking for water, which always worked.

Paths

A few stages lead you through very pleasant narrow forest paths with views, but not all; on some days you find yourself plodding along wide graveled or even paved forest roads.

Landscape and sights

The view from Nanos is amazing and, with this high peak jutting out over a wide plain, rather unique. It may seem odd that the trail goes up and down that mountain on the same day when it could go around, but it is certainly worth it.

The Karst is mostly a cultivated landscape, with lots of forestry. It is very hilly and green, which is pretty, but some might miss some craggedness. It’s not the high alps, after all, but at least they are in sight half the time.

But the upside is that there are a few sights along the way that are worth visiting, in particular the Franja Partisan Hospital hidden in a very narrow gorge, the Predjama Castle and the Škocjan Caves.

31 July, 2022 09:19AM by Joachim Breitner (mail@joachim-breitner.de)

Russell Coker

Workstations With ECC RAM

The last new PC I bought was a Dell PowerEdge T110II in 2013. That model had been out for a while and I got it for under $2000. Since then the CPI has gone up by about 20% so it’s probably about $2000 in today’s money. Currently Dell has a special on the T150 tower server (the latest replacement for the T110II) which has a G6405T CPU that isn’t even twice as fast as the i3-3220 (3746 vs 2219) in the T110II according to passmark.com (AKA cpubenchmark.net). The special price is $2600. I can’t remember the details of my choices when purchasing the T110II but I recall that CPU speed wasn’t a priority and I wanted a cheap reliable server for storage and for light desktop use. So it seems that the current entry model in the Dell T1xx server line is less than twice as fast as it was in 2013 while costing about 25% more! An option is to spend an extra $989 to get a Xeon E-2378 which delivers a reasonable 18,248 in that benchmark. The upside of a T150 is that it uses buffered DDR4 ECC RAM which is pretty cheap nowadays; you can get 32G for about $120.

For systems sold as workstations (as opposed to T1xx servers that make great workstations but aren’t described as such) Dell has the Precision line. The Precision 3260 “Compact Workstation” currently starts at $1740; it has a fast CPU but takes SO-DIMMs and doesn’t come with ECC RAM. So to use it as a proper workstation you need to discard the RAM and buy DDR5 unbuffered/unregistered ECC SO-DIMMs, which don’t seem to be on sale yet. The Precision 3460 is slightly larger, slightly more expensive, and also takes SO-DIMMs. The Precision 3660 starts at $2550 and takes unbuffered DDR5 ECC RAM, which is available and costs half as much as the SO-DIMM equivalent would cost (if you could even buy it), but the general trend in RAM prices is that unbuffered ECC RAM is more expensive than buffered ECC RAM. The upside of Precision workstations is that the range of CPUs available is significantly faster than for the T150.

The HP web site doesn’t offer prices on their Z workstations and is generally worse than the Dell web site in most ways.

Overall I’m disappointed in the range of workstations available now. As an aside if anyone knows of any other company selling workstations in Australia that support ECC RAM then please let me know.

31 July, 2022 08:27AM by etbe

July 30, 2022

Mike Hommey

Announcing git-cinnabar 0.5.10

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.9?

  • Fixed exceptions during config initialization.
  • Fixed swapped error messages.
  • Fixed correctness issues with bundle chunks with no delta node.
  • This is probably the last 0.5.x release before 0.6.0.

30 July, 2022 09:35PM by glandium

Ian Jackson

chiark’s skip-skip-cross-up-grade

Two weeks ago I upgraded chiark from Debian jessie i386 to bullseye amd64, after nearly 30 years running Debian i386. This went really quite well, in fact!

Background

chiark is my “colo” - a server I run, which lives in a data centre in London. It hosts ~200 users with shell accounts, various websites and mailing lists, moderators for a number of USENET newsgroups, and countless other services. chiark’s internal setup is designed to enable my users to do a maximum number of exciting things with a minimum of intervention from me.

chiark’s OS install dates to 1993, when I installed Debian 0.93R5, the first version of Debian to advertise the ability to be upgraded without reinstalling. I think that makes it one of the oldest Debian installations in existence.

Obviously it’s had several new hardware platforms too. (There was a prior install of Linux on the initial hardware, remnants of which can maybe still be seen in some obscure corners of chiark’s /usr/local.)

chiark’s install is also at the very high end of the installation complexity, and customisation, scale: reinstalling it completely would be an enormous amount of work. And it’s unique.

chiark’s upgrade history

chiark’s last major OS upgrade was to jessie (Debian 8, released in April 2015). That was in 2016. Since then we have been relying on Debian’s excellent security support posture, the Debian LTS and more recently Freexian’s Debian ELTS projects, and some local updates. The use of ELTS - which supports only a subset of packages - was particularly uncomfortable.

Additionally, chiark was installed with 32-bit x86 Linux (Debian i386), since that was what was supported and available at the time. But 32-bit is looking very long in the tooth.

Why do a skip upgrade

So, I wanted to move to the fairly recent stable release - Debian 11 (bullseye), which is just short of a year old. And I wanted to “crossgrade” (as it’s called) to 64-bit.

In the past, I have found I have had greater success by doing “direct” upgrades, skipping intermediate releases, rather than by following the officially-supported path of going via every intermediate release.

Doing a skip upgrade avoids exposure to any packaging bugs which were present only in intermediate release(s). Debian does usually fix bugs, but Debian has many cautious users, so it is not uncommon for bugs to be found after release, and then not be fixed until the next one.

A skip upgrade avoids the need to try to upgrade to already-obsolete releases (which can involve messing about with multiple snapshots from snapshot.debian.org). It is also significantly faster and simpler, which is important not only because it reduces downtime, but also because it removes opportunities (and reduces the time available) for things to go badly.

One downside is that sometimes maintainers aggressively remove compatibility measures for older releases. (And compatibility packages are generally removed quite quickly by even cautious maintainers.) That means that the sysadmin who wants to skip-upgrade needs to do more manual fixing of things that haven’t been dealt with automatically. And occasionally one finds compatibility problems that show up only when mixing very old and very new software, that no-one else has seen.

Crossgrading

Crossgrading is fairly complex and hazardous. It is well supported by the low level tools (eg, dpkg) but the higher-level packaging tools (eg, apt) get very badly confused.

Nowadays the system is so complex that downloading things by hand and manually feeding them to dpkg is impractical, other than as a very occasional last resort.

The approach, generally, has been to set the system up to “want to” be the new architecture, run apt in a download-only mode, and do the package installation manually, with some fixing up and retrying, until the system is coherent enough for apt to work.
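
At a pseudocode level, the general shape is something like the following (this is a sketch only, not the actual recipe; a real crossgrade needs far more care and sequencing):

```
# Sketch only - do not run this on a machine you care about
dpkg --add-architecture amd64        # the system now "wants to" be amd64
apt-get update
apt-get --download-only dist-upgrade # fetch everything, install nothing
# then, iteratively, until the system is coherent enough for apt:
dpkg --install /var/cache/apt/archives/<carefully chosen subset>.deb
dpkg --configure --pending           # fix up and retry as needed
```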

This is the approach I took. (In current releases, there are tools that will help but they are only in recent releases and I wanted to go direct. I also doubted that they would work properly on chiark, since it’s so unusual.)

Peril and planning

Overall, this was a risky strategy to choose. The package dependencies wouldn’t necessarily express all of the sequencing needed. But it still seemed that if I could come up with a working recipe, I could do it.

I restored most of one of chiark’s backups onto a scratch volume on my laptop. With the LVM snapshot tools and chroots, I was able to develop and test a set of scripts that would perform the upgrade. This was a very effective approach: my super-fast laptop, with local caches of the package repositories, was able to do many “edit, test, debug” cycles.

My recipe made heavy use of snapshot.debian.org, to make sure that it wouldn’t rot between testing and implementation.

When I had a working scheme, I told my users about the planned downtime. I warned everyone it might take even 2 or 3 days. I made sure that my access arrangements to the data centre were in place, in case I needed to visit in person. (I have remote serial console and power cycler access.)

Reality - the terrible rescue install

My first task on taking the service down was to check that the emergency rescue installation worked: chiark has an ancient USB stick in the back, which I can boot to from the BIOS. The idea being that many things that go wrong could be repaired from there.

I found that that install was too old to understand chiark’s storage arrangements. mdadm tools gave very strange output. So I needed to upgrade it. After some experiments, I rebooted back into the main install, bringing chiark’s service back online.

I then used the main install of chiark as a kind of meta-rescue-image for the rescue-image. The process of getting the rescue image upgraded (not even to amd64, but just to something not totally ancient) was fraught. Several times I had to rescue it by copying files in from the main install outside. And, the rescue install was on a truly ancient 2G USB stick which was terribly terribly slow, and also very small.

I hadn’t done any significant planning for this subtask, because it was low-risk: there was little way to break the main install. Due to all these adverse factors, sorting out the rescue image took five hours.

If I had known how long it would take, at the beginning, I would have skipped it. 5 hours is more than it would have taken to go to London and fix something in person.

Reality - the actual core upgrade

I was able to start the actual upgrade in the mid-afternoon. I meticulously checked and executed the steps from my plan.

The terrifying scripts which sequenced the critical package updates ran flawlessly. Within an hour or so I had a system which was running bullseye amd64, albeit with many important packages still missing or unconfigured.

So I didn’t need the rescue image after all, nor to go to the datacentre.

Fixing all the things

Then I had to deal with all the inevitable fallout from an upgrade.

Notable incidents:

exim4 has a new tainting system

This is to try to help the sysadmin avoid writing unsafe string interpolations. (“Little Bobby Tables.”) This was done by Exim upstream in a great hurry as part of a security response process.

The new checks meant that the mail configuration did not work at all. I had to turn off the taint check completely. I’m fairly confident that this is correct, because I am hyper-aware of quoting issues and all of my configuration is written to avoid the problems that tainting is supposed to avoid.

One particular annoyance is that the approach taken for sqlite lookups makes it totally impossible to use more than one sqlite database. I think the sqlite quoting operator which one uses to interpolate values produces tainted output? I need to investigate this properly.

LVM now ignores PVs which are directly contained within LVs by default

chiark has LVM-on-RAID-on-LVM. This generally works really well.

However, there was one edge case where I ended up without the intermediate RAID layer. The result is LVM-on-LVM.

But recent versions of the LVM tools do not look at PVs inside LVs, by default. This is to help you avoid corrupting the state of any VMs you have on your system. I didn’t know that at the time, though. All I knew was that LVM was claiming my PV was “unusable”, and wouldn’t explain why.

I was about to start on a thorough reading of the 15,000-word essay that is the commentary in the default /etc/lvm/lvm.conf to try to see if anything was relevant, when I received a helpful tipoff on IRC pointing me to the scan_lvs option.
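
For reference, the relevant knob (assuming a reasonably recent lvm2) lives in the devices section of /etc/lvm/lvm.conf:

```
# /etc/lvm/lvm.conf (excerpt)
devices {
    # Scan LVM LVs for PVs inside them (default is 0, i.e. ignore them)
    scan_lvs = 1
}
```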

I need to file a bug asking for the LVM tools to explain why they have declared a PV unusable.

apache2’s default config no longer read one of my config files

I had to do a merge (of my changes vs the maintainers’ changes) for /etc/apache2/apache2.conf. When doing this merge I failed to notice that the file /etc/apache2/conf.d/httpd.conf was no longer included by default. My merge dropped that line. There were some important things in there, and until I found this the webserver was broken.

dpkg --skip-same-version DTWT during a crossgrade

(This is not a “fix all the things” - I found it when developing my upgrade process.)

When doing a crossgrade, one often wants to say to dpkg “install all these things, but don’t reinstall things that have already been done”. That’s what --skip-same-version is for.

However, the logic had not been updated as part of the work to support multiarch, so it was wrong. I prepared a patched version of dpkg, and inserted it in the appropriate point in my prepared crossgrade plan.

The patch is now filed as bug #1014476 against dpkg upstream

Mailman

Mailman is no longer in bullseye. It’s only available in the previous release, buster.

bullseye has Mailman 3 which is a totally different system - requiring basically, a completely new install and configuration. To even preserve existing archive links (a very important requirement) is decidedly nontrivial.

I decided to punt on this whole situation. Currently chiark is running buster’s version of Mailman. I will have to deal with this at some point and I’m not looking forward to it.

Python

Of course, that Mailman is Python 2. The Python project’s extremely badly handled transition includes a recommendation to change the meaning of #!/usr/bin/python from Python 2, to Python 3.

But Python 3 is a new language, barely compatible with Python 2 even in the most recent iterations of both, and it is usual to need to coinstall them.

Happily Debian have provided the python-is-python2 package to make things work sensibly, albeit with unpleasant imprecations in the package summary description.

USENET news

Oh my god. INN uses many non-portable data formats, which just depend on your C types. And there are complicated daemons, statically linked libraries which cache on-disk data, and much to go wrong.

I had numerous problems with this, and several outages and malfunctions. I may write about that on a future occasion.

(edited 2022-07-20 11:36 +01:00 and 2022-07-30 12:28+01:00 to fix typos)



30 July, 2022 11:27AM

July 29, 2022

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (May and June 2022)

The following contributors got their Debian Developer accounts in the last two months:

  • Geoffroy Berret (kaliko)
  • Arnaud Ferraris (aferraris)

The following contributors were added as Debian Maintainers in the last two months:

  • Alec Leanas
  • Christopher Michael Obbard
  • Lance Lin
  • Stefan Kropp
  • Matteo Bini
  • Tino Didriksen

Congratulations!

29 July, 2022 02:00PM by Jean-Pierre Giraud

Reproducible Builds (diffoscope)

diffoscope 220 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 220. This version includes the following changes:

* Support Haskell 9.x series files and update the test files to match. Thanks
  to Scott Talbert for the relevant info about the new format.
  (Closes: reproducible-builds/diffoscope#309)
* Fix a regression introduced in diffoscope version 207 where diffoscope
  would crash if one directory contained a directory that wasn't in the
  other. Thanks to Alderico Gallo for the report and the testcase.
  (Closes: reproducible-builds/diffoscope#310)

You can find out more by visiting the project homepage.

29 July, 2022 12:00AM

July 28, 2022

hackergotchi for Matthew Garrett

Matthew Garrett

UEFI rootkits and UEFI secure boot

Kaspersky describes a UEFI-implant used to attack Windows systems. Based on it appearing to require patching of the system firmware image, they hypothesise that it's propagated by manually dumping the contents of the system flash, modifying it, and then reflashing it back to the board. This probably requires physical access to the board, so it's not especially terrifying - if you're in a situation where someone's sufficiently enthusiastic about targeting you that they're reflashing your computer by hand, it's likely that you're going to have a bad time regardless.

But let's think about why this is in the firmware at all. Sophos previously discussed an implant that's sufficiently similar in some technical details that Kaspersky suggest they may be related to some degree. One notable difference is that the MyKings implant described by Sophos installs itself into the boot block of legacy MBR partitioned disks. This code will only be executed on old-style BIOS systems (or UEFI systems booting in BIOS compatibility mode), and they have no support for code signatures, so there's no need to be especially clever. Run malicious code in the boot block, patch the next stage loader, follow that chain all the way up to the kernel. Simple.

One notable distinction here is that the MBR boot block approach won't be persistent - if you reinstall the OS, the MBR will be rewritten[1] and the infection is gone. UEFI doesn't really change much here - if you reinstall Windows a new copy of the bootloader will be written out and the UEFI boot variables (that tell the firmware which bootloader to execute) will be updated to point at that. The implant may still be on disk somewhere, but it won't be run.

But there's a way to avoid this. UEFI supports loading firmware-level drivers from disk. If, rather than providing a backdoored bootloader, the implant takes the form of a UEFI driver, the attacker can set a different set of variables that tell the firmware to load that driver at boot time, before running the bootloader. OS reinstalls won't modify these variables, which means the implant will survive and can reinfect the new OS install. The only way to get rid of the implant is to either reformat the drive entirely (which most OS installers won't do by default) or replace the drive before installation.

This is much easier than patching the system firmware, and achieves similar outcomes - the number of infected users who are going to wipe their drives to reinstall is fairly low, and the kernel could be patched to hide the presence of the implant on the filesystem[2]. It's possible that the goal was to make identification as hard as possible, but there's a simpler argument here - if the firmware has UEFI Secure Boot enabled, the firmware will refuse to load such a driver, and the implant won't work. You could certainly just patch the firmware to disable secure boot and lie about it, but if you're at the point of patching the firmware anyway you may as well just do the extra work of installing your implant there.

I think there's a reasonable argument that the existence of firmware-level rootkits suggests that UEFI Secure Boot is doing its job and is pushing attackers into lower levels of the stack in order to obtain the same outcomes. Technologies like Intel's Boot Guard may (in their current form) tend to block user choice, but in theory should be effective in blocking attacks of this form and making things even harder for attackers. It should already be impossible to perform attacks like the one Kaspersky describes on more modern hardware (the system should identify that the firmware has been tampered with and fail to boot), which pushes things even further - attackers will have to take advantage of vulnerabilities in the specific firmware they're targeting. This obviously means there's an incentive to find more firmware vulnerabilities, which means the ability to apply security updates for system firmware as easily as security updates for OS components is vital (hint hint if your system firmware updates aren't available via LVFS you're probably doing it wrong).

We've known that UEFI rootkits have existed for a while (Hacking Team had one in 2015), but it's interesting to see a fairly widespread one out in the wild. Protecting against this kind of attack involves securing the entire boot chain, including the firmware itself. The industry has clearly been making progress in this respect, and it'll be interesting to see whether such attacks become more common (because Secure Boot works but firmware security is bad) or not.

[1] As we all remember from Windows installs overwriting Linux bootloaders
[2] Although this does run the risk of an infected user booting another OS instead, and being able to see the implant


28 July, 2022 10:19PM

Dominique Dumont

How I investigated connection hogs on Kubernetes

Hi

My name is Dominique Dumont, DevOps freelance in Grenoble, France.

My goal is to share my experience regarding a production issue that occurred last week, where my client complained that the application was very slow and sometimes showed 5xx errors. The production service is hosted on a Kubernetes cluster on Azure and uses a MongoDB database on ScaleGrid.

I reproduced the issue on my side and found that the API calls were randomly failing due to timeouts on the server side.

The server logs showed some MongoDB disconnections and reconnections, and some time-outs on MongoDB connections, but did not give any clue as to why some connections to the MongoDB server were failing.

Since there was no clue in the cluster logs, I looked at ScaleGrid monitoring. There were about 2500 connections to MongoDB (see graph: 2022-07-19-scalegrid-connection-leak.png). That seemed quite a lot given the low traffic at that time, but not necessarily a problem.

Then I went to the Azure console, and got the first hint about the origin of the problem: the SNAT ports were exhausted on some nodes of the cluster (see screenshot: 2022-07-28_no-more-free-snat.png).

SNAT ports are involved in connections from the cluster to the outside world, i.e. to our MongoDB server, and are quite limited: only 1024 SNAT ports are available per node. This was consistent with the number of used connections on MongoDB.

OK, then the number of used connections on MongoDB was a real problem.

The next question was: which pods, and how many connections?

First I had to filter out the pods that did not use MongoDB. Fortunately, all our pods have labels so I could list all pods using MongoDB:

$ kubectl -n prod get pods -l db=mongo | wc -l
236

Hmm, still quite a lot.

The next problem was to check which pods used too many MongoDB connections. Unfortunately, the logs mentioned when a connection to MongoDB was opened, but gave no clue about how many were in use.

Netstat is not installed on the pods, and cannot be installed since the pods are not running as root (which is a good idea for security reasons).

Then my Debian Developer experience kicked in, and I remembered that the /proc file system on Linux gives a lot of information on consumed kernel resources, including resources consumed by each process.

The trick is to know the PID of the process using the connections.

In our case, the Dockerfiles are written so that the main NodeJS process of a pod has PID 1, so the command to list the connections of a pod is:

$ kubectl -n prod exec redacted-pod-name-69875496f8-8bj4f -- cat /proc/1/net/tcp
  sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode                                                     
   0: AC00F00A:C9FA C2906714:6989 01 00000000:00000000 02:00000DA9 00000000  1001        0 376439162 2 0000000000000000 21 4 0 10 -1                 
   1: AC00F00A:CA00 C2906714:6989 01 00000000:00000000 02:00000E76 00000000  1001        0 376439811 2 0000000000000000 21 4 0 10 -1                 
   2: AC00F00A:8ED0 C2906714:6989 01 00000000:00000000 02:000004DA 00000000  1001        0 445806350 2 0000000000000000 21 4 30 10 -1                
   3: AC00F00A:CA02 C2906714:6989 01 00000000:00000000 02:000000DD 00000000  1001        0 376439812 2 0000000000000000 21 4 0 10 -1                 
   4: AC00F00A:C9FE C2906714:6989 01 00000000:00000000 02:00000DA9 00000000  1001        0 376439810 2 0000000000000000 21 4 0 10 -1                 
   5: AC00F00A:8760 C2906714:6989 01 00000000:00000000 02:00000810 00000000  1001        0 375803096 2 0000000000000000 21 4 0 10 -1                 
   6: AC00F00A:C9FC C2906714:6989 01 00000000:00000000 02:00000DA9 00000000  1001        0 376439809 2 0000000000000000 21 4 0 10 -1                 
   7: AC00F00A:C56C C2906714:6989 01 00000000:00000000 02:00000DA9 00000000  1001        0 376167298 2 0000000000000000 21 4 0 10 -1                 
   8: AC00F00A:883C C2906714:6989 01 00000000:00000000 02:00000734 00000000  1001        0 375823415 2 0000000000000000 21 4 30 10 -1 

OK, that’s less appealing than netstat output. The trick is that rem_address and its port are expressed in hexadecimal. A quick calculation confirms that port 0x6989 is indeed port 27017, which is the listening port of the MongoDB server.
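For instance, the remote endpoint of the first connection above (C2906714:6989) can be decoded with a small bash helper. This is an illustration written for this post, not part of the original workflow:

```shell
# Decode a /proc/net/tcp address field like "C2906714:6989" into a
# human-readable IP:port. The four IP bytes are printed by the kernel
# in host byte order, i.e. reversed on little-endian machines like x86.
decode_remote() {
  hexip=${1%%:*}
  hexport=${1##*:}
  printf '%d.%d.%d.%d:%d\n' \
    "0x${hexip:6:2}" "0x${hexip:4:2}" "0x${hexip:2:2}" "0x${hexip:0:2}" \
    "0x${hexport}"
}

decode_remote C2906714:6989   # prints 20.103.144.194:27017
```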

So the number of opened MongoDB connections is given by:

$ kubectl -n prod exec redacted-pod-name-69875496f8-8bj4f -- cat /proc/1/net/tcp | grep :6989 | wc -l
9

What’s next?

The ideal solution would be to fix the NodeJS code to handle connection termination correctly, but that would have taken too long to develop.

So I’ve written a small Perl script to:

  • list the pods using MongoDB with kubectl -n prod get pods -l db=mongo
  • find the pods using more than 10 connections with the kubectl exec command shown above
  • compute the deployment name of these pods (which was possible given the naming convention used with our pods and deployments)
  • restart the deployment of these pods with a kubectl rollout restart deployment command

Why restart a deployment instead of simply deleting the gluttonous pods? I wanted to avoid downtime if all pods of a deployment were to be killed. There’s no downtime when applying the rollout restart command on deployments.

This script is now run regularly until the connection issue is fixed for good in the NodeJS code. Thanks to this script, there’s no need to rush a code modification.
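For illustration, the same logic can be sketched in shell (the original was a Perl script; the namespace, label, and pod naming scheme below come from the post, everything else is an assumption):

```shell
# Sketch of the periodic cleanup loop, assuming the usual
# <deployment>-<replicaset-hash>-<pod-hash> pod naming scheme.
# This is not the author's actual Perl script.

deployment_of() {
  # Strip the "-<replicaset-hash>-<pod-hash>" suffix from a pod name.
  echo "$1" | sed -E 's/-[a-z0-9]+-[a-z0-9]{5}$//'
}

restart_gluttonous_deployments() {
  for pod in $(kubectl -n prod get pods -l db=mongo -o name); do
    pod=${pod#pod/}
    # Count established connections to MongoDB (port 27017 = 0x6989).
    count=$(kubectl -n prod exec "$pod" -- cat /proc/1/net/tcp | grep -c ':6989')
    if [ "$count" -gt 10 ]; then
      kubectl -n prod rollout restart deployment "$(deployment_of "$pod")"
    fi
  done
}

deployment_of redacted-pod-name-69875496f8-8bj4f   # prints redacted-pod-name
```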

All in all, working around this connection issue was made somewhat easier thanks to:

  • the monitoring tools provided by the hosting services.
  • a good knowledge of Linux internals
  • consistent labels on our pods
  • the naming conventions used for our kubernetes artifacts

28 July, 2022 12:10PM by dod

July 27, 2022

Vincent Bernat

ClickHouse SF Bay Area Meetup: Akvorado

Here are the slides I presented for a ClickHouse SF Bay Area Meetup in July 2022, hosted by Altinity. They are about Akvorado, a network flow collector and visualizer, and notably on how it relies on ClickHouse, a column-oriented database.

The meetup was recorded and is available on YouTube. Here is the part relevant to my presentation, with subtitles:1

I got a few questions about how to get information from the higher layers, like HTTP. As my use case for Akvorado was at the network edge, my answers were mostly negative. However, as sFlow is extensible, when collecting flows from Linux servers instead, you could embed additional data and they could be exported as well.

I also got a question about doing aggregation in a single table. ClickHouse can automatically aggregate data using TTL. My answer for not doing that was only partial. There is another reason: the retention periods of the various tables may overlap. For example, the main table keeps data for 15 days, but even within these 15 days, if I run a query on a 12-hour window, it is faster to use the flows_1m0s aggregated table, unless I request something about ports and IP addresses.


  1. To generate the subtitles, I used Amazon Transcribe, the speech-to-text solution from Amazon AWS. Unfortunately, there is no en-FR language available, which would have been useful for my terrible accent. While the subtitles were 100% accurate when the host, Robert Hodge from Altinity, was speaking, the success rate on my talk was quite a bit lower. I had to rewrite almost all the sentences. However, using speech-to-text is still useful to get the timings, as doing those manually also requires a lot of work. ↩︎

27 July, 2022 09:00PM by Vincent Bernat

July 26, 2022

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, June 2022

A Debian LTS logo

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

No major updates on running projects.
Two projects (1, 2) are in the pipeline now.
The Tryton project is in a review phase. The Gradle project is still a work in progress.

In June, we put aside 2254 EUR to fund Debian projects.

We’re looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In June, 15 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 14.00h (out of 14.00h assigned).
  • Andreas Rönnquist did 14.50h (out of 14.50h assigned and 10.50h from previous period, thus carrying over 10.50h to the next month).
  • Anton Gladky did 16.00h (out of 16.00h assigned).
  • Ben Hutchings did 16.00h (out of 0.00h assigned and 16.00h from previous period).
  • Chris Lamb did 18.00h (out of 18.00h assigned).
  • Dominik George did 1.83h (out of 6.00h assigned and 18.00h from previous period, thus carrying over 22.17h to the next month).
  • Emilio Pozuelo Monfort did 30.25h (out of 9.25h assigned and 21.00h from previous period).
  • Enrico Zini did 8.00h (out of 9.50h assigned and 6.50h from previous period, thus carrying over 8.00h to the next month).
  • Markus Koschany did 30.25h (out of 30.25h assigned).
  • Ola Lundqvist did nothing (out of 12.00 available hours, thus carrying them over to the next month).
  • Roberto C. Sánchez did 27.50h (out of 11.75h assigned and 18.50h from previous period, thus carrying over 2.75h to the next month).
  • Stefano Rivera did 8.00h (out of 30.25h assigned, thus carrying over 20.75h to the next month).
  • Sylvain Beucler did 30.25h (out of 13.75h assigned and 16.50h from previous period).
  • Thorsten Alteholz did 30.25h (out of 30.25h assigned).
  • Utkarsh Gupta did not report back about their work so we assume they did nothing (out of 30.25 available hours, thus carrying them over to the next month).

Evolution of the situation

In June we released 27 DLAs.

This is a special month, where we have two releases (stretch and jessie) in ELTS and no release in LTS. Buster is still handled by the security team and will probably be handed over to the LTS team at the beginning of August. During this month we are updating the infrastructure and documentation, and improving our internal processes to switch to a new release.
Many developers have just returned from DebConf22, held in Prizren, Kosovo! Many (E)LTS members could meet face-to-face and discuss technical and social topics! An LTS BoF also took place, where the project was introduced (link to video).

Thanks to our sponsors

Sponsors that joined recently are in bold. We are pleased to welcome Alter Way, whose support of Debian is publicly acknowledged at the highest level; see this French quote of Alter Way's CEO.

26 July, 2022 08:38AM by Raphaël Hertzog

Michael Ablassmeier

Added remote capability to virtnbdbackup

Latest virtnbdbackup version now supports backing up remote libvirt hosts, too. No installation on the hypervisor required anymore:

virtnbdbackup -U qemu+ssh://usr@hypervisor/system -d vm1 -o /backup/vm1

Same applies for restore operations, other enhancements are:

  • New backup mode auto which allows easy backup rotation.
  • Option to freeze only specific filesystems within backed up domain.
  • Remote backup via a dedicated network: use --nbd-ip to bind the remote NBD service to a specific interface.
  • If the virtual machine requires additional files, like a specific UEFI/kernel image, these are saved via SFTP from the remote host, too.
  • Restore operation can now adjust domain config accordingly (and redefine it if desired).

Next up: add TLS support for remote NBD connections.

26 July, 2022 12:00AM

July 25, 2022

hackergotchi for Bits from Debian

Bits from Debian

DebConf22 closes in Prizren and DebConf23 dates announced

DebConf22 group photo

On Sunday 24 July 2022, the annual Debian Developers and Contributors Conference came to a close. Hosting more than 210 attendees from 38 different countries over a combined 91 event talks, discussion sessions, Birds of a Feather (BoF) gatherings, workshops, and activities, DebConf22 was a large success.

The conference was preceded by the annual DebCamp held 10 July to 16 July which focused on individual work and team sprints for in-person collaboration towards developing Debian. In particular, this year there have been sprints to advance development of Mobian/Debian on mobile, reproducible builds and Python in Debian, and a BootCamp for newcomers, to get introduced to Debian and have some hands-on experience with using it and contributing to the community.

The actual Debian Developers Conference started on Sunday 17 July 2022. Together with activities such as the traditional 'Bits from the DPL' talk, the continuous key-signing party, lightning talks and the announcement of next year's DebConf (DebConf23 in Kochi, India), there were several sessions related to programming language teams such as Python, Perl and Ruby, as well as news updates on several projects and internal Debian teams, discussion sessions (BoFs) from many technical teams (Long Term Support, Android tools, Debian Derivatives, Debian Installer and Images team, Debian Science...) and local communities (Debian Brasil, Debian India, the Debian Local Teams), along with many other events of interest regarding Debian and free software.

The schedule was updated each day with planned and ad-hoc activities introduced by attendees over the course of the entire conference. Several activities that couldn't be organized in past years due to the COVID pandemic returned to the conference's schedule: a job fair, open-mic and poetry night, the traditional Cheese and Wine party, the group photos and the Day Trip.

For those who were not able to attend, most of the talks and sessions were recorded and live-streamed, with videos made available through the Debian meetings archive website. Almost all of the sessions facilitated remote participation via IRC, messaging apps, or online collaborative text documents.

The DebConf22 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf23 will be held in Kochi, India, from September 10 to September 16, 2023. As is tradition, before the next DebConf the local organizers in India will start the conference activities with DebCamp (September 03 to September 09, 2023), with particular focus on individual and team work towards improving the distribution.

DebConf is committed to a safe and welcoming environment for all participants. See the web page about the Code of Conduct on the DebConf22 website for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf22, particularly our Platinum Sponsors: Lenovo, Infomaniak, ITP Prizren and Google.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/.

About Lenovo

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

About Infomaniak

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

About ITP Prizren

Innovation and Training Park Prizren intends to be a changing and boosting element in the area of ICT, agro-food and creatives industries, through the creation and management of a favourable environment and efficient services for SMEs, exploiting different kinds of innovations that can contribute to Kosovo to improve its level of development in industry and research, bringing benefits to the economy and society of the country as a whole.

About Google

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

Contact Information

For further information, please visit the DebConf22 web page at https://debconf22.debconf.org/ or send mail to press@debian.org.

25 July, 2022 08:30AM by Debian Publicity Team

July 24, 2022

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

AV1 live streaming: Exploring SVT-AV1 rate control

I'm looking into AV1 live streaming these days; it's still very early, but it looks like enough of the required parts may finally align, and it seems it's the way I'll have to go to get to that next quality level. (Specifically, I'd like to go from 720p60 to 1080p60 for sports, and it seems this is hard to do under H.264 as-is without making pretty big concessions in terms of artifacts/smudges, or else jack up the bitrate so much that clients will start having viewing problems.)

After some brief testing, it seems SVT-AV1 is the obvious choice; if you've got the cores, it produces pretty good-looking 10-bit AV1 using less CPU time than x264 veryfast (!), possibly mostly due to better parallelization. But information about using it for live streaming was hard to find, and asking online turned up zero useful information. So I did some practical tests for live-specific issues, starting with rate control.

First of all, we need to identify which problem we want to solve. For a live stream, there are two good reasons to have good rate control:

  • Bandwidth costs money, both for ourselves and for the client.
  • The client should be able to watch the stream without buffering.

The former is about long-term averages, the latter is about short-term averages. Usually, we ignore the former and focus mostly on the latter (especially since solving the latter will keep the former mostly or completely in check).

My testing is empirical and mostly a spot-check; I don't have a large library of interesting high-quality video, nor do I have the patience to run through it. As sample clip, I chose the first 60 seconds (without audio) of cathodoluminescence by mfx and holon; it is a very challenging clip both encoding- and rate control-wise (it goes from all black to spinning and swooshing things with lots of noise on top, with huge complexity swings on the order of seconds), and I happened to have a high-quality 1080p60 recording that I could use as a master. We'll encode this to match a hypothetical 3 Mbit/sec viewer, to really give the encoder a run for its money. Most clips will be much easier than this, but there's always more to see in the hard cases than the easy ones.

First, let's check what happens without rate control; I encoded the clip using SVT-AV1 at preset 10, which is comfortably realtime on my 28-core Broadwell. (I would assume it's also good at my 16-core Zen 3, since it is much higher clocked, but I haven't checked.) I used constant quantizer, ie., there is no rate control at all; every frame, easy or hard, is encoded at the same quality. (I encoded the clip several times with different quantizers to find one that got me close to 3000 kbit/sec. Obviously, in a real-time scenario, we would have no such luxury.) With the addition of FFmpeg as the driver and some Perl to analyze it afterwards, this is what I got:

Flags: -c:v libsvtav1 -pix_fmt yuv420p10le -preset 10 -qp 54

Histogram of rates over 1-second blocks:

  250  ********
  750  *****************
 1250  ***********
 1750  ******
 2250  ***
 2750  ***
 3250  *
 3750  *
 4250  *****
 4750  
 5250  
----- ----- ----- ----- -----
 9250  *
13250  *
17750  **
35250  *

Min:    11 kbit/sec
Max: 35020 kbit/sec
Avg:  2914 kbit/sec


Primitive VBV with 3000 kbit max buffer (starting at 100% full), 3000 kbit/sec network:

Buffer minimum fill:        0 kbit
Time stalled:           25448 ms
Time with full buffer:  27157 ms

VMAF:                   57.39

Some explanations are in order here. What I've done is pretty simplistic: chop the resulting video into one-second blocks, and then measure how many bytes each one takes. You can see that even though the average bit rate is near our 3000 kbit/sec target, the majority of the time is actually spent around 500–1500 kbit/sec. But some seconds are huge outliers, up to 35 Mbit/sec.

The next section is my toy VBV (video buffer verifier), which simulates a client downloading at a constant 3000 kbit/sec rate (as long as the buffer, set to one second, has room for it) and playing frames according to their timestamps. We can see that even though we're below the target bitrate, we spend a whopping 25 seconds buffering, for a 60-second clip! This is because most of the time, our buffer sits there comfortably full, which blocks further downloads, until we get to those problematic sections where the bitrate goes sky-high and we fall behind really quickly. (Why not allow our buffer to go more-than-full, which would fix the problem? Well, first of all, this assumes the encoder has a huge delay so that it could actually feed data for those frames way ahead of play time, or they would simply not exist yet. Second, what about clients that joined in the middle of the stream?)

Note that my VBV script is not a standards-compliant verifier (e.g. it doesn't really take B-frames into account), so you'll need to take it with a grain of salt; still, it's a pretty good proxy for what's going on.
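The core of such a toy verifier fits in a few lines of awk. This is a sketch written for illustration, not the script used for the numbers above; it assumes one-second blocks, a 3000 kbit buffer starting full, a constant 3000 kbit/sec download rate, and a made-up sequence of block sizes:

```shell
# Feed per-second block sizes (kbit) on stdin; the buffer holds
# downloaded-but-unplayed data. When playback outruns the download,
# the deficit divided by the link rate is time spent stalled.
printf '%s\n' 500 1200 9000 3200 700 | awk '
  BEGIN { rate = 3000; cap = 3000; buf = cap }
  {
    buf += rate                 # one second of downloading...
    if (buf > cap) buf = cap    # ...bounded by the buffer size
    buf -= $1                   # ...then play this second of video
    if (buf < 0) { stall += -buf / rate; buf = 0 }
  }
  END { printf "stalled %.2f s\n", stall }'
```

On this invented five-second input it prints `stalled 2.07 s`, with the stall caused almost entirely by the single 9000 kbit outlier, which mirrors the behavior seen in the constant-quantizer run above.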

OK, so let's now test what happens with a known-good case; we encode with x264 and CBR settings matching our VBV:

Flags: -c:v libx264 -pix_fmt yuv420p10le -preset veryfast -x264-params "nal-hrd=cbr"
       -b:v 3M -minrate 3M -maxrate 3M -bufsize 3M

Histogram of rates over 1-second blocks:

  250  
  750  
 1250  
 1750  
 2250  *****
 2750  *******************
 3250  *******************************
 3750  *****

Min:  2032 kbit/sec
Max:  3968 kbit/sec
Avg:  2999 kbit/sec


Primitive VBV with 3000 kbit max buffer (starting at 100% full), 3000 kbit/sec network:

Buffer minimum fill:     1447 kbit
Time stalled:               0 ms
Time with full buffer:    128 ms

VMAF:                   50.29

This is spot-on. The global average is within 1 kbit/sec of what we asked for, each second is nicely clustered around our range, and we never stall. In fact, our buffer hardly goes past half-full. (Don't read too much into the VMAF numbers, as I didn't ask either codec to optimize for visual quality. Still, it's not unexpected that we get higher values for AV1, and that neither codec really manages to good quality at these rates.)

Going back to AV1, we now move from constant quantizer to asking for a given bitrate. SVT-AV1 defaults to one-pass VBR, so we'll see what happens if we just give it a bitrate:

Flags: -c:v libsvtav1 -pix_fmt yuv420p10le -preset 10 -b:v 3M

Histogram of rates over 1-second blocks:

  250  **
  750  *******
 1250  **********
 1750  *****
 2250  **********
 2750  ************
 3250  ****
 3750  
 4250  *****
 4750  *
 5250  *
 5750  **
----- ----- ----- ----- -----
 7250  *

Min:    10 kbit/sec
Max:  7212 kbit/sec
Avg:  2434 kbit/sec


Primitive VBV with 3000 kbit max buffer (starting at 100% full), 3000 kbit/sec network:

Buffer minimum fill:        0 kbit
Time stalled:            3207 ms
Time with full buffer:  14639 ms

VMAF:                   61.81

It's not fantastic for streaming purposes (it's not designed for it either!), but it's much better than constant QP; the global average undershot a fair amount, and we still have some outliers causing stalls, but much less. Perhaps surprisingly, VMAF is significantly higher compared to constant QP (now roughly in “fair quality” territory), even though the overall rate is lower; the average frame just is much more important for quality. (Note that SVT-AV1 is not deterministic if you are using multithreading and rate control together, so if you run a second time, you could get different results.)

There is a “max bit rate” flag, too, but it seems not to do much for this clip (I don't even know if it's relevant for anything except capped CRF?), so I won't bore you with an identical set of data. Instead, let's try the CBR mode added in 1.0.0 (rc=2):

Svt[warn]: CBR Rate control is currently not supported for PRED_RANDOM_ACCESS, switching to VBR

Uh, OK. Switching to PRED_LOW_DELAY_B, then (pred-struct=1, helpfully undocumented):

Svt[warn]: Forced Low delay mode to use HierarchicalLevels = 3
Svt[warn]: Instance 1: The low delay encoding mode is a work-in-progress
project, and is only available for demos, experimentation, and further
development uses and should not be used for benchmarking until fully
implemented.
Svt[warn]: TPL is disabled in low delay applications.
Svt[info]: Number of logical cores available: 3

Ugh. So we're into experimental land, no TPL (SVT-AV1's variant of x264's mb-tree), and a maximum of three cores used. This means CBR is much slower; less than half the speed or so in these tests, and below the realtime threshold on this machine unless I reduce the preset. Still, let's see what it produces:

Flags: -c:v libsvtav1 -pix_fmt yuv420p10le -preset 10 -b:v 3M
       -svtav1-params pred-struct=1:rc=2

Histogram of rates over 1-second blocks:

  250  **
  750  *
 1250  *
 1750  ***
 2250  *********
 2750  ********************
 3250  ****************
 3750  *
 4250  ***
 4750  
 5250  
----- ----- ----- ----- -----
 6250  *
 6750  *
 7250  *
 7750  *

Min:    42 kbit/sec
Max:  7863 kbit/sec
Avg:  2998 kbit/sec


Primitive VBV with 3000 kbit max buffer (starting at 100% full), 3000 kbit/sec network:

Buffer minimum fill:        0 kbit
Time stalled:            4970 ms
Time with full buffer:   5522 ms

VMAF:                   61.53

This is not quite what we expected. The global average is now spot-on, but we are still bothered by outliers, and we're having more stalls than with the VBR mode (possibly because VBR's lower overall bitrate helped a bit). Also note that the VMAF is no better, despite using more bitrate!

I believe these stalls point to a bug or shortcoming in SVT-AV1's CBR mode, so I've reported it, and we'll see what happens. But still, the limitations the low-delay prediction structure imposes on us (with associated quality loss) makes this a not terribly attractive option; it seems that this mode is a bit too new for serious use (perhaps not surprising, given the warnings it spits out).

So what is the best bet? I'd say that currently (as of git master, soon-to-be 1.2.0), it is using the default one-pass VBR mode (two-pass VBR obviously is a no-go for live streaming). Yes, it will fail VBV sometimes, but in practice, clients will usually have some headroom; again, we tune our bit rates lower than we'd need if buffering were our only constraint (to reduce people's bandwidth bills). It would be interesting to see how this pans out across a larger set of clips at some point; after all, most content isn't nearly as tricky as this.

There is still lots of exploration left to do; in particular, muxing the stream and getting it to actually play in browsers will be… fun? More to come, although I can't say exactly when.

24 July, 2022 02:40PM

July 23, 2022

hackergotchi for Wouter Verhelst

Wouter Verhelst

Planet Grep now running PtLink

Almost 2 decades ago, Planet Debian was created using the "planetplanet" RSS aggregator. A short while later, I created Planet Grep using the same software.

Over the years, the blog aggregator landscape has changed a bit. First of all, planetplanet was abandoned, forked into Planet Venus, and then abandoned again. Second, the world of blogging (aka the "blogosphere") has largely faded, and the more modern world uses things like "Social Networks" instead, making blogs less relevant these days.

A blog aggregator community site is still useful, however, and so I've never taken Planet Grep down, even though over the years the number of blogs carried on Planet Grep has been shrinking. For almost 20 years, I've just run Planet Grep on my personal server, upgrading its Debian release from whichever was the most recent stable release in 2005 to buster, never encountering any problems.

That all changed when I did the upgrade to Debian bullseye, however. Planet Venus is a Python 2 application, which was never updated to Python 3. Since Debian bullseye drops support for much of Python 2, focusing only on Python 3 (in accordance with Python upstream's policy on the matter), I have had to run Planet Venus from inside a VM for a while now, which works as a short-term solution but not as a long-term one.

Although there are other implementations of blog aggregation software out there, I wanted to stick with something (mostly) similar. Additionally, I have been wanting to add functionality to it to also pull stuff from Social Networks, where possible (and legal, since some of these have... scary Terms Of Use documents).

So, as of today, Planet Grep is no longer powered by Planet Venus, but instead by PtLink. Rather than Python, it is written in Perl (a language with which I am more familiar), and there are plans for me to extend it in ways that have little to do with blog aggregation anymore...

There are a few other Planets out there that also use Planet Venus at this point -- Planet Debian and Planet FSFE are two that I'm currently aware of, but I'm sure there might be more, too.

At this point, PtLink is not yet on feature parity with Planet Venus -- as shown by the fact that it can't yet build either Planet Debian or Planet FSFE successfully. But I'm not stopping my development here, and hopefully I'll have something that successfully builds both of those soon, too.

As a side note, PtLink is not intended to be bug-compatible with Planet Venus. For one example, the configuration for Planet Grep contains an entry for Frederic Descamps, but somehow Planet Venus failed to fetch his feed. With the switch to PtLink, that seems fixed, and some entries from Frederic now appear. I'm not going to be "fixing" that feature... but of course there might be other issues that appear. If that's the case, let me know.

If you're reading this post through Planet Grep, consider this a public service announcement for the possibility (hopefully a remote one) of minor issues.

23 July, 2022 06:48PM

July 22, 2022

hackergotchi for Aigars Mahinovs

Aigars Mahinovs

Debconf 22 photos

Finally, after a long break, the in-person Debconf is a thing again; this time Debconf 22 is happening in Prizren, Kosovo.

And it has been my pleasure to again be here and take lots of pictures of the event and of the surroundings.

The photos can be found in this Google Photo shared album and also on this git-lfs share.

But the main photographic delight, as always, is the DebConf 22 Group Photo. And here it is!!!

DebConf 22 Group photo small


22 July, 2022 01:54PM by Aigars Mahinovs

July 20, 2022

Antoine Beaupré

Relaying mail through debian.org

Back in 2020, I wrote this article about using DKIM to sign outgoing debian.org mail. This worked well for me for a while: outgoing mail was signed with DKIM and somehow was delivered. Maybe. Who knows.

But now we have a relay server which makes this kind of moot. So I have changed my configuration to use that relay instead of sending email on my own. Mail now comes from a real debian.org machine, so I'm hoping this will have a better reputation than my current setup.

In general, you should follow the DSA documentation which includes a Postfix configuration. In my case, it was basically this patch:

diff --git a/postfix/main.cf b/postfix/main.cf
index 7fe6dd9e..eabe714a 100644
--- a/postfix/main.cf
+++ b/postfix/main.cf
@@ -55,3 +55,4 @@ smtp_sasl_security_options =
 smtp_sender_dependent_authentication = yes
 sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
 sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport
+smtp_tls_policy_maps = hash:/etc/postfix/tls_policy
diff --git a/postfix/sender_relay b/postfix/sender_relay
index b486d687..997cce19 100644
--- /dev/null
+++ b/postfix/sender_relay
@@ -0,0 +1,2 @@
+# Per-sender provider; see also /etc/postfix/sasl_passwd.
+@debian.org    [mail-submit.debian.org]:submission
diff --git a/postfix/sender_transport b/postfix/sender_transport
index ca69bc7a..c506c1fc 100644
--- /dev/null
+++ b/postfix/sender_transport
@@ -0,0 +1,1 @@
+anarcat@debian.org     smtp:
diff --git a/postfix/tls_policy b/postfix/tls_policy
new file mode 100644
index 00000000..9347921a
--- /dev/null
+++ b/postfix/tls_policy
@@ -0,0 +1,1 @@
+submission.torproject.org:submission   verify ciphers=high

This configuration differs from the one provided by DSA because I already had the following configured:

sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
smtp_sender_dependent_authentication = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_tls_security_options = noanonymous

I also don't show the patch on /etc/postfix/sasl_passwd for obvious security reasons.

I also had to setup a tls_policy map, because I couldn't use dane for all my remotes. You'll notice I also had to setup a sender_transport because I use a non-default default_transport as well.

It also seems like you can keep the previous DKIM configuration in parallel with this one, as long as you don't double-sign outgoing mail. Since this configuration here is done on my mail client (i.e. not on the server where I am running OpenDKIM), I'm not double-signing so I left the DKIM configuration alone. But if I wanted to remove it, the magic command is:

echo "del dkimPubKey" | gpg --clearsign | mail changes@db.debian.org

20 July, 2022 05:22PM

Enrico Zini

Deconstruction of the DAM hat

Further reading

Talk notes

Intro

  • I'm not speaking for the whole of DAM
  • Motivation in part is personal frustration, and need to set boundaries and negotiate expectations

Debian Account Managers

  • history

Responsibility for official membership

  • approve account creation
  • manage the New Member Process and nm.debian.org
  • close MIA accounts
  • occasional emergency termination of accounts
  • handle Emeritus
  • with lots of help from FrontDesk and MIA teams (big shoutout)

What DAM is not

  • we are not mediators
  • we are not a community management team
  • we are not a list or IRC moderation team
  • we are not responsible for vision or strategic choices about how people are expected to interact in Debian
  • we shouldn't try to solve things just because they need solving

Unexpected responsibilities

  • Over time, the community has grown larger and more complex, in a larger and more complex online environment
  • Enforcing the Diversity Statement and the Code of Conduct
  • Emergency list moderation
    • we have ended up using DAM warnings to compensate for the lack of list moderation, at least twice
  • contributors.debian.org (mostly only because of me, but it would be good to have its own team)

DAM warnings

  • except for rare glaring cases, patterns of behaviour / intentions / taking feedback in, are more relevant than individual incidents
  • we do not set out to fix people. It is enough for us to get people to acknowledge a problem
    • if they can't acknowledge a problem they're probably out
    • once a problem is acknowledged, fixing it could be their implementation detail
    • then again it's not that easy to get a number of troublesome people to acknowledge problems, so we go back to the problem of deciding when enough is enough

DAM warnings?

  • I got to a point where I look at DAM warnings as potential signals that DAM has ended up with the ball that everyone else in Debian dropped.
  • DAM warning means we haven't gotten to a last resort situation yet, meaning that it probably shouldn't be DAM dealing with this at this point
  • Everyone in the project can write to a person "do you realise there's an issue here? Can you do something to stop?", and give them a chance to reflect on issues or ignore them, and build their reputation accordingly.
  • People in Debian should not have to endure, completely powerless, as trolls drag painful list discussions indefinitely until all the trolled people run out of energy and leave. At the same time, people who abuse a list should expect to be suspended or banned from the list, not have their Debian membership put into question (unless it is a recurring pattern of behaviour).
  • The push to grow DAM warnings as a tool, is a sign of the rest of Debian passing on their responsibilities, and DAM picking them up.
  • Then in DAM we end up passing on things, too, because we also don't have the energy to face another intensive megametathread, and as we take actions for things that shouldn't quite be our responsibility, we face a higher level of controversy, and therefore demotivation.
  • Also, as we take actions for things that shouldn't be our responsibility, and work on a higher level of controversy, our legitimacy is undermined (and understandably so)
    • there's a pothole on my street that never gets filled, so at some point I go out and fill it. Then people thank me, people complain I shouldn't have, people complain I didn't fill it right, people appreciate the gesture and invite me to learn how to fix potholes better, people point me out to more potholes, and then complain that potholes don't get fixed properly on the whole street. I end up being the problem, instead of whoever had responsibility of the potholes but wasn't fixing them
  • The Community Team, the Diversity Team, and individual developers, have no energy or entitlement for explaining what a healthy community looks like, and DAM is left with that responsibility in the form of accountability for their actions: to issue, say, a DAM warning for bullying, we are expected to explain what is bullying, and how that kind of behaviour constitutes bullying, in a way that is understandable by the whole project.
  • Since there isn't consensus in the project about what bullying looks like, we end up having to define it in a warning, which again is a responsibility we shouldn't have, and we need to do it because we have an escalated situation at hand, but we can't do it right

House rules

Interpreting house rules

  • you can't encode common sense about people behaviour in written rules: no matter how hard you try, people will find ways to cheat that
  • so one can use rules as a guideline, and someone responsible for the bits that can't go into rules.
    • context matters, privilege/oppression matters, patterns matter, history matters
  • example:
    • call a person out for breaking a rule
    • get DARVO in response
    • state that DARVO is not acceptable
    • get concern trolling against marginalised people and accuse them of DARVO if they complain
  • example: assume good intentions vs enabling
  • example: rule lawyering and Figure skating
  • this cannot be solved by GRs: I/we (DAM)/possibly also we (Debian) don't want to do GRs about evaluating people

Governance by bullying

  • How to DoS discussions in Debian
    • example: gender, minority groups, affirmative action, inclusion, anything about the community team itself, anything about the CoC, systemd, usrmerge, dam warnings, expulsions
      • think of a topic. Think about sending a mail to debian-project about it. If you instinctively shiver at the thought, this is probably happening
      • would you send a mail about that to -project / -devel?
      • can you think of other topics?
    • it is an effective way of governance as it excludes topics from public discussion
  • A small number of people abuse all this, intentionally or not, to effectively manipulate decision making in the project.
  • Instead of using the rules of the community to bring forth the issues one cares about, it costs less energy to make it unthinkable or unbearable to have a discussion on issues one doesn't want to progress. What one can't stop constructively, one can oppose destructively.
  • even regularly diverting the discussion away from the original point or concern is enough to derail it without people realising you're doing it
  • This is an effective strategy for a few reckless people to unilaterally direct change, in the current state of Debian, at the cost of the health and the future of the community as a whole.
  • There are now a number of important issues nobody has the energy to discuss, because experience says that energy requirements to bring them to the foreground and deal with the consequences are anticipated to be disproportionate.
  • This is grave, as we're talking about trolling and bullying as malicious power moves to work around the accepted decision making structures of our community.
  • Solving this is out of scope for this talk, but it is urgent nevertheless, and can't be solved by expecting DAM to fix it

How about the Community Team?

  • It is also a small group of people who cannot pick up the responsibility of doing what the community isn't doing for itself
  • I believe we need to recover the Community Team: it's been years that every time they write something in public, they get bullied by the same recurring small group of people (see governance by bullying above)

How about DAM?

  • I was just saying that we are not the emergency catch all
  • When the only enforcement you have is "nuclear escalation", there's nothing you can do until it's too late, and meanwhile lots of people suffer (this was written before Russia invaded Ukraine)
  • Also, when issues happen on public lists, the BTS, or on IRC, some of the perpetrators are also outside of the jurisdiction of DAM, which shows how DAM is not the tool for this

How about the DPL?

  • Talking about emergency catch alls, don't they have enough to do already?

Concentrating responsibility

  • Concentrating all responsibility on social issues on a single point creates a scapegoat: we're blamed for any conduct issue, and we're blamed for any action we take on conduct issues
    • also, when you are a small group you are personally identified with it. Taking action on a person may mean making a new enemy, and becoming a target for harassment, retaliation, or even just the general unwarranted hostility of someone who is left with an axe to grind
  • As long as responsibility is centralised, any action one takes in response to one micro-aggression (or one micro-aggression too many) is an overreaction. Distributing that responsibility allows a finer granularity of actions to be taken
    • you don't call the police to tell someone they're being annoying at the pub: the people at the pub will tell you you're being annoying, and the police is called if you want to beat them up in response
  • We are also a community where we have no tool to give feedback to posts, so it still looks good to nitpick stupid details with smart-looking trenchant one-liners, or elaborate confrontational put-downs, and one doesn't get the feedback of "that did not help". Compare with discussing https://salsa.debian.org/debian/grow-your-ideas/ which does have this kind of feedback
    • the lack of moderation and enforcement makes the Debian community ideal for easy baiting, concern trolling, dog whistling, and related fun, and people not empowered can be so manipulated to troll those responsible
    • if you're fragile in Debian, people will play cat and mouse with you. It might be social awkwardness, or people taking themselves too serious, but it can easily become bullying, and with no feedback it's hard to tell and course correct
  • Since DAM and DPL are where the ball stops, everyone else in Debian can afford to let the ball drop.
  • More generally, if only one group is responsible, nobody else is

Empowering developers

  • Police alone does not make a community safe: a community makes a community safe.
  • DDs currently have no power to act besides complaining to DAM, or complaining to Community Team that then can only pass complaints on to DAM.
    • you could act directly, but currently nobody has your back if the (micro-)aggression then starts extending to you, too
  • From no power comes no responsibility. And yet, the safety of a community is sustainable only if it is the responsibility of every member of the community.
  • don't wait for DAM as the only group who can do something
  • people should be able to address issues in smaller groups, without escalation at project level
  • but people don't have the tools for that
  • I/we've shouldered this responsibility for far too long because nobody else was doing it, and it's time the whole Debian community gets its act together and picks up this responsibility as they should be. You don't get to not care just because there's a small number of people who are caring for you.

What needs to happen

  • distinguish DAM decisions from decisions that are more about vision and direction, and would require more representation
  • DAM warnings shouldn't belong in DAM
  • who is responsible for interpretation of the CoC?
  • deciding what to do about controversial people shouldn't belong in DAM
  • curation of the community shouldn't belong in DAM
  • can't do this via GRs, it's a mess to do a GR to decide how acceptable is a specific person's behaviour, and a lot of this requires more and more frequent micro-decisions than one'd do via GRs

20 July, 2022 05:55AM

July 19, 2022

Russell Coker

DDC as a KVM Switch

With the recent resurgence in Covid19 I’ve been working from home a lot and using both my work laptop and personal PC on the same monitor. HDMI KVM switches start at $150 and I didn’t feel like buying one, so I wrote a script to change inputs on my monitor. The following script locks the session on the local machine and switches the monitor’s input to the other machine. I ran the command “ddcutil vcpinfo | grep Input” which shows that (on my monitor at least) 60 is the VCP feature code for input. Then I ran “ddcutil getvcp 60” to get the current value and tried setting values sequentially to find the value for the other port.

Below is the script I’m using on one system, the other is the same but setting the different port via setvcp. The loginctl command is to lock the screen to prevent accidental keyboard or mouse input from messing anything up.

# lock the session, assumes that seat0 is the only session
loginctl lock-session $(loginctl list-sessions|grep "seat0 *$"|cut -c1-7)
# 0xf is DisplayPort, 0x11 is HDMI-1
ddcutil setvcp 60 0x11

For keyboard, mouse, and speakers I’m using a USB 2.0 hub that I can switch between computers. I idly considered getting a three-pole double-throw switch (four pole switches aren’t available at my local electronic store) to switch USB 2.0 as I only need to switch 3 of the 4 wires. But for the moment just plugging the hub into different systems is enough, I only do that a couple of times a day.
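The two per-machine scripts could also be folded into a single toggle. The sketch below is illustrative only: it reuses the 0x0f/0x11 input values from the post, and it assumes ddcutil reports the current value in a "current value = 0xNN" (or "sl=0xNN") form, which varies between ddcutil versions, so the parsing is deliberately loose.

```python
import re
import subprocess

# VCP feature 0x60 is Input Source on this monitor; values from the post.
DISPLAYPORT = 0x0F
HDMI1 = 0x11

def parse_current_input(getvcp_output: str) -> int:
    """Extract the current input value from `ddcutil getvcp 60` output.

    Assumes a 'current value = 0xNN' or 'sl=0xNN' fragment; the exact
    format differs between ddcutil versions, so this is a loose match.
    """
    m = re.search(r"(?:current value\s*=\s*|sl=)(0x[0-9a-fA-F]+|\d+)",
                  getvcp_output)
    if m is None:
        raise ValueError("could not find current input in ddcutil output")
    return int(m.group(1), 0)

def other_input(current: int, inputs=(DISPLAYPORT, HDMI1)) -> int:
    """Given the current input, return the other one of the pair."""
    a, b = inputs
    return b if current == a else a

def toggle_input() -> None:
    """Lock the session, read the current input, switch to the other one."""
    subprocess.run(["loginctl", "lock-session"], check=False)
    out = subprocess.run(["ddcutil", "getvcp", "60"],
                         capture_output=True, text=True, check=True).stdout
    target = other_input(parse_current_input(out))
    subprocess.run(["ddcutil", "setvcp", "60", f"0x{target:02x}"], check=True)
```

The same script can then live on both machines, rather than hard-coding a different setvcp value on each.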

19 July, 2022 11:55AM by etbe

Craig Small

Linux Memory Statistics

Pretty much everyone who has spent some time on a command line in Linux would have looked at the free command. This command provides some overall statistics on the memory and how it is used. Typical output looks something like this:

             total        used        free      shared  buff/cache  available
Mem:      32717924     3101156    26950016      143608     2666752  29011928
Swap:      1000444           0     1000444

Memory sits in the first row after the headers then we have the swap statistics. Most of the numbers are directly fetched from the procfs file /proc/meminfo which are scaled and presented to the user. A good example of a “simple” stat is total, which is just the MemTotal row located in that file. For the rest of this post, I’ll make the rows from /proc/meminfo have an amber background.

What is Free, and what is Used?

While you could say that the free value is also merely the MemFree row, this is where Linux memory statistics start to get odd. While that value is indeed what is found for MemFree and not a calculated field, it can be misleading.

Most people would assume that Free means free to use, with the implication that only this amount of memory is free to use and nothing more. That would also mean the used value is really used by something and nothing else can use it.

In the early days of free and Linux statistics in general that was how it looked. Used is a calculated field (there is no MemUsed row) and was, initially, Total - Free.

The problem was, Used also included Buffers and Cached values. This meant that it looked like Linux was using a lot of memory for… something. If you read old messages before 2002 that are talking about excessive memory use, they quite likely are looking at the values printed by free.

The thing was, under memory pressure the kernel could release Buffers and Cached for use. Not all of that storage, but some of it, so it wasn’t all really used. To counter this, free showed a row between Memory and Swap with Used having Buffers and Cached removed and Free having the same values added:

             total       used       free     shared    buffers     cached
Mem:      32717924    6063648   26654276          0     313552    2234436
-/+ buffers/cache:    3515660   29202264
Swap:      1000444          0    1000444

You might notice that this older version of free from around 2001 shows buffers and cached separately and there’s no available column (we’ll get to Available later.) Shared appears as zero because the old row was labelled MemShared and not Shmem which was changed in Linux 2.6 and I’m running a system way past that version.

It’s not ideal: you can say that the amount of free memory is something above 26654276 and below 29202264 KiB, but nothing more accurate. buffers and cached are almost never all-used or all-unused, so the real figure is not either of those numbers but something in-between.

Cached, just not for Caches

That appeared to be an uneasy truce within the Linux memory statistics world for a while. By 2014 we realised that there was a problem with Cached. This field used to have the memory used for a cache for files read from storage. While this value still has that component, it was also being used for tmpfs storage and the use of tmpfs went from an interesting idea to being everywhere. Cheaper memory meant larger tmpfs partitions went from a luxury to something everyone was doing.

The problem is that with large files put into a tmpfs partition, Free would decrease but Cached would increase, meaning the free column in the -/+ row would not change much and understate the impact of files in tmpfs.

Luckily enough, in Linux 2.6.32 the developers added a Shmem row, which is the amount of memory used for shmem and tmpfs. Subtracting that value from Cached gave the “real” cached value, which we call main_cache, and very briefly this is what the cached value in free showed.

However, this caused further problems because not all Shmem can be reclaimed and reused, and probably swapped one set of problematic values for another. It did, however, prompt the Linux kernel community to have a look at the problem.

Enter Available

There was increasing awareness of the issues with working out how much memory a system has free within the kernel community. It wasn’t just the output of free or the percentage values in top, but load balancer or workload placing systems would have their own view of this value. As memory management and use within the Linux kernel evolved, what was or wasn’t free changed and all the userland programs were expected somehow to keep up.

The kernel developers realised the best place to get an estimate of the memory not in use was in the kernel itself, so they created a new memory statistic called Available. That way, if how memory is used changes, or some of it is set to be unreclaimable, they can adjust the estimate and userland programs will go along with it.

procps has a fallback for this value and it’s a pretty complicated setup.

  1. Find the min_free_kbytes sysctl setting, which is the minimum amount of free memory the kernel will handle
  2. Add 25% to this value (e.g. if it was 4000 make it 5000); this is the low watermark
  3. To find available, start with MemFree and subtract the low watermark
  4. Sum the Inactive(file) and Active(file) values; if half of that sum is greater than the low watermark, add the sum minus the low watermark, otherwise add half the sum
  5. If half of SReclaimable is greater than the low watermark, add SReclaimable minus the low watermark, otherwise add half of SReclaimable
  6. If what you get is less than zero, make available zero
  7. Or, just look at Available in /proc/meminfo
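The steps above can be sketched in Python. This is a simplified reading of the procps fallback, not the actual procps code; the function name and the pre-parsed meminfo dict (all values in KiB) are illustrative.

```python
def available_fallback(meminfo: dict, min_free_kbytes: int) -> int:
    """Rough sketch of the procps fallback estimate for Available,
    used when /proc/meminfo has no MemAvailable row.
    All values in KiB; keys mirror the row names in /proc/meminfo."""
    watermark_low = min_free_kbytes * 5 // 4       # min_free_kbytes plus 25%
    available = meminfo["MemFree"] - watermark_low
    # Page cache contribution: the sum, minus whichever is smaller of
    # half the sum and the low watermark
    pagecache = meminfo["Inactive(file)"] + meminfo["Active(file)"]
    available += pagecache - min(pagecache // 2, watermark_low)
    # Reclaimable slab contribution, treated the same way
    reclaimable = meminfo["SReclaimable"]
    available += reclaimable - min(reclaimable // 2, watermark_low)
    return max(available, 0)                       # never report less than zero
```

With min_free_kbytes at 67584 and the sample system above, this lands in the same ballpark as the kernel-reported Available value, which is the point of the fallback.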

For the free program, we added the Available value and the +/- line was removed. The main_cache value was Cached + Slab while Used was calculated as Total - Free - main_cache - Buffers. This was very close to what the Used column in the +/- line used to show.

What’s on the Slab?

The next issue that came up was the use of slabs. At this point, main_cache was Cached + Slab, but Slab consists of reclaimable and unreclaimable components. One part of Slab can be used elsewhere if needed and the other cannot, but the procps tools treated them the same. The Used calculation should not subtract SUnreclaim from Total, because that memory is actually being used.

So in 2015 main_cache was changed to be Cached + SReclaimable. This meant that Used memory was calculated as Total - Free - Cached - SReclaimable - Buffers.

Revenge of tmpfs and the return of Available

The tmpfs impacting Cached was still an issue. If you added a 10MB file into a tmpfs partition, then Free would reduce by 10MB and Cached would increase by 10MB meaning Used stayed unchanged even though 10MB had gone somewhere.

It was time to retire the complex calculation of Used. For procps 4.0.1 onwards, Used now means “not available”. We take the Total memory and subtract the Available memory. This is not a perfect setup but it is probably going to be the best one we have and testing is giving us much more sensible results. It’s also easier for people to understand (take the total value you see in free, then subtract the available value).

What does that mean for main_cache which is part of the buff/cache value you see? As this value is no longer in the used memory calculation, it is less important. Should it also be reverted to simply Cached without the reclaimable Slabs?

The calculated fields

In summary, what this means for the calculated fields in procps at least is:

  • Used: Total - Available, unless Available is not present then it’s Total - Free
  • Cached: Cached + Reclaimable Slabs
  • Swap/Low/HighUsed: Corresponding Total - Free (no change here)

Almost everything else, with the exception of some bounds checking, is what you get out of /proc/meminfo which is straight from the kernel.
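As a minimal sketch of those calculated fields, assuming the /proc/meminfo rows have already been parsed into a dict (the function name is illustrative, and this simplifies away the bounds checking procps does):

```python
def free_fields(meminfo: dict) -> dict:
    """Sketch of how procps >= 4.0.1 derives the main columns shown by free.
    All values in KiB; keys mirror the row names in /proc/meminfo."""
    total = meminfo["MemTotal"]
    if "MemAvailable" in meminfo:
        used = total - meminfo["MemAvailable"]  # Used now means "not available"
    else:
        used = total - meminfo["MemFree"]       # fallback on very old kernels
    main_cache = meminfo["Cached"] + meminfo["SReclaimable"]
    return {
        "total": total,
        "used": used,
        "free": meminfo["MemFree"],
        "buff/cache": meminfo["Buffers"] + main_cache,
    }
```

Note that with the new definition, used + buff/cache + free no longer has to add up to total, which is deliberate: the columns answer different questions.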

19 July, 2022 11:53AM by dropbear

July 18, 2022

hackergotchi for Bits from Debian

Bits from Debian

DebConf22 welcomes its sponsors!

DebConf22 is taking place in Prizren, Kosovo, from 17th to 24th July, 2022. It is the 23rd edition of the Debian conference and organizers are working hard to create another interesting and fruitful event for attendees.

We would like to warmly welcome the sponsors of DebConf22, and introduce you to them.

We have four Platinum sponsors.

Our first Platinum sponsor is Lenovo. As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

Infomaniak is our second Platinum sponsor. Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

The ITP Prizren is our third Platinum sponsor. ITP Prizren intends to be a driving and boosting element in the areas of ICT, agro-food and creative industries, through the creation and management of a favourable environment and efficient services for SMEs, exploiting different kinds of innovation that can help Kosovo improve its level of development in industry and research, bringing benefits to the economy and society of the country as a whole.

Google is our fourth Platinum sponsor. Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware. Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

Our Gold sponsors are:

Roche, a major international pharmaceutical provider and research company dedicated to personalized healthcare.

Microsoft enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

Ipko Telecommunications provides telecommunication services and is the first and most dominant mobile operator offering high-speed mobile internet – 3G and 4G networks – in Kosovo.

Ubuntu, the Operating System delivered by Canonical.

U.S. Agency for International Development leads international development and humanitarian efforts to save lives, reduce poverty, strengthen democratic governance and help people progress beyond assistance.

Our Silver sponsors are:

  • Pexip is the video communications platform that solves the needs of large organizations.
  • Deepin is a Chinese commercial company focusing on the development and service of Linux-based operating systems.
  • Hudson River Trading is a company researching and developing automated trading algorithms using advanced mathematical techniques.
  • Amazon Web Services (AWS) is one of the world's most comprehensive and broadly adopted cloud platforms, offering over 175 fully featured services from data centers globally.
  • The Bern University of Applied Sciences, with nearly 7,800 students enrolled, is located in the Swiss capital.
  • credativ is a service-oriented company focusing on open-source software and also a Debian development partner.
  • Collabora is a global consultancy delivering Open Source software solutions to the commercial world.
  • Arm: with the world’s Best SoC Design Portfolio, Arm powered solutions have been supporting innovation for more than 30 years and are deployed in over 225 billion chips to date.
  • GitLab is an open source end-to-end software development platform with built-in version control, issue tracking, code review, CI/CD, and more.
  • Two Sigma applies rigorous inquiry, data analysis, and invention to help solve the toughest challenges across financial services.
  • Starlabs builds software experiences and focuses on building teams that deliver creative Tech Solutions for their clients.
  • Solaborate has the world’s most integrated and powerful virtual care delivery platform.
  • Civil Infrastructure Platform is a collaborative project hosted by the Linux Foundation, establishing an open source “base layer” of industrial grade software.
  • Matanel Foundation operates in Israel, as its first concern is to preserve the cohesion of a society and a nation plagued by divisions.

Bronze sponsors:

bevuta IT, Kutia, Univention, Freexian.

And finally, our Supporter level sponsors:

Altus Metrum, Linux Professional Institute, Olimex, Trembelat, Makerspace IC Prizren, Cloud68.co, Gandi.net, ISG.EE, IPKO Foundation, The Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH.

Thanks to all our sponsors for their support! Their contributions make it possible for a large number of Debian contributors from all over the globe to work together, help and learn from each other in DebConf22.

DebConf22 logo

18 July, 2022 07:00AM by The Debian Publicity Team

July 17, 2022

hackergotchi for Martin-Éric Racine

Martin-Éric Racine

Trying to chainload iPXE on old Etherboot hardware

Among my collection of PC hardware, I have a few rarities whose netboot implementation predates PXE. Since I recently managed to configure dnsmasq as a potent TFTP and PXE server, I figured that I'd try chainloading iPXE via BOOTP options. This required preparing a boot image using antiquated tools:

$ sudo mkelf-linux --param=autoboot --output=/srv/tftp/ipxe.nbi /srv/tftp/ipxe.lkrn

The host successfully loads the boot image, except that the iPXE blob fails to find the network card.

Any ideas?

17 July, 2022 07:10PM by Martin-Éric (noreply@blogger.com)

hackergotchi for Bits from Debian

Bits from Debian

DebConf22 starts today in Prizren

DebConf22, the 23rd annual Debian Conference, is taking place in Prizren, Kosovo from July 17th to 24th, 2022.

Debian contributors from all over the world have come together at Innovation and Training Park (ITP) in Prizren, Kosovo, to participate and work in a conference exclusively run by volunteers.

Today the main conference starts with over 270 attendees expected and 82 activities scheduled, including 45-minute and 20-minute talks and team meetings ("BoF"), workshops, and a job fair, as well as a variety of other events.

The full schedule at https://debconf22.debconf.org/schedule/ is updated every day, including activities planned ad-hoc by attendees during the whole conference.

If you want to engage remotely, you can follow the video streaming available from the DebConf22 website of the events happening in the three talk rooms: Drini, Lumbardhi and Ereniku. Or you can join the conversation about what is happening in the talk rooms: #debconf-drini, #debconf-lumbardhi and #debconf-ereniku (all those channels in the OFTC IRC network).

You can also follow the live coverage of news about DebConf22 on https://micronews.debian.org or the @debian profile in your favorite social network.

DebConf is committed to a safe and welcoming environment for all participants. See the web page about the Code of Conduct in DebConf22's website for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf22, particularly our Platinum Sponsors: Lenovo, Infomaniak, ITP Prizren and Google.

DebConf22 banner open registration

17 July, 2022 07:00AM by The Debian Publicity Team

Russ Allbery

Review: Trang

Review: Trang, by Mary Sisson

Series: Trang #1
Publisher: Mary Sisson
Copyright: 2011
Printing: December 2013
ASIN: B004I6DAQ8
Format: Kindle
Pages: 374

In 2113, a radio mapping satellite near the Titan station disappeared. It then reappeared five days later, apparently damaged and broadcasting a signal that made computers crash. The satellite was immediately sent back to the Space Authority base in Beijing for careful examination, but the techs on the station were able to decode the transmission: a request for the contents of databases. The general manager of the station sent a probe to the same location and it too vanished, returning two days later with a picture of a portal, followed shortly by an alien probe.

Five years later, Philippe Trang has been assigned as the first human diplomat to an alien space station in intergalactic space at the nexus of multiple portals. Humans will apparently be the eighth type of intelligent life to send a representative to the station. He'll have a translation system, a security detail, and the groundwork of five years of audiovisual communications with the aliens, including one that was able to learn English. But he'll be the first official diplomatic representative physically there.

The current style in SF might lead you to expect a tense thriller full of nearly incomprehensible aliens, unexplained devices, and creepy mysteries. This is not that sort of book. The best comparison point I could think of is James White's Sector General novels, except with a diplomat rather than a doctor. The aliens are moderately strange (not just humans in prosthetic makeup), but are mostly earnest, well-meaning, and welcoming. Trang's security escort is more military than he expects, but that becomes a satisfying negotiation rather than an ongoing problem. There is confusion, misunderstandings, and even violence, but most of it is sorted out by earnest discussion and attempts at mutual understanding.

This is, in other words, diplomat competence porn (albeit written by someone who is not a diplomat, so I wouldn't expect too much realism). Trang defuses rather than confronts, patiently sorts through the nuances of a pre-existing complex dynamic between aliens without prematurely picking sides, and has the presence of mind to realize that the special forces troops assigned to him are another culture he needs to approach with the same skills. Most of the book is low-stakes confusion, curiosity, and careful exploration, which could have been boring but wasn't. It helps that Sisson packs a lot of complexity into the station dynamics and reveals it in ways that I found enjoyably unpredictable.

Some caveats: This is a self-published first novel (albeit by an experienced reporter and editor) and it shows. The book has a sort of plastic Technicolor feel that I sometimes see in self-published novels, where the details aren't quite deep enough, the writing isn't quite polished, and the dialog isn't quite as tight as I'm used to. It also meanders in a way that few commercial novels do, including slice-of-life moments and small asides that don't go anywhere. This can be either a bug or a feature depending on what you're in the mood for. I found it relaxing and stress-relieving, which is what I was looking for, but you may have a different experience.

I will warn that the climax features a sudden escalation of stakes that I don't think was sufficiently signaled by the tone of the writing, and thus felt a bit unreal. Sisson also includes a couple deus ex machina twists that felt a bit predictable and easy, and I didn't find the implied recent history of one of the alien civilizations that believable. The conclusion is therefore not the strongest part of the book; if you're not enjoying the journey, it probably won't get better.

But, all that said, this was fun, and I've already bought the second book in the series. It's low-stakes, gentle SF with a core of discovery and exploration rather than social dynamics, and I haven't run across much of that recently. The worst thing in the book is some dream glimpses at a horrific event in Trang's past that's never entirely on camera. It's not as pacifist as James White, but it's close.

Recommended, especially if you liked Sector General. White's series is so singular that I previously would have struggled to find a suggestion for someone who wanted more exactly like that (but without the Bewitched-era sexism). Now I have an answer. Score another one for Susan Stepney, who is also how I found Julie Czerneda. Trang is also currently free for Kindle, so you can't beat the price.

Followed by Trust.

Rating: 8 out of 10

17 July, 2022 04:06AM

July 16, 2022

Petter Reinholdtsen

Automatic LinuxCNC servo PID tuning?

While working on a CNC with servo motors controlled by the LinuxCNC PID controller, I recently had to learn how to tune the collection of values that control the mathematical machinery of a PID controller. It proved to be a lot harder than I hoped, and I still have not succeeded in getting the Z PID controller to successfully defy gravity, nor X and Y to move accurately and reliably. But while climbing up this rather steep learning curve, I discovered that some motor control systems are able to automatically tune their PID controllers. I got the impression from the documentation that LinuxCNC was not one of them. This proved not to be true.

The LinuxCNC pid component is the recommended PID controller to use. It uses eight constants Pgain, Igain, Dgain, bias, FF0, FF1, FF2 and FF3 to calculate the output value based on the current and wanted state, and all of these need to have a sensible value for the controller to behave properly. Note that there are even more values involved; these are just the most important ones. In my case I need the X, Y and Z axes to follow the requested path with little error. This has proved quite a challenge for someone who has never tuned a PID controller before, but there is at least some help to be found.
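As a rough sketch of how these constants combine (the real pid component also handles output limits, deadband and similar details, so see its man page for the exact formula), the output is essentially a weighted sum of the error terms and the commanded-motion feed-forward terms:

```python
def pid_output(error, error_int, error_deriv,
               cmd, cmd_d, cmd_dd, cmd_ddd,
               Pgain, Igain, Dgain, bias,
               FF0, FF1, FF2, FF3):
    """Simplified sketch of the pid component's output calculation."""
    return (bias
            + Pgain * error        # proportional on the position error
            + Igain * error_int    # integral of the error over time
            + Dgain * error_deriv  # derivative of the error
            + FF0 * cmd            # feed-forward on commanded position
            + FF1 * cmd_d          # ... on commanded velocity
            + FF2 * cmd_dd         # ... on commanded acceleration
            + FF3 * cmd_ddd)       # ... on the third derivative
```

With everything zero except Pgain and bias, the output reduces to bias + Pgain * error, which is why tuning usually starts with the proportional gain.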

I discovered that included in LinuxCNC was an old PID component, at_pid, claiming to have auto tuning capabilities. Sadly it had been neglected since 2011, and could not be used as a drop-in replacement for the default pid component. One would have to rewrite the LinuxCNC HAL setup to test at_pid. This was rather sad, when I wanted to quickly test auto tuning to see if it did a better job than me at figuring out good P, I and D values to use.

I decided to have a look at whether the situation could be improved. This involved trying to understand the code and history of the pid and at_pid components. Apparently they had a common ancestor, as the code structure, comments and variable names were quite close to each other. Sadly this was not reflected in the git history, making it hard to figure out what really happened. My guess is that the author of at_pid.c took a version of pid.c, rewrote it to follow the structure he wished pid.c to have, then added support for auto tuning and finally got it included into the LinuxCNC repository. The restructuring and lack of early history made it harder to figure out which parts of the code were relevant to the auto tuning, and which parts needed to be updated to work the same way as the current pid.c implementation. I started by trying to isolate the relevant changes in pid.c and applying them to at_pid.c. My aim was to make sure the at_pid component could replace the pid component with a simple change in the HAL setup loadrt line, without having to "rewire" the rest of the HAL configuration. After a few hours following this approach, I had learned quite a lot about the code structure of both components, while concluding I was heading down the wrong rabbit hole, and should get back to the surface and find a different path.

For the second attempt, I decided to throw away all the PID control related parts of the original at_pid.c, and instead isolate and lift the auto tuning part of the code and inject it into a copy of pid.c. This ensured compatibility with the current pid component, while adding auto tuning as a run time option. To make it easier to identify the relevant parts in the future, I wrapped all the auto tuning code in '#ifdef AUTO_TUNER'. The end result behaves just like the current pid component by default, as that part of the code is identical. The new component entered the LinuxCNC master branch a few days ago.

To enable auto tuning, one needs to set a few HAL pins in the PID component. The most important ones are tune-effort, tune-mode and tune-start. But let's take a step back and see what the auto tuning code will do. I do not know the mathematical foundation of the at_pid algorithm, but from observation I can tell that the algorithm will, when enabled, produce a square wave pattern centered around the bias value on the output pin of the PID controller. This can be seen using the HAL Scope provided by LinuxCNC. In my case, this is translated into voltage (+-10V) sent to the motor controller, which in turn is translated into motor speed. So at_pid will ask the motor to move the axis back and forth. The number of cycles in the pattern is controlled by the tune-cycles pin, and the extremes of the wave pattern are controlled by the tune-effort pin. Of course, trying to change the direction of a physical object instantly (as in going directly from a positive voltage to the equivalent negative voltage) does not change its velocity instantly, and it takes some time for the object to slow down and move in the opposite direction. This results in a smoother movement wave form, as the axis in question vibrates back and forth. When the axis reaches the target speed in the opposing direction, the auto tuner changes direction again. After several of these changes, the average time delay between the 'peaks' and 'valleys' of this movement graph is used to calculate proposed values for Pgain, Igain and Dgain, which are then inserted into the HAL model for the pid controller to use. The auto tuned settings are not great, but they work a lot better than the values I had been able to cook up on my own, at least for the horizontal X and Y axes. But I had to use very small tune-effort values, as my motor controllers error out if the voltage changes too quickly. I've been less lucky with the Z axis, which is moving a heavy object up and down, and seems to confuse the algorithm. The Z axis movement became a lot better when I introduced a bias value to counter the gravitational drag, but I will have to work a lot more on the Z axis PID values.
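For illustration, the behaviour described above resembles the classic relay (Åström–Hägglund) auto tuning experiment; a sketch of how that general method turns the square wave measurements into gains might look like this. To be clear, this is the textbook rule of thumb, not the actual at_pid code:

```python
import math

def relay_tuning_gains(tune_effort, osc_amplitude, osc_period):
    """Estimate PID gains from a relay (square wave) experiment.

    tune_effort   -- half-height of the square wave on the output pin
    osc_amplitude -- measured half-amplitude of the axis oscillation
    osc_period    -- measured oscillation period in seconds
    """
    # Describing-function estimate of the ultimate (critical) gain.
    ku = 4.0 * tune_effort / (math.pi * osc_amplitude)
    # Classic Ziegler-Nichols PID rules from ultimate gain and period.
    kp = 0.6 * ku
    ti = osc_period / 2.0          # integral time constant
    td = osc_period / 8.0          # derivative time constant
    # Express in the parallel Pgain/Igain/Dgain form.
    return {"Pgain": kp, "Igain": kp / ti, "Dgain": kp * td}
```

Ziegler-Nichols tends to give fairly aggressive gains, so whatever an auto tuner proposes is best treated as a starting point for further manual tuning.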

Armed with this knowledge, it is time to look at how to do the tuning. Let's say the HAL configuration in question loads the PID component for X, Y and Z like this:

loadrt pid names=pid.x,pid.y,pid.z

With the new and improved at_pid component, the line will look like this:

loadrt at_pid names=pid.x,pid.y,pid.z

The rest of the HAL setup can stay the same. This works because the components are referenced by name. If the component had used count=3 instead, all uses of pid.# would have to be changed to at_pid.#.

To start tuning the X axis, move the axis to the middle of its range, to make sure it does not hit anything when it starts moving back and forth. Next, set tune-effort to a low number in the output range. I used 0.1 as my initial value. Next, assign 1 to the tune-mode pin. Note, this will disable the PID controlling part and feed 0 to the output pin, which in my case initially caused a lot of drift. For X and Y it proved to be a good idea to tune the motor driver to make sure 0 voltage stopped the motor rotation. On the other hand, for the Z axis this proved to be a bad idea, so it will depend on your setup. It might help to set the bias value to an output value that reduces or eliminates the axis drift. Finally, after setting tune-mode, set tune-start to 1 to activate the auto tuning. If all goes well, your axis will vibrate for a few seconds, and when it is done, new values for Pgain, Igain and Dgain will be active. To test them, change tune-mode back to 0. Note that this might cause the machine to suddenly jerk as it brings the axis back to its commanded position, which it might have drifted away from during tuning. To summarize with some halcmd lines:

setp pid.x.tune-effort 0.1
setp pid.x.tune-mode 1
setp pid.x.tune-start 1
# wait for the tuning to complete
setp pid.x.tune-mode 0

After doing this task quite a few times while trying to figure out how to properly tune the PID controllers on the machine, I decided to see if the process could be automated, and wrote a script to do the entire tuning process from power on. The end result will ensure the machine is powered on and ready to run, home all axes if that is not already done, check that the extra tuning pins are available, move each axis to its mid point, run the auto tuning and re-enable the pid controller when it is done. It can be run several times. Check out the run-auto-pid-tuner script on GitHub if you want to learn how it is done.

My hope is that this little adventure can inspire someone who knows more about motor PID controller tuning to implement even better algorithms for automatic PID tuning in LinuxCNC, making life easier for both me and all the others who want to use LinuxCNC but lack the in-depth knowledge needed to tune PID controllers well.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

16 July, 2022 08:30PM

hackergotchi for Thomas Goirand

Thomas Goirand

My work during debcamp

I arrived in Prizren late on Wednesday. Here’s what I did during debcamp (so over 3 days). I hope this post just motivates others to contribute more to Debian.

At least 2 DDs want to upload packages that need a new version of python3-jsonschema (ie: version > 4.x). Unfortunately, version 4 broke a few packages. I therefore uploaded it to Experimental a few months/weeks ago, so I could see the results of autopkgtest by reading the pseudo-excuse page. And it showed that a few packages broke. Here are the ones used by (or part of) OpenStack:

  • Nova
  • Designate
  • Ironic
  • python-warlock
  • Sahara
  • Vitrage

Thanks to a reactive upstream, I was able to fix the first 4 above, but not Sahara yet. Vitrage popped up when I uploaded Debian release 2 of jsonschema, surprisingly. Also, python3-jsonschema's autopkgtest itself was broken because python3-pip was missing from its dependencies, but that should be fixed too.
I then filed bugs for packages not under my control:

  • bmtk
  • python-asdf

It looks like now there's also spyder, which wasn't in the list a few hours ago. Maybe I should also file a bug against it. At this point, I don't think the python-jsonschema transition is finished, but it's on the right track.

Then I also uploaded a new package of Ceph removing the ceph-mgr-diskprediction-local package, because it depended on python3-sklearn, which the release team wanted to remove. I also prepared a point release update for it, but I'm currently waiting for the previous upload to migrate to testing before uploading the point release.

Last, I wrote the missing “update” command for extrepo, and pushed the merge request to Salsa. Now extrepo should be feature complete (at least from my point of view).

I also merged the patch for numberstation fixing debian/copyright, and uploaded it to the NEW queue. It's a new package that does two-factor authentication and is mobile friendly: it works perfectly on any Mobian-powered phone.

Next, I intend to work with Arthur on the Cloud image finder. I hope we can find the time to work on it so it does what I need (ie: support the kind of setup I want to do, with HA, puppet, etc.).

16 July, 2022 08:22PM by Goirand Thomas

Russ Allbery

INN 2.7.0

This is the first major release of the INN news server package since 2015. It incorporates tons of work on just about every part of INN, ranging from a brand new overview backend contributed by Bo Lindbergh through Cancel-Lock support contributed by Julien ÉLIE to numerous smaller changes in configuration files, protocol support, and overall simplification.

Since this represents seven years of development, there are too many major changes to summarize in a short blog post, so I'll simply link to the INN 2.7.0 NEWS file for all of the details, including breaking changes to watch out for when upgrading.

INN 2.7 is now the stable branch, and will be maintained on the 2.7 Git branch. The main branch is now open for development targeting 2.8.0. (I'm still hoping to get to the build system overhaul before 2.8.0 is released.) As of tonight, if all goes well, the nightly stable snapshots will be generated from the 2.7 branch instead of the 2.6 branch, so be aware that you will need to pay close attention to the upgrade if you're using a snapshot.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

You can get the latest version from the official ISC download page or from my personal INN pages. The latter also has links to the other INN documentation.

16 July, 2022 06:16PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Rust GUI advice

The piece is largely about Rust, but Raph Levien's blog post about Rust GUI toolkits contains some of the most thoughtful writings on GUI toolkits that I've seen in a while, regardless of language. Recommended.

16 July, 2022 10:13AM

July 15, 2022

Mike Hommey

Announcing git-cinnabar 0.5.9

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.8?

  • Updated git to 2.37.1 for the helper.
  • Various python 3 fixes.
  • Fixed stream bundle.
  • Added python and py.exe as executables tried on top of python3 and python2.
  • Improved handling of ill-formed local urls.
  • Fixed using old mercurial libraries that don’t support bundlev2 with a server that does.
  • When fsck reports the metadata as broken, prevent further updates to the repo.
  • When issue #207 is detected, mark the metadata as broken.
  • Added support for logging redirection to a file.
  • Now ignore refs/cinnabar/replace/ refs, and always use the corresponding metadata instead.
  • Various git cinnabar fsck fixes.

15 July, 2022 10:11PM by glandium

hackergotchi for Bits from Debian

Bits from Debian

(Unofficial) Debian Perl Sprint 2022

Three members of the Debian Perl Group met in Hamburg between May 23 and May 30 2022 as part of the Debian Reunion Hamburg to continue perl development work for Bookworm and to work on QA tasks across our 3800+ packages.

The participants had a good time and met other Debian friends. The sprint was also productive:

  • pkg-perl-tools and dh-make-perl were improved and extended.
  • More than 50 uploads were done, and more than 30 bugs were fixed or at least triaged.
  • autopkgtests were added to lots of packages.
  • Some requests to remove obsolete packages were filed as well.

The more detailed report was posted to the Debian Perl mailing list.

The participants would like to thank the Debian Reunion Hamburg organizers for providing the framework for our sprint, all sponsors of the event, and all donors to the Debian project who helped to cover parts of our expenses.

Debian Reunion Hamburg 2022 group photo

15 July, 2022 03:35PM by gregor herrmann

hackergotchi for Steve Kemp

Steve Kemp

So we come to Lisp

Recently I've been working with simple/trivial scripting languages, and I guess I finally reached a point where I thought "Lisp? Why not". One of the reasons for recent experimentation was thinking about the kind of minimalism that makes implementing a language less work - being able to actually use the language to write itself.

FORTH is my recurring example, because implementing it mostly means writing a virtual machine which consists of memory ("cells") along with a pair of stacks, and some primitives for operating upon them. Once you have that groundwork in place you can layer the higher-level constructs (such as "for", "if", etc).

Lisp allows a similar approach, albeit with slightly fewer low-level details required, and far less tortuous thinking. Lisp always feels higher-level to me anyway, given the explicit data-types ("list", "string", "number", etc).

Here's something that works in my toy lisp:

;; Define a function, `fact`, to calculate factorials (recursively).
(define fact (lambda (n)
  (if (<= n 1)
      1
      (* n (fact (- n 1))))))

;; Invoke the factorial function, using apply
(apply (list 1 2 3 4 5 6 7 8 9 10)
  (lambda (x)
    (print "%s! => %s" x (fact x))))

The core language doesn't have helpful functions to filter lists, or build up lists by applying a specified function to each member of a list, but adding them is trivial using the standard car, cdr, and simple recursion. That means you end up writing lots of small functions like this:

(define zero? (lambda (n) (if (= n 0) #t #f)))
(define even? (lambda (n) (if (zero? (% n 2)) #t #f)))
(define odd?  (lambda (n) (! (even? n))))
(define sq    (lambda (x) (* x x)))

Once you have them you can use them in a way that feels simple and natural:

(print "Even numbers from 0-10: %s"
  (filter (nat 11) (lambda (x) (even? x))))

(print "Squared numbers from 0-10: %s"
  (map (nat 11) (lambda (x) (sq x))))

This all feels very sexy and simple, because the implementations of map, apply, filter are all written using the lisp - and they're easy to write.
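For instance, a recursive map in the same toy dialect might look something like this (a sketch in the style of the snippets above; it assumes the core provides cons and an empty-list predicate, here called nil?, whose actual names may differ):

```lisp
;; Sketch of map via car/cdr recursion -- cons and nil? are assumed
;; to exist in the toy dialect, with possibly different names.
(define map (lambda (lst fn)
  (if (nil? lst)
      lst
      (cons (fn (car lst)) (map (cdr lst) fn)))))
```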

Lisp takes things further than some other "basic" languages because of the (infamous) support for macros. But even without them, writing new useful functions is pretty simple. Where do things struggle? I guess I don't actually have a history of using lisp to actually solve problems - although it's great for configuring my editor…

Anyway I guess the journey continues. Having looked at the obvious "minimal core" languages I need to go further afield:

I'll make an attempt to look at some of the esoteric programming languages, and see if any of those are fun to experiment with.

15 July, 2022 01:00AM

Reproducible Builds (diffoscope)

diffoscope 219 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 219. This version includes the following changes:

* Don't traceback if we encounter an invalid Unicode character in Haskell
  versioning headers. (Closes: reproducible-builds/diffoscope#307)
* Update various copyright years.

You find out more by visiting the project homepage.

15 July, 2022 12:00AM

July 14, 2022

hackergotchi for Patryk Cisek

Patryk Cisek

Playing with NitroKey 3 -- PC runner using USBIP

I’ve been wanting to use my brand new NitroKey 3, but TOTP is not supported yet. So, I’m looking to implement it myself, since the firmware and tooling are open-source. NitroKey 3’s firmware is based on the Trussed framework. In essence, it’s been designed so that anyone can implement an independent Trussed application. Each such application is like a module that can be added to a Trussed-based product. So if I write a Trussed app, I’d be able to add it to NK3’s firmware.

14 July, 2022 11:01PM by Patryk Cisek

July 13, 2022

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

rfoaas 2.3.2: New upstream accessors

rfoaas greed example

FOAAS has by now moved to version 2.3.2 in its repo. Release 2.3.2 of rfoaas catches up, and is the first release in about two and a half years.

This 2.3.2 release of FOAAS brings us six new REST access points: absolutely(), dense(), dumbledore(), lowpoly(), understand(), and yeah(). Along with these new functions, documentation and tests were updated.

My CRANberries service provides a diff to the previous CRAN release. Questions, comments, etc. should go to the GitHub issue tracker. More background information is on the project page as well as on the github repo.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 July, 2022 11:08PM

Reproducible Builds

Reproducible Builds in June 2022

Welcome to the June 2022 report from the Reproducible Builds project. In these reports, we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.


Save the date!

Despite several delays, we are pleased to announce dates for our in-person summit this year:

November 1st 2022 — November 3rd 2022


The event will happen in/around Venice (Italy), and we intend to pick a venue reachable via the train station and an international airport. However, the precise venue will depend on the number of attendees.

Please see the announcement mail from Mattia Rizzolo, and do keep an eye on the mailing list for further announcements as it will hopefully include registration instructions.


News

David Wheeler filed an issue against the Rust programming language to report that builds are “not reproducible because full path to the source code is in the panic and debug strings”. Luckily, as one of the responses mentions: “the --remap-path-prefix solves this problem and has been used to great effect in build systems that rely on reproducibility (Bazel, Nix) to work at all” and that “there are efforts to teach cargo about it here”.
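For reference, the flag mentioned above can be applied to every rustc invocation in a project via cargo configuration; a sketch (the paths below are placeholders):

```toml
# .cargo/config.toml -- replace the absolute checkout path embedded in
# panic/debug strings with a stable prefix (paths are placeholders)
[build]
rustflags = ["--remap-path-prefix=/home/builder/project=/build"]
```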


The Python Security team announced that:

The ctx hosted project on PyPI was taken over via user account compromise and replaced with a malicious project which contained runtime code which collected the content of os.environ.items() when instantiating Ctx objects. The captured environment variables were sent as a base64 encoded query parameter to a Heroku application […]

As their announcement later goes onto state, version-pinning using “hash-checking mode” can prevent this attack, although this does depend on specific installations using this mode, rather than a prevention that can be applied systematically.
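As an illustration of that mode (the package name and digest below are placeholders), a requirements file pins both the version and the expected archive digest:

```text
# requirements.txt -- in hash-checking mode, pip aborts if any fetched
# archive's digest differs from what is pinned here (values are placeholders)
somepackage==1.0.0 \
    --hash=sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```

Such a file is installed with `pip install --require-hashes -r requirements.txt`; pip also enables the mode automatically as soon as any requirement carries a `--hash` option.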


Developer vanitasvitae published an interesting and entertaining blog post detailing the blow-by-blow steps of debugging a reproducibility issue in PGPainless, a library which “aims to make using OpenPGP in Java projects as simple as possible”.

Whilst their in-depth research into the internals of the .jar may have been unnecessary, given that diffoscope would have identified the difference, there is something to be said for occasionally delving into seemingly "low-level" details, as well as for describing one's debugging process. Indeed, as vanitasvitae writes:

Yes, this would have spared me from 3h of debugging 😉 But I probably would also not have gone onto this little dive into the JAR/ZIP format, so in the end I’m not mad.


Kees Cook published a short and practical blog post detailing how he uses reproducibility properties to aid work to replace one-element arrays in the Linux kernel. Kees’ approach is based on the principle that if a (small) proposed change is considered equivalent by the compiler, then the generated output will be identical… but only if no other arbitrary or unrelated changes are introduced. Kees mentions the “fantastic” diffoscope tool, as well as various kernel-specific build options (eg. KBUILD_BUILD_TIMESTAMP) in order to “prepare my build with the ‘known to disrupt code layout’ options disabled”.


Stefano Zacchiroli gave a presentation at GDR Sécurité Informatique based in part on a paper co-written with Chris Lamb titled Increasing the Integrity of Software Supply Chains. (Tweet)


Debian

In Debian in this month, 28 reviews of Debian packages were added, 35 were updated and 27 were removed this month adding to our knowledge about identified issues. Two issue types were added: nondeterministic_checksum_generated_by_coq and nondetermistic_js_output_from_webpack.

After Holger Levsen found hundreds of packages in the bookworm distribution that lack .buildinfo files, he uploaded 404 source packages to the archive (with no meaningful source changes). Bookworm now shows only 8 packages without .buildinfo files, and those 8 are fixed in unstable and should migrate shortly. By contrast, Debian unstable will always have packages without .buildinfo files, as this is how they come through the NEW queue. However, as these packages were not built on the official build servers (ie. they were uploaded by the maintainer) they will never migrate to Debian testing. In the future, therefore, testing should never have packages without .buildinfo files again.

Roland Clobus posted yet another in-depth status report about his progress making the Debian Live images build reproducibly to our mailing list. In this update, Roland mentions that “all major desktops build reproducibly with bullseye, bookworm and sid” but also goes on to outline the progress made with automated testing of the generated images using openQA.


GNU Guix

Vagrant Cascadian made a significant number of contributions to GNU Guix:

Elsewhere in GNU Guix, Ludovic Courtès published a paper in the journal The Art, Science, and Engineering of Programming called Building a Secure Software Supply Chain with GNU Guix:

This paper focuses on one research question: how can [Guix](https://www.gnu.org/software/guix/) and similar systems allow users to securely update their software? […] Our main contribution is a model and tool to authenticate new Git revisions. We further show how, building on Git semantics, we build protections against downgrade attacks and related threats. We explain implementation choices. This work has been deployed in production two years ago, giving us insight on its actual use at scale every day. The Git checkout authentication at its core is applicable beyond the specific use case of Guix, and we think it could benefit to developer teams that use Git.

A full PDF of the text is available.


openSUSE

In the world of openSUSE, SUSE announced at SUSECon that they are preparing to meet SLSA level 4. (SLSA (Supply chain Levels for Software Artifacts) is a new industry-led standardisation effort that aims to protect the integrity of the software supply chain.)

However, at the time of writing, timestamps within RPM archives are not normalised, so bit-for-bit identical reproducible builds are not possible. Some in-toto provenance files were published for SUSE’s SLE-15-SP4 as one result of the SLSA level 4 effort. Old binaries are not rebuilt, so only new builds (e.g. maintenance updates) have this metadata added.

Lastly, Bernhard M. Wiedemann posted his usual monthly openSUSE reproducible builds status report.


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 215, 216 and 217 to Debian unstable. Chris Lamb also made the following changes:

  • New features:

    • Print profile output if we were called with --profile and we were killed via a TERM signal. This should help in situations where diffoscope is terminated due to some sort of timeout. []
    • Support both PyPDF 1.x and 2.x. []
  • Bug fixes:

    • Also catch IndexError exceptions (in addition to ValueError) when parsing .pyc files. (#1012258)
    • Correct the logic for supporting different versions of the argcomplete module. []
  • Output improvements:

    • Don’t leak the (likely-temporary) pathname when comparing PDF documents. []
  • Logging improvements:

    • Update test fixtures for GNU readelf 2.38 (now in Debian unstable). [][]
    • Be more specific about the minimum required version of readelf (i.e. binutils), as it appears that this ‘patch’ level version change resulted in a change of output, not the ‘minor’ version. []
    • Use our @skip_unless_tool_is_at_least decorator (NB. at_least) over @skip_if_tool_version_is (NB. is) to fix tests under Debian stable. []
    • Emit a warning if/when we are handling a UNIX TERM signal. []
  • Codebase improvements:

    • Clarify in what situations the main finally block gets called with respect to TERM signal handling. []
    • Clarify control flow in the diffoscope.profiling module. []
    • Correctly package the scripts/ directory. []

In addition, Edward Betts updated a broken link to the RSS on the diffoscope homepage and Vagrant Cascadian updated the diffoscope package in GNU Guix [][][].
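
The TERM-signal behaviour noted above (emit the --profile output when the process is killed, e.g. by a CI timeout) can be sketched in a few lines of portable shell. This is a hypothetical stand-in script, not diffoscope's actual implementation:

```shell
# Hypothetical stand-in (not diffoscope itself): a long-running tool that
# prints its profiling summary when killed via SIGTERM.
cat > /tmp/profile_on_term.sh <<'EOF'
#!/bin/sh
trap 'echo "profile: comparators 1.2s"; exit 124' TERM
sleep 30 &
wait $!
EOF
sh /tmp/profile_on_term.sh > /tmp/profile.out &
pid=$!
sleep 1                      # give the script time to reach its wait
kill -TERM "$pid"
wait "$pid" 2>/dev/null || true
cat /tmp/profile.out         # the trap ran instead of a silent death
```

The key is that `wait` is interruptible, so the trap runs even though the tool was mid-operation.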


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Holger Levsen:

    • Add a package set for packages that use the R programming language [] as well as one for Rust [].
    • Improve package set matching for Python [] and font-related [] packages.
    • Install the lz4, lzop and xz-utils packages on all nodes in order to detect running kernels. []
    • Improve the cleanup mechanisms when testing the reproducibility of Debian Live images. [][]
    • In the automated node health checks, deprioritise the “generic kernel warning”. []
  • Roland Clobus (Debian Live image reproducibility):

    • Add various maintenance jobs to the Jenkins view. []
    • Cleanup old workspaces after 24 hours. []
    • Cleanup temporary workspace and resulting directories. []
    • Implement a number of fixes and improvements around publishing files. [][][]
    • Don’t attempt to preserve the file timestamps when copying artifacts. []

And finally, node maintenance was also performed by Mattia Rizzolo [].


Mailing list and website

On our mailing list this month:

Lastly, Chris Lamb updated the main Reproducible Builds website and documentation in a number of small ways, but primarily published an interview with Hans-Christoph Steiner of the F-Droid project. Chris Lamb also added a CoffeeScript example for parsing and using the SOURCE_DATE_EPOCH environment variable []. In addition, Sebastian Crane very-helpfully updated the screenshot of salsa.debian.org’s “request access” button on the How to join the Salsa group page. []
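
For readers unfamiliar with it, SOURCE_DATE_EPOCH is simply a UNIX timestamp that build tools consult in place of the current time, so that later rebuilds embed identical dates. A minimal shell illustration (the timestamp value here is arbitrary):

```shell
# SOURCE_DATE_EPOCH is a UNIX timestamp honoured in place of "now";
# the spec suggests deriving it from the VCS, e.g.:
#   SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
export SOURCE_DATE_EPOCH=1000000000   # arbitrary fixed value for the demo
# A build step would then embed this instead of $(date):
BUILD_DATE=$(date -u -d "@${SOURCE_DATE_EPOCH}" +%Y-%m-%d)
echo "$BUILD_DATE"   # → 2001-09-09, on every rebuild
```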


Contact

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

13 July, 2022 01:58PM

July 12, 2022

Matthew Garrett

Responsible stewardship of the UEFI secure boot ecosystem

After I mentioned that Lenovo are now shipping laptops that only boot Windows by default, a few people pointed to a Lenovo document that says:

Starting in 2022 for Secured-core PCs it is a Microsoft requirement for the 3rd Party Certificate to be disabled by default.

"Secured-core" is a term used to describe machines that meet a certain set of Microsoft requirements around firmware security, and by and large it's a good thing - devices that meet these requirements are resilient against a whole bunch of potential attacks in the early boot process. But unfortunately the 2022 requirements don't seem to be publicly available, so it's difficult to know what's being asked for and why. But first, some background.

Most x86 UEFI systems that support Secure Boot trust at least two certificate authorities:

1) The Microsoft Windows Production PCA - this is used to sign the bootloader in production Windows builds. Trusting this is sufficient to boot Windows.
2) The Microsoft Corporation UEFI CA - this is used by Microsoft to sign non-Windows UEFI binaries, including built-in drivers for hardware that needs to work in the UEFI environment (such as GPUs and network cards) and bootloaders for non-Windows.

The apparent secured-core requirement for 2022 is that the second of these CAs should not be trusted by default. As a result, drivers or bootloaders signed with this certificate will not run on these systems. This means that, out of the box, these systems will not boot anything other than Windows[1].

Given the association with the secured-core requirements, this is presumably a security decision of some kind. Unfortunately, we have no real idea what this security decision is intended to protect against. The most likely scenario is concerns about the (in)security of binaries signed with the third-party signing key - there are some legitimate concerns here, but I'm going to cover why I don't think they're terribly realistic.

The first point is that, from a boot security perspective, a signed bootloader that will happily boot unsigned code kind of defeats the point. Kaspersky did it anyway. The second is that even a signed bootloader that is intended to only boot signed code may run into issues in the event of security vulnerabilities - the Boothole vulnerabilities are an example of this, covering multiple issues in GRUB that could allow for arbitrary code execution and potential loading of untrusted code.

So we know that signed bootloaders that will (either through accident or design) execute unsigned code exist. The signatures for all the known vulnerable bootloaders have been revoked, but that doesn't mean there won't be other vulnerabilities discovered in future. Configuring systems so that they don't trust the third-party CA means that those signed bootloaders won't be trusted, which means any future vulnerabilities will be irrelevant. This seems like a simple choice?

There's actually a couple of reasons why I don't think it's anywhere near that simple. The first is that whenever a signed object is booted by the firmware, the trusted certificate used to verify that object is measured into PCR 7 in the TPM. If a system previously booted with something signed with the Windows Production CA, and is now suddenly booting with something signed with the third-party UEFI CA, the values in PCR 7 will be different. TPMs support "sealing" a secret - encrypting it with a policy that the TPM will only decrypt it if certain conditions are met. Microsoft make use of this for their default Bitlocker disk encryption mechanism. The disk encryption key is encrypted by the TPM, and associated with a specific PCR 7 value. If the value of PCR 7 doesn't match, the TPM will refuse to decrypt the key, and the machine won't boot. This means that attempting to attack a Windows system that has Bitlocker enabled using a non-Windows bootloader will fail - the system will be unable to obtain the disk unlock key, which is a strong indication to the owner that they're being attacked.
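
The measurement into PCR 7 described here is an extend operation: a PCR cannot be set directly, only updated as SHA256(old value || new measurement), which is why booting via a differently-signed loader necessarily yields a different PCR 7. A toy simulation with sha256sum (hex strings stand in for the raw digests a real TPM would hash):

```shell
# Toy PCR-extend simulation: new = SHA256(old || measurement).
# Hex strings stand in for raw digests; a real TPM hashes raw bytes.
extend() {
    printf '%s%s' "$1" "$2" | sha256sum | cut -d' ' -f1
}
pcr7=$(printf '0%.0s' $(seq 1 64))   # PCRs reset to all-zeroes at boot
win=$(extend "$pcr7" "windows-production-pca-cert")
alt=$(extend "$pcr7" "third-party-uefi-ca-cert")
# Different final PCR 7 values: a key sealed against the Windows value
# cannot be unsealed after booting a third-party-signed loader.
echo "windows boot:    $win"
echo "third-party CA:  $alt"
```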

The second is that this is predicated on the idea that removing the third-party bootloaders and drivers removes all the vulnerabilities. In fact, there's been rather a lot of vulnerabilities in the Windows bootloader. A broad enough vulnerability in the Windows bootloader is arguably a lot worse than a vulnerability in a third-party loader, since it won't change the PCR 7 measurements and the system will boot happily. Removing trust in the third-party CA does nothing to protect against this.

The third reason doesn't apply to all systems, but it does to many. System vendors frequently want to ship diagnostic or management utilities that run in the boot environment, but would prefer not to have to go to the trouble of getting them all signed by Microsoft. The simple solution to this is to ship their own certificate and sign all their tooling directly - the secured-core Lenovo I'm looking at currently is an example of this, with a Lenovo signing certificate. While everything signed with the third-party signing certificate goes through some degree of security review, there's no requirement for any vendor tooling to be reviewed at all. Removing the third-party CA does nothing to protect the user against the code that's most likely to contain vulnerabilities.

Obviously I may be missing something here - Microsoft may well have a strong technical justification. But they haven't shared it, and so right now we're left making guesses. And right now, I just don't see a good security argument.

But let's move on from the technical side of things and discuss the broader issue. The reason UEFI Secure Boot is present on most x86 systems is that Microsoft mandated it back in 2012. Microsoft chose to be the only trusted signing authority. Microsoft made the decision to assert that third-party code could be signed and trusted.

We've certainly learned some things since then, and a bunch of things have changed. Third-party bootloaders based on the Shim infrastructure are now reviewed via a community-managed process. We've had a productive coordinated response to the Boothole incident, which also taught us that the existing revocation strategy wasn't going to scale. In response, the community worked with Microsoft to develop a specification for making it easier to handle similar events in future. And it's also worth noting that after the initial Boothole disclosure was made to the GRUB maintainers, they proactively sought out other vulnerabilities in their codebase rather than simply patching what had been reported. The free software community has gone to great lengths to ensure third-party bootloaders are compatible with the security goals of UEFI Secure Boot.

So, to have Microsoft, the self-appointed steward of the UEFI Secure Boot ecosystem, turn round and say that a bunch of binaries that have been reviewed through processes developed in negotiation with Microsoft, implementing technologies designed to make management of revocation easier for Microsoft, and incorporating fixes for vulnerabilities discovered by the developers of those binaries who notified Microsoft of these issues despite having no obligation to do so, and which have then been signed by Microsoft are now considered by Microsoft to be insecure is, uh, kind of impolite? Especially when unreviewed vendor-signed binaries are still considered trustworthy, despite no external review being carried out at all.

If Microsoft had a set of criteria used to determine whether something is considered sufficiently trustworthy, we could determine which of these we fell short on and do something about that. From a technical perspective, Microsoft could set criteria that would allow a subset of third-party binaries that met additional review be trusted without having to trust all third-party binaries[2]. But, instead, this has been a decision made by the steward of this ecosystem without consulting major stakeholders.

If there are legitimate security concerns, let's talk about them and come up with solutions that fix them without doing a significant amount of collateral damage. Don't complain about a vendor blocking your apps and then do the same thing yourself.

[Edit to add: there seems to be some misunderstanding about where this restriction is being imposed. I bought this laptop because I'm interested in investigating the Microsoft Pluton security processor, but Pluton is not involved at all here. The restriction is being imposed by the firmware running on the main CPU, not any sort of functionality implemented on Pluton]

[1] They'll also refuse to run any drivers that are stored in flash on Thunderbolt devices, which means eGPU setups may be more complicated, as will netbooting off Thunderbolt-attached NICs
[2] Use a different leaf cert to sign the new trust tier, add the old leaf cert to dbx unless a config option is set, leave the existing intermediate in db

12 July, 2022 05:50AM

July 09, 2022

Andrew Cater

20220709 2100 UTC - Finished Debian media testing for the day

 I've just finished my last test: Sledge is finishing his and will then push the release out. Today's been a bit slow and steady - but we've finally got there.

Thanks, as ever, are due to the release team for actually giving us an update, the press team for announcements - and, of course, the various sponsors, administrators and maintainers of Debian infrastructure like cdimage.debian.org and the CD building machines.

It's been a quiet release for the media team in terms of participation - we've not had our usual tester for debian-edu and it's been a bit subdued altogether.

Not even as many blog posts as usual: I suppose I'll make up for it in August at the BBQ in Cambridge - if we don't all get another lockdown / COVID-19 variants / fuel prices at ££££ per litre to dissuade us.

09 July, 2022 09:07PM by Andrew Cater (noreply@blogger.com)

Testing 11.4 Debian media images - almost finished - 20220709 1933 UTC

 We're flagging a bit now, I think, but we're close to the end. The standard Debian images caused no problems: Sledge and I are just finishing up the last few live images to test now.

Thanks, as ever, to the crew: RattusRattus and Isy, Sledge struggling through feeling awful. No debian-edu testing today, unfortunately, but that almost never breaks anyway.

Everyone's getting geared up for Kosovo - you'll see the other three there with any luck - and you'd catch all of us at the BBQ in Cambridge. It's going to be a hugely busy month and a bit for Steve and the others. :)

09 July, 2022 07:34PM by Andrew Cater (noreply@blogger.com)

As has become traditional - blogging as part of the media release for Debian 11.4 - 202207091436 UTC

 A lower profile release today: Sledge working in the background as affected by COVID. RattusRattus and Isy doing sterling service on the other side of Cambridge, /me over here.

Testing on the standard install media is pretty much done: Isy, Andy and Sledge have moved on to testing the live images.

Stupidly hot for UK - it's 28 degrees indoors with windows open.

All good so far :)


09 July, 2022 02:37PM by Andrew Cater (noreply@blogger.com)

Dirk Eddelbuettel

Rcpp 1.0.9 on CRAN: Regular Updates

rcpp logo

The Rcpp team is pleased to announce the newest release 1.0.9 of Rcpp which hit CRAN late yesterday, and has been uploaded to Debian as well. Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions and of course at r2u. The release was prepared on July 2, but it took a few days to clear a handful of spurious errors as false positives with CRAN — this can happen when the set of reverse dependencies is so large, and the CRAN team remains busy. This release continues with the six-month cycle started with release 1.0.5 in July 2020. (This time, CRAN had asked for an interim release to silence a C++ warning; we then needed a quick follow-up to tweak tests.) As a reminder, interim ‘dev’ or ‘rc’ releases should generally be available in the Rcpp drat repo. These rolling releases tend to work just as well, and are also fully tested against all reverse-dependencies.

Rcpp has become the most popular way of enhancing R with C or C++ code. Right now, around 2559 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 252 in BioConductor. On CRAN, 13.9% of all packages depend (directly) on Rcpp, and 58.5% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 61.5 million times.

This release is incremental and extends Rcpp with a number of small improvements all detailed in the NEWS file as well as below. We want to highlight the external contributions: a precious list tag is now cleared on removal (thanks to Iñaki), a move constructor and assignment for strings have been added (thanks to Dean Scarff), and two minor errors in the vignette documentation have been corrected (thanks to Bill Denney and Marco Colombo). A big Thank You! to everybody who contributed pull requests, opened or answered issues, or asked questions at StackOverflow or on the mailing list.

The full list of details follows.

Changes in Rcpp hotfix release version 1.0.9 (2022-07-02)

  • Changes in Rcpp API:

    • Accommodate C++98 compilation by adjusting attributes.cpp (Dirk in #1193 fixing #1192)

    • Accommodate newest compilers replacing deprecated std::unary_function and std::binary_function with std::function (Dirk in #1202 fixing #1201 and CRAN request)

    • Upon removal from precious list, the tag is set to null (Iñaki in #1205 fixing #1203)

    • Move constructor and assignment for strings have been added (Dean Scarff in #1219).

  • Changes in Rcpp Documentation:

    • Adjust one overflowing column (Bill Denney in #1196 fixing #1195)

    • Correct a typo in the FAQ (Marco Colombo in #1217)

  • Changes in Rcpp Deployment:

    • Accommodate four-digit version numbers in unit test (Dirk)

    • Do not run complete test suite to limit test time to CRAN preference (Dirk in #1206)

    • Small updates to the CI test containers have been made

    • Some of these changes were also applied to the interim release 1.0.8.3 made for CRAN on 2022-03-14.

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2886 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 July, 2022 02:08PM

July 08, 2022

Matthew Garrett

Lenovo shipping new laptops that only boot Windows by default

I finally managed to get hold of a Thinkpad Z13 to examine a functional implementation of Microsoft's Pluton security co-processor. Trying to boot Linux from a USB stick failed out of the box for no obvious reason, but after further examination the cause became clear - the firmware defaults to not trusting bootloaders or drivers signed with the Microsoft 3rd Party UEFI CA key. This means that given the default firmware configuration, nothing other than Windows will boot. It also means that you won't be able to boot from any third-party external peripherals that are plugged in via Thunderbolt.

There's no security benefit to this. If you want security here you're paying attention to the values measured into the TPM, and thanks to Microsoft's own specification for measurements made into PCR 7, switching from booting Windows to booting something signed with the 3rd party signing key will change the measurements and invalidate any sealed secrets. It's trivial to detect this. Distrusting the 3rd party CA by default doesn't improve security, it just makes it harder for users to boot alternative operating systems.

Lenovo, this isn't OK. The entire architecture of UEFI secure boot is that it allows for security without compromising user choice of OS. Restricting boot to Windows by default provides no security benefit but makes it harder for people to run the OS they want to. Please fix it.

08 July, 2022 06:49AM

Reproducible Builds (diffoscope)

diffoscope 218 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 218. This version includes the following changes:

* Improve output of Markdown and reStructuredText to use code blocks with
  syntax highlighting. (Closes: reproducible-builds/diffoscope#306)

You can find out more by visiting the project homepage.

08 July, 2022 12:00AM

July 07, 2022

Jonathan Dowland

Musick To Play In The Dark 2

This took a while to arrive! After the success of the reissue of Coil's Musick To Play In The Dark, it was inevitable that the second edition would also be reissued. The pre-order opened late November 2021 and mine arrived in late April this year.

Record cover

I was toying with the idea of ordering one of the most exclusive editions direct from DAIS Records, in particular the glow in the dark one, but with international shipping the cost was pretty high. I went with a UK distributor (Boomkat) instead, who had their own exclusive edition: black-in-purple-in-clear.

records

I'm happy with my decision: it's one of the most interesting variants I own, and from what I've read, glow in the dark vinyl never sounds great anyway. (I think I have at least one glow in the dark 7" single somewhere)

Stand-out track: Tiny Golden Books

Since I didn't say so last time, the stand-out track on volume one is Red Birds Will Fly Out of the East and Destroy Paris in a Night, but both volumes are full of really interesting moments (🎵 One day, your eggs are going to hatch and some very strange birds are going to emerge. 🎵)

07 July, 2022 01:54PM

July 06, 2022

Russ Allbery

Review: A Master of Djinn

Review: A Master of Djinn, by P. Djèlí Clark

Series: Dead Djinn Universe #1
Publisher: Tordotcom
Copyright: 2021
ISBN: 1-250-26767-6
Format: Kindle
Pages: 391

A Master of Djinn is the first novel in the Dead Djinn Universe, but (as you might guess from the series title) is a direct sequel to the novelette "A Dead Djinn in Cairo". The novelette is not as good as the novel, but I recommend reading it first for the character introductions and some plot elements that carry over. Reading The Haunting of Tram Car 015 first is entirely optional.

In 1912 in a mansion in Giza, a secret society of (mostly) British men is meeting. The Hermetic Brotherhood of Al-Jahiz is devoted to unlocking the mysteries of the Soudanese mystic al-Jahiz. In our world, these men would likely be colonialist plunderers. In this world, they still aspire to that role, but they're playing catch-up. Al-Jahiz bored into the Kaf, releasing djinn and magic into the world and making Egypt a world power in its own right. Now, its cities are full of clockwork marvels, djinn walk the streets as citizens, and British rule has been ejected from India and Africa by local magic. This group of still-rich romantics and crackpots hopes to discover the knowledge lost when al-Jahiz disappeared. They have not had much success.

This will not save their lives.

Fatma el-Sha'arawi is a special investigator for the Ministry of Alchemy, Enchantments, and Supernatural Entities. Her job is sorting out the problems caused by this new magic, such as a couple of young thieves with a bottle full of sleeping djinn whose angry reaction to being unexpectedly woken has very little to do with wishes. She is one of the few female investigators in a ministry that is slowly modernizing with the rest of society (Egyptian women just got the vote). She's also the one called to investigate the murder of a secret society of British men and a couple of Cairenes by a black-robed man in a golden mask.

The black-robed man claims to be al-Jahiz returned, and proves to be terrifyingly adept at manipulating crowds and sparking popular dissent. Fatma and the Ministry's first attempt to handle him is a poorly-judged confrontation stymied by hostile crowds, the man's duplicating bodyguard, and his own fighting ability. From there, it's a race between Fatma's pursuit of linear clues and the black-robed man's efforts to destabilize society.

This, like the previous short stories, is a police procedural, but it has considerably more room to breathe at novel length. That serves it well, since as with "A Dead Djinn in Cairo" the procedural part is a linear, reactive vehicle for plot exposition. I was more invested in Fatma's relationships with the supporting characters. Since the previous story, she's struck up a romance with Siti, a highly competent follower of the old Egyptian gods (Hathor in particular) and my favorite character in the book. She's also been assigned a new partner, Hadia, a new graduate and another female agent. The slow defeat of Fatma's irritation at not being allowed to work alone by Hadia's cheerful competence and persistence (and willingness to do paperwork) adds a lot to the characterization.

The setting felt a bit less atmospheric than The Haunting of Tram Car 015, but we get more details of international politics, and they're a delight. Clark takes obvious (and warranted) glee in showing how the reintroduction of magic has shifted the balance of power away from the colonial empires. Cairo is a bustling steampunk metropolis and capital of a world power, welcoming envoys from West African kingdoms alongside the (still racist and obnoxious but now much less powerful) British and other Europeans. European countries were forced to search their own mythology for possible sources of magic power, which leads to the hilarious scene of the German Kaiser carrying a sleepy goblin on his shoulder to monitor his diplomacy.

The magic of the story was less successful for me, although still enjoyable. The angels from "A Dead Djinn in Cairo" make another appearance and again felt like the freshest bit of world-building, but we don't find out much more about them. I liked the djinn and their widely-varied types and magic, but apart from them and a few glimpses of Egypt's older gods, that was the extent of the underlying structure. There is a significant magical artifact, but the characters are essentially handed an instruction manual, use it according to its instructions, and it then does what it was documented to do. It was a bit unsatisfying. I'm the type of fantasy reader who always wants to read the sourcebook for the magic system, but this is not that sort of a book.

Instead, it's the kind of book where the investigator steadily follows a linear trail of clues and leads until they reach the final confrontation. Here, the confrontation felt remarkably like cut scenes from a Japanese RPG: sudden vast changes in scale, clockwork constructs, massive monsters, villains standing on mobile platforms, and surprise combat reversals. I could almost hear the fight music and see the dialog boxes pop up. This isn't exactly a complaint — I love Japanese RPGs — but it did add to the feeling that the plot was on rails and didn't require many decisions from the protagonist. Clark also relies on an overused plot cliche in the climactic battle, which was a minor disappointment.

A Master of Djinn won the Nebula for best 2021 novel, I suspect largely on the basis of its setting and refreshingly non-European magical system. I don't entirely agree; the writing is still a bit clunky, with unnecessary sentences and stock phrases showing up here and there, and I think it suffers from the typical deficiencies of SFF writers writing mysteries or police procedurals without the plot sophistication normally found in that genre. But this is good stuff for a first novel, with fun supporting characters (loved the librarian) and some great world-building. I would happily read more in this universe.

Rating: 7 out of 10

06 July, 2022 03:04AM

July 05, 2022

Alberto García

Running the Steam Deck’s OS in a virtual machine using QEMU

SteamOS desktop

Introduction

The Steam Deck is a handheld gaming computer that runs a Linux-based operating system called SteamOS. The machine comes with SteamOS 3 (code name “holo”), which is in turn based on Arch Linux.

Although there is no SteamOS 3 installer for a generic PC (yet), it is very easy to install on a virtual machine using QEMU. This post explains how to do it.

The goal of this VM is not to play games (you can already install Steam on your computer after all) but to use SteamOS in desktop mode. The Gamescope mode (the console-like interface you normally see when you use the machine) requires additional development to make it work with QEMU and will not work with these instructions.

A SteamOS VM can be useful for debugging, development, and generally playing and tinkering with the OS without risking breaking the Steam Deck.

Running the SteamOS desktop in a virtual machine only requires QEMU and the OVMF UEFI firmware and should work in any relatively recent distribution. In this post I’m using QEMU directly, but you can also use virt-manager or some other tool if you prefer; we’re emulating a standard x86_64 machine here.

General concepts

SteamOS is a single-user operating system and it uses an A/B partition scheme, which means that there are two sets of partitions and two copies of the operating system. The root filesystem is read-only and system updates happen on the partition set that is not active. This allows for safer updates, among other things.

There is one single /home partition, shared by both partition sets. It contains the games, user files, and anything that the user wants to install there.

Although the user can trivially become root, make the root filesystem read-write and install or change anything (the pacman package manager is available), this is not recommended because

  • it increases the chances of breaking the OS, and
  • any changes will disappear with the next OS update.

A simple way for the user to install additional software that survives OS updates and doesn’t touch the root filesystem is Flatpak. It comes preinstalled with the OS and is integrated with the KDE Discover app.

Preparing all necessary files

The first thing that we need is the installer. For that we have to download the Steam Deck recovery image from here: https://store.steampowered.com/steamos/download/?ver=steamdeck&snr=

Once the file has been downloaded, we can uncompress it and we’ll get a raw disk image called steamdeck-recovery-4.img (the number may vary).

Note that the recovery image is already SteamOS (just not the most up-to-date version). If you simply want to have a quick look you can play a bit with it and skip the installation step. In this case I recommend that you extend the image before using it, for example with ‘truncate -s 64G steamdeck-recovery-4.img‘ or, better, create a qcow2 overlay file and leave the original raw image unmodified: ‘qemu-img create -f qcow2 -F raw -b steamdeck-recovery-4.img steamdeck-recovery-extended.qcow2 64G‘
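
As an aside, truncate only grows the file’s apparent size; no data is written and the image stays sparse. A quick demo on a throwaway file, using 64M instead of the 64G above:

```shell
# truncate grows the apparent size without writing data (sparse file).
img=$(mktemp)
truncate -s 64M "$img"   # the post uses 64G; 64M keeps the demo small
stat -c %s "$img"        # apparent size in bytes: 67108864
```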

But here we want to perform the actual installation, so we need a destination image. Let’s create one:

$ qemu-img create -f qcow2 steamos.qcow2 64G

Installing SteamOS

Now that we have all the files, we can start the virtual machine:

$ qemu-system-x86_64 -enable-kvm -smp cores=4 -m 8G \
    -device usb-ehci -device usb-tablet \
    -device intel-hda -device hda-duplex \
    -device VGA,xres=1280,yres=800 \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/OVMF.fd \
    -drive if=virtio,file=steamdeck-recovery-4.img,driver=raw \
    -device nvme,drive=drive0,serial=badbeef \
    -drive if=none,id=drive0,file=steamos.qcow2

Note that we’re emulating an NVMe drive for steamos.qcow2 because that’s what the installer script expects. This is not strictly necessary but it makes things a bit easier. If you don’t want to do that you’ll have to edit ~/tools/repair_device.sh and change DISK and DISK_SUFFIX.

SteamOS installer shortcuts

Once the system has booted we’ll see a KDE Plasma session with a few tools on the desktop. If we select “Reimage Steam Deck” and click “Proceed” on the confirmation dialog then SteamOS will be installed on the destination drive. This process should not take a long time.

Now, once the operation finishes, a new confirmation dialog will ask if we want to reboot the Steam Deck, but here we have to choose “Cancel”. We cannot use the new image yet because it would try to boot into the Gamescope session, which won’t work, so we need to change the default desktop session.

SteamOS comes with a helper script that allows us to enter a chroot after automatically mounting all SteamOS partitions, so let’s open a Konsole and make the Plasma session the default one in both partition sets:

$ sudo steamos-chroot --disk /dev/nvme0n1 --partset A
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit

$ sudo steamos-chroot --disk /dev/nvme0n1 --partset B
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit

After this we can shut down the virtual machine. Our new SteamOS drive is ready to be used. We can discard the recovery image now if we want.

Booting SteamOS and first steps

To boot SteamOS we can use a QEMU line similar to the one used during the installation. This time we’re not emulating an NVMe drive because it’s no longer necessary.

$ cp /usr/share/OVMF/OVMF_VARS.fd .
$ qemu-system-x86_64 -enable-kvm -smp cores=4 -m 8G \
   -device usb-ehci -device usb-tablet \
   -device intel-hda -device hda-duplex \
   -device VGA,xres=1280,yres=800 \
   -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/OVMF.fd \
   -drive if=pflash,format=raw,file=OVMF_VARS.fd \
   -drive if=virtio,file=steamos.qcow2 \
   -device virtio-net-pci,netdev=net0 \
   -netdev user,id=net0,hostfwd=tcp::2222-:22

(The last two lines redirect TCP port 2222 to port 22 of the guest so that you can SSH into the VM. If you don’t want to do that you can omit them.)

If everything went fine, you should see KDE Plasma again, this time with a desktop icon to launch Steam and another one to “Return to Gaming Mode” (which we should not use because it won’t work). See the screenshot that opens this post.

Congratulations, you’re running SteamOS now. Here are some things that you probably want to do:

  • (optional) Change the keyboard layout in the system settings (the default one is US English)
  • Set the password for the deck user: run ‘passwd‘ on a terminal
  • Enable / start the SSH server: ‘sudo systemctl enable sshd‘ and/or ‘sudo systemctl start sshd‘.
  • SSH into the machine: ‘ssh -p 2222 deck@localhost‘

Updating the OS to the latest version

The Steam Deck recovery image doesn’t install the most recent version of SteamOS, so now we should probably do a software update.

  • First of all, ensure that you’re giving enough RAM to the VM (in my examples I run QEMU with -m 8G). The OS update might fail if you use less.
  • (optional) Change the OS branch if you want to try the beta release: ‘sudo steamos-select-branch beta‘ (or main, if you want the bleeding edge)
  • Check the currently installed version in /etc/os-release (see the BUILD_ID variable)
  • Check the available version: ‘steamos-update check‘
  • Download and install the software update: ‘steamos-update‘

Note: if the last step fails after reaching 100% with a post-install handler error then go to Connections in the system settings, rename Wired Connection 1 to something else (anything, the name doesn’t matter), click Apply and run steamos-update again. This works around a bug in the update process. Recent images fix this and this workaround is not necessary with them.

As we did with the recovery image, before rebooting we should ensure that the new update boots into the Plasma session, otherwise it won’t work:

$ sudo steamos-chroot --partset other
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit

After this we can restart the system.

If everything went fine we should be running the latest SteamOS release. Enjoy!

Reporting bugs

SteamOS is under active development. If you find problems or want to request improvements please go to the SteamOS community tracker.

Edit 06 Jul 2022: Small fixes, mention how to install the OS without using NVMe.

05 July, 2022 07:11PM by berto

Russ Allbery

Review: A Mirror Mended

Review: A Mirror Mended, by Alix E. Harrow

Series: Fractured Fables #2
Publisher: Tordotcom
Copyright: 2022
ISBN: 1-250-76665-6
Format: Kindle
Pages: 129

This is a direct sequel to A Spindle Splintered and will completely spoil that story, so start there rather than here.

A Mirror Mended opens with a glimpse at yet another version of the Sleeping Beauty story, this one (delightfully) a Spanish telenovela. Zinnia is world-hopping, something that's lost some of the meaning from A Spindle Splintered and become an escape from other problems. She's about ready to leave this world as well when she sees a face that is not hers in the bathroom mirror, pleading for help. Zinnia assumes this is yet another sleeping beauty, albeit an unusual one. Zinnia is wrong.

Readers of A Spindle Splintered are going to groan when I tell you that Zinnia has managed to damage most of the relationships that she made in the first story, which means we get a bit of an episodic reset of unhappiness mixed with an all-new glob of guilt. Not only is this a depressing way to start a new story, it also means there are no snarky text messages and side commentary. Grumble. Harrow is isolating Zinnia to set up a strange and fraught alliance that turns into a great story, but given that Zinnia's friend network was my favorite part of the first novella, the start of this story made me grumpy.

Stick with it, though, since Harrow does more than introduce another fairy tale. She also introduces a villain, one who wishes to be more complicated than her story allows and who knows rather more about the structure of the world than she should. This time, the fairy tale goes off the rails in a more directly subversive way that prods at the bones of Harrow's world-building.

This may or may not be what you want, and I admit I liked the first story better. A Spindle Splintered took fairy tales just seriously enough to make a plot, but didn't poke at its premises deeply enough to destabilize them. It played off of fairy tales themselves; A Mirror Mended instead plays off of Harrow's previous story by looking directly at the invented metaphysics of parallel worlds playing out fairy tale archetypes. Some of this worked for me: Eva is a great character and the dynamic between her and Zinnia is highly entertaining. Some of it didn't: the impact on universal metaphysics of Zinnia's adventuring is a bit cliched and inadequately explained. A Mirror Mended is a character exploration with a bit more angst and ambiguity, which means it isn't as delightfully balanced and free-wheeling.

I will reassure you with the minor spoiler that Zinnia does eventually pull her head out of her ass when she has to, and while there is nowhere near enough Charm in this book for my taste, there is some. In exchange for the relationship screw-ups, we get the Zinnia/Eva dynamic, which I was really enjoying by the end. One of my favorite tropes is accidental empathy, where someone who is being flippant and sarcastic stumbles into a way of truly helping someone else and is wise enough to notice it. There are several great moments of that. I like Zinnia, even this older, more conflicted, and less cavalier version.

Recommended if you liked the first story, although be warned that this replaces the earlier magic with some harder relationship work and the payoff is more hinted at than fully shown.

Rating: 7 out of 10

05 July, 2022 02:27AM

July 04, 2022

Review: She Who Became the Sun

Review: She Who Became the Sun, by Shelley Parker-Chan

Series: Radiant Emperor #1
Publisher: Tor
Copyright: 2021
Printing: 2022
ISBN: 1-250-62179-8
Format: Kindle
Pages: 414

In 1345 in Zhongli village, in the fourth year of a drought, lived a man with his son and his daughter, the last surviving of seven children. The son was promised by his father to the Wuhuang Monastery on his twelfth birthday if he survived. According to the fortune-teller, that son, Zhu Chongba, will be so great that he will bring a hundred generations of pride to the family name. When the girl dares ask her fate, the fortune-teller says, simply, "Nothing."

Bandits come looking for food and kill their father. Zhu goes catatonic rather than bury his father, so the girl digs a grave, only to find her brother dead inside it with her father. It leaves her furious: he had a great destiny and he gave it up without a fight, choosing to become nothing. At that moment, she decides to seize his fate for her own, to become Zhu so thoroughly that Heaven itself will be fooled. Through sheer determination and force of will, she stays at the gates of Wuhuang Monastery until the monks are impressed enough with her stubbornness that they let her in under Zhu's name. That puts her on a trajectory that will lead her to the Red Turbans and the civil war over the Mandate of Heaven.

She Who Became the Sun is historical fiction with some alternate history and a touch of magic. The closest comparison I can think of is Guy Gavriel Kay: a similar touch of magic that is slight enough to have questionable impact on the story, and a similar starting point of history but a story that's not constrained to follow the events of our world. Unlike Kay, Parker-Chan doesn't change the names of places and people. It's therefore not difficult to work out the history this story is based on (late Yuan dynasty), although it may not be clear at first what role Zhu will play in that history.

The first part of the book focuses on Zhu, her time in the monastery, and her (mostly successful) quest to keep her gender secret. The end of that part introduces the second primary protagonist, the eunuch general Ouyang of the army of the Prince of Henan. Ouyang is Nanren, serving a Mongol prince or, more precisely, his son Esen. On the surface, Ouyang is devoted to Esen and serves capably as his general. What lies beneath that surface is far darker and more complicated.

I think how well you like this book will depend on how well you get along with the characters. I thought Zhu was a delight. She spends the first half of the book proving herself to be startlingly competent and unpredictable while outwitting Heaven and pursuing her assumed destiny. A major hinge event at the center of the book could have destroyed her character, but instead makes her even stronger, more relaxed, and more comfortable with herself. Her story's exploration of gender identity only made that better for me, starting with her thinking of herself as a woman pretending to be a man and turning into something more complex and self-chosen (and, despite some sexual encounters, apparently asexual, which is something you still rarely see in fiction). I also appreciated how Parker-Chan varies Zhu's pronouns depending on the perspective of the narrator.

That said, Zhu is not a good person. She is fiercely ambitious to the point of being a sociopath, and the path she sees involves a lot of ruthlessness and some cold-blooded murder. This is less of a heroic journey than a revenge saga, where the target of revenge is the entire known world and Zhu is as dangerous as she is competent. If you want your protagonist to be moral, this may not work for you. Zhu's scenes are partly told from her perspective and partly from the perspective of a woman named Ma who is a good person, and who is therefore intermittently horrified. The revenge story worked for me, and as a result I found Ma somewhat irritating. If your tendency is to agree with Ma, you may find Zhu too amoral to root for.

Ouyang's parts I just hated, which is fitting because Ouyang loathes himself to a degree that is quite difficult to read. He is obsessed with being a eunuch and therefore not fully male. That internal monologue is disturbing enough that it drowned out the moderately interesting court intrigue that he's a part of. I know some people like reading highly dramatic characters who are walking emotional disaster zones. I am not one of those people; by about three quarters of the way through the book I was hoping someone would kill Ouyang already and put him out of everyone's misery.

One of the things I disliked about this book is that, despite the complex gender work with Zhu, gender roles within the story have a modern gloss while still being highly constrained. All of the characters except Zhu (and the monk Xu, who has a relatively minor part but is the most likable character in the book) feel like they're being smothered in oppressive gender expectations. Ouyang has a full-fledged case of toxic masculinity to fuel his self-loathing, which Parker-Chan highlights with some weirdly disturbing uses of BDSM tropes.

So, I thought this was a mixed bag, and I suspect reactions will differ. I thoroughly enjoyed Zhu's parts despite her ruthlessness and struggled through Ouyang's parts with a bad taste in my mouth. I thought the pivot Parker-Chan pulls off in the middle of the book with Zhu's self-image and destiny was beautifully done and made me like the character even more, but I wish the conflict between Ma's and Zhu's outlooks hadn't been so central. Because of that, the ending felt more tragic than triumphant, which I think was intentional but which wasn't to my taste.

As with Kay's writing, I suspect there will be some questions about whether She Who Became the Sun is truly fantasy. The only obvious fantastic element is the physical manifestation of the Mandate of Heaven, and that has only a minor effect on the plot. And as with Kay, I think this book needed to be fantasy, not for the special effects, but because it needs the space to take fate literally. Unlike Kay, Parker-Chan does not use the writing style of epic fantasy, but Zhu's campaign to assume a destiny which is not her own needs to be more than a metaphor for the story to work.

I enjoyed this with some caveats. For me, the Zhu portions made up for the Ouyang portions. But although it's clearly the first book of a series, I'm not sure I'll read on. I felt like Zhu's character arc reached a satisfying conclusion, and the sequel seems likely to be full of Ma's misery over ethical conflicts and more Ouyang, neither of which sound appealing.

So far as I can tell, the sequel I assume is coming has not yet been announced.

Rating: 7 out of 10

04 July, 2022 02:58AM

July 03, 2022

Thorsten Alteholz

My Debian Activities in June 2022

FTP master

This month I accepted 305 and rejected 59 packages. The overall number of packages that got accepted was 310.

From time to time I am also looking at the list of packages to be removed. If you would like to make life easier for the people who remove packages, please make sure that the resulting dak command really makes sense. If this command consists of garbage, please adapt the Subject: of your bug report accordingly.

Also it does not make sense to file bugs to remove packages from NEW. Please don’t hesitate to close such bugs again …

Debian LTS

This was my ninety-sixth month of work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30.25h. During that time I did LTS and normal security uploads of:

  • [DLA 3058-1] libsndfile security update for two CVEs
  • [DLA 3060-1] blender security update for three CVEs
  • [#1008577] bullseye-pu: golang-github-russellhaering-goxmldsig/1.1.0-1+deb11u1 package has been accepted
  • [#1009077] bullseye-pu: minidlna/1.3.0+dfsg-2+deb11u1 package has been accepted
  • upload of blender to buster-security, no DSA yet
  • upload of blender to bullseye-security, no DSA yet, this upload seems to have failed 🙁

I have to admit that I totally ignored the EOL of Stretch LTS, so my upload of ncurses needs to go to Stretch ELTS now.

This month I also moved/refactored the current LTS documentation to a new repository and started to move the LTS Wiki as well.

I also continued to work on security support for golang packages.

Last but not least I did some days of frontdesk duties and took care of issues on security-master.

At this point I also need to mention my first “business trip”. I drove the short distance between Chemnitz and Freiberg and met Anton for a face-to-face talk about LTS/ELTS. It was a great pleasure and definitely more fun than a meeting on IRC.

Debian ELTS

This month was the forty-seventh ELTS month.

During my allocated time I uploaded:

  • ELS-629-1 for libsndfile

Due to the delay of my ncurses upload to Stretch LTS, the ELTS upload got delayed as well. Now I will do both uploads to ELTS in July.

Last but not least I did some days of frontdesk duties.

Debian Printing

This month I uploaded new upstream versions or improved packaging of:

Debian Astro

As there has been a new indi release arriving in Debian, I uploaded new upstream versions of most of the indi-3rdparty packages. Don’t hesitate to tell me whether you really use one of them :-).

Other stuff

This month I uploaded new upstream versions or improved packaging of:

03 July, 2022 11:37AM by alteholz

hackergotchi for Martin-Éric Racine

Martin-Éric Racine

Refactoring Debian's dhcpcd packaging

Given the news that ISC's DHCP suite is getting deprecated by upstream, and seeing how dhclient has never worked properly for DHCPv6, I decided to look into alternatives. ISC itself recommends Roy Marples' dhcpcd as a migration path. Sadly, Debian's package had been left unattended for a good 2 years. After refactoring the packaging, updating to the latest upstream and performing one NMU, I decided to adopt the package.

Numerous issues were exposed in the process:

  • Upstream's ./configure makes BSD assumptions. No harm done, but still...
  • Upstream's ./configure is broken. --prefix does not propagate to all components. For instance, I had to manually specify the full path for manual pages. Patches are welcome.
  • Debian had implemented custom exit hooks for all its NTP packages. Since then, upstream has implemented this in a much more concise way. All that's missing upstream is support for timesyncd. Patches are welcome.
  • I'm still undecided on whether --prefix should assume / or /usr for networking binaries on a Debian system. Feedback is welcome.
  • The previous maintainer had implemented plenty of transitional measures in maintainer scripts, such as symbolically linking /sbin/dhcpcd and /usr/sbin/dhcpcd. Most of this can probably be removed, but I haven't gotten around to verifying this. Feedback and patches are welcome.
  • The previous maintainer had created an init.d script and a systemd unit. Both of these interfere with launching dhcpcd using ifupdown via /etc/network/interfaces, which I really need for configuring a router for IPv4 MASQ and IPv6 bridging. I solved this by putting them in a separate package and shipping the rest via a new binary target called dhcpcd-base, following a logic similar to dnsmasq's.
  • DHCPv6 Prefix Delegation mysteriously reports enp4s0: no global addresses for default route after a reboot. Yet if I manually restart the interface, none of this appears. Help debugging this is welcome.
  • Support for Predictable Interface Names was missing because Debian's package didn't Build-Depends on libudev-dev. Fixed.
  • Support for privilege separation was missing because Debian's package did not ./configure this or create a system user for it. Fixed.
  • I am pondering moving the Debian package out of the dhcpcd5 namespace back into the dhcpcd namespace. The 5 was the result of an upstream fork that happened a long time ago and the original dhcpcd package no longer is in the Debian archive. Feedback is welcome on whether this would be desirable.

The key advantage of dhcpcd over dhclient is that it works as a dual-stack DHCP client by design. With privilege separation enabled, this means separate child processes handling IPv4 and IPv6 configuration and passing the received information to the parent process, which configures networking and updates /etc/resolv.conf with nameservers for both stacks. Additionally, /etc/network/interfaces no longer needs separate inet and inet6 lines for each DHCP interface, which makes for much cleaner configuration files.
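As a hedged illustration of that last point (the interface name is an assumption, and this presumes dhcpcd is the client that ifupdown invokes), the configuration shrinks from two stanzas to one:

```
# Before, with dhclient: one line per address family
#   iface eth0 inet dhcp
#   iface eth0 inet6 dhcp

# After, with dhcpcd handling both stacks: a single stanza suffices
auto eth0
iface eth0 inet dhcp
```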

A secondary advantage is that the dual-stack includes built-in fallback to Bonjour for IPv4 and SLAAC for IPv6. Basically, unless the interface needs a static IP address, this client handles network configuration in a smart and transparent way.

A third advantage is built-in support for DHCPv6 Prefix Delegation. Enabling this requires just two lines in the configuration file.

In the long run, I feel that dhcpcd-base should probably replace isc-dhcp-client as the default DHCP client with priority Important. Adequate IPv6 support should come out of the box on a standard Debian installation, yet dhclient never got around to implementing that properly.

03 July, 2022 08:57AM by Martin-Éric (noreply@blogger.com)

July 02, 2022

François Marier

Remote logging of Turris Omnia log messages using syslog-ng and rsyslog

As part of debugging an upstream connection problem I've been seeing recently, I wanted to be able to monitor the logs from my Turris Omnia router. Here's how I configured it to send its logs to a server I already had on the local network.

Server setup

The first thing I did was to open up my server's rsyslog (Debian's default syslog server) to remote connections since it's going to be the destination host for the router's log messages.

I added the following to /etc/rsyslog.d/router.conf:

module(load="imtcp")
input(type="imtcp" port="514")

if $fromhost-ip == '192.168.1.1' then {
    if $syslogseverity <= 5 then {
        action(type="omfile" file="/var/log/router.log")
    }
    stop
}

This is using the latest rsyslog configuration method: a handy scripting language called RainerScript. Severity level 5 maps to "notice" which consists of unusual non-error conditions, and 192.168.1.1 is of course the IP address of the router on the LAN side. With this, I'm directing all router log messages to a separate file, filtering out anything less important than severity 5.
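For reference, the numeric scale behind that severity comparison is the standard syslog one (RFC 5424): lower numbers are more severe, so `$syslogseverity <= 5` keeps levels 0 (emerg) through 5 (notice) and drops 6 (info) and 7 (debug). A small Python sketch of the filter logic:

```python
# The standard syslog severity scale, from most to least severe.
SEVERITIES = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]

def passes_filter(severity_name, threshold=5):
    """Mirror the rsyslog test `$syslogseverity <= threshold`:
    True means the message would be written to /var/log/router.log."""
    return SEVERITIES.index(severity_name) <= threshold

print(passes_filter("notice"))  # True: unusual non-error conditions are kept
print(passes_filter("debug"))   # False: chatty debug messages are filtered out
```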

In order for rsyslog to pick up this new configuration file, I restarted it:

systemctl restart rsyslog.service

and checked that it was running correctly (e.g. no syntax errors in the new config file) using:

systemctl status rsyslog.service

Since I added a new log file, I also setup log rotation for it by putting the following in /etc/logrotate.d/router:

/var/log/router.log
{
    rotate 4
    weekly
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        /usr/lib/rsyslog/rsyslog-rotate
    endscript
}

In addition, since I use logcheck to monitor my server logs and email me errors, I had to add /var/log/router.log to /etc/logcheck/logcheck.logfiles.

Finally I opened the rsyslog port to the router in my server's firewall by adding the following to /etc/network/iptables.up.rules:

# Allow logs from the router
-A INPUT -s 192.168.1.1 -p tcp --dport 514 -j ACCEPT

and ran iptables-apply.

With all of this in place, it was time to get the router to send messages.

Router setup

As suggested on the Turris forum, I ssh'ed into my router and added this in /etc/syslog-ng.d/remote.conf:

destination d_loghost {
        network("192.168.1.200" time-zone("America/Vancouver"));
};

source dns {
        file("/var/log/resolver");
};

log {
        source(src);
        source(net);
        source(kernel);
        source(dns);
        destination(d_loghost);
};

Setting the timezone to the same as my server was needed because the router messages were otherwise sent with UTC timestamps.

To ensure that the destination host always gets the same IP address (192.168.1.200), I went to the advanced DHCP configuration page and added a static lease for the server's MAC address so that it always gets assigned 192.168.1.200. If that wasn't already the server's IP address, you'll have to restart it for this to take effect.

Finally, I restarted the syslog-ng daemon on the router to pick up the new config file:

/etc/init.d/syslog-ng restart

Testing

In order to test this configuration, I opened three terminal windows:

  1. tail -f /var/log/syslog on the server
  2. tail -f /var/log/router.log on the server
  3. tail -f /var/log/messages on the router

I immediately started to see messages from the router in the third window and some of these, not all because of my severity-5 filter, were flowing to the second window as well. Also important is that none of the messages make it to the first window, otherwise log messages from the router would be mixed in with the server's own logs. That's the purpose of the stop command in /etc/rsyslog.d/router.conf.

To force a log message to be emitted by the router, simply ssh into it and issue the following command:

logger Test

It should show up in the second and third windows immediately if you've got everything set up correctly.

02 July, 2022 03:45AM

July 01, 2022

hackergotchi for Steve Kemp

Steve Kemp

An update on my simple golang TCL interpreter

So my previous post introduced a trivial interpreter for a TCL-like language.

In the past week or two I've cleaned it up, fixed a bunch of bugs, and added 100% test-coverage. I'm actually pretty happy with it now.

One of the reasons for starting this toy project was to experiment with how easy it is to extend the language using itself.

Some things are simple, for example replacing this:

puts "3 x 4 = [expr 3 * 4]"

With this:

puts "3 x 4 = [* 3 4]"

Just means defining a function (proc) named *. Which we can do like so:

proc * {a b} {
    expr $a * $b
}

(Of course we don't have lists, or variadic arguments, so this is still a bit of a toy example.)

Doing more than that is hard, though, without support for more primitives written in the parent language than I've implemented. The obvious thing I'm missing is a native implementation of upvalue, which is a TCL primitive allowing you to affect/update variables in higher scopes. Without that you can't write things as nicely as you would like, and have to fall back to horrid hacks or be unable to do things.

# define a procedure to run a body N times
proc repeat {n body} {
    set res ""
    while {> $n 0} {
        decr n
        set res [$body]
    }
    $res
}

# test it out
set foo 12
repeat 5 { incr foo }

#  foo is now 17 (i.e. 12 + 5)

A similar story implementing the loop word, which should allow you to set the contents of a variable and run a body a number of times:

proc loop {var min max bdy} {
    // result
    set res ""

    // set the variable.  Horrid.
    // We miss upvalue here.
    eval "set $var [set min]"

    // Run the test
    while {<= [set "$$var"] $max } {
        set res [$bdy]

        // This is a bit horrid
        // We miss upvalue here, and not for the first time.
        eval {incr "$var"}
    }

    // return the last result
    $res
}


loop cur 0 10 { puts "current iteration $cur ($min->$max)" }
# output is:
# => current iteration 0 (0-10)
# => current iteration 1 (0-10)
# ...

That said I did have fun writing some simple test-cases, and implementing assert, assert_equal, etc.

In conclusion I think the number of primitives required to implement your own control-flow and run-time behaviour is a bit higher than I'd like. Writing switch, repeat, while, and similar primitives inside TCL is harder than creating those same things in FORTH, for example.

01 July, 2022 07:00PM

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, June 2022

In June I was not assigned additional hours of work by Freexian's Debian LTS initiative, but carried over 16 hours from May and worked all of those hours.

I spent some time triaging security issues for Linux. I tested several security fixes for Linux 4.9 and 4.19 and submitted them for inclusion in the upstream stable branches.

I rebased the Linux 4.9 (linux) package on the latest stable update (4.9.320), uploaded this and issued the final DLA for stretch, DLA-3065-1.

01 July, 2022 01:12PM

Paul Wise

FLOSS Activities June 2022

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Spam: reported 5 Debian bug reports and 45 Debian mailing list posts
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:

Administration

  • Debian wiki: unblock IP addresses, assist with account recovery, approve accounts

Communication

Sponsors

The sptag work was sponsored. All other work was done on a volunteer basis.

01 July, 2022 02:51AM

June 30, 2022

Russell Coker

June 29, 2022

hackergotchi for Aigars Mahinovs

Aigars Mahinovs

Long travel in an electric car

Since the first week of April 2022 I have (finally!) changed my company car from a plug-in hybrid to a fully electric car. My new ride, for the next two years, is a BMW i4 M50 in Aventurine Red metallic. An elegant car with a very deep and memorable color, insanely powerful (544 hp/795 Nm), sub-4 second 0-100 km/h, a large 84 kWh battery (80 kWh usable), charging up to 210 kW, a top speed of 225 km/h and also very efficient (which came out best in this trip) with a WLTP range of 510 km and an EVDB real range of 435 km. The car also has performance tyres (Hankook Ventus S1 evo3 245/45R18 100Y XL in front and 255/45R18 103Y XL in rear, all at the recommended 2.5 bar) that reduce efficiency.

So I wanted to document and describe what it was like for me to travel ~2000 km (one way) with this electric car from the south of Germany to the north of Latvia. I have made this trip many times before, since I live in Germany now and travel back to my relatives in Latvia 1-2 times per year. This was the first time I made this trip in an electric car. And as this trip includes both travelling in Germany (where BEV infrastructure is the best in the world) and across Eastern/Northern Europe, I believe that this can be interesting to a few people out there.

Normally when I travelled this trip with a gasoline/diesel car I would drive for two days with an intermediate stop somewhere around Warsaw, with about 12 hours of travel time each day. This would include a couple of bathroom stops each day, at least one longer lunch stop and 3-4 refueling stops on top of that. It would use at least 6 liters of fuel per 100 km on average, with a total usage of about 270 liters for the whole trip (or about 540€ just in fuel costs, nowadays). My (personal) quirk is that both fuel and recharging of my (business) car inside Germany is actually paid by my employer, so it is useful for me to charge up (or fill up) at the last station in Germany before driving on.
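A quick back-of-envelope check of those gasoline numbers (the per-litre price is my assumption, chosen to match the quoted total):

```python
# Sanity-checking the quoted gasoline-trip figures: ~6 l/100 km average,
# ~270 l total, ~540 EUR in fuel.
consumption_l_per_100km = 6
total_litres = 270
price_eur_per_l = 2.0  # assumed 2022 price, consistent with the quoted 540 EUR

total_km = total_litres / consumption_l_per_100km * 100
fuel_cost_eur = total_litres * price_eur_per_l
print(total_km)        # 4500.0 km, i.e. a round trip of ~2000+ km each way
print(fuel_cost_eur)   # 540.0 EUR
```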

The plan for this trip was made in a similar way as when travelling with a gasoline car: travelling as fast as possible on the German Autobahn network to the last charging stop on the A4 near Görlitz, there charging up as much as reasonable, then travelling to a hotel in Warsaw, charging there overnight and travelling north towards the Ionity chargers in Lithuania, from where reaching the final target in the north of Latvia should be possible. How did this plan meet reality?

Travelling inside Germany with an electric car was basically perfect. The most efficient way would involve driving fast and hard with a top speed of even 180 km/h (where possible due to speed limits and traffic). BMW i4 is very efficient at high speeds, with consumption maxing out at 28 kWh/100km when you actually drive at this speed all the time. In the real situation on this trip we saw consumption of 20.8-22.2 kWh/100km in the first legs of the trip. The more traffic there is, the more speed limits and roadworks, the lower the average speed and also the lower the consumption. With this kind of consumption we could comfortably drive 2 hours as fast as we could, then pick any fast charger along the route, and in 26 minutes at a charger (50 kWh charged total) we'd be ready to drive for another 2 hours. This lines up very well with recommended rest stops for biological reasons (bathroom, water or coffee, a bit of movement to get blood circulating) and is very close to what I had to do anyway with a gasoline car. With a gasoline car I had to refuel first, then park, then go to the bathroom and so on. With an electric car I can do all of that while the car is charging, so in the end the total time for a stop is very similar. Also note that there was a crazy heat wave going on and the temperature outside was at about 34C minimum the whole day, hitting 40C at one point of the trip, so a lot of power was used for cooling. The car has a heat pump as standard, but it still was working hard to keep us cool in the sun.
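The 2-hour rhythm follows directly from those figures; a small arithmetic sketch (the average speed is my assumption, the other numbers are from the post):

```python
# How far does one 50 kWh charging stop get you at the observed consumption?
energy_per_stop_kwh = 50
consumption_kwh_per_100km = 22.2  # worst of the observed 20.8-22.2 range
avg_speed_kmh = 120               # assumed highway average with traffic and limits

range_km = energy_per_stop_kwh / consumption_kwh_per_100km * 100
hours = range_km / avg_speed_kmh
print(round(range_km))       # ~225 km per 50 kWh charge
print(round(hours, 1))       # ~1.9 h of driving, matching the "2 hours" rhythm
```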

The car was able to plan a charging route with all the required charging stops, and it has all the good options (like multiple intermediate stops) that many other cars (hi Tesla) and mobile apps (hi Google and Apple) do not have yet. There are a couple of bugs with charging routes and the display of the current route guidance; those are already fixed and will be delivered with the July 2022 over-the-air update. Another good alternative is ABRP (A Better Route Planner), which was specifically designed to route electric cars along the best charging stops. Most phone apps (like Google Maps) have no idea about your specific electric car: no idea about its battery capacity or charging curve, and no key live data either, such as the current consumption and the remaining energy in the battery. ABRP is different - it has data and profiles for almost all electric cars and can also be linked to live vehicle data, either via an OBD dongle or via the new Tronity cloud service. Tronity reads data from the vehicle-specific cloud service, such as the MyBMW service, saves it, tracks history and also re-transmits it to ABRP for live navigation planning. ABRP allows options and settings that no car or app offers, for example saying that you want to stop at a particular place for an hour or until the battery is charged to 90%, or that you have specific charging cards and only want to stop at chargers that support them. Both the car and ABRP also support alternate routes, even with multiple intermediate stops. In comparison, route planning by Google Maps, Apple Maps, Waze or even Tesla does not really come close.

After charging up at the last German fast charger, a more interesting part of the trip started. In Poland the density of high performance chargers (HPC) is much lower than in Germany. There are many chargers (west of Warsaw), but the vast majority of them are (relatively) slow 50 kW chargers. And that is the difference between putting 50 kWh into the car in 23-26 minutes or in 60 minutes. It does not sound like much, but the key bit is that for the first 20 minutes there is always something that needs doing anyway; after that you are done and just waiting for the car, and whether that takes 4 more minutes or 40 more minutes makes a big perceptual difference. So using an HPC is much, much preferable. We therefore set the Ionity charger near Lodz as our intermediate target, and the car suggested an intermediate stop at a Greenway charger by Katy Wroclawskie. The location is a bit weird: it has 4 charging stations of 150 kW each. The weird bits are that each station has two CCS connectors but only one parking place (and the connectors share power, so if two cars were to connect, each would get half the power). Also, from the front of the location one can only see two stations; the other two are semi-hidden around a corner. We actually missed them on the way to Latvia, and one person waited behind us for a charger for about 10 minutes. We only discovered the other two stations on the way back. With the slower speeds in Poland the consumption goes down to 18 kWh/100km, which now translates to up to 3 hours of driving between stops.

At the end of the first day, we drove from Ulm starting at 9:30 in the morning until about 23:00 in the evening, covering about 1100 km with 5 charging stops: starting with 92% battery, charging for 26 min (50 kWh), 33 min (57 kWh + lunch), 17 min (23 kWh), 12 min (17 kWh) and 13 min (37 kWh). In the last two chargers you can see the difference between a good and fast 150 kW charger at a high battery charge level and a really fast Ionity charger at a low battery charge level, which makes charging faster still.

We arrived at the hotel with 23% battery. Overnight the car charged from a Porsche Destination Charger to 87% (57 kWh). That was a bit less than I would expect from a full-power 11 kW charger, but good enough. Hotels should really install 11 kW Type2 chargers for their guests; it is a really significant bonus that drives more clients to you.

The road between Warsaw and Kaunas is the most difficult part of the trip, both for the driving itself and for charging. For driving, the problem is that there will be a new highway from Warsaw to the Lithuanian border, but it is not fully ready yet. So parts of the way one drives on the new, great and wide highway, and parts of the way on temporary roads or on old single-lane undivided roads. The most annoying part is navigating between those parts, as signs are not always clear and the maps are either too old or too new: some maps do not have the new roads, and others have roads that have not actually been built or opened to traffic yet. It's really easy to lose one's way and take a significant detour. As far as charging goes, there are basically only slow 50 kW chargers between Warsaw and Kaunas (for now). We chose to charge at the last charger in Poland, by the Suwalki Kaufland. That was not a good idea - there is only one 50 kW CCS connector and many people make the same decision, so there can be a wait. We had to wait 17 minutes before we could charge for 30 more minutes, just to get 18 kWh into the battery. Not the best use of time. On the way back we chose a different charger, in Lomza, where we could have a relaxed dinner while the car was charging. That was far more relaxing and a better use of time.

We also tried charging at an Orlen charger that was not recommended by our car, and we found out why. Unlike all other chargers on our entire trip, this charger did not accept our universal BMW Charging RFID card. Instead it demanded that we download Orlen's own app and register there. The app is only available in some countries (and not in others), and on iPhone it is only available in Polish. That is a bad exception to the rule and a bad example. This is also how most charging works in the USA; here in Europe it is not normal. The norm is to use a charging card, either provided by the car maker or by another supplier (like PlugSurfing or Maingau Energy). The providers then make roaming arrangements with all the charging networks, so the cards just work everywhere. In the end the user gets the prices and the bills from their card provider as a single monthly bill. This also avoids any credit card fees for the user. Having a clear, separate RFID card also means that one can easily choose how to pay for each charging session. For example, I have a corporate RFID card that my company pays for (for charging in Germany) and a private BMW Charging card that I pay for myself (for charging abroad). Having the car itself authenticate directly with the charger (like Tesla does) removes the option to choose how to pay. Having each charging network require its own app or token brings too much chaos and takes too much setup. The optimum is having one card that works everywhere, plus the option of additional cards for specific purposes.

Reaching Ionity chargers in Lithuania is again a breath of fresh air - 20-24 minutes to charge 50 kWh is as expected. One can charge on the first Ionity just enough to reach the next one and then on the second charger one can charge up enough to either reach the Ionity charger in Adazi or the final target in Latvia. There is a huge number of CSDD (Road Traffic and Safety Directorate) managed chargers all over Latvia, but they are 50 kW chargers. Good enough for local travel, but not great for long distance trips. BMW i4 charges at over 50 kW on a HPC even at over 90% battery state of charge (SoC). This means that it is always faster to charge up in a HPC than in a 50 kW charger, if that is at all possible. We also tested the CSDD chargers - they worked without any issues. One could pay with the BMW Charging RFID card, one could use the CSDD e-mobi app or token and one could also use Mobilly - an app that you can use in Latvia for everything from parking to public transport tickets or museums or car washes.

We managed to reach our final destination near Aluksne with 17% range remaining after just 3 charging stops: 17+30 min (18 kWh), 24 min (48 kWh), 28 min (36 kWh). At the last stop we charged to 90%, which took a few minutes more than would have been optimal.

For travel around Latvia we were charging at our target farmhouse from a normal 3 kW Schuko EU socket. That is very slow. We charged for 33 hours and went from 17% to 94%, so not quite full. That was perfectly fine for our purposes. We easily reached Riga, drove to the sea and then back to Aluksne with 8% still in reserve, and started charging again for the next trip. If we had needed to drive around more and charge faster, we could have used the normal 3-phase 440V connection in the farmhouse to have a red CEE 16A plug installed (the same one people use for welders). The BMW i4 comes standard with the new BMW Flexible Fast Charger that has changeable socket adapters. It comes with a Schuko connector by default in Europe, but for 90€ one can buy an adapter for the blue CEE plug (3.7 kW) or the red CEE 16A or 32A plugs (11 kW). Some public charging stations in France actually use the blue CEE plugs instead of the more common Type2 electric car charging connectors. The CEE plugs are also common in camping parking places.
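
As a side note on where the 11 kW figure for the red CEE 16A plug comes from: three-phase AC power is the line-to-line voltage times the current times sqrt(3). A quick check, assuming the standard European line-to-line voltage of 400 V:

```python
import math

# Three-phase AC power: P = sqrt(3) * U_line * I.
# Assumes the standard European line-to-line voltage of 400 V.
line_voltage_v = 400
current_a = 16       # red CEE 16A plug

power_kw = math.sqrt(3) * line_voltage_v * current_a / 1000
print(f"{power_kw:.1f} kW")  # about 11.1 kW, matching the 11 kW adapter rating
```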

On the way back, long distance BEV travel was already well understood and did not cause us any problems. From our destination we could easily reach the first Ionity in Lithuania, on the Panevezhis bypass road, where in just 8 minutes we got 19 kWh and were ready to drive on to Kaunas. There a longer 32 minute stop before the charging desert of the Suwalki Gap gave us 52 kWh, charging to 90%. That brought us to a shopping mall in Lomzha where we had some food and charged up 39 kWh in a lazy 50 minutes. That was enough to bring us to our return hotel for the night - Hotel 500W in Strykow by Lodz, which has a 50 kW charger on site. While we were having a late dinner and preparing for sleep, the car easily recharged to full (71 kWh in 95 minutes), so I just moved it from the charger to a parking spot before going to sleep. A really easy and smoothly flowing day.

The second day back went even better, as we just needed an 18 minute stop at the same Katy Wroclawskie charger as before to get 22 kWh, and that was enough to get back to Germany. After that we were flying on the Autobahn again, charging as needed: 15 min (31 kWh), 23 min (48 kWh) and 31 min (54 kWh + food). We started the day at about 9:40 and were home at 21:40 after driving just over 1000 km that day. So less than 12 hours for 1000 km travelled, including all charging, bio stops, food and some traffic jams as well. Not bad.

Now let's take a look at all the apps and data connections that a technically minded customer can have for their car. Architecturally, the car is a network of computers by itself, but it is well secured and normally people do not have any direct access. However, once you log into the car with your BMW account, the car gets your profile info and preferences (seat settings, navigation favorites, ...) and can then also start sending information about its status to the BMW backend. This information is then available to the user over multiple different channels. There is no separate channel for each of those data flows: the data goes only once to the backend, and all further communication of the apps happens with the backend.

First of all, there is the MyBMW app. This is the go-to for everything about the car: seeing its current status and location (when not driving), sending commands to the car (lock, unlock, flash lights, pre-condition, ...) and also monitoring and controlling charging processes. You can also plan a route or destination in the app in advance and then just send it over to the car, so it already knows where to drive when you get in. This can also integrate with calendar entries, if you have locations on your appointments, for example. The app also shows the full charging history and allows a very easy export of that data; here I exported all charging sessions from June, then trimmed it back to only the sessions relevant to the trip and cut off some design elements to make the data more visible. So one can very easily see when and where we were charging, how much power we got at each spot and (if you set prices for locations) even the costs.

I've already mentioned the Tronity service and its ABRP integration, but it also saves the information that it gets from the car and gathers that data over time. It has nice aspects, like showing the driven routes on a map, having ways to do business trip accounting and having a good calendar view. Sadly, it does not capture the data for charging sessions correctly (the amounts are incorrect).

Update: after talking to Tronity support, it looks like the bug was an incorrect value for the usable battery capacity of my car. They will look into getting the right values there by default, but as a workaround one can edit their car in the system (after at least one charging session) and directly set the expected usable battery capacity in the car properties on the Tronity web portal settings.

One other fun way to see data from your BMW is the BMW integration in Home Assistant. This brings the car in as a device in your own smart home. You can read all the variables from the car's current status (and Home Assistant makes cute historical charts) and you can even see interesting trends: for example, the remaining range shows a much higher value in Latvia, as its prediction is adapted to Latvian road speeds, and during the trip it adapts to Polish and then to German road speeds and thus to higher consumption and a lower maximum predicted remaining range. Having the car attached to Home Assistant also allows you to use the car in automations, both as a data and event source (like detecting when the car enters the "Home" zone) and also as a target, so you could flash the car's lights or even lock or unlock it when certain conditions are met.

So, what in the end was the most important thing: the cost of the trip? In total we charged up 863 kWh, which would normally cost about 290€, close to half of what this trip would have cost with a gasoline car. Of that, 279 kWh were charged in Germany (paid by my employer) and 154 kWh at the farmhouse (paid by our wonderful relatives :D), so in the end the charging that I actually had to pay for adds up to 430 kWh, or about 150€. Previously it took about 400€ in fuel that I had to pay to get to Latvia and back. The difference is really nice!
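
The cost split above can be rechecked from the figures in the text. The per-kWh price here is derived from the quoted totals, not from an actual tariff:

```python
# Recomputing the self-paid share of the trip's charging costs.
# All energy figures are taken from the text; the per-kWh price is
# derived from the quoted totals rather than an actual tariff.
total_kwh = 863
germany_kwh = 279      # paid by the employer
farmhouse_kwh = 154    # paid by relatives
total_cost_eur = 290   # approximate cost if everything were self-paid

self_paid_kwh = total_kwh - germany_kwh - farmhouse_kwh
price_per_kwh = total_cost_eur / total_kwh
self_paid_eur = self_paid_kwh * price_per_kwh

print(f"{self_paid_kwh} kWh at ~{price_per_kwh:.2f} €/kWh ≈ {self_paid_eur:.0f}€")
# 430 kWh at roughly 0.34 €/kWh, close to the ~150€ quoted above
```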

In the end I believe that there are three different ways of charging:

  • incidental charging - this is the vast majority of charging in normal day-to-day life. The car gets charged when and where it is convenient to do so along the way. If we go to a movie or a shop and there is a chance to leave the car at a charger, then it can charge up. This works really well and does not take any extra time from us.

  • fast charging - charging up at a HPC under optimal charging conditions - from a relatively low level to no more than 70-80% - while you are still doing all the normal things one would do during a quick stop on a long trip: bio things, cleaning the windscreen, getting a coffee or a snack.

  • necessary charging - charging from whatever charger is available, just enough to be able to reach the next destination or the next fast charger.

The last category is the only one that is really annoying and should be avoided at all costs, even by shifting your plans so that you find something else useful to do while the necessary charging is happening, thus at least partially moving it over to the incidental charging category. Then you are no longer just waiting for the car; you are doing something else, and the car is magically charged up again.

And when one does that, travelling with an electric car becomes no more annoying than travelling with a gasoline car. Having more breaks in a trip is a good thing and makes trips actually easier and less stressful - I was more relaxed during and after this trip than during previous ones. Having the car's air conditioning always on, even when stopped, was a godsend in the insane 30C-38C heat wave that we were driving through.

Final stats: 4425 km driven on the trip. Average consumption: 18.7 kWh/100km. Time driving: 2 days and 3 hours. The car regenerated 152 kWh; charging stations provided 863 kWh.

Questions? You can use this i4talk forum thread or this Twitter thread to ask them to me.

29 June, 2022 06:37PM by Aigars Mahinovs

Tim Retout

Git internals and SHA-1

LWN reminds us that Git still uses SHA-1 by default. Commit or tag signing is not a mitigation, and to understand why you need to know a little about Git’s internal structure.

Git internally looks rather like a content-addressable filesystem, with four object types: tags, commits, trees and blobs.

Content-addressable means changing the content of an object changes the way you address or reference it, and this is achieved using a cryptographic hash function. Here is an illustration of the internal structure of an example repository I created, containing two files (./foo.txt and ./bar/bar.txt) committed separately, and then tagged:

Graphic showing an example Git internal structure featuring tags, commits, trees and blobs, and how these relate to each other.

You can see how ‘trees’ represent directories, ‘blobs’ represent files, and so on. Git can avoid internal duplication of files or directories which remain identical. The hash function allows very efficient lookup of each object within git’s on-disk storage.
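
The content-addressing can be demonstrated in a few lines: a blob's object ID is the SHA-1 of a short header followed by the file content, which is exactly what `git hash-object` computes. A minimal sketch in Python:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Object ID Git assigns to a blob: SHA-1 over the header
    'blob <size>\\0' followed by the raw file content."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo hello | git hash-object --stdin`
print(git_blob_hash(b"hello\n"))
# ce013625030ba8dba906f756967f9e9ca394464a

# Any change to the content yields a completely different object ID,
# which is what makes the store content-addressable.
print(git_blob_hash(b"hello!\n"))
```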

Tag and commit signatures do not directly sign the files in the repository; that is, the input to the signature function is the content of the tag/commit object, rather than the files themselves. This is analogous to the way that GPG signatures actually sign a cryptographic hash of your email, and there was a time when this too defaulted to SHA-1. An attacker who can break that hash function can bypass the guarantees of the signature function.
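
To make this concrete, here is an illustrative sketch of the bytes a commit signature actually covers. The author, timestamp and message below are invented; the tree ID shown is the well-known hash of the empty tree. The point is that file contents appear in these bytes only indirectly, via a SHA-1 hash:

```python
import hashlib

# An illustrative commit object body. The tree ID is the well-known
# hash of the empty tree; author, timestamp and message are invented.
# Note that no file contents appear here - only the tree's SHA-1.
# A signature over these bytes therefore relies on SHA-1 to bind
# that tree ID to the actual files.
commit_body = b"""tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904
author Example <example@example.org> 1656000000 +0000
committer Example <example@example.org> 1656000000 +0000

Initial commit
"""

# Commit objects are hashed like blobs, just with a "commit" header.
header = b"commit %d\x00" % len(commit_body)
commit_id = hashlib.sha1(header + commit_body).hexdigest()
print(commit_id)
```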

A motivated attacker might be able to replace a blob, commit or tree in a git repository using a SHA-1 collision. Replacing a blob seems easier to me than a commit or tree, because there is no requirement that the content of the files must conform to any particular format.

There is one key technical mitigation to this in Git, which is the SHA-1DC algorithm; this aims to detect and prevent known collision attacks. However, I will have to leave the cryptanalysis of this to the cryptographers!

So, is this in your threat model? Do we need to lobby GitHub for SHA-256 support? Either way, I look forward to the future operational challenge of migrating the entire world’s git repositories across to SHA-256.

29 June, 2022 05:54PM

Russell Coker

Philips 438P1 43″ 4K Monitor

I have just returned a Philips 438P1 43″ 4K Monitor [1] and gone back to my Samsung 28″ 4K monitor model LU28E590DS/XY AKA UE590.

The main listed differences are the size and the fact that the Samsung is TN but the Philips is IPS. Here’s a comparison of TN and IPS technologies [2]. Generally I think that TN is probably best for a monitor but in theory IPS shouldn’t be far behind.

The Philips monitor has a screen with a shiny surface, which may be good for a TV but isn't good for a monitor. It also seemed to blur the pixels a bit, which again is probably OK for a TV that is trying to emulate curved images but not good for a monitor where it's all artificial straight lines. The most important thing for me in a monitor is how well it displays text in small fonts; for that I don't really want the round parts of the letters to look genuinely round, as a clear octagon or rectangle is better than a fuzzy circle.

There is some controversy about the ideal size for monitors. Some people think that nothing larger than 28″ is needed, and some think that 43″ is totally usable. After testing, I determined that 43″ is really too big; I had to move to see it all. Also, for my use it's convenient to be able to turn a monitor slightly to give someone else a good view, and a 43″ monitor is too large to move much (maybe future technology for lighter monitors will change this).

Previously I had been unable to get my Samsung monitor to work at 4K resolution at 60Hz and had believed it was due to cheap video cards. I got the Philips monitor to work over HDMI, so apparently the Samsung monitor doesn't do 4K@60Hz on HDMI. This isn't a real problem as the Samsung monitor doesn't have built-in speakers. The Philips monitor has built-in speakers for HDMI sound, which means one less cable to my PC and no desk space taken up by speakers.

I bought the Philips monitor on eBay in “opened unused” condition. Inside the box was a sheet with a printout stating that the monitor blanks the screen periodically, so the seller knew that it wasn't in unused condition; it had been tested and failed the test. If the Philips monitor had been as minimally broken as described, then I might have kept it. However, it seems that certain patterns of input caused it to reboot. For example, I could be watching Netflix and have it drop out; I would press the left arrow to watch that bit again and have it drop out again. On one occasion I did a test and found that a 5 second section of Netflix content caused the monitor to reboot on 6 of the 8 times I viewed it. The workaround I discovered was to switch between maximised window and full-screen mode after a dropout. So I just press left-arrow and then ‘F’ and I can keep watching. That's not what I expect from a $700 monitor!

I considered checking for Philips firmware updates but decided against it because I didn’t want to risk voiding the warranty if it didn’t work correctly and I decided I just didn’t like the monitor that much.

Ideally for my next monitor I'll get a 4K screen of about 35″, TN, with a non-shiny surface. At the moment there don't seem to be many monitors between 32″ and 43″ in size, so 32″ may do. I am quite happy with the Samsung monitor, so getting the same but slightly larger is fine. It's a pity they stopped making 5K displays.

29 June, 2022 12:00PM by etbe