May 16, 2023


Freexian Collaborators

Monthly report about Debian Long Term Support, April 2023 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In April, 18 contributors were paid to work on Debian LTS. Their reports are available:

  • Abhijith PA did 6.0h (out of 0h assigned and 14.0h from previous period), thus carrying over 8.0h to the next month.
  • Adrian Bunk did 18.0h (out of 16.5h assigned and 24.0h from previous period), thus carrying over 22.5h to the next month.
  • Anton Gladky did 8.0h (out of 9.5h assigned and 5.5h from previous period), thus carrying over 7.0h to the next month.
  • Bastien Roucariès did 17.0h (out of 17.0h assigned and 3.0h from previous period), thus carrying over 3.0h to the next month.
  • Ben Hutchings did 16.0h (out of 12.0h assigned and 12.0h from previous period), thus carrying over 8.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Dominik George did 0.0h (out of 0h assigned and 20.34h from previous period), thus carrying over 20.34h to the next month.
  • Emilio Pozuelo Monfort did 4.5h (out of 11.0h assigned and 9.5h from previous period), thus carrying over 16.0h to the next month.
  • Guilhem Moulin did 8.5h (out of 8.0h assigned and 12.0h from previous period), thus carrying over 11.5h to the next month.
  • Helmut Grohne did 5.0h (out of 2.5h assigned and 7.5h from previous period), thus carrying over 5.0h to the next month.
  • Lee Garrett did 0.0h (out of 31.5h assigned and 9.0h from previous period), thus carrying over 40.5h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 12.5h (out of 0h assigned and 24.0h from previous period), thus carrying over 11.5h to the next month.
  • Roberto C. Sánchez did 8.5h (out of 4.75h assigned and 15.25h from previous period), thus carrying over 11.5h to the next month.
  • Stefano Rivera did 1.0h (out of 0h assigned and 28.0h from previous period), thus carrying over 27.0h to the next month.
  • Sylvain Beucler did 35.0h (out of 40.5h assigned), thus carrying over 5.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 15.0h (out of 15.0h assigned and 1.0h from previous period), thus carrying over 1.0h to the next month.

Evolution of the situation

In April, we released 35 DLAs.

The LTS team would like to welcome our newest sponsor, Institut Camille Jordan, a French research lab. Thanks to the support of the many LTS sponsors, the entire Debian community benefits from direct security updates, as well as indirect improvements and collaboration with other members of the Debian community.

As part of improving the efficiency of our work and the quality of the security updates we produce, the LTS team has continued refining its workflow. Improvements include more consistent tagging of release versions in Git and broader use of continuous integration (CI) to ensure packages are tested thoroughly and consistently. Sponsors and users can rest assured that we work continuously to maintain and improve the already high quality of the work that we do.

Thanks to our sponsors

Sponsors that joined recently are in bold.

16 May, 2023 12:00AM by Roberto C. Sánchez

May 15, 2023

Sven Hoexter

GCP: Private Service Connect Forwarding Rules cannot be Updated

PSA for those foolish enough to use Google Cloud and try to use private service connect: If you want to change the serviceAttachment your private service connect forwarding rule points at, you must delete the forwarding rule and create a new one. Updates are not supported. I've done that in the past via terraform, but lately encountered strange errors like this:

Error updating ForwardingRule: googleapi: Error 400: Invalid value for field 'target.target':
'<https://www.googleapis.com/compute/v1/projects/mydumbproject/regions/europe-west1/serviceAttachments/
k8s1-sa-xyz-abc>'. Unexpected resource collection 'serviceAttachments'., invalid

Worked around that with the help of terraform_data and lifecycle:

resource "terraform_data" "replacement" {
    input = var.gcp_psc_data["target"]
}

resource "google_compute_forwarding_rule" "this" {
    count   = length(var.gcp_psc_data["target"]) > 0 ? 1 : 0
    name    = "${var.gcp_psc_name}-psc"
    region  = var.gcp_region
    project = var.gcp_project

    target                = var.gcp_psc_data["target"]
    load_balancing_scheme = "" # need to override EXTERNAL default when target is a service attachment
    network               = var.gcp_network
    ip_address            = google_compute_address.this.id

    lifecycle {
        replace_triggered_by = [
            terraform_data.replacement
        ]
    }
}

See also terraform data for replace_triggered_by.
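If you are not using terraform, the same delete-and-recreate cycle can be sketched with gcloud; this is an illustrative sequence only, and the rule, network, address and attachment names are made up:

# updates to the target are rejected, so drop the old rule first
gcloud compute forwarding-rules delete my-psc-rule --region=europe-west1

# recreate it pointing at the new service attachment
gcloud compute forwarding-rules create my-psc-rule \
    --region=europe-west1 \
    --network=my-vpc \
    --address=my-reserved-address \
    --target-service-attachment=projects/otherproject/regions/europe-west1/serviceAttachments/new-attachment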

15 May, 2023 07:21AM


Dirk Eddelbuettel

RcppSimdJson 0.1.10 on CRAN: New Upstream

We are happy to share that the RcppSimdJson package has been updated to release 0.1.10.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it can parse gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.

This release updates the underlying simdjson library to version 3.1.8 (also made today). Otherwise we only made a minor edit to the README and adjusted one tweak for code coverage.

The (very short) NEWS entry for this release follows.

Changes in version 0.1.10 (2023-05-14)

  • simdjson was upgraded to version 3.1.8 (Dirk in #85).

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 May, 2023 12:41AM

May 14, 2023


Steinar H. Gunderson

Joining files with FFmpeg

Joining video files (back-to-back) losslessly with FFmpeg is a surprisingly cumbersome operation. You can't just, like, write all the inputs on the command line or something; you need to use a special demuxer and then write all the names in a text file and override the security for that file, which is pretty crazy.

But there's one issue I crashed into that random searching around didn't help with, namely this happening sometimes on switching files (with the resulting file just having no video in that area):

[mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290238, current: 86263699; changing to 162290239. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290239, current: 86264723; changing to 162290240. This may result in incorrect timestamps in the output file.
[mp4 @ 0x55d4d2ed9b40] Non-monotonous DTS in output stream 0:0; previous: 162290240, current: 86265747; changing to 162290241. This may result in incorrect timestamps in the output file.

There are lots of hits about this online, most of them around different codecs and such, but the problem was surprisingly mundane: Some of the segments had video in stream 0 and audio in stream 1, and some the other way round, and the concat demuxer doesn't account for this.

Simplest workaround: just remux the files first. FFmpeg will put the streams in a consistent order. (Inspired by a Stack Overflow answer that suggested remuxing to MPEG-TS in order to use the concat protocol instead of the concat demuxer.)
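Concretely, the whole dance looks something like this; a sketch with made-up file names:

# remux each segment so FFmpeg writes the streams in a consistent order
ffmpeg -i a.mp4 -c copy a-remux.mp4
ffmpeg -i b.mp4 -c copy b-remux.mp4

# the concat demuxer reads segment names from a list file;
# -safe 0 overrides the file name security check mentioned above
printf "file '%s'\n" a-remux.mp4 b-remux.mp4 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4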

14 May, 2023 09:00PM

Petter Reinholdtsen

The 2023 LinuxCNC Norwegian developer gathering

The LinuxCNC project is making headway these days. A lot of patches and issues have seen activity on the project's GitHub pages recently. A few weeks ago there was a developer gathering over at the Tormach headquarters in Wisconsin, and now we are planning a new gathering in Norway. If you wonder what LinuxCNC is, let's quote Wikipedia:

"LinuxCNC is a software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It can control up to 9 axes or joints of a CNC machine using G-code (RS-274NGC) as input. It has several GUIs suited to specific kinds of usage (touch screen, interactive development)."

The Norwegian developer gathering takes place the weekend of June 16th to 18th this year, and is open to everyone interested in contributing to LinuxCNC. Up-to-date information about the gathering can be found in the developer mailing list thread where the gathering was announced. Thanks to the good people at Debian, Redpill-Linpro and the NUUG Foundation, we have enough sponsor funds to pay for food and shelter for the people traveling from afar to join us. If you would like to join the gathering, get in touch.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14 May, 2023 06:30PM


Holger Levsen

20230514-fwupd

How-To use fwupd

As one cannot use fwupd on Qubes OS to update firmware, this is a quick how-to for using fwupd on Grml, mostly for future me.

  • boot into Grml.
  • mkdir /efi ; mount the ESP (normally mounted at /boot/efi) to /efi, or set OverrideESPMountPoint=/boot/efi/EFI if you mount it at the usual path.
  • apt update ; apt install fwupd fwupd-amd64-signed udisks2 policykit-1
  • fwupdmgr get-devices
  • fwupdmgr refresh
  • fwupdmgr get-updates
  • fwupdmgr update
  • reboot into Qubes OS.

14 May, 2023 03:20PM


C.J. Collier

Early Access: Inserting JSON data to BigQuery from Spark on Dataproc

Hello folks!

We recently received a case letting us know that Dataproc 2.1.1 was unable to write to a BigQuery table with a column of type JSON. Although the BigQuery connector for Spark has had support for JSON columns since 0.28.0, the Dataproc images on the 2.1 line still cannot create tables with JSON columns or write to existing tables with JSON columns.

The customer has graciously granted permission to share the code we developed to allow this operation. So if you are interested in working with JSON column tables on Dataproc 2.1 please continue reading!

Use the following gcloud command to create your single-node dataproc cluster:

IMAGE_VERSION=2.1.1-debian11
REGION=us-west1
ZONE=${REGION}-a
CLUSTER_NAME=pick-a-cluster-name
gcloud dataproc clusters create ${CLUSTER_NAME} \
    --region ${REGION} \
    --zone ${ZONE} \
    --single-node \
    --master-machine-type n1-standard-4 \
    --master-boot-disk-type pd-ssd \
    --master-boot-disk-size 50 \
    --image-version ${IMAGE_VERSION} \
    --max-idle=90m \
    --enable-component-gateway \
    --scopes 'https://www.googleapis.com/auth/cloud-platform'

The first file below contains the Scala code used to write JSON-structured data to a BigQuery table using Spark. The shell script that follows it can be executed from your single-node Dataproc cluster.

Main.scala

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{Metadata, StringType, StructField, StructType}
import org.apache.spark.sql.{Row, SaveMode, SparkSession}
import org.apache.spark.sql.avro
import org.apache.avro.specific

  val env = "x"
  val my_bucket = "cjac-docker-on-yarn"
  val my_table = "dataset.testavro2"
  val spark = env match {
    case "local" =>
      SparkSession
        .builder()
        .config("temporaryGcsBucket", my_bucket)
        .master("local")
        .appName("issue_115574")
        .getOrCreate()
    case _ =>
      SparkSession
        .builder()
        .config("temporaryGcsBucket", my_bucket)
        .appName("issue_115574")
        .getOrCreate()
  }

  // create DF with some data
  val someData = Seq(
    Row("""{"name":"name1", "age": 10 }""", "id1"),
    Row("""{"name":"name2", "age": 20 }""", "id2")
  )
  val schema = StructType(
    Seq(
      StructField("user_age", StringType, true),
      StructField("id", StringType, true)
    )
  )

  val avroFileName = s"gs://${my_bucket}/issue_115574/someData.avro"
  
  val someDF = spark.createDataFrame(spark.sparkContext.parallelize(someData), schema)
  someDF.write.format("avro").mode("overwrite").save(avroFileName)

  val avroDF = spark.read.format("avro").load(avroFileName)
  // set metadata
  val dfJSON = avroDF
    .withColumn("user_age_no_metadata", col("user_age"))
    .withMetadata("user_age", Metadata.fromJson("""{"sqlType":"JSON"}"""))

  dfJSON.show()
  dfJSON.printSchema

  // write to BigQuery
  dfJSON.write.format("bigquery")
    .mode(SaveMode.Overwrite)
    .option("writeMethod", "indirect")
    .option("intermediateFormat", "avro")
    .option("useAvroLogicalTypes", "true")
    .option("table", my_table)
    .save()


repro.sh:

#!/bin/bash

PROJECT_ID=set-yours-here
DATASET_NAME=dataset
TABLE_NAME=testavro2

# We have to remove all of the existing spark bigquery jars from the local
# filesystem, as we will be using the symbols from the
# spark-3.3-bigquery-0.30.0.jar below.  Having existing jar files on the
# local filesystem will result in those symbols having higher precedence
# than the one loaded with the spark-shell.
sudo find /usr -name 'spark*bigquery*jar' -delete

# Remove the table from the bigquery dataset if it exists
bq rm -f -t $PROJECT_ID:$DATASET_NAME.$TABLE_NAME

# Create the table with a JSON type column
bq mk --table $PROJECT_ID:$DATASET_NAME.$TABLE_NAME \
  user_age:JSON,id:STRING,user_age_no_metadata:STRING

# Load the example Main.scala 
spark-shell -i Main.scala \
  --jars /usr/lib/spark/external/spark-avro.jar,gs://spark-lib/bigquery/spark-3.3-bigquery-0.30.0.jar

# Show the table schema when we use `bq mk --table` and then load the avro
bq query --use_legacy_sql=false \
  "SELECT ddl FROM $DATASET_NAME.INFORMATION_SCHEMA.TABLES where table_name='$TABLE_NAME'"

# Remove the table so that we can see that the table is created should it not exist
bq rm -f -t $PROJECT_ID:$DATASET_NAME.$TABLE_NAME

# Dynamically generate a DataFrame, store it to avro, load that avro,
# and write the avro to BigQuery, creating the table if it does not already exist

spark-shell -i Main.scala \
  --jars /usr/lib/spark/external/spark-avro.jar,gs://spark-lib/bigquery/spark-3.3-bigquery-0.30.0.jar

# Show that the table schema does not differ from one created with a bq mk --table
bq query --use_legacy_sql=false \
  "SELECT ddl FROM $DATASET_NAME.INFORMATION_SCHEMA.TABLES where table_name='$TABLE_NAME'"

Google BigQuery has supported JSON data since October of 2022, but until now it has not been possible to interact with these columns using the Spark BigQuery Connector on generally available Dataproc clusters.

JSON column type support was introduced in spark-bigquery-connector release 0.28.0.
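Once written, the JSON column can be queried with BigQuery's JSON functions; as a quick sanity check, something like the following should work (using the dataset and table names from the repro above):

bq query --use_legacy_sql=false \
  "SELECT id, JSON_VALUE(user_age, '$.name') AS name FROM dataset.testavro2"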

14 May, 2023 03:52AM by C.J. Collier

May 13, 2023

Sergio Durigan Junior

Ubuntu debuginfod and source code indexing

You might remember that in my last post about the Ubuntu debuginfod service I talked about wanting to extend it and make it index and serve source code from packages. I’m excited to announce that this is now a reality since the Ubuntu Lunar (23.04) release.

The feature should work for a lot of packages from the archive, but not all of them. Keep reading to better understand why.

The problem

While debugging a package in Ubuntu, one of the first steps you need to take is to install its source code. There are some problems with this:

  • apt-get source requires dpkg-dev to be installed, which ends up pulling in a lot of other dependencies.
  • GDB needs to be taught how to find the source code for the package being debugged. This can usually be done by using the dir command, but finding the proper path to use is usually not trivial, and you find yourself having to use more “complex” commands like set substitute-path, for example (see the sketch below).
  • You have to make sure that the version of the source package is the same as the version of the binary package(s) you want to debug.
  • If you want to debug the libraries that the package links against, you will face the same problems described above for each library.

So yeah, not a trivial/pleasant task after all.
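For illustration, the manual dance mentioned above looks roughly like this inside GDB (package name and build path here are hypothetical):

(gdb) directory /usr/src/mypackage-1.2.3
(gdb) set substitute-path /build/mypackage-AbCdEf /usr/src/mypackage-1.2.3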

The solution…

Debuginfod can index source code as well as debug symbols. It is smart enough to keep a relationship between the source package and the corresponding binary’s Build-ID, which is what GDB will use when making a request for a specific source file. This means that, just like what happens for debug symbol files, the user does not need to keep track of the source package version.

While indexing source code, debuginfod will also maintain a record of the relative pathname of each source file. No more fiddling with paths inside the debugger to get things working properly.

Last, but not least, if there’s a need for a library source file and if it’s indexed by debuginfod, then it will get downloaded automatically as well.
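In practice, this means a debugging session can be as simple as the following sketch (the binary name is hypothetical; the URL is the one documented for Ubuntu's debuginfod service):

export DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"
gdb /usr/bin/some-program
# answer 'y' when GDB asks whether to enable debuginfod for the session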

… but not a perfect one

In order to make debuginfod happy when indexing source files, I had to patch dpkg and make it always use -fdebug-prefix-map when compiling stuff. This GCC option is used to remap pathnames inside the DWARF, which is needed because in Debian/Ubuntu we build our packages inside chroots and the build directories end up containing a bunch of random cruft (like /build/ayusd-ASDSEA/something/here). So we need to make sure the path prefix (the /build/ayusd-ASDSEA part) is uniform across all packages, and that’s where -fdebug-prefix-map helps.

This means that the package must honour dpkg-buildflags during its build process, otherwise the magic flag won’t be passed and your DWARF will end up with bogus paths. This should not be a big problem, because most of our packages do honour dpkg-buildflags, and those who don’t should be fixed anyway.
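A quick way to check whether a build is picking up the remapping is to look at what dpkg-buildflags exports; the exact mapping differs per package and release, so treat this as a sketch:

dpkg-buildflags --get CFLAGS
# expect a -fdebug-prefix-map=/build/...=... entry among the usual flags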

… especially if you’re using LTO

Ubuntu enables LTO by default, and unfortunately we are affected by an annoying (and complex) bug that results in those bogus pathnames not being properly remapped. The bug doesn’t affect all packages, but if you see GDB having trouble finding a source file whose full path does not start with /usr/src/..., that is a good indication that you’re being affected by this bug. Hopefully we should see some progress in the following weeks.

Your feedback is important to us

If you have any comments, or if you found something strange that looks like a bug in the service, please reach out. You can either send an email to my public inbox (see below) or file a bug against the ubuntu-debuginfod project on Launchpad.

13 May, 2023 08:43PM

Petter Reinholdtsen

OpenSnitch in Debian ready for prime time

A bit delayed, the interactive application firewall OpenSnitch package in Debian now has the latest fixes ready for Debian Bookworm. Because it depends on a package missing on some architectures, the autopkgtest check of the testing migration script did not understand that the tests were actually working, so the migration was delayed. A bug in the package dependencies is also fixed, so those installing the firewall package (opensnitch) now also get the GUI admin tool (python3-opensnitch-ui) installed by default. I am very grateful to Gustavo Iñiguez Goya for his work on getting the package ready for Debian Bookworm.

Armed with this package I have discovered some surprising connections from programs I believed were able to work completely offline, and it has already proven its worth, at least to me. If you too want to get more familiar with the kind of programs using Internet connections on your machine, I recommend testing apt install opensnitch in Bookworm and seeing what you think.

The package is still not able to build its eBPF module within Debian. I am not sure how much work that would take, but I suspect some kernel-related packages need to be extended with more header files to get it working.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

13 May, 2023 10:10AM

May 12, 2023


Holger Levsen

20230512-Debian-Reunion-Hamburg-2023

Small reminder for the Debian Reunion Hamburg 2023 from May 23 to 30

As in previous years there will be a rather small Debian Reunion Hamburg 2023 event taking place from May 23rd until the 30th (with the 29th being a public holiday in Germany and elsewhere).

We'll have days of hacking (inside and outside), a day trip and a small cheese & wine party, as well as daily standup meetings to learn what others are doing, and there shall also be talks and workshops. At the moment there are even still some beds on site available and the CfP is still open!

For more information on all of this: please check the above wiki page!

May the force be with you.

12 May, 2023 02:28PM


Dirk Eddelbuettel

crc32c 0.0.2 on CRAN: Build Fixes

A first follow-up to the initial announcement just days ago of the new crc32c package. The package offers cyclical checksum with parity in hardware-accelerated form on (recent enough) intel cpus as well as on arm64.

This follow-up was needed because I missed, when switching to a default static library build, that newer compilers would complain if -fPIC was not set. gcc-12 on my box was happy, gcc-13 on recent Fedora as used at CRAN was not. A second error was assuming that saying SystemRequirements: cmake would suffice. But hold on whippersnapper: macOS always has a surprise for you! As described at the end of the appropriate section in Writing R Extensions, on that OS you have to go to the basement, open four cupboards, rearrange three shelves and then you get to use it. And then in doing so (from an added configure script) I failed to realize Windows needed a fallback. Gee.
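For reference, position-independent code can also be forced at configure time; a minimal sketch of the cmake invocation (the build directory layout is assumed):

cmake -DCMAKE_POSITION_INDEPENDENT_CODE=ON ..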

The NEWS entry for this (as well the initial release) follows.

Changes in version 0.0.2 (2023-05-11)

  • Explicitly set cmake property for position-independent code

  • Help macOS find its cmake binary as detailed also in WRE

  • Help Windows with a non-conditional Makevars.win pointing at cmake

  • Add more badges to README.md

Changes in version 0.0.1 (2023-05-07)

  • Initial release version and CRAN upload

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 May, 2023 12:37AM

Valhalla's Things

I hate proprietary software

Posted on May 12, 2023

Even when it’s \m/.

Years ago I watched my SO play Brütal Legend and of course loved it, but I’ve been only using used computers for a long time, and none of them was really able to run modern games.

Admittedly, he told me that I could use his computer to play the game while he wasn’t home (and I do have an account on that computer, that I’ve sporadically used to do computationally intensive stuff, but always remotely), but it was a hassle, and I never did.

This year, however, he gifted me a shiny new CPU and motherboard, and among other things that meant games from this century!

The first thing I’ve spent time on was 0ad (which admittedly already worked on one of the old computers, as long as the map wasn’t too big), but now it was time to play basically the one recent proprietary game I had been wanting to play.

So, this afternoon I started by trying to copy the installer (it was bought from a Humble Bundle, I don’t have Steam) from the home server to my PC, and the home server froze. Ok, I could copy it through something other than git annex (or from the offline hard disk backup, as I did).

Then I tried to run the installer, which resulted in the really helpful error message:

bash: ./BrutalLegend-Linux-2013-05-07-setup.bin: cannot execute: required file not found

ok, then surely ldd can help:

not a dynamic executable

maybe it doesn’t like being a symlink (remember, git annex), but no, that wasn’t the problem. ah! maybe file can help, and indeed:

BrutalLegend-Linux-2013-05-07-setup.bin: ELF 32-bit LSB executable

argh. Why does proprietary software hate us?

Oh, well, https://wiki.debian.org/Multiarch/HOWTO , dpkg --add-architecture i386 followed by apt update and apt install libc6-i386 and the installer started.

Of course this didn’t mean that the game could run, but at least it was spitting out the right error messages, and I could quickly see what the other missing packages were:

apt install lib32z1 libbz2-1.0:i386 libgl1:i386 libglu1-mesa:i386

and the game started!

and…

no. audio.

I often play games with no audio, because I can’t wear headphones, but here the soundtrack is basically 50% of the reason one would play this game.

Back when my SO played the game, audio was still through pulseaudio, while now I’m using pipewire (and I wasn’t sure whether the game was old enough to want to use alsa directly), so I started to worry a bit.

And this time, there was no error message to help, but some googling (on searx) and trial and error gave me this list of packages:

apt install pipewire-audio libpipewire-0.3-0:i386 libpulse0:i386 pipewire-alsa:i386

and that was it! the game started AND I could hear music!

And then it was time for dinner, and I couldn’t play.

(You may notice that this post has been posted quite some time after dinner. Most of this time wasn’t spent writing the post.)

Anyway, as soon as I’ve defeated and crushed Doviculus I’m going back to 0ad. or maybe wesnoth. or some other Free Software and frustration-free game.

12 May, 2023 12:00AM

May 11, 2023

Simon Josefsson

Streamlined NTRU Prime sntrup761 goes to IETF

The OpenSSH project added support for a hybrid Streamlined NTRU Prime post-quantum key encapsulation method sntrup761 to strengthen their X25519-based default in their version 8.5 released on 2021-03-03. While there has been a lot of talk about post-quantum crypto generally, my impression has been that there has been a slowdown in implementing and deploying these algorithms in the past two years. Why is that? Regardless of the answer, we can try to collaboratively change things, and one effort that appears strangely missing is IETF documents for these algorithms.

Building on some earlier work that added X25519/X448 to SSH, writing a similar document was relatively straightforward once I had spent a day reading OpenSSH and TinySSH source code to understand how it worked. While I am not perfectly happy with how the final key is derived from the sntrup761/X25519 secrets – it is a SHA512 call on the concatenated secrets – I think the construct deserves to be better documented, to pave the road for increased confidence or better designs. Also, reusing the RFC5656§4 structs makes for a worse specification (one unnecessary normative reference), but probably a simpler implementation. I have published draft-josefsson-ntruprime-ssh-00 here. Credit here goes to Jan Mojžíš of TinySSH who designed the earlier sntrup4591761x25519-sha512@tinyssh.org in 2018, Markus Friedl who added it to OpenSSH in 2019, and Damien Miller who changed it to sntrup761 in 2020. Does anyone have more to add to the history of this work?
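To make the construct concrete: the final key is just SHA-512 over the two concatenated shared secrets. A toy illustration, with hypothetical files holding the KEM and ECDH outputs, concatenated in the order used by the SSH construct:

cat sntrup761-secret.bin x25519-secret.bin | sha512sum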

Once I had sharpened my xml2rfc skills, preparing a document describing the hybrid construct between the sntrup761 key-encapsulation mechanism and the X25519 key agreement method in a non-SSH fashion was easy. I do not know if this work is useful, but it may serve as a reference for further study. I published draft-josefsson-ntruprime-hybrid-00 here.

Finally, how about an IETF document on the base Streamlined NTRU Prime? Explaining all the details, and especially the math behind it, would be a significant effort. I started doing that, but realized it is a subjective call when to stop explaining things. If we can’t assume that the reader knows lattice math, is a document like this the best place to teach it? I settled for the most minimal approach instead, merely giving an introduction to the algorithm and including SageMath and C reference implementations together with test vectors. The IETF audience rarely understands math, so I think it is better to focus on the bits on the wire and the algorithm interfaces. Everything here was created by the Streamlined NTRU Prime team; I merely modified it a bit, hoping I didn’t break too much. I have now published draft-josefsson-ntruprime-streamlined-00 here.

I maintain the IETF documents on my ietf-ntruprime GitLab page, feel free to open merge requests or raise issues to help improve them.

To gain confidence that the code was working properly, I ended up preparing a branch with sntrup761 for the GNU-project Nettle and have submitted it upstream for review. I had the misfortune of having to understand and implement NIST’s DRBG-CTR to compute the sntrup761 known-answer tests, and what a mess it is. Why does a deterministic random generator support re-seeding? Why does it support non-full entropy derivation? What’s with the key size vs block size confusion? What’s with the optional parameters? What’s with having multiple algorithm descriptions? Luckily I was able to extract a minimal but working implementation that is easy to read. I can’t locate DRBG-CTR test vectors, anyone? Does anyone have sntrup761 test vectors that don’t use DRBG-CTR? One final reflection on publishing known-answer tests for an algorithm that uses random data: are the test vectors stable over different ways to implement the algorithm? Just consider if some optimization moved one randomness-extraction call before another; wouldn’t the output then be different? Are there other ways to verify correctness of implementations?

As always, happy hacking!

11 May, 2023 10:03PM by simon


Shirish Agarwal

India Press freedom, Profiteering, AMD issues in the wild.

India Press Freedom

Just about a week back, India again slipped in the Press Freedom Index, this time falling to 161 out of 180 countries. The RW again made a lot of noise as they cannot fathom why this keeps happening. A recent news story gives some idea. Every year the NCRB (National Crime Records Bureau) puts out its statistics of crimes happening across the country. The report is in the public domain. According to the report shared, around 40k women from Gujarat alone disappeared in the last five years. This is a state where the BJP has been ruling for the last 30-odd years. When this report went viral, the news was censored or blacked out in almost all national newspapers. For e.g., check out newindianexpress.com; likewise on TOI and other newspapers the news has been 404’d. The only place you can get that news is in minority papers like Siasat. But the story didn’t end there. While the NCW (National Commission for Women) pointed out similar things happening in J&K, Gujarat Police claimed they got almost 39k women back. Now ideally, that should have been in the NCRB data as an addendum, as the report can be challenged. But as this news went viral, nobody knows what is true or false in the above. What the BJP has been doing is, whenever they get questioned, they try to muddy the waters like that. And most of the time such news doesn’t make it to court, so the party gets a freebie of sorts as it is not legally challenged. Even if somebody asks why Gujarat Police didn’t do it, as the NCRB report is jointly made with the help of all states, and especially with the BJP both at the Center and in the States, they cannot give any excuse. The only excuse you see or hear is whataboutism, unfortunately 😦

Profiteering on I.T. Hardware

I was chatting with a friend yesterday who is an enthusiast like me but has been more alert about what has been happening in the CPU, motherboard and RAM world. I was simply shocked to hear the prices of motherboards that are three years old, even middling ones. For e.g., the last time I bought a mobo I spent about 6k, but that was for an ATX motherboard. Most ITX motherboards usually sold for around INR 4k/- or even lower. I remember Via especially, as their mobos were even cheaper, around INR 1.5-2k/-. Even before the pandemic, many motherboard manufacturers had closed up shop, leaving only a few in the market. As only a few remained, prices started going higher. The pandemic turned it into a seller’s market overnight, as most people were stuck at home and needed good rigs for work or leisure or both. The manufacturers of CPUs, motherboards, GPUs and power supplies (SMPS) named their prices and people bought them. So in 2023 high prices have remained while warranty periods have come down. Governments also upped customs and various other duties. So everyone is hand in glove in this situation. So, as shared before, what I have been offered is a four-year-old motherboard with a CPU of that time. I haven’t bought it, nor do I intend to in the short-term future, but I am extremely disappointed with the state of affairs 😦

AMD Issues

It’s just been a couple of hard weeks for AMD, apparently. The first has been the TPM (Trusted Platform Module) issue that was shown by a couple of security researchers. From what is known, apparently with $200 worth of tools and some time you can hack into somebody’s machine if you have physical access. Ironically, MS made a huge show about TPM and also made it sort of a requirement if a person wanted to have Windows 11. I remember Matthew Garrett sharing about TPM and issues with Lenovo laptops. While AMD has acknowledged the issue, its response has been somewhat wishy-washy. But this is not the only issue that has been plaguing AMD. There have been reports of AMD chips literally exploding, and again AMD issued a somewhat wishy-washy response. 😦 Asus though made some changes, but whether that is for Zen4 or only 5 parts is not known. Most people are expecting a recession in I.T. hardware this year as well as next year due to high prices. No idea if things will change, if ever 😦

11 May, 2023 06:17AM by shirishag75

May 10, 2023


Charles Plessy

Upvote to patch Firefox to render Markdown

I previously wrote that when Firefox receives a file whose media type is text/markdown, it prompts the user to download it, whereas other browsers display rendered results.

Now it is possible to upvote a proposal on connect.mozilla.org asking that Firefox renders Markdown by default.

10 May, 2023 11:43PM


Freexian Collaborators

Debian Contributions: DEP-17, Debian Reimbursements Web App, and more! (by Utkarsh Gupta, Stefano Rivera)

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DEP-17 progress, by Helmut and Emilio

We posted a proposal for modifying dpkg to better cope with directory aliasing. After an initial period of silence, the discussion took off, but was mostly diverted to a competing proposal by Luca Boccassi: do not change dpkg at all, but still move all files affected by aliasing to their canonical location, thus removing the bad effects of aliasing. We facilitated this discussion and performed extensive analysis of this and competing proposals, highlighting resulting problems and proposing solutions or workarounds. We performed a detailed analysis of how aliasing affects usage of dpkg-divert, dpkg-statoverride and update-alternatives. Details are available in the debian-dpkg mailing list thread.
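To see the aliasing problem concretely: on a merged-/usr system /bin is a symlink to usr/bin, so the same file is reachable via two paths while dpkg's database records only the one the package shipped. A small illustration:

ls -ld /bin                    # /bin -> usr/bin on merged-/usr systems
dpkg -S /bin/sh /usr/bin/sh    # typically only one of the two spellings is known to dpkg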

Debian Reimbursements Web App Progress, by Stefano Rivera

In a project funded by Freexian’s Project Funding initiative, Stefano made some more progress on the Debian Reimbursements Web App. The full workflow can now be exercised, completing the first milestone of the project, the Working Prototype.

DebConf Bursary Team, by Utkarsh Gupta

Utkarsh started to prep the bursary team work, gearing up for DebConf, happening in India in September 2023. To learn more about the bursaries team, head to https://wiki.debian.org/Teams/DebConf/Bursaries. For learning how to apply for bursaries, visit https://debconf23.debconf.org/about/bursaries.

Miscellaneous contributions

  • Stefano attended several DebConf planning meetings, and did some work on the DebConf 23 website.
  • Stefano updated distro-info-data to include the release date of Debian bookworm, and added the next Ubuntu release, Mantic Minotaur. This required a round of updates to all the stable releases, LTS, and ELTS.
  • Helmut sent patches for 13 cross build failures and filed 104 RC bugs for missing Breaks and Replaces.

10 May, 2023 12:00AM by Utkarsh Gupta, Stefano Rivera

May 09, 2023


C.J. Collier

Instructions for installing Proxmox onto the Qotom device

These instructions are for Qotom devices Q515P and Q1075GE. You can order one from Amazon or directly from Cherry Ni <export03@qotom.com>. The instructions assume you are coming from Windows.

Prerequisites:

  • A USB keyboard and mouse
  • A powered HDMI monitor and an HDMI cable
  • A copy of the Proxmox VE Installer ISO
  • A USB disk from which to boot the installer
  • Software and instructions to burn the raw image to USB
  • The details of your wireless network including wireless network ID (SSID), WPA password, gateway address and network prefix length

To find your Windows network details, run the following command at the command prompt:

netsh interface ip show addresses

Here’s my output:

PS C:\Users\cjcol> netsh interface ip show addresses "Wi-Fi"

Configuration for interface "Wi-Fi"
    DHCP enabled:                         Yes
    IP Address:                           172.16.79.53
    Subnet Prefix:                        172.16.79.0/24 (mask 255.255.255.0)
    Default Gateway:                      172.16.79.1
    Gateway Metric:                       0
    InterfaceMetric:                      50

Did you follow the instructions linked above in the “prerequisites” section? If not, take a moment to do so now.
Open Rufus and select the Proxmox ISO you downloaded.

You may be warned that Rufus will be acting as dd.

Don’t forget to select the USB drive that you want to write the image to. In my example, the device is creatively called “NO_LABEL”.

You may be warned that re-imaging the USB disk will result in the previous data on the USB disk being lost.

Once the process is complete, the application will indicate that it is complete.

You should now have a USB disk with the Proxmox installer image on it. Place the USB disk into one of the blue, USB-3.0, USB-A slots on the Qotom device so that the system can read the installer image from it at full speed. The Proxmox installer requires a keyboard, video and mouse. Please attach these to the device along with inserting the USB disk you just created.

Press the power button on the Qotom device. Press the F11 key repeatedly until you see the AMI BIOS menu, then press F11 a couple more times. You’ll be presented with a boot menu. One of the options will launch the Proxmox installer; by trial and error, I found that the correct boot menu option was “UEFI OS”.

Once you select the correct option, you will be presented with a menu that looks like this. Select the default option and install.

During the install, you will be presented with a choice of which block device to install to. I think there’s only a single block device in this Celeron model, but if there is more than one, I prefer the smaller one for the Proxmox OS. I also make a point of limiting the size of the root filesystem to 16G; I think it will take up the entire volume group if you don’t set a limit.

Okay, I’ll do another install and select the correct filesystem.

If you read this far and want me to add some more screenshots and better instructions, leave a comment.

09 May, 2023 11:43PM by C.J. Collier


Dirk Eddelbuettel

crc32c 0.0.1 on CRAN: New Package

Happy to announce a new package: crc32c. This arose out of a user request to add crc32c (which is related to but different from crc32, i.e. without the trailing c) to my digest package. Which I did (for now in a branch), using the software-fallback version of crc32c from the reference implementation by Google at their crc32c repo.

However, the Google repo also offers hardware-accelerated versions and switches at run-time. So I pondered a little about how to offer the additional performance without placing a dependency burden on all users of digest.

Lo and behold, I think I found a solution by reusing what R offers. First off, the crc32c package wraps the Google repo cleanly and directly. We include all the repo code – but not the logging or benchmarking code. This keeps external dependencies down to just cmake. Which while still rare in the CRAN world is at least not entirely uncommon. So now each time you build the crc32c R package, the upstream hardware detection is added transparently thanks in part to cmake. We build libcrc32c.a as a static library and include it in the R package and its own shared library.

And we added exports of three key functions, directly at the C level, along with one function of the package that other packages can call from R. The distinction matters. The second approach, offering a function R can call (also from other packages), is well used and documented. By means of an Imports: statement to instantiate the package providing the functionality, the client package obtains access to a compiled function its R code can then call. A number of other R packages use this.

But what is less well known is that we can do the same with C level functions. Again, it takes an imports statement but once that is done we can call ‘C to C’. Which is quite nice. I am working currently on the branch in digest which motivated this, and it can import the automagically hardware-accelerated functionality on the suitable hardware. Stay tuned for a change in digest.

I also won and lost the CRAN lottery for once: the package made it through the ‘new package’ checks without any revisions. Only to then immediately fail on the CRAN machines using gcc-13 as a -fPIC was seen as missing when making the shared library. Even though both my CI and the r-universe builds were all green. Ah well. So a 0.0.2 release will be coming in a day or two.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 May, 2023 01:13AM

May 08, 2023


Paul Tagliamonte

Open to work!

I decided to leave my job (Principal Software Engineer) after 4 years. I have no idea what I want to do next, so I’ve been having loads of chats to try and work that out.

I like working in mission focused organizations, working to fix problems across the stack, from interpersonal down to the operating system. I enjoy “going where I’m rare”, places that don’t always get the most attention. At my last job, I most enjoyed working to drive engineering standards for all products across the company, mentoring engineers across all teams and seniority levels, and serving as an advisor for senior leadership as we grew the engineering team from 3 to 150 people.

If you have a role that you think I’d like to hear about, I’d love to hear about it at jobs{}pault.ag (where the {} is an @ sign).

08 May, 2023 06:19PM

May 07, 2023

Thorsten Alteholz

My Debian Activities in April 2023

FTP master

This month I accepted 103 and rejected 11 packages. The overall number of packages that got accepted was 103.

Debian LTS

This was the hundred-and-sixth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14h.

During that time I uploaded:

  • [DLA 3405-1] libxml2 security update for two CVEs
  • [DLA 3406-1] sniproxy security update for one CVE
  • [sniproxy] updates for Unstable + Bullseye prepared and debdiffs sent to maintainer
  • [1033759] pu-bug: duktape/bullseye uploaded and accepted
  • [1029976] pu-bug: libzen/bullseye uploaded and accepted

I also continued to work on ring in Buster and Bullseye, where some new CVEs appeared.

Debian ELTS

This month was the fifty-seventh ELTS month.

Unfortunately I couldn’t use up all my allocated hours and was only able to continue my work on openssl1.0. I plan to do an upload in May.

Debian Astro

Due to a change in numpy, planetary-system-stacker stopped working. I created a patch and uploaded a new package. Meanwhile it has already arrived in testing and I could analyse some pictures of the sun again.

Other stuff

Looking at my notes, there is nothing to be reported here.

07 May, 2023 11:41AM by alteholz

May 06, 2023

Reproducible Builds

Reproducible Builds in April 2023

Welcome to the April 2023 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. And, as always, if you are interested in contributing to the project, please visit our Contribute page on our website.

General news

Trisquel is a fully-free operating system building on the work of Ubuntu Linux. This month, Simon Josefsson published an article on his blog titled Trisquel is 42% Reproducible!. Simon wrote:

The absolute number may not be impressive, but what I hope is at least a useful contribution is that there actually is a number on how much of Trisquel is reproducible. Hopefully this will inspire others to help improve the actual metric.

Simon wrote another blog post this month on a new tool to ensure that updates to Linux distribution archive metadata (e.g. via apt-get update) will only use files that have been recorded in a globally immutable and tamper-resistant ledger. A similar solution exists for Arch Linux (called pacman-bintrans), which was announced in August 2021, where an archive of all issued signatures is publicly accessible.


Joachim Breitner wrote an in-depth blog post on a bootstrap-capable GHC, the primary compiler for the Haskell programming language. As a quick background to what this is trying to solve: in order to generate a fully trustworthy compile chain, trustworthy root binaries are needed… and a popular approach to address this problem is called bootstrappable builds, where the core idea is to address previously-circular build dependencies by creating a new dependency path using simpler prerequisite versions of software. Joachim takes a somewhat recursive approach to the problem for Haskell, leading to the inadvertently humorous question: “Can I turn all of GHC into one module, and compile that?”

Elsewhere in the world of bootstrapping, Janneke Nieuwenhuizen and Ludovic Courtès wrote a blog post on the GNU Guix blog announcing The Full-Source Bootstrap, specifically:

[…] the third reduction of the Guix bootstrap binaries has now been merged in the main branch of Guix! If you run guix pull today, you get a package graph of more than 22,000 nodes rooted in a 357-byte program—something that had never been achieved, to our knowledge, since the birth of Unix.

More info about this change is available on the post itself, including:

The full-source bootstrap was once deemed impossible. Yet, here we are, building the foundations of a GNU/Linux distro entirely from source, a long way towards the ideal that the Guix project has been aiming for from the start.

There are still some daunting tasks ahead. For example, what about the Linux kernel? The good news is that the bootstrappable community has grown a lot, from two people six years ago there are now around 100 people in the #bootstrappable IRC channel.


Michael Ablassmeier created a script called pypidiff as he was looking for a way to track differences between packages published on PyPI. According to Michael, pypidiff “uses diffoscope to create reports on the published releases and automatically pushes them to a GitHub repository.” This can be seen on the pypi-diff GitHub page (example).


EleutherAI, a non-profit AI research group, recently unveiled Pythia, a collection of 16 Large Language Models (LLMs) trained on public data in the same order, designed specifically to facilitate scientific research. According to a post on MarkTechPost:

Pythia is the only publicly available model suite that includes models that were trained on the same data in the same order [and] all the corresponding data and tools to download and replicate the exact training process are publicly released to facilitate further research.

These properties are intended to allow researchers to understand how gender bias (etc.) can be affected by training data and model scale.


Back in February’s report we reported on a series of changes to the Sphinx documentation generator that was initiated after attempts to get the alembic Debian package to build reproducibly. Although Chris Lamb was able to identify the source problem and provided a potential patch that might fix it, James Addison has taken the issue in hand, leading to a large amount of activity resulting in a proposed pull request that is waiting to be merged.


WireGuard is a popular Virtual Private Network (VPN) service that aims to be faster, simpler and leaner than other solutions to create secure connections between computing devices. According to a post on the WireGuard developer mailing list, the WireGuard Android app can now be built reproducibly so that its contents can be publicly verified. According to the post by Jason A. Donenfeld, “the F-Droid project now does this verification by comparing their build of WireGuard to the build that the WireGuard project publishes. When they match, the new version becomes available. This is very positive news.”


Author and public speaker, V. M. Brasseur published a sample chapter from her upcoming book on “corporate open source strategy” which is the topic of Software Bill of Materials (SBOM):

A software bill of materials (SBOM) is defined as “…a nested inventory for software, a list of ingredients that make up software components.” When you receive a physical delivery of some sort, the bill of materials tells you what’s inside the box. Similarly, when you use software created outside of your organisation, the SBOM tells you what’s inside that software. The SBOM is a file that declares the software supply chain (SSC) for that specific piece of software. []


Several distributions noticed recent versions of the Linux Kernel are no longer reproducible because the BPF Type Format (BTF) metadata is not generated in a deterministic way. This was discussed on the #reproducible-builds IRC channel, but no solution appears to be in sight for now.


Community news

On our mailing list this month:

Holger Levsen gave a talk at foss-north 2023 in Gothenburg, Sweden on the topic of Reproducible Builds, the first ten years.

Lastly, there were a number of updates to our website, including:

  • Chris Lamb attempted a number of ways to try and fix literal {: .lead} appearing in the page [][][], made all the Back to who is involved links italics [], and corrected the syntax of the _data/sponsors.yml file [].

  • Holger Levsen added his recent talk [], added Simon Josefsson, Mike Perry and Seth Schoen to the contributors page [][][], reworked the People page a little [] [], as well as fixed spelling of ‘Arch Linux’ [].

Lastly, Mattia Rizzolo moved some old sponsors to a ‘former’ section [] and Simon Josefsson added Trisquel GNU/Linux. []



Debian

  • Vagrant Cascadian reported on Debian’s build-essential package set, which was “inspired by how close we are to making the Debian build-essential set reproducible and how important that set of packages are in general”. Vagrant mentioned that: “I have some progress, some hope, and I daresay, some fears…”. […]

  • Debian Developer Cyril Brulebois (kibi) filed a bug against snapshot.debian.org after they noticed that “there are many missing dinstalls” — that is to say, the snapshot service is not capturing 100% of all historical states of the Debian archive. This is relevant to reproducibility because without the availability of historical versions, it becomes impossible to repeat a build at a future date in order to correlate checksums.

  • 20 reviews of Debian packages were added, 21 were updated and 5 were removed this month adding to our knowledge about identified issues. Chris Lamb added a new build_path_in_line_annotations_added_by_ruby_ragel toolchain issue. […]

  • Mattia Rizzolo announced that the data for the stretch archive on tests.reproducible-builds.org has been archived. This matches the archival of stretch within Debian itself. This is of some historical interest, as stretch was the first Debian release regularly tested by the Reproducible Builds project.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


diffoscope development

diffoscope version 241 was uploaded to Debian unstable by Chris Lamb. It included contributions already covered in previous months as well as a change by Chris Lamb to add a missing raise statement that was accidentally dropped in a previous commit. []



Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In April, a number of changes were made, including:

  • Holger Levsen:

    • Significant work on a new Documented Jenkins Maintenance (djm) script to support logged maintenance of nodes, etc. [][][][][][]
    • Add the new APT repo url for Jenkins itself with a new signing key. [][]
    • In the Jenkins shell monitor, allow 40 GiB of files for diffoscope for the Debian experimental distribution as Debian is frozen around the release at the moment. []
    • Updated Arch Linux testing to cleanup leftover files left in /tmp/archlinux-ci/ after three days. [][][]
    • Mark a number of nodes hosted by Oregon State University Open Source Lab (OSUOSL) as online and offline. [][][]
    • Update the node health checks to detect failures to end schroot sessions. []
    • Filter out another duplicate contributor from the contributor statistics. []
  • Mattia Rizzolo:




If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

06 May, 2023 07:55PM

May 05, 2023

Dima Kogan

mrcal 2.3 released!

Today I released mrcal 2.3 (the release notes are available here). Once again, in the code there are lots of useful improvements, but nothing major. The big update in this release is the documentation. Much of it was improved and extended, especially practical guides in the how-to-calibrate page and the recipes.

Major updates are imminent. I'm about to merge the cross-projection uncertainty branch and the triangulated-points-in-the-solver branch to study chessboard-less calibrations and structure from motion. Neither of these are novel, but mrcal's improved lens models and uncertainty propagation will hopefully produce better results.

05 May, 2023 09:13PM by Dima Kogan


Shirish Agarwal

CAT-6, AMD 5600G, Dealerships closing down, TRAI-caller and privacy.

CAT-6 patch cord & ONU

A few months back I was offered a fibre service. Most of the service offering has been using Chinese infrastructure, including the ONU (Optical Network Unit). Wikipedia doesn’t have a good page on ONUs, hence I had to rely on third-party sites. FS (a name I don’t really know) has some good basic info on ONUs and how they are part and parcel of the whole infrastructure. I also got an ONT (Optical Network Terminal), but it seems to be very basic and mostly dumb. I used an old CAT-6 cable (a decade old) to connect them and it worked for a couple of months. Then I had to change it; first I went to see whether a higher-grade cable solution offered itself. CAT-7 is there but not backward compatible. CAT-8 is the next higher version, but apparently it’s expensive and also not easily bought. I did quite a few tests on CAT-6 and the ONU, and it conks out at 1 Gbps at best, which is still far better than what I am used to. CAT-8 cables are either not available or simply too expensive for home applications at the moment. A good summary of CAT-8 and what it stands for can be found here. The networking part is hopeless, as most consumer-facing CPUs and motherboards don’t even offer 10 Gbps, so asking for anything more is just overkill without any benefit. Which does bring me to the next question, something that I may do in a few months or a year down the road. Just to clarify, they may say it is 100 mbps or even 1 Gbps, but that’s plain wrong.
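For reference, throughput on the link can be measured with something like iperf3 between two machines; the address here is made up:

iperf3 -s                  # on the first machine
iperf3 -c 192.168.1.10     # on the second, pointing at the first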

AMD APU, Asus Motherboard & Dealerships

I had been thinking of an AMD APU; I could wait a while, but sooner or later I would have to get one. I got quoted an AMD Ryzen 3 3200G with an Asus A320 motherboard for around 14k, which looked kinda steep to me. Quite a few of the hardware dealers with whom I had traded and consulted over the years have simply shut down. While there are new people, it’s much harder now (due to deafness) to build relationships than it was before. The easiest resource to share, which was also online, was pcpartpicker.com, which used to have an Indian domain that is no longer available. Many offline brick-and-mortar PC businesses have closed as well. There are a few new ones, but building trust takes time and the big players have made more of a killing. I was quite shocked. I came home, browsed a bit and was hit by this. Both AMD’s and Intel’s PC businesses have taken a beating, AMD a bit more, as Intel still holds the part of the business segment that has traditionally been theirs. There have been proofs and allegations of bribery in the past (remember the EU antitrust case against Intel for monopoly abuse), but Intel’s own corner-cutting with the Spectre and Meltdown flaws hasn’t helped its case, nor have the lawsuits. AMD, on the other hand, under the leadership of Lisa Su, has simply gone from strength to strength. Inflation and profiteering by other big companies have made the outlook for both AMD and Intel a bit lackluster. AMD is supposed to show Zen 5 chips in a few days’ time, and the rumor mill has been churning.

Correction: not in a few days, but in 2025.

Personally, I would be happy with maybe a Ryzen 5600G on an Asus motherboard. My main goal whenever I buy an APU is not to go beyond a 65 W TDP; it’s kinda middle of the road. From what I could read, this year and next year we could get AM4+ or similar incremental updates, while AM5 APUs, CPUs and boards are slated to be launched in 2025. I did look at pcpricetracker, and it does give an idea of various APU prices, although I have to say pcpartpicker was much more intuitive to work with.

I just had my system cleaned a couple of months ago, so, touch wood, I should be able to use it for another couple of years or more before I have to get one of these APUs, and I do hope they are worth it. My idea is to use the new system not only for testing various software but also to delve a bit into VR, if that’s possible. I did read a bit about deafness and VR as well; a good summary can be found here. I am hopeful that there may be a few people in the community who will look at and respond to that. It’s crucial.

TRAI-caller, Privacy 101 & Element

While most of us in the Debian and FOSS communities do care about privacy, it is frustrating a lot of the time. I’m always looking for videos that explain why privacy is needed by individuals and why governments and other parties hate it. There are a couple of basic YouTube videos that explain this quite practically.

Now, why am I sharing the above? It isn’t that people do not know about privacy and how we hold it dear. I share it because the GOI just today blocked Element. While it may be trivial for us to work around the block, it does show what the GOI is doing. And it still acts surprised that its press-freedom ranking is going to the pits.

Even our women wrestlers have been protesting for a week just to get an FIR (First Information Report) filed. And these are women who have won medals for the country. More than half of these organizations, including the women’s wrestling body, don’t have a POSH committee, which every organization is required to have; POSH stands for Prevention of Sexual Harassment at the Workplace. The ‘gentleman’ concerned is a known rowdy/goon, hence it took almost a week of protest to get the needful done 😦

I do try not to report on all of it, because right now, every other day, we see the Govt. curtailing our rights somewhere or other, and most people stay mute 😦

Signing out, till later 😦

05 May, 2023 02:30PM by shirishag75

hackergotchi for Jonathan Dowland

Jonathan Dowland

sidebar dividers for mutt

I wanted to start using (neo)mutt's sidebar and I wanted a way of separating groups of mail folders in the list. To achieve that I interleaved a couple of fake "divider" folder names. It looks like this:

Screenshot of neomutt with sidebar

This was spurred on by an attempt to revamp my personal organisation.

I've been using mutt for at least 20 years (these days neomutt), which, by default, does not show you a list of mail folders all the time. The default view is an index of your default mailbox, from which you can view a mail (pager view), switch to a mailbox, or do a bunch of other things, some of which involve showing a list of mailboxes. But the list is not omnipresent. That's somewhat of a feature, if you believe that you don't need to see that list until you are actually planning to pick from it.

There's an old and widespread "sidebar" patch for mutt (which neomutt ships out of the box). It reserves a portion of the left-hand side of the terminal to render a list of mailboxes. It felt superfluous to me so I never really thought to use it, until now: I wanted to make my Inbox functional again, and to achieve that, I needed to move mail out of it which was serving as a placeholder for a particular Action, or as a reminder that I was Waiting on a response. What was stopping me was a feeling I'd forget to check other mailboxes. So, I need to have them up in my face all the time to remind me.

Key for me, to make it useful, is to control the ordering of mailboxes and to divide them up using the interleaved fake mailboxes. The key configuration is therefore

set sidebar_sort_method = 'unsorted'
mailboxes =INBOX =Action =Waiting
mailboxes '=   ~~~~~~~~' # divider
...

My groupings, for what it's worth, are: the key functional mailboxes (INBOX/Action/Waiting) come first; last is reference ('2023' is the name of my current Archive folder; the other folders listed are project-specific reference and the two mailing lists I still directly subscribe to). Sandwiched in between is currently a single mailbox for a particular project for which it makes sense to have a separate mailbox. Once that's gone, so will that middle section.
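
Putting that together, a fuller version of the configuration might look roughly like this (the folder names outside the functional group are illustrative, not my real ones):

set sidebar_visible = yes
set sidebar_sort_method = 'unsorted'
mailboxes =INBOX =Action =Waiting
mailboxes '=   ~~~~~~~~'        # divider
mailboxes =some-project         # the middle section
mailboxes '=   ~~~~~~~~'        # divider
mailboxes =2023 =project-notes =list-one =list-two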

For my work mail I do something similar, but the groupings are

  1. INBOX/Action/Waiting
  2. Reference (Sent Mail, Starred Mail)
  3. More reference (internal mailing lists I need to closely monitor)
  4. Even more reference (less important mailing lists)

As with everything, these approaches are under constant review.

05 May, 2023 10:12AM

Reproducible Builds (diffoscope)

diffoscope 242 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 242. This version includes the following changes:

* If the binwalk Python module is not available, ensure the user knows they
  may be missing more differences in, for example, concatenated .cpio
  archives.
* Factor out a routine to generate human-readable comments when
  Python modules are missing.

You can find out more by visiting the project homepage.

05 May, 2023 12:00AM

Valhalla's Things

Hiking Slippers

Posted on May 5, 2023


When I travel for a few days I don’t usually1 bring any other shoes than the ones I’m wearing, plus some kind of slippers for use inside hotel / B&B rooms.

It’s good for not carrying useless weight, but it always leaves me with a vague feeling of “what if my only shoes break”, followed by “on a Sunday, when the shops are closed”.

So I started to think in the general direction of hiking sandals: shoes that are designed to be worn when resting and to be lightweight to carry, but that are a passable substitute for regular shoes in case of an accident to the main ones, maybe with the help of an extra pair of socks2 (or when crossing fords, but that’s not really a use case I have).

My requirements are easier than the ones for real hiking sandals, since I’m only going to be walking on paved streets (or at most easy unpaved ones), and the weight considerations are a thing, but not as strict as if I had to carry these on my back while hiking many hours in a day.

My first attempt was a pair of hiking sandals made from things I already had in my stash, with vibram soles, neoprene padding and polyester webbing. After a couple of fixes they sort of worked, but they had a few big issues.

  • While comfortable when worn, the neoprene made the sandals hard to make, as it tended to deform while being assembled.
  • Polyester webbing is slippery. Some strips of hot glue in strategic places helped, but they weren’t perfect, and over time they have been peeling off.
  • Most importantly, to make the sandals stable enough to wear while walking I had to add a strap around the ankle that needs closing: this makes it a bit of a hassle to use the sandals, say, when waking up in the middle of the night for metabolic reasons.

And then, one day I made my linen slippers, and that led me to think again about the problem: what if I made a pair of slippers with a rubber sole, technical materials and maybe uppers made of net, so that they would be lightweight, breathable and possibly even still suitable in case I ever need to cross a ford?

This was also readily attainable from the stash: some polycotton for the sole lining, elastic mesh for the uppers, EVA foam for padding and vibram soles.

I decided to assemble most of them by machine, and it was quick and painless (possibly also thanks to the elasticity of the mesh).


For the soles I may have gone a bit overboard with the vibram claw, but:

  • I already had it in the stash;
  • if I need to wear them on an unpaved road, they are going to be suitable;
  • why not?

The soles were glued to the slippers rather than being sewn, as I don’t think there is a reasonable way to sew these soles; I hope it won’t cause durability issues later on (if it does, there will be an update).

the slippers on a kitchen scale

As for the finished weight, at 235 g for the pair I thought I could do better, but apparently shoes are considered ultralight if they are around 500 g? Using just one layer of mesh rather than two would probably help, but it would have required a few changes to the pattern, and anyway I don’t really need to carry them around all day.


I’ve also added a loop of fabric (polycotton) to the centre back to be able to hang the slippers to the backpack when wet or dirty; a bit of narrow webbing may have been better, but I didn’t have any in my stash.

The pattern is the same as that used for the linen slippers, and of course it’s released as #FreeSoftWear.

I’ve worn these for a few days around the home and they worked just fine, except for the fact that I had to re-glue the sole in a few places (but I suspect it was glued badly in the first place, since the other sole had no issues).

Right now I have no plans to travel, so I don’t know how much I will be able to test these in the next few months, but sooner or later I will (or I’ll keep wearing them at home after I’ve thoroughly tested the linen ones), and if there are issues I will post them here on the blog (and add a link to this post).


  1. the exception would be when I’m also bringing some kind of costume, and even there it’s not always true.↩︎

  2. and one should always carry an extra pair of clean socks, as they are useful for so many things, as Pratchett reminds us.↩︎

05 May, 2023 12:00AM

May 04, 2023

hackergotchi for Matthew Garrett

Matthew Garrett

Twitter's e2ee DMs are better than nothing

(Edit 2023-05-10: This has now launched for a subset of Twitter users. The code that existed to notify users that device identities had changed does not appear to have been enabled - as a result, in its current form, Twitter can absolutely MITM conversations and read your messages)

Elon Musk appeared on an interview with Tucker Carlson last month, with one of the topics being the fact that Twitter could be legally compelled to hand over users' direct messages to government agencies since they're held on Twitter's servers and aren't encrypted. Elon talked about how they were in the process of implementing proper encryption for DMs that would prevent this - "You could put a gun to my head and I couldn't tell you. That's how it should be."

tl;dr - in the current implementation, while Twitter could subvert the end-to-end nature of the encryption, it could not do so without users being notified. If any user involved in a conversation were to ignore that notification, all messages in that conversation (including ones sent in the past) could then be decrypted. This isn't ideal, but it still seems like an improvement over having no encryption at all. More technical discussion follows.

For context: all information about Twitter's implementation here has been derived from reverse engineering version 9.86.0 of the Android client and 9.56.1 of the iOS client (the current versions at time of writing), and the feature hasn't yet launched. While it's certainly possible that there could be major changes in the protocol between now and launch, Elon has asserted that they plan to launch the feature this week so it's plausible that this reflects what'll ship.

For it to be impossible for Twitter to read DMs, they need to not only be encrypted, they need to be encrypted with a key that's not available to Twitter. This is what's referred to as "end-to-end encryption", or e2ee - it means that the only components in the communication chain that have access to the unencrypted data are the endpoints. Even if the message passes through other systems (and even if it's stored on other systems), those systems do not have access to the keys that would be needed to decrypt the data.

End-to-end encrypted messengers were initially popularised by Signal, but the Signal protocol has since been incorporated into WhatsApp and is probably much more widely used there. Millions of people per day are sending messages to each other that pass through servers controlled by third parties, but those third parties are completely unable to read the contents of those messages. This is the scenario that Elon described, where there's no degree of compulsion that could cause the people relaying messages to and from people to decrypt those messages afterwards.

But for this to be possible, both ends of the communication need to be able to encrypt messages in a way the other end can decrypt. This is usually performed using AES, a well-studied encryption algorithm with no known significant weaknesses. AES is a form of what's referred to as symmetric encryption, one where encryption and decryption are performed with the same key. This means that both ends need access to that key, which presents us with a bootstrapping problem. Until a shared secret is obtained, there's no way to communicate securely, so how do we generate that shared secret? A common mechanism for this is something called Diffie-Hellman key exchange, which makes use of asymmetric encryption. In asymmetric encryption, an encryption key can be split into two components - a public key and a private key. Both devices involved in the communication combine their private key and the other party's public key to generate a secret that can only be decoded with access to the private key. As long as you know the other party's public key, you can now securely generate a shared secret with them. Even a third party with access to all the public keys won't be able to derive this secret. Signal makes use of a variation of Diffie-Hellman called Extended Triple Diffie-Hellman that has some desirable properties, but it's not strictly necessary for the implementation of something that's end-to-end encrypted.

Although it was rumoured that Twitter would make use of the Signal protocol, and in fact there are vestiges of code in the Twitter client that still reference Signal, recent versions of the app have shipped with an entirely different approach that appears to have been written from scratch. It seems simple enough. Each device generates an asymmetric keypair using the NIST P-256 elliptic curve, along with a device identifier. The device identifier and the public half of the key are uploaded to Twitter using a new API endpoint called /1.1/keyregistry/register. When you want to send an encrypted DM to someone, the app calls /1.1/keyregistry/extract_public_keys with the IDs of the users you want to communicate with, and gets back a list of their public keys. It then looks up the conversation ID (a numeric identifier that corresponds to a given DM exchange - for a 1:1 conversation between two people it doesn't appear that this ever changes, so if you DMed an account 5 years ago and then DM them again now from the same account, the conversation ID will be the same) in a local database to retrieve a conversation key. If that key doesn't exist yet, the sender generates a random one. The message is then encrypted with the conversation key using AES in GCM mode, and the conversation key is then put through Diffie-Hellman with each of the recipients' public device keys. The encrypted message is then sent to Twitter along with the list of encrypted conversation keys. When each of the recipients' devices receives the message it checks whether it already has a copy of the conversation key, and if not performs its half of the Diffie-Hellman negotiation to decrypt the encrypted conversation key. Once it has the conversation key, it decrypts the message and shows it to the user.
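
To make the shape of that flow concrete, here is a minimal sketch in Python (using the cryptography package) of the general pattern just described: per-device P-256 ECDH used to wrap a long-lived AES-GCM conversation key. To be clear, this is my illustration rather than Twitter's actual code, and the HKDF step in particular is an assumption, since the real key-derivation details aren't public.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def wrapping_key(shared_secret):
    # Derive a 256-bit key-wrapping key from the ECDH shared secret (assumed step)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"conversation-key-wrap").derive(shared_secret)
    return AESGCM(key)

# Each device holds a long-lived P-256 keypair; the public halves are uploaded
sender_device = ec.generate_private_key(ec.SECP256R1())
recipient_device = ec.generate_private_key(ec.SECP256R1())

# Sender: one random conversation key, reused for the whole conversation
conversation_key = AESGCM.generate_key(bit_length=256)
msg_nonce = os.urandom(12)
ciphertext = AESGCM(conversation_key).encrypt(msg_nonce, b"hello", None)

# Sender wraps the conversation key for each recipient device via ECDH
secret = sender_device.exchange(ec.ECDH(), recipient_device.public_key())
wrap_nonce = os.urandom(12)
wrapped_key = wrapping_key(secret).encrypt(wrap_nonce, conversation_key, None)

# Recipient: the same ECDH secret from the other side unwraps the key, which
# then decrypts this message and every other one in the conversation
secret2 = recipient_device.exchange(ec.ECDH(), sender_device.public_key())
unwrapped = wrapping_key(secret2).decrypt(wrap_nonce, wrapped_key, None)
assert AESGCM(unwrapped).decrypt(msg_nonce, ciphertext, None) == b"hello"

That last assertion is the crux of what follows: anyone who obtains the conversation key can decrypt every message encrypted under it, past or future.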

What would happen if Twitter changed the registered public key associated with a device to one where they held the private key, or added an entirely new device to a user's account? If the app were to just happily send a message with the conversation key encrypted with that new key, Twitter would be able to decrypt that and obtain the conversation key. Since the conversation key is tied to the conversation, not any given pair of devices, obtaining the conversation key means you can then decrypt every message in that conversation, including ones sent before the key was obtained.

(An aside: Signal and WhatsApp make use of a protocol called Sesame which involves additional secret material that's shared between every device a user owns, hence why you have to do that QR code dance whenever you add a new device to your account. I'm grossly over-simplifying how clever the Signal approach is here, largely because I don't understand the details of it myself. The Signal protocol uses something called the Double Ratchet Algorithm to implement the actual message encryption keys in such a way that even if someone were able to successfully impersonate a device they'd only be able to decrypt messages sent after that point even if they had encrypted copies of every previous message in the conversation)

How's this avoided? Based on the UI that exists in the iOS version of the app, in a fairly straightforward way - each user can only have a single device that supports encrypted messages. If the user (or, in our hypothetical, a malicious Twitter) replaces the device key, the client will generate a notification. If the user pays attention to that notification and verifies with the recipient through some out of band mechanism that the device has actually been replaced, then everything is fine. But, if any participant in the conversation ignores this warning, the holder of the subverted key can obtain the conversation key and decrypt the entire history of the conversation. That's strictly worse than anything based on Signal, where such impersonation would simply not work, but even in the Twitter case it's not possible for someone to silently subvert the security.

So when Elon says Twitter wouldn't be able to decrypt these messages even if someone held a gun to his head, there's a condition applied to that - it's true as long as nobody fucks up. This is clearly better than the messages just not being encrypted at all in the first place, but overall it's a weaker solution than Signal. If you're currently using Twitter DMs, should you turn on encryption? As long as the limitations aren't too limiting, definitely! Should you use this in preference to Signal or WhatsApp? Almost certainly not. This seems like a genuine incremental improvement, but it'd be easy to interpret what Elon says as providing stronger guarantees than actually exist.


04 May, 2023 09:49PM

hackergotchi for Holger Levsen

Holger Levsen

20230504-Debian-Reunion-Hamburg-2023

Small reminder for the Debian Reunion Hamburg 2023 from May 23 to 30

As in previous years there will be a rather small Debian Reunion Hamburg 2023 event taking place from May 23rd until the 30th (with the 29th being a public holiday in Germany and elsewhere).

We'll have days of hacking (inside and outside), a day trip and a small cheese & wine party, as well as daily standup meetings to learn what others are doing, and there shall also be talks and workshops. At the moment there are even still some beds on site available!

For more information on all of this: please check the above wiki page!

May the force be with you.

04 May, 2023 06:42PM

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, March/April 2023

In March and April I worked a total of 28 hours for Freexian's Debian LTS initiative, out of a maximum of 48 hours.

I updated the linux (4.19) package to the latest stable and stable-rt updates, and uploaded it at the end of April. I merged the latest bullseye security update into the linux-5.10 package and uploaded that at the same time.

04 May, 2023 02:22PM

Emanuele Rocca

UEFI Secure Boot on the Raspberry Pi

UPDATE: this post unexpectedly ended up on Hacker News and I received a lot of comments. The two most important points being made are (1) that Secure Boot on the RPi as described here is not actually truly secure. An attacker who successfully gained root could just mount the firmware partition and either add their own keys to the EFI variable store or replace the firmware altogether with a malicious one. (2) The TianoCore firmware cannot be used instead of the proprietary blob as I mentioned. What truly happens is that the proprietary blob is loaded onto the VideoCore cores, then TianoCore is loaded onto the ARM cores. Thanks for the corrections.


A port of the free software TianoCore UEFI firmware can be used instead of the proprietary boot blob to boot the Raspberry Pi. This makes it possible to install Debian on the RPi with the standard Debian Installer, and also to use UEFI Secure Boot. Note that Secure Boot had been broken on arm64 for a while, but it's now working in Bookworm!

Debian Installer UEFI boot

To begin, you’ll need to download the appropriate firmware files for the RPi3 or RPi4. I’ve got a Raspberry Pi 3 Model B+ myself, so the rest of this document will assume an RPi3 is being installed.

Plug the SD card you are going to use as the RPi storage device into another system. Say it shows up as /dev/sdf. Then:

# Create an msdos partition table
$ sudo parted --script /dev/sdf mklabel msdos
# Create, format, and label a 10M fat32 partition
$ sudo parted --script /dev/sdf mkpart primary fat32 0% 10M
$ sudo mkfs.vfat /dev/sdf1
$ sudo fatlabel /dev/sdf1 RPI-FW
# Get the UEFI firmware onto the SD card
$ sudo mount /dev/sdf1 /mnt/data/
$ sudo unzip Downloads/RPi3_UEFI_Firmware_v1.38.zip -d /mnt/data/
$ sudo umount /mnt/data

At this point, the SD card can be used to boot the RPi, and you’ll get a UEFI firmware.

Download the Bookworm RC 2 release of the installer, copy it to a USB stick as described in the Installation Guide, and boot your RPi from the stick. If for some reason booting from the stick does not happen automatically, enter the firmware interface with ESC and choose the USB stick from Boot Manager.

Proceed with the installation as normal, paying attention not to modify the firmware partition labeled RPI-FW. I initially thought it would be nice to reuse the firmware partition as ESP partition as well. However, setting the esp flag on makes the RPi unbootable. Either configuring the partition as ESP in debian-installer, or manually with sudo parted --script /dev/sda set 1 esp on, breaks boot. In case you accidentally do that, set it back to off and the edk2 firmware will boot again.

What I suggest doing in terms of partitioning is: (1) leave the 10M partition created above for the firmware alone, and (2) create another 512M or so ESP partition for EFI boot.
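
If you'd rather pre-create that second partition from another machine, in the same style as the firmware partition above, something like the following should work (device name and sizes as in the earlier example; the installer can then pick it up as the ESP):

$ sudo parted --script /dev/sdf mkpart primary fat32 10M 522M
$ sudo mkfs.vfat /dev/sdf2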

The installation should go smoothly till the end, but rebooting won’t work. Doh. This is because of an important gotcha: the Raspberry Pi port of the TianoCore firmware we are using does not support setting UEFI variables persistently from a "High Level Operating System (HLOS)", which is the debian-installer in our case. Persistently is the keyword there: variables can be set and modified regularly — with efibootmgr or otherwise, but crucially the modifications do not survive reboot. However, changes made from the firmware interface itself are persistent. So enter the firmware with ESC right after booting the RPi, select Boot Maintenance Manager → Boot Options → Add Boot Option → Your SD card → Your ESP partition → EFI → debian → shimaa64.efi. Choose a creative name for your boot entry (eg: "debian"), save and exit the firmware interface. Bookworm should be booting fine at this point!

Enabling Secure Boot

Although the TianoCore firmware does support Secure Boot, there are no keys enrolled by default. To add the required keys, copy PK-0001.der, DB-0001.der, DB-0002.der, KEK-0001.der, and KEK-0002.der to a FAT32 formatted USB stick.

Here’s a summary of the Subject field for each of the above:

PK-0001.der.pem
        Subject: O = Debian, CN = Debian UEFI Secure Boot (PK/KEK key), emailAddress = debian-devel@lists.debian.org
DB-0001.der.pem
        Subject: C = US, ST = Washington, L = Redmond, O = Microsoft Corporation, CN = Microsoft Windows Production PCA 2011
DB-0002.der.pem
        Subject: C = US, ST = Washington, L = Redmond, O = Microsoft Corporation, CN = Microsoft Corporation UEFI CA 2011
KEK-0001.der.pem
        Subject: O = Debian, CN = Debian UEFI Secure Boot (PK/KEK key), emailAddress = debian-devel@lists.debian.org
KEK-0002.der.pem
        Subject: C = US, ST = Washington, L = Redmond, O = Microsoft Corporation, CN = Microsoft Corporation KEK CA 2011

Plug the stick into the RPi, boot and enter the firmware interface with ESC. Select Device ManagerSecure Boot ConfigurationSecure Boot Mode → choose Custom ModeCustom Secure Boot OptionsPK OptionsEnroll PK → choose PK-0001.der. Do the same for DB Options, this time choose DB-0001.der and DB-0002.der. As you may have guessed by now, the same must be done for KEK Options, but adding KEK-0001.der and KEK-0002.der. Save, exit, reboot. If everything went well, your RPi now has booted with Secure Boot enabled.

See https://wiki.debian.org/SecureBoot for the details on how to check whether Secure Boot has been enabled correctly and much more.
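
As a quick first check from the running system, mokutil can report the current state (assuming the mokutil package is installed):

$ mokutil --sb-state
SecureBoot enabled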

04 May, 2023 11:29AM by Emanuele Rocca (ema@linux.it)

Valhalla's Things

Linen Slippers

Posted on May 4, 2023

A pair of espadrille-like slippers in white fabric.

I hate going out to buy shoes. Even more so I hate buying home shoes, which is what I spend most of my life in, also because no matter what I buy they seem to disintegrate after a season or so. So, obviously, I’ve been on a quest to make my own.

As a side note, going barefoot (with socks) would only move the wear issue to the socks, so it’s not really a solution, and going bare barefoot on ceramic floors is not going to happen, kaythanksbye.

For the winter I’m trying to make knit and felted slippers; I’ve had partial success, and they should be pretty easy to mend (I’ve just had to do the first mend, with darning and needle felting, and it seems to have worked nicely).

For the summer, I’ve been thinking of something sewn, and with the warm season approaching (and the winter slippers needing urgent repairs) I decided it was time to work on them.

I already had a shaped (left/right) pattern for a sole from my hiking sandals attempts (a topic for another post), so I started by drafting a front upper, and then I started to get an espadrille feeling and decided that a heel guard was needed.

As for fabric, looking around in the most easily accessible part of the Stash I’ve found the nice heavyweight linen I’m using for my Augusta Stays, of which I still have a lot and which looked almost perfect except for one small detail: it’s very white.

I briefly thought about dyeing, but I wanted to start sewing NOW to test the pattern, so, yeah, maybe it will happen one day, or maybe I’ll have patchy dust-grey slippers. If I’ll ever have a place where I can do woad dyeing a blue pair will happen, however.

Contrary to typical espadrilles, I decided to have a full lining, and some padding between the lining and the sole, using cotton padding leftovers from my ironing board.

To add some structure I also decided to add a few rows of cording (and thus make the uppers in two layers of fabric), to help prevent everything from collapsing flat.

As for the sole, that’s something that is still causing me woes: I do have some rubber sole sheets (see “hiking sandals” above), but I suspect that they require glueing, which I’m not sure would work well with the natural fabric uppers and will probably make repairs harder to do.

In the past I tried to make some crocheted rope soles and they were a big failure: they felt really nice on the foot, but they also self-destructed in a matter of weeks, which is not really the kind of sole I’m looking for.

the slippers with the braided soles on top.

Now I have some ~ 3 mm twine that feels much harsher on the hands while working it (and would probably feel harsher on the feet, but that’s what the lining and padding are for), so I hope it may be a bit more resistant, and I tried to make a braided rope sole.

Of course, I have published the pattern and instructions for the slippers as well as those for the braided rope sole as #FreeSoftWear.

Now what is left is trying everything under daily use, and I hope I will have updates on this at the end of the season, rather than soon :D

04 May, 2023 12:00AM

May 03, 2023

John Goerzen

Martha the Pilot

Martha, now 5, can’t remember a time when she didn’t fly periodically. She’s come along in our airplane on short flights to a nearby restaurant and on long ones to Michigan and South Dakota. All this time, she’s been riding in the back seat next to Laura.

Martha has been talking excitedly about riding up front next to me. She wants to “be my co-pilot”. I promised to give her an airplane wing pin when she did — one I got from a pilot of a commercial flight when I was a kid. Of course, safety was first, so I wanted to be sure she was old enough to fly there without being a distraction.

Last weekend, the moment finally arrived. She was so excited! She brought along her “Claire bear” aviator, one that I bought for her at an airport a little while back. She buckled in two of her dolls in the back seat.

Martha's dolls

And then up we went!

Martha in the airplane

Martha was so proud when we landed! We went to Stearman Field, just a short 10-minute flight away, and parked the plane right in front of the restaurant.

We flew back, and Martha thought we should get a photo of her standing on the wing by the door. Great idea!

Martha standing on the wing

She was happily jabbering about the flight all the way home. She told us several times about the pin she got, watching out the window, watching all the screens in the airplane, and also that she didn’t get sick at all despite some turbulence.

And, she says, “Now just you and I can go flying!”

Yes, that’s something I’m looking forward to!

03 May, 2023 12:18PM by John Goerzen

May 02, 2023

hackergotchi for Neil Williams

Neil Williams

Carrying Grief

This isn't a book review, although the reason that I am typing this now is because of a book, You Are Not Alone: from the creator and host of Griefcast, Cariad Lloyd, ISBN: 978-1526621870 and I include a handful of quotes from Cariad where there is really no better way of describing things.

Many people experience death for the first time as a child, often relating to a family pet. Death is universal but every experience of death is unique. One of the myths of grief is the idea of the Five Stages but this is a misinterpretation. Denial, Anger, Bargaining, Depression and Acceptance represent the five stage model of death and have nothing to do with grief. The five stages were developed from studying those who are terminally ill, the dying, not those who then grieve for the dead person and have to go on living without them. Grief is for those who loved the person who has died and it varies between each of those people just as people vary in how they love someone. The Five Stages end at the moment of death, grief is what comes next and most people do not grieve in stages, it can be more like a tangled knot.

Death has a date and time, so that is why the last stage of the model is Acceptance. Grief has no timetable, those who grieve will carry that grief for the rest of their lives. Death starts the process of grief in those who go on living just as it ends the life of the person who is loved. "Grief eases and changes and returns but it never disappears.".

I suspect many will have already stopped reading by this point. People do not talk about death and grief enough and this only adds to the burden of those who carry their grief. It can be of enormous comfort to those who have carried grief for some time to talk directly about the dead, not in vague pleasantries but with specific and strong memories. Find a safe place without distractions and talk with the person grieving face to face. Name the dead person. Go to places with strong memories and be there alongside. Talk about the times with that person before their death. Early on, everything about grief is painful and sad. It does ease but it remains unpredictable. Closing it away in a box inside your head (as I did at one point) is like cutting off a damaged limb but keeping the pain in a box on the shelf. You still miss the limb and eventually, the box starts leaking.

For me, there were family pets which died but my first job out of university was to work in hospitals, helping the nurses manage the medication regimen and providing specialist advice as a pharmacist. It will not be long in that environment before everyone on the ward gets direct experience of the death of a person. In some ways, this helped me to separate the process of death from the process of grief. I cared for these people as patients but these were not my loved ones. Later, I worked in specialist terminal care units, including providing potential treatments as part of clinical trials. Here, it was not expected for any patient to be discharged alive. The more aggressive chemotherapies had already been tried and had failed, this was about pain relief, symptom management and helping the loved ones. Palliative care is not just about the patient, it involves helping the loved ones to accept what is happening as this provides comfort to the patient by closing the loop.

Grief is stressful. One of the most common causes of personal stress is bereavement. The death of your loved one is outside of your control, it has happened, no amount of regret can change that. Then come all the other stresses, maybe about money or having somewhere to live as a result of what else has changed after the death or having to care for other loved ones.

In the early stages, the first two years, I found it helpful to imagine my life as a box containing a ball and a button. The button triggers new waves of pain and loss each time it is hit. The ball bounces around the box and hits the button at random. Initially, the button is large and the ball is enormous, so the button is hit almost constantly. Over time, both the button and the ball change size. Starting off at maximum, initially there is only one direction of change. There are two problems with this analogy. First is that the grief ball has infinite energy which does not happen in reality. The ball may get smaller and the button harder to hit but the ball will continue bouncing. Secondly, the life box is not a predictable shape, so the pattern of movement of the ball is unpredictable.

A single stress is one thing, but what has happened since has just kept adding more stress for me. Shortly before my father died 5 years ago now, I had moved house. Then, I was made redundant on the day of the first anniversary of my father's death. A year or so later, my long term relationship failed and a few months after that COVID-19 appeared. As the country eased out of the pandemic in 2021, my mother died (unrelated to COVID itself). A year after that, I had to take early retirement. My brother and sister, of course, share a lot of those stressors. My brother, in particular, took the responsibility for organising both funerals and did most of the visits to my mother before her death. The grief is different for each of the surviving family.

Cariad's book helped me understand why I was getting frequent ideas about going back to visit places which my father and I both knew. My parents encouraged each of us to work hard to leave Port Talbot (or Pong Toilet locally) behind, in no small part due to the unrestrained pollution and deprivation that is common to small industrial towns across Wales, the midlands and the north of the UK. It wasn't that I wanted to move house back to our ancestral roots. It was my grief leaking out of the box. Yes, I long for mountains and the sea because I'm now living in a remorselessly flat and landlocked region after moving here for employment. However, it was my grief driving those longings - not for the physical surroundings but out of the shared memories with my father. I can visit those memories without moving house, I just need to arrange things so that I can be undisturbed and undistracted.

I am not alone with my grief and I am grateful to my friends who have helped whilst carrying their own grief. It is necessary for everyone to think and talk about death and grief. In respect of your own death, no matter how far ahead that may be, consider Advance Care Planning and Expressions of Wish as well as your Will.

  • Help your loved ones cope with your death by describing what you would like to happen.

  • Document how your life has been arranged so that the executor of your Will can find the right documents to inform:

    • your bank,
    • your mortgage company,
    • your energy company,
    • your mobile phone company,
    • your house and car insurers and the like.
  • If you've got a complex home setup with servers and other machines which would be unfamiliar to the executor of your Will, then entrust someone else with the information required to revoke your keys, access your machines etc. and then provide the contact information for that person to your executor.

  • Arrange for your pets to be looked after.

  • Describe how you would like your belongings to be handled - do you want every effort made to have your clothes and furnishing recycled instead of going to landfill?

  • Where are the documents for the oven, the dishwasher and the central heating system so that these can be included in the sale of your property?

  • If there are loans outstanding, make sure your executor or a trusted person knows where to find the account numbers and company names.

  • What about organ donation? Make sure your executor knows your wishes and make sure your loved ones either agree or are willing to respect your wishes.

  • Then the personal stuff, what do you want to happen to your social media accounts, your cloud data, your games, DVD and CD collection, your photos and other media? Some social media companies have explicit settings available in your account to describe if you want the data deleted after a certain amount of time, after a notification from some government service or to set up some kind of memorialised version or hand over control of the account to a trusted person.

Talk to people, document what you want. Your loved ones will be grateful and they deserve that much whilst they try to cope with the first onslaught of grief. Talk to your loved ones and get them to do the same for themselves.

Normalise talking about death with your family, especially children. None of us are getting out of this alive and we will all leave behind people who will grieve.

02 May, 2023 12:45PM by Neil Williams

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RQuantLib 0.4.18 on CRAN: Maintenance

A new release 0.4.18 of RQuantLib arrived at CRAN earlier today, and will be uploaded to Debian as well.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

This release of RQuantLib comes about six months after the previous maintenance release. It brings a few small updates triggered by small changes in the QuantLib releases 1.29 and 1.30. It also contains updates reflecting changes in the rgl package kindly contributed by Duncan Murdoch. Last but not least, Jeroen Ooms helped with two pull requests updating builds on, respectively, macOS and Windows by updating the pre-made libraries of QuantLib.

Changes in RQuantLib version 0.4.18 (2023-05-01)

  • Use of several rgl functions was updated to a new naming scheme in the package (kindly contributed by Duncan Murdoch in #174)

  • A default argument is now given for option surface plots

  • Changed call from SwaptionVolCube1 to SabrSwaptionVolatilityCube (conditional on using QuantLib 1.30 or later)

  • Some other deprecation warnings were tweaked as in QL test file

  • Builds for macOS and Windows were updated with more library builds (changes kindly contributed by Jeroen Ooms in #176 and #175)

  • Some remaining throw calls were replaced by Rcpp::stop

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 May, 2023 12:48AM

May 01, 2023

hackergotchi for Gunnar Wolf

Gunnar Wolf

Scanning heaps of 8mm movies

After my father passed away, I brought home most of the personal items he had, both at home and at his office. Among many, many (many, many, many) other things, I brought two of his personal treasures: His photo collection and a box with the 8mm movies he shot approximately between 1956 and 1989, when he was forced into modernity and got a portable videocassette recorder.

I have talked with several friends, as I really want to get it all in a digital format, and while I’ve been making slow but steady advances scanning the photo reels, I was particularly dismayed (even though it was most expected — most personal electronic devices aren’t meant to last over 50 years) to find out the 8mm projector was no longer in working condition; the lamp and the fans work, but the spindles won’t spin. Of course, it is quite likely easy to fix, but it is beyond my tinkering abilities… and finding photographic equipment repair shops is no longer easy. Anyway, even if I got it fixed, filming a movie from a screen, even with a decent camera, is a lousy way to get it digitized.

But almost by mere chance, I got in contact with my cousin Daniel, who came to Mexico to visit his parents, and had precisely brought with him… an 8mm/Super8 movie scanner! It is a much simpler piece of equipment than I had expected, and while it does present some minor glitches (i.e. the vertical framing slightly loses alignment over the course of a medium-length film scanning session, and no adjustment is possible while the scan is ongoing), this is something that can be decently fixed in post-processing, and a scanning session can be split with no ill effects. Anyway, it is quite uncommon that a mid-length (5min) film can be done without interruption, e.g. to join a splice, mostly given my father didn’t just film, but also edited a lot (that is, it’s not just family pictures, but all different kinds of fiction and documentary work he did).

So, Daniel lent me a great, brand new, entry-level film scanner; I rushed to scan as many movies as possible before his return to the USA this week, but he insisted he bought it to help preserve our family’s memory, and given several of us cousins still live in Mexico, I could keep hold of it so that any of the others will find it more easily. Of course, I am thankful and delighted!

So, this equipment is a Magnasonic FS81. It is entry-level, as it lacks some adjustment abilities a professional one would surely have, and I’m sure a better scanner will make the job faster – but it’s infinitely superior to not having it!

The scanner processes roughly two frames per second (while the nominal 8mm/Super8 speed is 24 frames per second, a 12:1 ratio), so a 3 minute film reel takes a bit over 35 minutes… and a long, ~20 minute film reel takes… close to 4 hours, if nothing gets in your way :-Þ And yes, with longer reels, the probability of a splice breaking is way higher than with a short one — not only because there is simply a longer film to process, but also because, both at the unwinding and at the receiving reels, mechanics play their roles.

The films don’t advance smoothly, but jump to position each frame in the scanner’s screen, so every bit of film gets its fair share of gentle tugs.

My professional consultant on how and what to do is my good friend Chema Serralde, who has stopped me from doing several things I would regret later otherwise (such as joining spliced tapes with acidic chemical adhesives such as Kola Loka, a.k.a. Krazy Glue — even if it’s a bit trickier to do, he insisted I use simple transparent tape if I’m not buying fancy things such as film adhesive). Chema also explained to me the importance of the loopers (las Lupes in his technical Spanish translation), which I feared increased the likelihood of breaking a bit of old glue due to the angle at which the film gets pulled… but if they are skipped, the result is films with too much jumping.

Not all of the movies I have are for public sharing — some of them are “just” family movies, with high personal value, but probably of very little interest to others. But some are! I have been uploading some of the movies, after minor post-processing, to the Internet Archive. Among them:

Anyway, I have a long way forward for scanning. I have 20 3min reels, 19 5min reels, and 8 20min reels. I want to check the scanning quality, but I think my 20min reels are mostly processed (we paid for scanning them some years ago). I mostly finished the 3min reels, but might have to go over some of them again due to the learning process.

And… Well, I’m having quite a bit of fun in the process!

01 May, 2023 11:21PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

May.

May. Doing some rust stuff and maintenance of existing C++ code. Doing something that I can feel improves the codebase is nice.

01 May, 2023 07:24AM by Junichi Uekawa

Russ Allbery

Review: The Amazing Maurice and His Educated Rodents

Review: The Amazing Maurice and His Educated Rodents, by Terry Pratchett

Series: Discworld #28
Publisher: HarperCollins
Copyright: 2001
Printing: 2008
ISBN: 0-06-001235-8
Format: Mass market
Pages: 351

The Amazing Maurice and His Educated Rodents is the 28th Discworld novel and the first marketed for younger readers. Although it has enough references to establish it as taking place on Discworld, it has no obvious connections with the other books and doesn't rely on any knowledge of the series so far. This would not be a bad place to start with Terry Pratchett and see if his writing style and sense of humor is for you.

Despite being marketed as young adult, and despite Pratchett's comments in an afterword in the edition I own that writing YA novels is much harder, I didn't think this was that different than a typical Discworld novel. The two main human characters read as about twelve and there were some minor changes in tone, but I'm not sure I would have immediately labeled it as YA if I hadn't already known it was supposed to be. There are considerably fewer obvious pop culture references than average, though; if that's related, I think I'll prefer Pratchett's YA novels, since I think his writing is stronger when he's not playing reference bingo.

Maurice (note to US readers: Maurice is pronounced "Morris" in the UK) is a talking cat and the mastermind of a wandering con job. He, a stupid-looking kid with a flute (Maurice's description), and a tribe of talking rats travel the small towns of Discworld. The rats go in first, making a show of breaking into the food, swimming in the cream, and widdling on things that humans don't want widdled on. Once the townspeople are convinced they have a plague of rats, the kid with the flute enters the town and offers to pipe the rats away for a very reasonable fee. He plays his flute, the rats swarm out of town, and they take their money and move on to the next town. It's a successful life that suits Maurice and his growing hoard of gold very well. If only the rats would stop asking pointed questions about the ethics of this scheme.

The town of Bad Blintz is the next on their itinerary, and if the rats have their way, will be the last. Their hope is they've gathered enough money by now to find an island, away from humans, where they can live their own lives. But, as is always the case for one last job in fiction, there's something uncannily wrong about Bad Blintz. There are traps everywhere, more brutal and dangerous ones than they've found in any other town, and yet there is no sign of native, unintelligent rats.

Meanwhile, Maurice and the boy find a town that looks wealthy but has food shortages, a bounty on rats that is absurdly high, and a pair of sinister-looking rat-catchers who are bringing in collections of rat tails that look suspiciously like bootlaces. The mayor's daughter discovers Maurice can talk and immediately decides she has to take them in hand. Malicia is very certain of her own opinions, not accustomed to taking no for an answer, and is certain that the world follows the logic of stories, even if she has to help it along.

This is truly great stuff. I think this might be my favorite Discworld novel to date, although I do have some criticisms that I'll get to in a moment.

The best part are the rats, and particularly the blind philosopher rat Dangerous Beans and his assistant Peaches. In the middle of daring infiltration of the trapped sewers in scenes reminiscent of Mission: Impossible, the rats are also having philosophical arguments. They've become something different than the unaltered rats that they call the keekees, but what those differences mean is harder to understand. The older rats are not happy about too many changes and think the rats should keep acting like rats. The younger ones are discovering that they're afraid of shadows because now they understand what the shadows hint at. Dangerous Beans is trying to work out a writing system so that they can keep important thoughts. One of their few guides is a children's book of talking animals, although they quickly discover that the portrayed clothing is annoyingly impractical.

But as good as the rats are, Maurice is nearly as much fun in an entirely different way. He is unapologetically out for himself, streetwise and canny in a way that feels apt for a cat, gets bored and mentally wanders off in the middle of conversations, and pretends to agree with people when that's how he can get what he wants. But he also has a weird sense of loyalty and ethics that only shows up when something is truly important. It's a variation on the con man with a heart of gold, but it's a very well-done variation that weaves in a cat's impatience with and inattention to anything that doesn't directly concern them. I was laughing throughout the book.

Malicia is an absolute delight, the sort of character who takes over scenes through sheer force of will, and the dumb-looking kid (whose name turns out to be Keith) is a perfect counterbalance: a laid-back, quiet boy who just wants to play his music and is almost entirely unflappable. It's such a great cast.

The best part of the plot is the end. I won't spoil it, so I'll only say that Pratchett has the characters do the work on the aftermath that a lot of books skip over. He doesn't have any magical solutions for the world's problems, but he's so very good at restoring one's faith that maybe sometimes those solutions can be constructed.

My one complaint with this book is that Pratchett introduces a second villain, and while there are good in-story justifications for it and it's entangled with the primary plot, he added elements of (mild) supernatural horror and evil that I thought were extraneous and unnecessary. He already had enough of a conflict set up without adding that additional element, and I think it undermined the moral complexity of the story. I would have much rather he kept the social dynamics of the town at the core of the story and used that to trigger the moments of sacrifice and philosophy that made the climax work.

The Discworld books by this point have gotten very good, but each book seems to have one element like that where it felt like Pratchett took the easy way out of a plot corner or added some story element that didn't really work. I feel like the series is on the verge of having a truly great book that rises above the entire series to date, but never quite gets there.

That caveat aside, I thoroughly enjoyed this and had trouble putting it down. Mrs. Frisby and the Rats of Nimh was one of my favorite books as a kid, and this reminded me of it in some good ways (enough so that I think some of the references were intentional). Great stuff. If you were to read only one Discworld book and didn't want to be confused by all the entangled plot threads and established characters, I would seriously consider making it this one. Recommended.

Followed by Night Watch in publication order. There doesn't appear to be a direct plot sequel, more's the pity.

Rating: 8 out of 10

01 May, 2023 04:03AM

Paul Wise

FLOSS Activities April 2023

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

  • Security issue in secret manager (sent privately)
  • Broken symlinks in opencpn
  • Test problem in SPTAG
  • Debian migration unblock needed for evolution

Review

Administration

  • Debian IRC: fixed the #debian-mips channel topic
  • Debian wiki: unblock IP addresses, approve accounts
  • Debian QA services: deploy changes, investigate SourceForge uscan issue

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The SPTAG work was sponsored. All other work was done on a volunteer basis.

01 May, 2023 12:42AM

April 30, 2023

Russell Coker

hackergotchi for Matthew Palmer

Matthew Palmer

dev-dependencies and Rust's unused_crate_dependencies lint

I’m in the process of getting super-strict about the code quality of cretrit, the comparison-revealing encryption library that underlies the queryable encryption of the Enquo project. While I’m going to write a whole big thing about Rust linting in the future, I bumped across a rather gnarly problem that I thought was worth sharing separately. The problem, in short, is that the unused_crate_dependencies lint interacts badly with crates that are only needed for benchmarking, such as (in my case) criterion.

Rust has a whole bucketload of “lints” that can help your codebase adhere to certain standards, by warning (or exploding) if the problem is detected. The unused_crate_dependencies lint, as the name suggests, gets snippy when there’s a crate listed in your Cargo.toml that doesn’t appear to be used anywhere.

All well and good so far. However, while Rust has the ability to specify “crates needed for running the testsuite” (the [dev-dependencies] section of Cargo.toml) separately from “crates needed for actually using this thing” ([dependencies]), it doesn’t have a way to specify “crates needed for running the benchmarks”. That is a problem when you’re using something like criterion for benchmarking, because it doesn’t get referred to at all in your project code – it only gets used in the benchmarks.
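
For reference, a typical criterion setup along these lines lives entirely in the [dev-dependencies] section of Cargo.toml (the version number and bench name here are illustrative):

[dev-dependencies]
criterion = "0.4"

[[bench]]
name = "comparison"   # i.e. benches/comparison.rs
harness = false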

When building your codebase for running the test suite, the compiler sees that you’ve specified criterion as one of your “testsuite dependencies”, but it never gets used in your testsuite. This causes the unused_crate_dependencies lint to lose its tiny mind, and make your build look ugly (or fail).

Thankfully, the solution is very simple, once you know the trick (this could be the unofficial theme song of the entire Rust ecosystem). You need to refer to the criterion crate somewhere in the code that gets built during the testsuite. The lint tells you most of what you need to do (like most things Rust, it tries hard to be helpful), but since it’s a development dependency, you need a little extra secret sauce.

All you need to do is add these two lines to the bottom of your src/lib.rs (or src/main.rs for a binary crate):

#[cfg(test)]
use criterion as _;

For the less Rust-literate, this means “when the build-time configuration flag test is set, import the criterion crate, but don’t, like, allow it to actually be referred to”. This is enough to make the lint believe that the dependency is being used, and making it only happen when the test build-time config flag is set avoids the ugliness of it trying to refer to the crate during regular builds (which would fail, because criterion is only a dev-dependency).

Simple? Yes. Did it take me a lot of skull-sweat to figure out? You betcha. That’s why I’m writing it down – even if everyone else already knows this, at least future Matt will find this post next time he trips over this problem.

30 April, 2023 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

Valhalla's Things

Programming the ESP32-C3-DevKit-Lipo with Arduino

Posted on April 30, 2023

A few months ago we may have bought a few ESP32-C3-DevKit-Lipo boards from Olimex.

Since every time I go back to working with them I’ve forgotten how to do so, and my old notes on the fediverse are hard to find, this is the full procedure.

Setup

I start by sort-of-following https://docs.espressif.com/projects/arduino-esp32/en/latest/installing.html

  • Install arduino from the distribution packages (version 1.8 is ok).
  • Under File → Preferences, add the development URL to the Additional Boards Manager URLs field. (on 2023-04-30 that’s https://espressif.github.io/arduino-esp32/package_esp32_dev_index.json).
  • Under Tools → Board → Boards Manager make sure that you install a version of esp32 by Espressif Systems that is above 2.0 (on 2023-04-30 there is a 2.0.8 that works).

Programming

  • Under Tools → Board → ESP32 Arduino select ESP32C3 Dev Module.
  • Under Tools → USB CDC On Boot select Enabled.

You can now compile and upload your sketches.

If something goes wrong, to force the board into bootloader mode, bring GPIO9 to GND.

Note that the serial port device /dev/ttyACM0 only appears when in bootloader mode, but uploading sketches and the serial monitor will still work even if the port is not set in the arduino IDE.

30 April, 2023 12:00AM

April 29, 2023

Andrew Cater

And it's now after 2100 - so the unexpurgated version

 We're all but done - a couple of bugs sorted. All testing complete.



29 April, 2023 09:23PM by Andrew Cater (noreply@blogger.com)

Russ Allbery

INN 2.7.1

This is a bug fix and minor feature release over INN 2.7.0, and the upgrade should be painless. You can download the new release from ISC or my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

As of this release, we're no longer generating hashes and signed hashes. Instead, the release is a simple tarball and a detached GnuPG signature, similar to my other software releases. We're also maintaining the releases in parallel on GitHub.

For the full list of changes, see the INN 2.7.1 NEWS file.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

29 April, 2023 05:18PM

Andrew Cater

Debian Bullseye 11.7 release - testing going on - 202304291427

 We've been joined by Simon (smcv) - lots of chat bouncing backwards and forwards. Laptops appearing out of backpacks suddenly being repurposed.

Settling very much into a rhythm and routine.

Working with two laptops on your lap ends up being quite heavy :)

29 April, 2023 02:27PM by Andrew Cater (noreply@blogger.com)

Russell Coker

Write a blog post in the style of Russell Coker

Feeling a bit bored I asked ChatGPT “Write a blog post in the style of Russell Coker” and the result is in the section below. I don’t know if ChatGPT knows that the person asking the question is the same as the person being asked about. If a human had created that I’d be certain that “great computer scientist and writer” was an attempt at flattery, for a machine I’m not sure.

I have not written a single book, but I expect that in some alternate universe some version of me has written several. I don’t know if humans would describe my writing as being known for “clarity, precision, and depth”. I would not be surprised if “no-one else wrote about it so I guess I’m forced to read what he wrote” would be a more common response.

The actual “article” part doesn’t seem to be in my style at all. Firstly, it’s very short at only 312 words; while I have written some short posts, most of them are much longer. To find this out I did some MySQL queries to get the lengths of posts (I used this blog post as inspiration [1]). Note that multiple sequential spaces count as multiple words.

# get post ID and word count
SELECT id, LENGTH(post_content) - LENGTH(REPLACE(REPLACE(REPLACE(REPLACE(post_content, "\r", ""), "\n", ""), "\t", ""), " ", "")) + 1 AS wordcount FROM wp_posts where post_status = 'publish' and post_type='post';
# get average word count
SELECT avg(LENGTH(post_content) - LENGTH(REPLACE(REPLACE(REPLACE(REPLACE(post_content, "\r", ""), "\n", ""), "\t", ""), " ", "")) + 1) FROM wp_posts where post_status = 'publish' and post_type='post';
# get the first posts by length
SELECT id, LENGTH(post_content) - LENGTH(REPLACE(REPLACE(REPLACE(REPLACE(post_content, "\r", ""), "\n", ""), "\t", ""), " ", "")) + 1 AS wordcount, post_content FROM wp_posts where post_status = 'publish' and post_type='post' ORDER BY wordcount limit 10;
# get a count of the posts less than 312 words
SELECT count(*) from wp_posts where (LENGTH(post_content) - LENGTH(REPLACE(REPLACE(REPLACE(REPLACE(post_content, "\r", ""), "\n", ""), "\t", ""), " ", "")) + 1) < 312 and post_status = 'publish' and post_type='post';
# get a count of all posts
select count(*) from wp_posts where post_status = 'publish' and post_type='post';

It turns out that there are 333/1521 posts that are less than 312 words and the average length is 665 words. Of the shortest posts a large portion were written before Twitter became popular and had the sort of unimportant content that Twitter is good for.

It seems ironic that ChatGPT says that I'm known for "depth" and then writes a short post allegedly in my style.

As for the actual content of the "post", it's not something that I greatly disagree with, but it's definitely not something I'd write. If some manager or marketing person wrote something like this and asked my opinion, I'd probably suppress a sigh and say it's OK for its target market.

I think that free software is better in many ways and for many reasons and that separately from the issue of merit free software is important because of the ways that it is used. We could debate various issues related to the quality of free software vs proprietary software but even people who don't believe that free software is of high quality have to admit that software that runs most mobile phones and most web servers is important. As much as I dislike Windows most desktop PCs and laptops run it so I can't claim it to be unimportant. So my first disagreement with the content is the issue of importance vs quality.

One thing that ChatGPT is known for is weak arguments that appear convincing. "This diversity often leads to better software, as bugs are found and fixed faster, and new features are added more quickly" could be opposed by "the payment for proprietary software often leads to better software, as bugs are found and fixed faster, and new features are added more quickly". I think that any case where "feature of option A gives this list of good results" can be rewritten as "feature of the opposite of option A gives the same list of good results" and be equally convincing is a poor argument. Also any argument that seems to rest on "free software is all written by hobbyists" is ignoring the demands of users who need things done on a non-hobby timeframe, the many profitable companies that develop and support free software, and the employment status of a large portion of my Facebook friends (many of whom are paid for ~2000 hours a year of free software development and many more of whom have paid free software development as part of their work supporting the use of free software). Really the entire blog post it wrote seems to me like something a human might write if their boss assigned them to write about the benefits of free software, and that they would then write a new article saying the exact opposite if their boss asked them to.

Finally I prefer the term "free software" to "open source software". The difference doesn't seem important to some people, but any being that doesn't know the difference won't do well at impersonating me.

ChatGPT Result

Hello everyone, I'm ChatGPT, and today I'm going to try my best to write a blog post in the style of the great computer scientist and writer, Russell Coker.

Russell Coker is a renowned software engineer and writer, who is best known for his contributions to the Linux community, especially the Debian project. He has authored several books and articles on topics ranging from security to software engineering, and his writing style is known for its clarity, precision, and depth.

So, without further ado, let's dive into the topic of this blog post, which is the importance of open-source software.

Open-source software is software whose source code is available for anyone to inspect, modify, and distribute. This means that anyone can use, study, and improve the software, without any restrictions or fees. Open-source software has been around for a long time, but it has become more prevalent and important in recent years, thanks to the rise of the internet and the digital economy.

Open-source software has several benefits, both for individuals and for society as a whole. One of the most significant benefits is that it fosters innovation and collaboration. When software is open-source, it means that anyone can contribute to it, which leads to a diverse range of perspectives and ideas. This diversity often leads to better software, as bugs are found and fixed faster, and new features are added more quickly.

Another benefit of open-source software is that it is often more secure than proprietary software. Because the source code is available for anyone to inspect, vulnerabilities and security flaws can be found and fixed faster. This means that open-source software is often more reliable and trustworthy than proprietary software, which may have hidden backdoors or vulnerabilities that can be exploited by attackers.

Open-source software is also often more cost-effective than proprietary software. Because it is free to use and distribute, organizations can save money on licensing fees and other costs associated with proprietary software. This can be particularly important for non-profit organizations or small businesses that may not have the resources to invest in expensive software licenses.

In conclusion, open-source software is an important part of the digital landscape, and its benefits are numerous. Whether you are a software developer, a business owner, or an individual user, open-source software has something to offer. So, the next time you are looking for software to use, consider open-source options, and help support the collaborative and innovative spirit of the open-source community.

29 April, 2023 02:18PM by etbe

Enrico Zini

Gtk4 model-backed radio button in Python

Gtk4 has interesting ways of splitting models and views. One that I didn't find very well documented, especially for Python bindings, is a set of radio buttons backed by a common model.

The idea is to define an action that takes a string as a state. Each radio button is assigned a string matching one of the possible states, and when the state of the backend action is changed, the radio buttons are automatically updated.

All the examples below use a string for a value type, but anything can be used that fits into a GLib.Variant.

The model

This defines the action. Note that this enables all the usual declarative ways of handling a state change:

mode = Gio.SimpleAction.new_stateful(
        name="mode-selection",
        parameter_type=GLib.VariantType("s"),
        state=GLib.Variant.new_string(""))
gtk_app.add_action(mode)

The view

def add_radio(model: Gio.SimpleAction, id: str, label: str):
    button = Gtk.CheckButton(label=label)

    # Tell this button to activate when the model has the given value
    button.set_action_target_value(GLib.Variant.new_string(id))

    # Build the name under which the action is registered, plus the state
    # value controlled by this button: clicking the button will set this state
    detailed_name = Gio.Action.print_detailed_name(
            "app." + model.get_name(),
            GLib.Variant.new_string(id))
    button.set_detailed_action_name(detailed_name)

    # If the model has no current value set, this sets the first radio button
    # as selected
    if not model.get_state().get_string():
        model.set_state(GLib.Variant.new_string(id))

    # Return the button so the caller can add it to a container
    return button

Accessing the model

To read the currently selected value:

current = model.get_state().get_string()

To set the currently selected value:

model.set_state(GLib.Variant.new_string(id))
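Putting it together, a minimal usage sketch (hypothetical wiring: the box container and the mode names are my assumptions, and it relies on add_radio() returning the button as above):

# Hypothetical: box is some Gtk.Box the buttons should live in
for mode_id, mode_label in [("a", "Mode A"), ("b", "Mode B"), ("c", "Mode C")]:
    box.append(add_radio(mode, mode_id, mode_label))

# For a stateful SimpleAction, connecting "change-state" lets us react to
# clicks; we then have to apply the new state ourselves with set_state()
def on_change_state(action: Gio.SimpleAction, value: GLib.Variant):
    action.set_state(value)
    print("selected:", value.get_string())

mode.connect("change-state", on_change_state)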

29 April, 2023 02:03PM

Andrew Cater

Debian Bullseye 11.7 release weekend 202304291215UTC

 A switch failure early on: some quick changes of wiring and we're off. The room is very quiet for a few minutes then a burst of chatter, then on we go.

Just the noise of keyboards and quiet concentration

29 April, 2023 11:46AM by Andrew Cater (noreply@blogger.com)

Simon Josefsson

How To Trust A Machine

Let’s reflect on some of my recent work that started with understanding Trisquel GNU/Linux, improving transparency into apt-archives, working on reproducible builds of Trisquel, strengthening verification of apt-archives with Sigstore, and finally thinking about security device threat models. A theme in all this is improving methods to have trust in machines, or generally any external entity. While I believe that everything starts by trusting something, usually something familiar and well-known, we need to deal with misuse of that trust that leads to failure to deliver what is desired and expected from the trusted entity. How can an entity behave to invite trust? Let’s argue for some properties that can be quantitatively measured, with a focus on computer software and hardware:

  • Deterministic Behavior – given a set of circumstances, it should behave the same.
  • Verifiability and Transparency – the method (the source code) should be accessible for understanding (compare scientific method) and its binaries verifiable, i.e., it should be possible to verify that the entity actually follows the intended deterministic method (implying efforts like reproducible builds and bootstrappable builds).
  • Accountable – the entity should behave the same for everyone, and it should be possible to prove deviations in a way that is hard to deny, implying efforts such as Certificate Transparency and more generic checksum logs like Sigstore and Sigsum.
  • Liberating – the tools and documentation should be available as free software to enable you to replace the trusted entity if so desired. An entity that wants to restrict you from being able to replace the trusted entity is vulnerable to corruption and may stop acting trustworthy. This point of view reinforces that open source misses the point; it has become too common to use trademark laws to restrict re-use of open source software (e.g., firefox, chrome, rust).

Essentially, this boils down to: Trust, Verify and Hold Accountable. To put this dogma in perspective, it helps to understand that this approach may be harmful to human relationships (which could explain the social awkwardness of hackers), but it remains useful as a method to improve the design of computer systems, and a useful method to evaluate safety of computer systems. When a system fails some of the criteria above, we know we have more work to do to improve it.

How far have we come on this journey? Through earlier efforts, we are in a fairly good situation. Richard Stallman through GNU/FSF made us aware of the importance of free software, the Reproducible/Bootstrappable build projects made us aware of the importance of verifiability, and Certificate Transparency highlighted the need for accountable signature logs leading to efforts like Sigstore for software. None of these efforts would have seen the light of day unless people wrote free software and packaged them into distributions that we can use, and built hardware that we can run it on. While there certainly exists more work to be done on the software side, with the recent amazing full-source build of Guix based on a 357-byte hand-written seed, I believe that we are closing that loop on the software engineering side.

So what remains? Some inspiration for further work:

  • Accountable binary software distribution remains unresolved in practice, although we have some software components around (e.g., apt-sigstore and guix git authenticate). What is missing is using them for verification by default and/or to improve the signature process to use trustworthy hardware devices, and committing the signatures to transparency logs.
  • Trustworthy hardware to run trustworthy software on remains a challenge, and we owe FSF’s Respects Your Freedom program credit for raising awareness of this. Many modern devices require non-free software to work, which fails most of the criteria above and makes them inherently untrustworthy.
  • Verifying rebuilds of currently published binaries on trustworthy hardware is unresolved.
  • Completing a full-source rebuild from a small seed on trustworthy hardware remains, preferably on a platform wildly different than X86 such as Raptor’s Talos II.
  • We need improved security hardware devices and improved established practices on how to use them. For example, while Gnuk on the FST enables a trustworthy software and hardware solution, the best process for using it that I can think of generates the cryptographic keys on a more complex device. Efforts like Tillitis are inspiring here.

Onwards and upwards, happy hacking!

Update 2023-05-03: Added the “Liberating” property regarding free software, instead of having it be part of the “Verifiability and Transparency”.

29 April, 2023 11:45AM by simon

Andrew Cater

Back in Cambridge - Debian point release for Debian Bullseye due this weekend - 11.7

Back in Cambridge for a point release weekend. Lots of people turning up - it comes to something when large monitors and a desktop machine require two or three trips to the car and there's still a crate of leads to go.

As ever, lots of banter - computer renovations and updates were done yesterday - if they hadn't been, I'd have had at least another expert engineer on hand.

This is *definitely* the place to be rather than at the other end of an IRC chat.

This is *not* the release for Debian Bookworm: that will be on June 10th/11th - likely to be the same personnel, same place. There was a release of Debian Bookworm RC2 media yesterday, incorporating fixes that are up to date.

29 April, 2023 10:30AM by Andrew Cater (noreply@blogger.com)

Abhijith PA

Attending FOSSASIA 2023

I attended the FOSSASIA 2023 summit held at the Lifelong Learning Institute, Singapore. A 3-day-long conference filled with parallel talks. It's my second time attending FOSSASIA; the first one was the 2018 summit. Like last time, I didn't attend many talks but focused on networking with people. A lot of familiar faces there: PV Anthony, Harish, etc.

I volunteered to run the Debian booth at the exhibition hall, distributing stickers and flyers. Rajudev also helped me at the booth. Most of the people there used Debian or its derivatives or knew Debian already; it's easier for me that way, as I didn't have to do much explaining compared to other booths. Thanks to Parth for looking after the booth during my breaks.

debian_booth1 debian_booth2

Sometimes our booth also acted as a cloakroom :). Ours was close to the entrance door and we may have been familiar faces to folks, so people would come and drop their bags before going to talks.

One thing I love about such conferences is that people bring very different hardware that I am never able to see otherwise. I remember the KDE booth had a Valve Steam Deck portable gaming device running KDE Plasma. Then a person had eyeglasses which act as a monitor. Then the usual DJI drones, but custom programmed. It was very lovely to see and play around with exotic hardware.

Kurian, whom I met during a Debian packaging workshop in Kerala, was a speaker at FOSSASIA. He presented a talk titled “OpenAI Whisper and it’s amazing power to do finetuning”. I was his unofficial PR guy, taking pictures :).

Weather

Clear sky, a little hot and humid. The weather was quite nice for me except for the surprise rain and small thunderstorms. Compared to my place's temperature, it's wonderful.

29 April, 2023 06:53AM

April 28, 2023

Enrico Zini

Handling keyboard-like devices

CNC control panel and Bluetooth pedal page turner

I acquired some unusual input devices to experiment with, like a CNC control panel and a bluetooth pedal page turner.

These identify and behave like a keyboard, sending nice and simple keystrokes, and can be accessed with no drivers or other special software. However, their keystrokes appear together with keystrokes from normal keyboards, which is the expected default when plugging in a keyboard, but not what I want in this case.

I'd also like them to be readable via evdev and accessible by my own user.

Here's the udev rule I cooked up to handle this use case:

# Handle the CNC control panel
SUBSYSTEM=="input", ENV{ID_VENDOR}=="04d9", ENV{ID_MODEL}=="1203", \
   OWNER="enrico", ENV{ID_INPUT}=""

# Handle the Bluetooth page turner
SUBSYSTEM=="input", ENV{ID_BUS}=="bluetooth", ENV{LIBINPUT_DEVICE_GROUP}=="*/…mac…", ENV{ID_INPUT_KEYBOARD}="1" \
   OWNER="enrico", ENV{ID_INPUT}="", SYMLINK+="input/by-id/bluetooth-…mac…-kbd"
SUBSYSTEM=="input", ENV{ID_BUS}=="bluetooth", ENV{LIBINPUT_DEVICE_GROUP}=="*/…mac…", ENV{ID_INPUT_TABLET}="1" \
   OWNER="enrico", ENV{ID_INPUT}="", SYMLINK+="input/by-id/bluetooth-…mac…-tablet"

The bluetooth device didn't have standard rules to create /dev/input/by-id/ symlinks so I added them. In my own code, I watch /dev/input/by-id with inotify to handle when devices appear or disappear.

I used udevadm info /dev/input/event… to see what I could use to identify the device.

The Static device configuration via udev page of libinput's documentation describes the various elements specific to the input subsystem.

Grepping rule files in /usr/lib/udev/rules.d was useful to see syntax examples.

udevadm test /dev/input/event… was invaluable for syntax checking and testing my rule file while working on it.

Finally, this is an extract of a quick prototype Python code to read keys from the CNC control panel:

from typing import IO, Any, Iterator

import libevdev

KEY_MAP = {
    libevdev.EV_KEY.KEY_GRAVE: "EMERGENCY",
    # InputEvent(EV_KEY, KEY_LEFTALT, 1)
    libevdev.EV_KEY.KEY_R: "CYCLE START",

    libevdev.EV_KEY.KEY_F5: "SPINDLE ON/OFF",

    # InputEvent(EV_KEY, KEY_RIGHTCTRL, 1)
    libevdev.EV_KEY.KEY_W: "REDO",

    # InputEvent(EV_KEY, KEY_LEFTALT, 1)
    libevdev.EV_KEY.KEY_N: "SINGLE STEP",

    # InputEvent(EV_KEY, KEY_LEFTCTRL, 1)
    libevdev.EV_KEY.KEY_O: "ORIGIN POINT",

    libevdev.EV_KEY.KEY_ESC: "STOP",
    libevdev.EV_KEY.KEY_KPPLUS: "SPEED UP",
    libevdev.EV_KEY.KEY_KPMINUS: "SLOW DOWN",

    libevdev.EV_KEY.KEY_F11: "F+",
    libevdev.EV_KEY.KEY_F10: "F-",
    libevdev.EV_KEY.KEY_RIGHTBRACE: "J+",
    libevdev.EV_KEY.KEY_LEFTBRACE: "J-",

    libevdev.EV_KEY.KEY_UP: "+Y",
    libevdev.EV_KEY.KEY_DOWN: "-Y",
    libevdev.EV_KEY.KEY_LEFT: "-X",
    libevdev.EV_KEY.KEY_RIGHT: "+X",

    libevdev.EV_KEY.KEY_KP7: "+A",
    libevdev.EV_KEY.KEY_Q: "-A",
    libevdev.EV_KEY.KEY_PAGEDOWN: "-Z",
    libevdev.EV_KEY.KEY_PAGEUP: "+Z",
}


class KeyReader:
    def __init__(self, path: str):
        self.path = path
        self.fd: IO[bytes] | None = None
        self.device: libevdev.Device | None = None

    def __enter__(self):
        self.fd = open(self.path, "rb")
        self.device = libevdev.Device(self.fd)
        return self

    def __exit__(self, exc_type, exc, tb):
        self.device = None
        self.fd.close()
        self.fd = None

    def events(self) -> Iterator[dict[str, Any]]:
        for e in self.device.events():
            if e.type == libevdev.EV_KEY:
                if (val := KEY_MAP.get(e.code)):
                    yield {
                        "name": val,
                        "value": e.value,
                        "sec": e.sec,
                        "usec": e.usec,
                    }
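For reference, a quick usage sketch (the device path here is hypothetical; pick whatever symlink the udev rules above created under /dev/input/by-id/):

# Print decoded panel events as they arrive
with KeyReader("/dev/input/by-id/usb-04d9_1203-event-kbd") as reader:
    for event in reader.events():
        state = "pressed" if event["value"] else "released"
        print(event["name"], state)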

Edited: added rules to handle the Bluetooth page turner

28 April, 2023 06:58PM

Scarlett Gately Moore

KDE Snaps, Gear 23.04.0 available in snap store

KDE Digikam 8.0.0 SnapKDE Digikam 8.0.0 Snap

It has been another crazy busy couple of weeks. There are too many snaps released to list here, but you can track my progress here:

https://invent.kde.org/packaging/snapcraft-kde-applications/-/issues/30

Some notable releases are:

  • Digikam 8.0.0
  • KPhotoalbum ( New! )
  • KDevelop
  • Kate ( Now classic )
  • Arianna ( New! )
  • Kdenlive
  • Kommit ( New! )

I updated our Frameworks/QT5 content pack to kf5 105 and qt5 5.15.9.

I have added more documentation at:

https://invent.kde.org/teams/neon/-/wikis/Snaps

I am finishing up the last of 23.04.0 snaps and will tackle KDE PIM suite next week. I will also work on more apps that have their own release cycle.

Several snap bugs have been resolved and closed! Please remember, if you have issues, to file them at bugs.kde.org, as I have a hard time finding them at <insert random forum / social network here>.

On the Debian front, I successfully closed two CVEs for ruby-rack in Buster! I will be doing more ELTS/LTS security updates next month!

Thanks for stopping by! If you enjoy my work, consider a donation.

https://gofund.me/2c7b1808

28 April, 2023 04:04PM by sgmoore

Sven Hoexter

What's wrong in IT: commit messages

In my day job someone today took the time in the team daily to explain his research into why some of our configuration is wrong. He spent quite some time on his own looking at the history in git and how everything was set up initially, and how it ended up in the current - wrong - way. That triggered me to validate it quickly, another 5min of work. So we agreed to change it. A one-line change, nothing spectacular, but lifetime was invested to figure out why it should have a different value.

When the pull request got opened a few minutes later there was nothing of that story in the commit message. Zero, nada, nothing. :( I'm really puzzled why someone invests lifetime digging into company-internal history to try to get something right, gives a lengthy explanation to the whole team, uses the time of others, even mentions that there was no explanation of why the value is no longer the default it should be, and then repeats the same mistake by not writing down anything in the commit message.

For the current company I'm inclined to propose a commit message validator. For a potential future company I might join, I guess I'll ask for real-world git logs from repositories I would contribute to. This seems to be another valuable source of information to qualify the company culture, right next to the existence of whiteboards in the office. I'm really happy that at least a majority of the people contributing to Debian write somewhat decent commit messages and changelogs. Let that be a reminder to myself to improve in that area the next time I have to change something.
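For what it's worth, even a trivial validator would catch the "zero, nada, nothing" case. A minimal sketch of a hypothetical git commit-msg hook (the two-line threshold is an arbitrary assumption on my part):

#!/usr/bin/env python3
# Hypothetical commit-msg hook: git passes the path to the message file
# as the first argument; reject messages that have no body at all.
import sys

with open(sys.argv[1]) as f:
    lines = [l.strip() for l in f if l.strip() and not l.startswith("#")]

# Require a subject line plus at least one body line explaining the why.
if len(lines) < 2:
    sys.exit("commit message needs a body explaining why, not just what")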

28 April, 2023 08:15AM

hackergotchi for Shirish Agarwal

Shirish Agarwal

John Grisham’s books, Evolution removed from textbooks

Gray Mountain – John Grisham

I have been perusing John Grisham’s books, some read and some re-read again. Almost all of the books that Mr. Grisham wrote are relevant even today. Gray Mountain talks about how mountaintop removal was done in Appalachia, in the U.S. (South). In fact, NASA made a summary which either was borrowed from this book or the author borrowed from NASA; either could be true, although it seems it might be the former. And this is when GOI just made a new Forest ‘Conservation Bill’ 2023 which does the opposite. There are many examples of the same, the latest from my own city as an e.g.: Vetal Tekdi is and was a haven for people, animals, and birds, all kinds of ecosystems, and is a vital lung of the city, one of the few remaining green spaces, but BJP wants to commercialize it as it has been doing for everything. I haven’t been to the Himalayas for 4-5 years, as I hear of the rape of the land daily. Even after the Joshimath tragedy, if people are not opening their eyes, what can I do 😦 The more I say, the more depressed I will become, so I will leave it for now. In many ways the destruction seems similar to the destruction that happened in Brazil under Jair Bolsonaro. So as can be seen from what I have shared, this book was mostly about the environment and punitive damages, and also about how punitive damages have been capped in America (South). This was done via lobbying by the coal groups and apparently destroyed people’s lives, communities, even whole villages. It also shared how many miners got what is called black lung and how those claims had been denied by coal companies all the time. And there are hardly any unions. While the book itself is a fiction piece, there is a large amount of truth in it. And that is one of the reasons people write a fiction book: you can write about reality using fictional names and nobody can sue you, while you set out the reality as it is. In many ways, it is a tell-all.

The Testament – John Grisham

One of the things I have loved about John Grisham is that he understands the human condition. This book starts with an eccentric billionaire who makes a will which leaves all his property to an illegitimate child who, coincidentally, lives in Brazil as a missionary. The whole book is about human failings as well as about finding the heir. I am not going to give much detail as the book itself is fun.

The Appeal – John Grisham

In this one, we are introduced to a company that has been spilling chemicals for decades, and how that leads to lobbying and the funding of judicial elections. It does go into quite a bit of length about how private money coming from big business does all kinds of shady things to get its person elected to the Supreme Court. Sadly, this seems to happen all the time, as for example two weeks ago. This piece from the Atlantic also says the same. Again, I won't tell more, as there is a bit of irony at the end of the book.

The Rainmaker – John Grisham

This, in short, is about how insurance companies stiff people. It's a wonderful story that has all its people in grey, including our hero. An engaging book that sorta tells how the insurance industry plays the game. In India, the companies are safe as they ask for continuances for years together, and their objective is to delay the hearings until even the grandchildren are no longer alive, unlike in the U.S. or UK. It also does tell how there are more lawyers than required. Both of which are the same in my country.

The Litigators – John Grisham

This one is about product liability, for both medicines and toys for young kids. What I do find a bit funny is how the law in the States allows people to file cases but doesn't protect people, while in the EU they try their best, so that if there is any controversy behind any medicine or product, they will simply ban it. Huge difference between the two cultures.

Evolution no longer part of Indian Education

A few days ago, NCERT (one of the major bodies that looks into Indian education), due to BJP influence, removed Darwin's theory of evolution from textbooks. You can't even debate it, because the people do not understand adaptability. So the only conclusion is that man suddenly appeared out of nowhere. If that is so, then we are the true aliens. They discard the notion that we and chimpanzees are similar. Then they also have to discard the finding that genetically we are 96% similar.

28 April, 2023 03:34AM by shirishag75

April 27, 2023

hackergotchi for Jonathan McDowell

Jonathan McDowell

Repurposing my C.H.I.P.

Way back at DebConf16 Gunnar managed to arrange for a number of Next Thing Co. C.H.I.P. boards to be distributed to those who were interested. I was lucky enough to be amongst those who received one, but I have to confess after some initial experimentation it ended up sitting in its box unused.

The reasons for that were varied; partly about not being quite sure what best to do with it, partly due to a number of limitations it had, partly because NTC sadly went insolvent and there was less momentum around the hardware. I’ve always meant to go back to it, poking it every now and then but never completing a project. I’m finally almost there, and I figure I should write some of it up.

TL;DR: My C.H.I.P. is currently running a mainline Linux 6.3 kernel with only a few DTS patches, an upstream u-boot v2022.01 with a couple of minor patches and an unmodified Debian bullseye armhf userspace.

Storage

The main issue with the C.H.I.P. is that it uses MLC NAND, in particular mine has an 8GB H27QCG8T2E5R. That ended up unsupported in Linux, with the UBIFS folk disallowing operation on MLC devices. There’s been subsequent work to enable an “SLC emulation” mode which makes the device more reliable at the cost of losing capacity by pairing up writes/reads in cells (AFAICT). Some of this hit for the H27UCG8T2ETR in 5.16 kernels, but I definitely did some experimentation with 5.17 without having much success. I should maybe go back and try again, but I ended up going a different route.

It turned out that BytePorter had documented how to add a microSD slot to the NTC C.H.I.P., using just a microSD to full SD card adapter. Every microSD card I buy seems to come with one of these, so I had plenty lying around to test with. I started with ensuring the kernel could see it ok (by modifying the device tree), but once that was all confirmed I went further and built a more modern u-boot that talked to the SD card, and defaulted to booting off it. That meant no more relying on the internal NAND at all!

I do see some flakiness with the SD card, which is possibly down to the dodgy way it’s hooked up (I should probably do a basic PCB layout with JLCPCB instead). That’s mostly been mitigated by forcing it into 1-bit mode instead of 4-bit mode (I tried lowering the frequency too, but that didn’t make a difference).

The problem manifests as:

sunxi-mmc 1c11000.mmc: data error, sending stop command

and then all storage access freezing (existing logins still work, if the program you’re trying to run is in cache). I can’t find a conclusive software solution to this; I’m pretty sure it’s the hardware, but I don’t understand why the recovery doesn’t generally work.

Random power offs

After I had storage working I’d see random hangs or power offs. It wasn’t quite clear what was going on. So I started trying to work out how to find out the CPU temperature, in case it was overheating. It turns out the temperature sensor on the R8 is part of the touchscreen driver, and I’d taken my usual approach of turning off all the drivers I didn’t think I’d need. Enabling it (CONFIG_TOUCHSCREEN_SUN4I) gave temperature readings and seemed to help somewhat with stability, though not completely.

Next I ended up looking at the AXP209 PMIC. There were various scripts still installed (I’d started out with the NTC Debian install and slowly upgraded it to bullseye while stripping away the obvious pieces I didn’t need) and a start-up script called enable-no-limit. This turned out to not be running (some sort of expectation of i2c-dev being loaded and another failing check), but looking at the script and the data sheet revealed the issue.

The AXP209 can cope with 3 power sources: an external DC source, a Li-battery, and finally a USB port. I was powering my board via the USB port, using a charger rated for 2A. It turns out that the AXP209 defaults to limiting USB current to 900mA, and that with wifi active and the CPU busy the C.H.I.P. can rise above that, at which point the AXP shuts everything down. Armed with that info I was able to understand what the power scripts were doing and which bit I needed - i2cset -f -y 0 0x34 0x30 0x03 to set no limit and disable the auto-power off. Additionally I also discovered that the AXP209 has a built-in temperature sensor as well, so I added support for that via iio-hwmon.

WiFi

WiFi on the C.H.I.P. is provided by an RTL8723BS SDIO attached device. It’s terrible (and not just here, I had an x86 based device with one where it also sucked). Thankfully there’s a driver in staging in the kernel these days, but I’ve still found it can fall out with my house setup, end up connecting to a further away AP which then results in lots of retries, dropped frames and CPU consumption. Nailing it to the AP on the other side of the wall from where it is helps. I haven’t done any serious testing with the Bluetooth other than checking it’s detected and can scan ok.

Patches

I patched u-boot v2022.01 (which shows you how long ago I was trying this out) with the following to enable boot from external SD:

u-boot C.H.I.P. external SD patch
diff --git a/arch/arm/dts/sun5i-r8-chip.dts b/arch/arm/dts/sun5i-r8-chip.dts
index 879a4b0f3b..1cb3a754d6 100644
--- a/arch/arm/dts/sun5i-r8-chip.dts
+++ b/arch/arm/dts/sun5i-r8-chip.dts
@@ -84,6 +84,13 @@
 		reset-gpios = <&pio 2 19 GPIO_ACTIVE_LOW>; /* PC19 */
 	};
 
+	mmc2_pins_e: mmc2@0 {
+		pins = "PE4", "PE5", "PE6", "PE7", "PE8", "PE9";
+		function = "mmc2";
+		drive-strength = <30>;
+		bias-pull-up;
+	};
+
 	onewire {
 		compatible = "w1-gpio";
 		gpios = <&pio 3 2 GPIO_ACTIVE_HIGH>; /* PD2 */
@@ -175,6 +182,16 @@
 	status = "okay";
 };
 
+&mmc2 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&mmc2_pins_e>;
+	vmmc-supply = <&reg_vcc3v3>;
+	vqmmc-supply = <&reg_vcc3v3>;
+	bus-width = <4>;
+	broken-cd;
+	status = "okay";
+};
+
 &ohci0 {
 	status = "okay";
 };
diff --git a/arch/arm/include/asm/arch-sunxi/gpio.h b/arch/arm/include/asm/arch-sunxi/gpio.h
index f3ab1aea0e..c0dfd85a6c 100644
--- a/arch/arm/include/asm/arch-sunxi/gpio.h
+++ b/arch/arm/include/asm/arch-sunxi/gpio.h
@@ -167,6 +167,7 @@ enum sunxi_gpio_number {
 
 #define SUN8I_GPE_TWI2		3
 #define SUN50I_GPE_TWI2		3
+#define SUNXI_GPE_SDC2		4
 
 #define SUNXI_GPF_SDC0		2
 #define SUNXI_GPF_UART0		4
diff --git a/board/sunxi/board.c b/board/sunxi/board.c
index fdbcd40269..f538cb7e20 100644
--- a/board/sunxi/board.c
+++ b/board/sunxi/board.c
@@ -433,9 +433,9 @@ static void mmc_pinmux_setup(int sdc)
 			sunxi_gpio_set_drv(pin, 2);
 		}
 #elif defined(CONFIG_MACH_SUN5I)
-		/* SDC2: PC6-PC15 */
-		for (pin = SUNXI_GPC(6); pin <= SUNXI_GPC(15); pin++) {
-			sunxi_gpio_set_cfgpin(pin, SUNXI_GPC_SDC2);
+		/* SDC2: PE4-PE9 */
+		for (pin = SUNXI_GPE(4); pin <= SUNXI_GPE(9); pin++) {
+			sunxi_gpio_set_cfgpin(pin, SUNXI_GPE_SDC2);
 			sunxi_gpio_set_pull(pin, SUNXI_GPIO_PULL_UP);
 			sunxi_gpio_set_drv(pin, 2);
 		}


I’ve sent some patches for the kernel device tree upstream - there’s an outstanding issue with the Bluetooth wake GPIO causing the serial port not to probe(!) that I need to resolve before sending a v2, but what’s there works for me.

The only remaining piece is the patch to enable the external SD for Linux; I don’t think it’s appropriate to send it upstream, but it’s fairly basic. It limits the bus to 1 bit rather than the 4 bits it’s capable of, as mentioned above.

Linux C.H.I.P. external SD DTS patch

diff --git a/arch/arm/boot/dts/sun5i-r8-chip.dts b/arch/arm/boot/dts/sun5i-r8-chip.dts
index fd37bd1f3920..2b5aa4952620 100644
--- a/arch/arm/boot/dts/sun5i-r8-chip.dts
+++ b/arch/arm/boot/dts/sun5i-r8-chip.dts
@@ -163,6 +163,17 @@ &mmc0 {
 	status = "okay";
 };
 
+&mmc2 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&mmc2_4bit_pe_pins>;
+	vmmc-supply = <&reg_vcc3v3>;
+	vqmmc-supply = <&reg_vcc3v3>;
+	bus-width = <1>;
+	non-removable;
+	disable-wp;
+	status = "okay";
+};
+
 &ohci0 {
 	status = "okay";
 };


As for what I’m doing with it, I think that’ll have to be a separate post.

27 April, 2023 06:44PM

hackergotchi for Thomas Lange

Thomas Lange

New feature for FAI.me build service

After the initial installation of a new machine, you often want to log in as root via ssh. Therefore it's convenient to provide an ssh public key for a passwordless login.

This can now be done by just adding your user name from salsa.debian.org, gitlab.com or github.com. You can also give a customized URL from where to download the keys. Before, it was only possible to use a GitHub account name.

The FAI.me build service then creates a customized installation ISO for you, which will automatically install the ssh public key into the root account. Also the ready-to-boot cloud images support this feature.

The build service is available on the FAI project website at https://fai-project.org/FAIme

27 April, 2023 06:25PM

Simon Josefsson

A Security Device Threat Model: The Substitution Attack

I’d like to describe and discuss a threat model for computational devices. This is generic but we will narrow it down to security-related devices. For example, portable hardware dongles used for OpenPGP/OpenSSH keys, FIDO/U2F, OATH HOTP/TOTP, PIV, payment cards, wallets etc and more permanently attached devices like a Hardware Security Module (HSM), a TPM-chip, or the hybrid variant of a mostly permanently-inserted but removable hardware security dongles.

Our context is cryptographic hardware engineering, and the purpose of the threat model is to serve as a thought experiment for how to build and design security devices that offer better protection. The threat model is related to the Evil maid attack.

Our focus is to improve security for the end-user, rather than the traditional focus of improving security for the organization that provides the token to the end-user, or for the site that the end-user is authenticating to. This is a critical but often under-appreciated distinction, and leads to surprising recommendations related to onboard key generation, randomness etc below.

The Substitution Attack

An attacker is able to substitute any component of the device (hardware or software) at any time for any period of time.

Your takeaway should be that devices should be designed to mitigate harmful consequences if any component of the device (hardware or software) is substituted for a malicious component for some period of time, at any time, during the lifespan of that component. Some designs protect better against this attack than other designs, and the threat model can be used to understand which designs are really bad, and which are less so.

Terminology

The threat model involves at least one device that is well-behaving and one that is not, and we call these Good Device and Bad Device respectively. The bad device may be the same physical device as the good key, but with some minor software modification or a minor component replaced; it could also be a completely separate physical device. We don’t care about that distinction, we just care whether a particular device has a malicious component in it or not. I’ll use terms like “security device”, “device”, “hardware key”, “security co-processor” etc interchangeably.

From an engineering point of view, “malicious” here includes “unintentional behavior” such as software or hardware bugs. It is not possible to differentiate an intentionally malicious device from a well-designed device with a critical bug.

Don’t attribute to malice what can be adequately explained by stupidity, but don’t naïvely attribute to stupidity what may be deniably malicious.

What is “some period of time”?

“Some period of time” can be any length of time: seconds, minutes, days, weeks, etc.

It may also occur at any time: During manufacturing, during transportation to the user, after first usage by the user, or after a couple of months usage by the user. Note that we intentionally consider time-of-manufacturing as a vulnerable phase.

Even further, the substitution may occur multiple times. So the Good Key may be replaced with a Bad Key by the attacker for one day, then returned, and later this repeats a month later.

What is “harmful consequences”?

Since a security key has a fairly well-confined scope and purpose, we can get a fairly good exhaustive list of things that could go wrong. Harmful consequences include:

  • Attacker learns any secret keys stored on a Good Key.
  • Attacker causes user to trust a public key generated by a Bad Key.
  • Attacker is able to sign something using a Good Key.
  • Attacker learns the PIN code used to unlock a Good Key.
  • Attacker learns data that is decrypted by a Good Key.

Thin vs Deep solutions

One approach to mitigate many issues arising from device substitution is to have the host (or remote site) require that the device prove that it is the intended unique device before it continues to talk to it. This requires an authentication/authorization protocol, which usually involves unique device identity and out-of-band trust anchors. Such trust anchors are often problematic, since a common use-case for a security device is to connect it to a host that has never seen the device before.

A weaker approach is to have the device prove that it merely belongs to a class of genuine devices from a trusted manufacturer, usually by providing a signature generated by a device-specific private key signed by the device manufacturer. This is weaker since then the user cannot differentiate two different good devices.

In both cases, the host (or remote site) would stop talking to the device if it cannot prove that it is the intended key, or at least belongs to a class of known trusted genuine devices.

Upon scrutiny, this “solution” is still vulnerable to a substitution attack, just earlier in the manufacturing chain: how can the process that injects the per-device or per-class identities/secrets know that it is putting them into a good key rather than a malicious device? Consider also the consequences if the cryptographic keys that guarantee that a device is genuine leaks.

The model of the “thin solution” is similar to the old approach to network firewalls: have a filtering firewall that only lets through “intended” traffic, and then run completely insecure protocols internally such as telnet.

The networking world has evolved, and now we have defense in depth: even within strongly firewalled networks, it is prudent to run, for example, SSH with publickey-based user authentication even on physically trusted local networks. This approach requires more thought and adds complexity, since each level has to provide some security checking.

I’m arguing we need similar defense-in-depth for security devices. Security key designs cannot simply dodge this problem by assuming they are working in a friendly environment where component substitution never occurs.

Example: Device authentication using PIN codes

To see how this threat model can be applied to reason about security key designs, let’s consider a common design.

Many security keys use PIN codes to unlock private key operations, for example on OpenPGP cards that lack built-in PIN-entry functionality. The software on the computer just sends a PIN code to the device, and the device allows private-key operations if the PIN code was correct.

Let’s apply the substitution threat model to this design: the attacker replaces the intended good key with a malicious device that saves a copy of the PIN code presented to it, and then gives out error messages. Once the user has entered the PIN code and gotten an error message, presumably temporarily giving up and doing other things, the attacker replaces the device back again. The attacker has learnt the PIN code, and can later use this to perform private-key operations on the good device.

This means a good design involves not sending PIN codes in the clear, but using a stronger authentication protocol that allows the card to know that the PIN was correct without learning the PIN. This is implemented optionally for many OpenPGP cards today as the key-derivation-function extension. That should be mandatory, and users should not use setups that send device authentication in the clear, and ultimately security devices should not even include support for that. Compare how I build Gnuk on my PGP card with the kdf_do=required option.
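To make the difference concrete, here is a toy sketch of my own (not the actual OpenPGP KDF-DO scheme, whose algorithm, salt and iteration count are read from the card itself):

import hashlib

# Toy illustration: the host sends a derived verifier instead of the PIN.
# A substituted malicious device then records the derived value, not the
# raw PIN itself, so a PIN reused elsewhere stays protected.
def derive_verifier(pin: str, salt: bytes, iterations: int = 100_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations)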

Example: Onboard non-predictable key-generation

Many devices offer both onboard key-generation, for example OpenPGP cards that generate an Ed25519 key internally on the device, and external generation, where the device imports an externally generated cryptographic key.

Let’s apply the substitution threat model to this design: the user wishes to generate a key and trust the public key that came out of that process. The attacker substitutes the device for a malicious device during key-generation, imports the private key into a good device and gives that back to the user. Most of the time, except during key generation, the user uses a good device, but still the attacker succeeded in having the user trust a public key which the attacker knows the private key for. The substitution may be a software modification, and the method to leak the private key to the attacker may be out-of-band signalling.

This means a good design never generates keys on-board, but imports them from a user-controllable environment. That approach should be mandatory, and users should not use setups that generate private keys on-board, and ultimately security devices should not even include support for that.

Example: Non-predictable randomness-generation

Many devices claims to generate random data, often with elaborate design documents explaining how good the randomness is.

Let’s apply the substitution threat model to this design: the attacker replaces the intended good key with a malicious design that generates (for the attacker) predictable randomness. The user will never be able to detect the difference since the random output is, well, random, and typically not distinguishable from weak randomness. The user cannot know if any cryptographic keys generated by such a generator were faulty or not.
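To illustrate why this is undetectable (a toy sketch of my own, not any real device's design): output derived from a secret passes statistical randomness tests, yet is fully predictable to whoever holds the secret.

import hashlib
from itertools import count

# "Randomness" that looks random to everyone except whoever knows
# ATTACKER_SECRET; it is a simple hash-based counter-mode stream.
ATTACKER_SECRET = b"known only to the attacker"

def backdoored_random(n: int) -> bytes:
    out = b""
    for i in count():
        out += hashlib.sha256(ATTACKER_SECRET + i.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return out[:n]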

This means a good design never generates non-predictable randomness on the device. That approach should be mandatory, and users should not use setups that generate non-predictable randomness on the device, and ideally devices should not have this functionality.

Case-Study: Tillitis

I have warmed up a bit for this. Tillitis is a new security device with interesting properties, and core to its operation is the Compound Device Identifier (CDI), essentially your Ed25519 private key (used for SSH etc) is derived from the CDI that is computed like this:

cdi = blake2s(UDS, blake2s(device_app), USS)
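A rough sketch of that derivation (my own reading of the formula; the exact framing and keying of the BLAKE2s calls in the real Tillitis firmware are assumptions here):

import hashlib

def derive_cdi(uds: bytes, device_app: bytes, uss: bytes) -> bytes:
    # Hash the application first, then bind it to the device secret (UDS)
    # and the user-supplied secret (USS)
    app_digest = hashlib.blake2s(device_app).digest()
    return hashlib.blake2s(uds + app_digest + uss).digest()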

Let’s apply the substitution threat model to this design: Consider someone replacing the Tillitis key with a malicious key during postal delivery of the key to the user, and the replacement device is identical with the real Tillitis key but implements the following key derivation function:

cdi = weakprng(UDS’, weakprng(device_app), USS)

Where weakprng is a compromised algorithm that is predictable for the attacker, but still appears random. Everything will work correctly, but the attacker will be able to learn the secrets used by the user, and the user will typically not be able to tell the difference since the CDI is secret and the Ed25519 public key is not self-verifiable.

Conclusion

Remember that it is impossible to fully protect against this attack, that’s why it is merely a thought experiment, intended to be used during design of these devices. Consider an attacker that never gives you access to a good key and as a user you only ever use a malicious device. There is no way to have good security in this situation. This is not hypothetical, many well-funded organizations do what they can to deprive people of access to trustworthy security devices. Philosophically it does not seem possible to tell if these organizations have succeeded 100% already and there are only bad security devices around where further resistance is futile, but to end on an optimistic note let’s assume that there is a non-negligible chance that they haven’t succeeded. In these situations, this threat model becomes useful to improve the situation by identifying less good designs, and that’s why the design mantra of “mitigate harmful consequences” is crucial as a takeaway from this. Let’s improve the design of security devices that further the security of their users!

27 April, 2023 04:30PM by simon

Arturo Borrero González

Kubecon and CloudNativeCon 2023 Europe summary

Post logo

This post serves as a report from my attendance to Kubecon and CloudNativeCon 2023 Europe that took place in Amsterdam in April 2023. It was my second time physically attending this conference, the first one was in Austin, Texas (USA) in 2017. I also attended once in a virtual fashion.

The content here is mostly generated for the sake of my own recollection and learnings, and is written from the notes I took during the event.

The very first session was the opening keynote, which reunited the whole crowd to bootstrap the event and share the excitement about the days ahead. Some astonishing numbers were announced: there were more than 10.000 people attending, and apparently it could confidently be said that it was the largest open source technology conference taking place in Europe in recent times.

It was also communicated that the next couple of iterations of the event will be run in China in September 2023 and in Paris in March 2024.

More numbers: the CNCF was hosting about 159 projects, involving 1300 maintainers and about 200.000 contributors. The cloud-native community is ever-increasing, and there seems to be a strong trend in the industry for cloud-native technology adoption and all things related to PaaS and IaaS.

The event program had different tracks, and in each one there was an interesting mix of low-level and higher level talks for a variety of audience. On many occasions I found that reading the talk title alone was not enough to know in advance if a talk was a 101 kind of thing or for experienced engineers. But unlike in previous editions, I didn’t have the feeling that the purpose of the conference was to try selling me anything. Obviously, speakers would make sure to mention, or highlight in a subtle way, the involvement of a given company in a given solution or piece of the ecosystem. But it was non-invasive and fair enough for me.

On a different note, I found the breakout rooms to be often small. I think there were only a couple of rooms that could accommodate more than 500 people, which is a fairly small allowance for 10k attendees. I realized with frustration that the more interesting talks were immediately fully booked, with people waiting in line some 45 minutes before the session time. Because of this, I missed a few important sessions that I’ll hopefully watch online later.

Finally, on a more technical side, I’ve learned many things, that instead of grouping by session I’ll group by topic, given how some subjects were mentioned in several talks.

On gitops and CI/CD pipelines

Most of the mentions went to FluxCD and ArgoCD. At that point there were no doubts that gitops was a mature approach and both flux and argoCD could do an excellent job. ArgoCD seemed a bit more over-engineered to be a more general purpose CD pipeline, and flux felt a bit more tailored for simpler gitops setups. I discovered that both have nice web user interfaces that I wasn’t previously familiar with.

However, in two different talks I got the impression that the initial setup of them was simple, but migrating your current workflow to gitops could result in a bumpy ride. That is, the challenge is not deploying flux/argo itself, but moving everything into a state that both humans and flux/argo can understand. I also saw some curious mentions of the config drifts that can happen in some cases, even if the goal of gitops is precisely for that to never happen. Such mentions were usually accompanied by some hints on how to handle the situation by hand.

Worth mentioning, I missed any practical information about one of the key pieces to this whole gitops story: building container images. Most of the showcased scenarios were using pre-built container images, so in that sense they were simple. Building and pushing to an image registry is one of the two key points we would need to solve in Toolforge Kubernetes if adopting gitops.

In general, even though gitops was already on our radar for Toolforge Kubernetes, I think it climbed a few steps up my priority list after the conference.

Another takeaway was this site: https://opengitops.dev/.


On etcd, performance and resource management

I attended a talk focused on etcd performance tuning that was very encouraging. They were basically talking about the exact same problems we have had in Toolforge Kubernetes, like api-server and etcd failure modes, and how sensitive etcd is to disk latency, IO pressure and network throughput. Even though the Toolforge Kubernetes scale is small compared to other Kubernetes deployments out there, I found it very interesting to see others’ approaches to the same set of challenges.
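
As a concrete example, the usual first checks when etcd misbehaves are its disk fsync latency and per-endpoint status. A minimal sketch, assuming etcdctl access and a locally reachable metrics endpoint:

# WAL fsync latency is the usual smoking gun for disk/IO pressure
curl -s http://127.0.0.1:2379/metrics | grep etcd_disk_wal_fsync_duration_seconds

# Per-endpoint view of the leader, raft term and database size
etcdctl endpoint status --write-out=table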

I learned how most Kubernetes components and apps can overload the api-server, since even the api-server talks to itself. Simple things like kubectl can have a completely different impact on the API depending on usage, for example when listing all objects of a kind (very expensive) vs. fetching a single object.

The conclusion was to avoid hitting the api-server with expensive LIST calls, and to use ResourceVersion, which avoids full dumps from etcd (full dumps being, by the way, the default behavior of bare kubectl get calls). I already knew some of this; the jobs-framework-emailer, for example, was already making use of this ResourceVersion functionality.
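
To illustrate both patterns from the command line, here is a hedged sketch; the raw API path and the resourceVersion value are just for demonstration:

# Paginated LIST: several smaller responses instead of one huge dump
kubectl get pods --all-namespaces --chunk-size=500

# resourceVersion=0 lets the api-server answer from its watch cache
# instead of doing a quorum read against etcd
kubectl get --raw '/api/v1/pods?resourceVersion=0' | head -c 1000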

There have been a lot of improvements on the performance side of Kubernetes recently, or more specifically, in how resources are managed and used by the system. I saw a review of resource management from the perspective of the container runtime and kubelet, plus plans to support fancy things like topology-aware scheduling decisions and dynamic resource claims (changing a pod’s resource claims without re-defining or restarting it).

On cluster management, bootstrapping and multi-tenancy

I attended a couple of talks that mentioned kubeadm, one of them by the maintainers themselves. This was of interest to me because as of today we use it for Toolforge. They shared all the latest developments and improvements, and the plans and roadmap for the future, with a special mention of something they called the “kubeadm operator”, apparently capable of auto-upgrading the cluster, auto-renewing certificates and the like.
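
For context, these are the manual steps such an operator would presumably automate; a sketch of the usual kubeadm workflow, with the version number as a placeholder:

# See which upgrades are available and what they entail
kubeadm upgrade plan

# Upgrade the control plane to a specific version
kubeadm upgrade apply v1.27.1

# Check and renew the cluster certificates
kubeadm certs check-expiration
kubeadm certs renew all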

I also saw a comparison of the different cluster bootstrappers, which confirmed to me that kubeadm is the best choice, being a well-established and well-known workflow with a very active contributor base. The kubeadm developers invited the audience to submit feature requests, so I did.

The different talks confirmed that the basic unit for multi-tenancy in Kubernetes is the namespace, and any serious multi-tenant usage should leverage this. There were some ongoing conversations, in official sessions and in the hallway, about the right tool to implement K8s-within-K8s, and vcluster was mentioned enough times for me to be convinced it is the right candidate. This was despite my impression that multicluster / multicloud setups are regarded as hard topics in the general community. I would definitely like to play with it sometime down the road.
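
From what I saw in the demos, getting a virtual cluster running looks roughly like this; a minimal sketch with hypothetical names, not something I have tried myself yet:

# Create a virtual cluster inside a regular namespace of the host cluster
vcluster create team-a --namespace vc-team-a

# Point kubectl at the virtual cluster and poke around
vcluster connect team-a --namespace vc-team-a
kubectl get namespaces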

On networking

I attended a couple of basic sessions that served really well to understand how Kubernetes instruments the network to achieve its goals. The conference program had sessions covering topics ranging from network debugging recommendations and CNI implementations to IPv6 support. Also, one of the keynote sessions mentioned that kube-proxy is not able to perform NAT for SIP connections, which is interesting because I believe Netfilter Conntrack could do it if properly configured. One of the conclusions on the CNI front was that Calico has massive community adoption (in Netfilter mode), which is reassuring, especially considering it is the one we use for Toolforge Kubernetes.


On jobs

I attended a couple of talks related to HPC/grid-like usages of Kubernetes. I was truly impressed by some folks out there who were using Kubernetes Jobs at massive scale, for example to train machine learning models and other fancy AI projects.

It is acknowledged in the community that the early implementation of things like Jobs and CronJobs had some limitations, which are now gone or at least greatly improved, and some new functionality has been added as well. Indexed Jobs, for example, give every pod in a Job a number (index), so each one can process a chunk of a larger batch of data based on that index. This allows grid-like features such as sequential (or, again, indexed) processing, coordination between jobs, and more graceful job restarts. My first reaction was: is that something we would like to enable in the Toolforge Jobs Framework?
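
To make that concrete, here is a minimal sketch of an Indexed Job; the image and the chunking logic are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo
spec:
  completions: 4          # total number of chunks to process
  parallelism: 2          # how many pods run at once
  completionMode: Indexed # each pod gets JOB_COMPLETION_INDEX set
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing chunk $JOB_COMPLETION_INDEX"]
EOF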

On policy and security

A surprisingly good number of sessions covered interesting topics related to policy and security. It was nice to learn two realities:

  1. Kubernetes is capable of doing pretty much anything security-wise and of creating strongly secured environments.
  2. It does not do so by default; the defaults are deliberately not security-strict.

It kind of made sense to me: Kubernetes is used for a wide range of use cases, and its developers couldn’t know beforehand which particular setup the default security levels should accommodate.

One session in particular covered the most basic security features that should be enabled for any Kubernetes system that would get exposed to random end users. In my opinion, the Toolforge Kubernetes setup was already doing a good job in that regard. To my joy, some sessions referred to the Pod Security Admission mechanism, which is one of the key security features we’re about to adopt (when migrating away from Pod Security Policy).
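
Adopting Pod Security Admission is pleasantly small in terms of configuration; a minimal sketch, with a hypothetical namespace name:

# Enforce the "restricted" profile for every pod in the namespace,
# and also warn on violations during a transition period
kubectl label namespace tool-foo \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/warn=restricted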

I also learned a bit more about Secret resources, their current implementation, and how to leverage a combination of CSI and RBAC for more secure usage of external secrets.
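
On the RBAC half of that combination, the key idea is scoping access down to named secrets instead of granting blanket read access; a minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-app-credentials
  namespace: tool-foo
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-credentials"]  # only this one secret, not all of them
  verbs: ["get"]
EOF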

Finally, one of the major takeaways from the conference was learning about kyverno and kubeaudit. I was previously only aware of OPA Gatekeeper. From the several demos I saw, it was clear to me that kyverno could help us make Toolforge Kubernetes more sustainable by replacing all of our custom admission controllers. I already opened a ticket to track this idea, which I’ll be proposing to my team soon.
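
To give a flavor of what those demos looked like, here is a minimal kyverno validation policy, a variant of the well-known disallow-latest-tag sample policy; this is a sketch, not one of our actual admission rules:

kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce
  rules:
  - name: require-pinned-images
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Container images must not use the ':latest' tag."
      pattern:
        spec:
          containers:
          - image: "!*:latest"
EOF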

Final notes

In general, I believe I learned many things, and perhaps even more importantly I re-learned some stuff I had forgotten due to lack of daily exposure. I’m really happy that the cloud-native way of thinking was reinforced in me, which I still need because most of my muscle memory for approaching systems architecture and engineering is from the old pre-cloud days.

List of sessions I attended on the first day:

List of sessions I attended on the second day:

List of sessions I attended on the third day:

The videos have been published on YouTube.

27 April, 2023 10:47AM

April 25, 2023

hackergotchi for Bálint Réczey

Bálint Réczey

Improve build time of Rust, Java and Intel Fortran projects with Firebuild’s new release!

Rust is a hugely popular compiled programming language, and accelerating its builds had been an important goal for Firebuild for some time.

Firebuild‘s v0.8.0 release finally added Rust support, in addition to numerous other improvements including support for Doxygen and Intel’s Fortran compiler, and restored javac and javadoc acceleration.

Firebuild’s Rust + Cargo support

Firebuild treats programs as black boxes, intercepting their C standard library calls and system calls. It shortcuts the program invocations that predictably generate the same outputs, because the program itself is known to be deterministic and all inputs are known in advance. Rust’s compiler, rustc, is deterministic in itself, and simple rustc invocations were already accelerated, but parallel builds driven by Cargo needed a few enhancements in Firebuild.
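
In practice, this means prefixing the build command with firebuild; a minimal sketch:

# First (cold) build runs normally while firebuild populates its cache
firebuild cargo build --release

# An identical rebuild can be shortcut from the cache
firebuild cargo build --release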

Cargo’s jobserver

Cargo uses the Rust variant of GNU Make’s jobserver to control the parallelism of a build. The jobserver creates a file descriptor from which descendant processes can read tokens; they are allowed to run one extra thread or parallel process per token received. After the extra threads or processes finish, the tokens must be returned by writing them to the other file descriptor the jobserver created. The jobserver’s file descriptors are shared with the descendant processes via environment variables:

# rustc's environment variables
...
CARGO_MAKEFLAGS="-j --jobserver-fds=4,5 --jobserver-auth=4,5"
...

Since getting tokens from the jobserver involves reading them as nondeterministic bytes from an inherited file descriptor, this is clearly an operation that depends on input not known in advance. Firebuild needs to make an exception and ignore jobserver-related reads and writes, since they are not meant to change the build results. However, there are programs that do not care about jobservers and their file descriptors at all. They happily close the inherited file descriptors and open new ones with the same ids, to use them for entirely different purposes. One such program is the widely used ./configure script, so the case is far from theoretical.

To stay on the safe side, Firebuild ignores jobserver fd usage only in programs that are known to use the jobserver properly. The list of such programs is now configurable in /etc/firebuild.conf, and since rustc is on the list by default, parallel Rust builds are accelerated out of the box!

Writable dependency dir

The other issue that prevented highly accelerated Rust builds was rustc‘s -L dependency=<dir> parameter. This directory is populated in a not fully deterministic order during parallel builds. Firebuild, on the other hand, hashes the directory listings of open()-ed directories, treating them as inputs, on the assumption that the directory content will influence the intercepted programs’ outputs. Since rustc processes started in parallel scanned the dependency directory in different states, depending on which other Rust compilations had already finished, Firebuild had to store the full directory content as an input for each rustc cache entry, resulting in a low hit rate when rustc was started again with otherwise identical inputs.
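
For reference, this is roughly the shape of the invocations Cargo generates; the crate name, paths and hash are placeholders:

# Cargo points rustc at the shared dependency directory with -L,
# and names each dependency actually used with --extern
rustc --edition 2021 \
  -L dependency=target/debug/deps \
  --extern serde=target/debug/deps/libserde-0123abcd.rlib \
  src/main.rs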

The solution here is ignoring rustc’s scanning of the dependency directory, because the dependencies actually used are still treated as inputs and are checked when shortcutting rustc. With that implemented in Firebuild as well, librsvg’s Rust + Cargo build can be accelerated by more than 90%, even on a system with 12 cores / 24 threads:

Firebuild accelerating librsvg’s Rust + Cargo build from 38s to 2.8s on a Ryzen 5900X (12C/24T) system

On the way to accelerate anything

Firebuild’s latest release incorporated more than 100 changes just from the last two months. They unlocked acceleration of Rust builds with Cargo, fixed Firebuild to work with the latest Java update that slightly changed its behavior, started accelerating Intel’s Fortran compiler in addition to the already supported gfortran, and included many smaller changes improving the acceleration of other compilers and tools. If your favorite toolchain is not mentioned, there is still a good chance that it is already supported. Give Firebuild a try and tell us about your experience!

Update 1: A comparison to sccache came up in the Reddit topic about Firebuild’s Rust acceleration, so by popular demand this is how sccache performs on the same project:

Firebuild 0.8.0 vs. sccache 0.4.2 accelerating librsvg’s Rust + Cargo build

All builds took place on the same Ryzen 5900X system with 12 cores / 24 threads, in LXC containers limited to using 1-12 virtual CPUs. A warm-up build took place before the vanilla (without any instrumentation) build to download and compile the dependency crates, in order to measure only the project’s build time. A git clean command cleared all build artifacts from the project directory before each build, and ./autogen.sh was run so that only clean rebuilds (without autotools) were measured. See the test configuration in the Firebuild performance test repository for more details and easy reproduction.

Firebuild had lower overhead than sccache (2.83% vs. 6.10% on 1 CPU and 7.71% vs. 22.05% on 12 CPUs) and made the accelerated build finish much faster (2.26% vs. 19.41% of vanilla build’s time on 1 CPU and 7.5% vs. 27.4% of vanilla build’s time on 12 CPUs).

25 April, 2023 09:38PM by Réczey Bálint