January 17, 2025

C.J. Collier

Security concerns regarding OpenSSH mac sha1 in Debian

What is HMAC?

HMAC stands for Hash-Based Message Authentication Code. It’s a specific way to use a cryptographic hash function (like SHA-1, SHA-256, etc.) along with a secret key to produce a unique “fingerprint” of some data. This fingerprint allows someone else with the same key to verify that the data hasn’t been tampered with.

How HMAC Works

Keyed Hashing: The core idea is to incorporate the secret key into the hashing process. This is done in a specific way to prevent clever attacks that might try to bypass the security.
Inner and Outer Hashing: HMAC uses two rounds of hashing. First, the message and a modified version of the key are hashed together. Then, the result of that hash, along with another modified version of the key, are hashed again. This two-step process adds an extra layer of protection.
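
You rarely need to implement this construction yourself, as common toolkits expose it directly. As a quick illustration (the key and message here are just placeholders), OpenSSL’s command line tool can compute an HMAC with different underlying hashes:

# HMAC of the same message under SHA-1 and SHA-256
printf '%s' 'some message' | openssl dgst -sha1 -hmac 'secret-key'
printf '%s' 'some message' | openssl dgst -sha256 -hmac 'secret-key'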

HMAC in OpenSSH

OpenSSH uses HMAC to ensure the integrity of messages sent back and forth during an SSH session. This prevents an attacker from subtly modifying data in transit.

HMAC-SHA1 with OpenSSH: Is it Weak?

SHA-1 itself is considered cryptographically broken. This means that with enough computing power, it’s possible to find collisions (two different messages that produce the same hash). However, HMAC-SHA1 is generally still considered secure for most purposes. This is because exploiting weaknesses in SHA-1 to break HMAC-SHA1 is much more difficult than just finding collisions in SHA-1.

Should you use it?

While HMAC-SHA1 might still be okay for now, it’s best practice to move to stronger alternatives like HMAC-SHA256 or HMAC-SHA512. OpenSSH supports these, and they provide a greater margin of safety against future attacks.
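
For example, a MACs line in sshd_config that drops the SHA-1 variants while keeping widely supported SHA-2 ones could look like this (one reasonable selection, not an exhaustive or authoritative list):

MACs hmac-sha2-256,hmac-sha2-512,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com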

In Summary

HMAC is a powerful tool for ensuring data integrity. Even though SHA-1 has weaknesses, HMAC-SHA1 in OpenSSH is likely still safe for most users. However, to be on the safe side and prepare for the future, switching to HMAC-SHA256 or HMAC-SHA512 is recommended.

Following are instructions for creating Dataproc clusters with SHA-1 MAC support removed:

I can appreciate an excess of caution, and I can offer you some code to produce Dataproc instances which do not allow SHA-1-based MACs for SSH.

Place code similar to this in a startup script or an initialization action that you reference when creating a cluster with gcloud dataproc clusters create:

#!/bin/bash
# remove any existing MACs specification from the sshd configuration
# (sshd_config keywords are case-insensitive, hence sed's I flag)
sed -i -e 's/^macs.*$//I' /etc/ssh/sshd_config
# place a new MACs specification at the end of the service configuration,
# listing every MAC the server supports except those based on SHA-1
ssh -Q mac | perl -e \
  '@mac = grep { chomp; !/sha1/ } <STDIN>; print("MACs ", join(",", @mac), $/)' >> /etc/ssh/sshd_config
# reload the new ssh service configuration
systemctl reload ssh.service
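
Afterwards you can verify the result: sshd -T prints the effective server configuration, so the following should show a macs line with no sha1 entries:

sshd -T | grep -i '^macs'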

If this code is hosted on GCS, you can refer to it with

--initialization-actions=CLOUD_STORAGE_URI,[...]

or

--metadata startup-script-url=CLOUD_STORAGE_URI,[...]

17 January, 2025 10:47PM by C.J. Collier

Russell Coker

Systemd Hardening and Sending Mail

A feature of systemd is the ability to reduce the access that daemons have to the system. The restrictions include access to certain directories, system calls, capabilities, and more. The systemd.exec(5) man page describes them all [1]. To see an overview of the security of daemons run “systemd-analyze security” and to get details of one particular daemon run a command like “systemd-analyze security mon.service”.
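
Restrictions of this kind can be applied without touching the packaged unit by using a drop-in file such as /etc/systemd/system/foo.service.d/hardening.conf. The directives below are all standard systemd options, but this particular selection is only an illustrative starting point, not a vetted policy for any specific daemon; each line needs testing against the service in question:

[Service]
# read-only view of the OS, no access to user home directories
ProtectSystem=strict
ProtectHome=yes
# private /tmp, and no privilege escalation via setuid/setgid binaries
PrivateTmp=yes
NoNewPrivileges=yes
# drop all capabilities, and limit socket families and system calls
CapabilityBoundingSet=
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
SystemCallFilter=@system-service

After a systemctl daemon-reload and a service restart, “systemd-analyze security foo.service” should show the improved score.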

I created a Debian wiki page for a systemd-analyze security goal [2]. At this time release goals aren’t a serious thing for Debian so this won’t result in release critical bug reports, but it is still something we can aim for.

For a simple daemon (e.g. BIND, dhcpd, and syslogd) this isn’t difficult to do. It might be difficult to understand the implications of some changes (especially when restricting system calls) but you can do some quick tests. The functionality of such programs has a limited scope and once you get it basically working it’s done.

For some daemons it’s harder. Network-Manager is one of the well known slightly more difficult cases as it could do things like starting a VPN connection. The larger scope and the use of plugins make it difficult to test the combinations. The systemd restrictions apply to child processes too, unlike SE Linux and AppArmor restrictions, which permit a child process to run in a different security context.

The messages when a daemon fails due to systemd restrictions are usually unclear, which makes things harder to set up and makes it more important to get it right.

My “mon” package (which I forked upstream as etbe-mon [3]) is one of the difficult daemons, as local tests can involve probing large parts of the system. But I have got that working reasonably well for most cases.

I have a bug report about running mon with Exim [4]. The problem with this is that Exim has a single process model which means that the process doing local delivery can be a child of the process that initially received the message. So the main mon process needs all the access for delivering mail (writing to /home etc). This also means that every other child of mon will get such access including programs that receive untrusted data from the Internet. Most of the extra access needed by Exim is not a problem, but /home access is a potential risk. It also means that more effort is needed when reviewing the access control.

The problem with this Exim design is that it applies to many daemons. Every daemon that sends email or that potentially could send email in some configuration needs extra access to be granted.

Can Exim be configured to have its “sendmail -T” type operation just write a file in a spool directory for another program to process? Do we need to grant permissions to most of the system just for Exim?

17 January, 2025 04:50AM by etbe

Reproducible Builds (diffoscope)

diffoscope 285 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 285. This version includes the following changes:

[ Chris Lamb ]
* Validate --css command-line argument. Thanks to Daniel Schmidt @ SRLabs for
  the report. (Closes: #396)
* Prevent XML entity expansion attacks through vulnerable versions of
  pyexpat. Thanks to Florian Wilkens @ SRLabs for the report. (Closes: #397)
* Print a warning if we have disabled XML comparisons due to a potentially
  vulnerable version of pyexpat.
* Remove (unused) logging facility from a few comparators.
* Update copyright years.

You can find out more by visiting the project homepage.

17 January, 2025 12:00AM

Michael Ablassmeier

sshcont

Due to circumstances:

sshcont: ssh daemon that starts and enters a throwaway docker container for testing

17 January, 2025 12:00AM

January 16, 2025

Sergio Talens-Oliag

Command line tools to process templates

I’ve always been a fan of template engines that work with text files, mainly to work with static site generators, but also to generate code, configuration files, and other text-based files.

For my own web projects I used to go with Jinja2, as all my projects were written in Python, while for static web sites I used the template engines included with the tools I was using, i.e. Liquid with Jekyll and Go Templates (based on the text/template and the html/template go packages) for Hugo.

When I needed to generate code snippets or configuration files from shell scripts I used to go with sed and/or envsubst, but lately things got complicated and I started to use a command line application called tmpl that uses the Go Template Language with functions from the Sprig library.
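
The envsubst approach really is as lightweight as it sounds; it simply substitutes environment variable references in whatever it reads from standard input:

printf 'listen_port: $PORT\n' | PORT=8080 envsubst
# prints: listen_port: 8080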

tmpl

I’ve been using my fork of the tmpl program to process templates on CI/CD pipelines (gitlab-ci) to generate configuration files and code snippets because it uses the same syntax used by helm (easier to use by other DevOps already familiar with the format) and the binary is small and can be easily included into the docker images used by the pipeline jobs.

One interesting feature of the tmpl tool is that it can read values from command line arguments and from multiple files in different formats (YAML, JSON, TOML, etc) and merge them into a single object that can be used to render the templates.

There are alternatives to the tmpl tool and I’ve looked at them (e.g. simple ones like go-template-cli or complex ones like gomplate), but I haven’t found one that fits my needs.

For my next project I plan to evaluate a move to a different tool or template format, as tmpl is not being actively maintained (as I said, I’m using my own fork) and it is not included in existing GNU/Linux distributions (I packaged it for Debian and Alpine, but I don’t want to maintain something like that without an active community and I’m not interested in being the upstream myself, as I’m trying to move to Rust instead of Go as the compiled programming language for my projects).

Mini Jinja

Looking for alternate tools to process templates on the command line I found the minijinja rust crate, a minimal implementation of the Jinja2 template engine that also includes a small command line utility (minijinja-cli), and I believe I’ll give it a try in the future for various reasons:

  • I’m already familiar with the Jinja2 syntax and it is widely used in the industry.
  • On my code I can use the original Jinja2 module for Python projects and MiniJinja for Rust programs.
  • The included command line utility is small and easy to use, and the binaries distributed by the project are good enough to add them to the docker container images used by CI/CD pipelines.
  • As I want to move to Rust I can try to add functionality to the existing command line client or create my own version of it if needed (I don’t think so, but who knows).
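
If it works the way I expect, basic usage should look something like this (the file names here are made up; minijinja-cli takes a template and an optional data file):

minijinja-cli config.yaml.j2 values.json > config.yaml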

16 January, 2025 09:34AM

January 15, 2025

Dirk Eddelbuettel

RcppFastFloat 0.0.5 on CRAN: New Upstream, Updates

A new minor release of RcppFastFloat just arrived on CRAN. The package wraps fast_float, another nice library by Daniel Lemire. For details, see the arXiv preprint or published paper showing that one can convert character representations of ‘numbers’ into floating point at rates at or exceeding one gigabyte per second.

This release updates the underlying fast_float library version to the current version 7.0.0, and updates a few packaging aspects.

Changes in version 0.0.5 (2025-01-15)

  • No longer set a compilation standard

  • Updates to continuous integration, badges, URLs, DESCRIPTION

  • Update to fast_float 7.0.0

  • Per CRAN Policy comment-out compiler 'diagnostic ignore' instances

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

15 January, 2025 01:05PM

Thomas Lange

FAI 6.2.5 and new ISO available

The new year starts with a FAI release. FAI 6.2.5 is available and contains many small improvements. A new feature is that the command fai-cd can now create ISOs for the ARM64 architecture.

The FAIme service uses the newest FAI version and the most recent Debian point release, 12.9. The FAI CD images were also updated. The Debian packages of FAI 6.2.5 are available for Debian stable (aka bookworm) via the FAI repository by adding this line to sources.list:

deb https://fai-project.org/download bookworm koeln

Using the tool extrepo, you can also add the FAI repository to your host:

# extrepo enable fai

FAI 6.2.5 will soon be available in Debian testing via the official Debian mirrors.

15 January, 2025 10:41AM

January 14, 2025

Dirk Eddelbuettel

RProtoBuf 0.4.23 on CRAN: Multiple Updates

A new maintenance release 0.4.23 of RProtoBuf arrived on CRAN earlier today, about one year after the previous update. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release brings a number of contributed PRs which are truly appreciated. As the package dates back fifteen-plus years, some code corners can be crufty, which was addressed in several PRs, as were two updates for ongoing changes / new releases of ProtoBuf itself. I also made the usual changes one does to continuous integration, README badges and URLs, as well as correcting one issue the checkbashisms script complained about.

The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.23 (2025-01-14)

  • More robust tests using toTextFormat() (Xufei Tan in #99 addressing #98)

  • Various standard packaging updates to CI and badges (Dirk)

  • Improvements to string construction in error messages (Michael Chirico in #102 and #103)

  • Accommodate ProtoBuf 26.x and later (Matteo Gianella in #104)

  • Accommodate ProtoBuf 6.30.9 and later (Lev Kandel in #106)

  • Correct bashism issues in configure.ac (Dirk)

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

14 January, 2025 11:04PM

Louis-Philippe Véronneau

Montreal Subway Foot Traffic Data, 2024 edition

Another year of data from Société de Transport de Montréal, Montreal's transit agency!

A few highlights this year:

  1. The closure of the Saint-Michel station had a drastic impact on D'Iberville, the station closest to it.

  2. The opening of the Royalmount shopping center nearly doubled the traffic of the De La Savane station.

  3. The Montreal subway continues to grow, but has not yet recovered from the pandemic. Berri-UQAM station (the largest one) is still below 1 million entries per quarter compared to its pre-pandemic record.

By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

Licences

  • The subway map displayed on this page, the original dataset and my modified dataset are licenced under CC0 1.0: they are in the public domain.

  • The R code I wrote is licensed under the GPLv3+. It's pretty much the same code as last year. STM apparently changed (again!) the way they are exporting data, and since it's now somewhat sane, I didn't have to rely on a converter script.

14 January, 2025 05:00AM by Louis-Philippe Véronneau

January 13, 2025

Kentaro Hayashi

Stick to booting from the 6.11 linux-image

Since Dec 2024, there has been a compatibility issue with linux-image 6.12 and nvidia-driver 535.216.03.

#1089513 - nvidia-driver: crash in drm_open_helper on Linux 6.12.3 - Debian Bug report logs

It seems that upstream has already fixed this issue in a newer release, but it is not yet available on Debian sid.

If you keep linux-image-amd64 or linux-headers-amd64 installed, the system will boot from 6.12 by default. Surely it will boot, but it still has the resolution issue, so it can't be your daily driver.

Thus the workaround is to stick to booting from the 6.11 linux-image. As usual, the older image is listed in the "Advanced options for ..." submenu, so you need to explicitly choose the 6.11 image during boot. In such a case, changing the default boot image is useful. (Another option is to just purge all 6.12 linux-image, linux-image-amd64 and linux-headers-amd64 packages, but that's out of scope for this article.)

See /boot/grub/grub.cfg and collect each menuentry_id_option.

You need to collect the following menuentry ids:

  • submenu menuentry's id (It might be 'gnulinux-advanced-.....') [1]
  • 6.11 kernel's menuentry id which you want to boot by default. (It might be 'gnulinux-6.11.10-amd64-advanced-...') [2]
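
A quick way to list the candidates is to extract the last quoted string from every line of grub.cfg that mentions menuentry_id_option, for example:

awk -F"'" '/menuentry_id_option/ {print $(NF-1)}' /boot/grub/grub.cfg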

Then, edit GRUB_DEFAULT entry in /etc/default/grub.

GRUB_DEFAULT should be the combination of [1] and [2], concatenated with ">".

For example (note: the actual value might vary on your environment):

GRUB_DEFAULT="gnulinux-advanced-2e0e7dd0-7e10-4b58-92f7-518dabb5b547>gnulinux-6.11.10-amd64-advanced-2e0e7dd0-7e10-4b58-92f7-518dabb5b547"

After that, update the grub configuration with sudo update-grub.

/boot/grub/grub.cfg will be updated correctly.

13 January, 2025 08:24AM

Freexian Collaborators

Monthly report about Debian Long Term Support, December 2024 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In December, 19 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 14.0h (out of 14.0h assigned).
  • Adrian Bunk did 47.75h (out of 53.0h assigned and 47.0h from previous period), thus carrying over 52.25h to the next month.
  • Andrej Shadura did 6.0h (out of 17.0h assigned and -7.0h from previous period after hours given back), thus carrying over 4.0h to the next month.
  • Bastien Roucariès did 22.0h (out of 22.0h assigned).
  • Ben Hutchings did 15.0h (out of 0.0h assigned and 18.0h from previous period), thus carrying over 3.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 23.0h (out of 17.0h assigned and 9.0h from previous period), thus carrying over 3.0h to the next month.
  • Emilio Pozuelo Monfort did 32.25h (out of 40.5h assigned and 19.5h from previous period), thus carrying over 27.75h to the next month.
  • Guilhem Moulin did 22.5h (out of 9.75h assigned and 12.75h from previous period).
  • Jochen Sprickerhof did 2.0h (out of 3.5h assigned and 6.5h from previous period), thus carrying over 8.0h to the next month.
  • Lee Garrett did 8.5h (out of 14.75h assigned and 45.25h from previous period), thus carrying over 51.5h to the next month.
  • Lucas Kanashiro did 32.0h (out of 10.0h assigned and 54.0h from previous period), thus carrying over 32.0h to the next month.
  • Markus Koschany did 40.0h (out of 20.0h assigned and 20.0h from previous period).
  • Roberto C. Sánchez did 13.5h (out of 6.75h assigned and 17.25h from previous period), thus carrying over 10.5h to the next month.
  • Santiago Ruano Rincón did 18.75h (out of 24.75h assigned and 0.25h from previous period), thus carrying over 6.25h to the next month.
  • Sean Whitton did 6.0h (out of 2.0h assigned and 4.0h from previous period).
  • Sylvain Beucler did 10.5h (out of 21.5h assigned and 38.5h from previous period), thus carrying over 49.5h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).

Evolution of the situation

In December, we have released 29 DLAs.

The LTS Team has published updates to several notable packages. Contributor Guilhem Moulin published an update of php7.4, a widely-used open source general purpose scripting language, which addressed denial of service, authorization bypass, and information disclosure vulnerabilities. Contributor Lucas Kanashiro published an update of clamav, an antivirus toolkit for Unix and Linux, which addressed denial of service and authorization bypass vulnerabilities. Finally, contributor Tobias Frost published an update of intel-microcode, the microcode for Intel microprocessors, which will help to ensure that processor hardware is protected against several local privilege escalation and local denial of service vulnerabilities.

Beyond our customary LTS package updates, the LTS Team has made contributions to Debian’s stable bookworm release and its experimental section. Notably, contributor Lee Garrett published a stable update of dnsmasq. The LTS update was previously published in November, and in December Lee continued working to bring the same fixes (addressing the high profile KeyTrap and NSEC3 vulnerabilities) to the dnsmasq package in Debian bookworm. This package was accepted for inclusion in the Debian 12.9 point release scheduled for January 2025. Additionally, contributor Sean Whitton provided assistance, via upload sponsorships, to the Debian maintainers of xen. This assistance resulted in two uploads of xen into Debian’s experimental section, which will contribute to the next Debian stable release having a version of xen with better long-term support from the upstream development team.

Thanks to our sponsors

Sponsors that joined recently are in bold.

13 January, 2025 12:00AM by Roberto C. Sánchez

January 12, 2025

Divine Attah-Ohiemi

My 30-Day Outreachy Experience with the Debian Community

Hey everyone! It’s Divine Attah-Ohiemi here, and I’m excited to share what I’ve been up to in my internship with the Debian community. It’s been a month since I began this journey, and if you’re thinking about applying for Outreachy, let me give you a glimpse into my project and the amazing people I get to work with.

So, what’s it like in the Debian community? It’s a fantastic mix of folks from all walks of life—seasoned developers, curious newbies, and everyone in between. What really stands out is how welcoming everyone is. I’m especially thankful to my mentors, Thomas Lange, Carsten Schoenert, and Subin Siby, for their guidance and for always clocking in whenever I have questions. It feels like a big family where you can share your ideas and learn from each other. The commitment to diversity and merit is palpable, making it a great place for anyone eager to jump in and contribute.

Now, onto the project! We’re working on improving the Debian website by switching from WML (Web Meta Language) to Hugo, a modern static site generator. This change doesn’t just make the site faster; it significantly reduces the time it takes to build compared to WML. Plus, it makes it way easier for non-developers to contribute and add pages since the content is built from Markdown files. It’s all about enhancing the experience for both new and existing users.

My role involves developing a proof of concept for this transition. I’m migrating existing pages while ensuring that old links still work, so users won’t run into dead ends. It’s a bit of a juggling act, but knowing that my work is helping to make Debian more accessible is incredibly rewarding.

What gets me most excited is the chance to contribute to a project that’s been around for over 20 years! It’s an honor to be part of something so significant and to help shape its future. How cool is it to know that what I’m doing will impact users around the globe?

In the past month, I’ve learned a bunch of new things. For instance, I’ve been diving into Apache's mod_rewrite to automatically map old multilingual URLs to new ones. This is important since Hugo handles localization differently than WML. I’ve also been figuring out how to set up 301 redirects to prevent dead links, which is crucial for a smooth user experience.
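
Such a rule could look roughly like this (a hypothetical sketch; the real paths and patterns on the Debian site differ):

# map an old language-suffixed WML URL to a Hugo-style language-prefixed one
RewriteEngine on
RewriteRule ^intro/about\.([a-z]{2})\.html$ /$1/intro/about/ [R=301,L]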

One of the more confusing parts has been using GNU Make to manage Perl scripts for dynamic pages. It’s a bit of a learning curve, but I’m tackling it head-on. Each challenge is a chance to grow, and I’m here for it!

If you’re considering applying to the Debian community through Outreachy, I say go for it! There’s so much to learn and experience, and you’ll be welcomed with open arms. Happy coding, everyone! 🌟

12 January, 2025 11:51PM by Divine Attah-Ohiemi

Dirk Eddelbuettel

Rcpp 1.0.14 on CRAN: Regular Semi-Annual Update

The Rcpp Core Team is once again thrilled, pleased, and chuffed (am I doing this right for LinkedIn?) to announce a new release (now at 1.0.14) of the Rcpp package. It arrived on CRAN earlier today, and has since been uploaded to Debian. Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions, and of course r2u should catch up tomorrow too. The release was only uploaded yesterday, and as always got flagged because of the grandfathered .Call(symbol) as well as for the url to the Rcpp book (which has remained unchanged for years) ‘failing’. My email reply was promptly dealt with under European morning hours and by the time I got up the submission was in state ‘waiting’ over a single reverse-dependency failure which … is also spurious, appears on some systems and not others, and is also not new. Imagine that: nearly 3000 reverse dependencies and only one (spurious) change for the worse. Solid testing seems to help. My thanks as always to the CRAN team for responding promptly.

This release continues with the six-months January-July cycle started with release 1.0.5 in July 2020. This time we also needed a one-off hotfix release 1.0.13-1: we had (accidentally) conditioned an upcoming R change on 4.5.0, but it already came with 4.4.2 so we needed to adjust our code. As a reminder, we do of course make interim snapshot ‘dev’ or ‘rc’ releases available via the Rcpp drat repo as well as the r-universe page and repo and strongly encourage their use and testing—I run my systems with these versions which tend to work just as well, and are also fully tested against all reverse-dependencies.

Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 2977 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 60.8% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 93.7 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 1947 (JSS, 2011) and 354 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 676.

This release is primarily incremental as usual, generally preserving existing capabilities faithfully while smoothing out corners and / or extending slightly, sometimes in response to changing and tightened demands from CRAN or R standards. The move towards a more standardized approach for the C API of R once again led to a few changes; Kevin once again did most of these PRs. Other contributed PRs include Gábor permitting builds on yet another BSD variant, Simon Guest correcting sourceCpp() to work on read-only files, Marco Colombo correcting a (surprisingly large) number of vignette typos, Iñaki rebuilding some documentation files that tickled (false) alerts, and I took care of a number of other maintenance items along the way.

The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.0.14 (2025-01-11)

  • Changes in Rcpp API:

    • Support for user-defined databases has been removed (Kevin in #1314 fixing #1313)

    • The SET_TYPEOF function and macro is no longer used (Kevin in #1315 fixing #1312)

    • An erroneous cast to int affecting large return objects has been removed (Dirk in #1335 fixing #1334)

    • Compilation on DragonFlyBSD is now supported (Gábor Csárdi in #1338)

    • Use read-only VECTOR_PTR and STRING_PTR only with R 4.5.0 or later (Kevin in #1342 fixing #1341)

  • Changes in Rcpp Attributes:

    • The sourceCpp() function can now handle input files with read-only modes (Simon Guest in #1346 fixing #1345)
  • Changes in Rcpp Deployment:

    • One unit test for arm64 macOS has been adjusted; a macOS continuous integration runner was added (Dirk in #1324)

    • Authors@R is now used in DESCRIPTION as mandated by CRAN, the Rcpp.package.skeleton() function also creates it (Dirk in #1325 and #1327)

    • A single datetime format test has been adjusted to match a change in R-devel (Dirk in #1348 fixing #1347)

  • Changes in Rcpp Documentation:

    • The Rcpp Modules vignette was extended slightly following #1322 (Dirk)

    • Pdf vignettes have been regenerated under Ghostscript 10.03.1 to avoid a false positive by a Windows virus scanner (Iñaki in #1331)

    • A (large) number of (old) typos have been corrected in the vignettes (Marco Colombo in #1344)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

12 January, 2025 07:24PM

Bastian Venthur

Investigating the popularity of Python build backends over time (II)

Last year, I analyzed the popularity of build backends used in pyproject.toml files over time. This post is the update for 2024.

Analysis

Like last year, I’m using Tom Forbes’ fantastic dataset containing information about every file within every release uploaded to PyPI. To get the current dataset, I followed the same process as in last year’s analysis, so I won’t repeat all the details here. Instead, I’ll highlight the main steps:

  • Download the parquet files from the dataset
  • Use DuckDB to query the parquet files, extracting the project name, upload date, the pyproject.toml file, and its hash for each upload (a query sketch follows this list)
  • Download each pyproject.toml file and extract the build backend. To avoid redundant downloads, I stored a mapping from each file hash to its respective build backend
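
The DuckDB step looks roughly like this (a sketch; the parquet file glob and the column names are my assumptions about the dataset’s schema, not verified against it):

echo "
SELECT project_name, uploaded_on, path, hash
FROM 'index-*.parquet'
WHERE path LIKE '%/pyproject.toml';
" | duckdb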

Downloading all the parquet files took roughly a week due to GitHub’s rate limiting. Tom suggested leveraging the Git v2 protocol to fetch the data directly. This approach could bypass rate limiting and complete the download of all pyproject.toml files in just 20 minutes(!). However, I couldn’t find sufficient documentation that would help me to implement this method, so this will have to wait until next year’s analysis.

Once all the data was downloaded, I performed some preprocessing:

  • Grouped the top 4 build backends by their absolute number of uploads and categorized the remaining ones as “other”
  • Binned upload dates into quarters to reduce clutter in the resulting graphs

Results

I modified the plots a bit from last year to make them easier to read. Most notably, I binned the data into quarters to make the plots less noisy, and secondly, I stopped stacking the relative distribution plots to make the percentages directly readable.

The first plot shows the absolute number of uploads (in thousands) by quarter and build backend.

Absolute distribution of build backends by quarter

The second plot shows the relative distribution of build backends by quarter.

Relative distribution of build backends by quarter

In 2024, we observe that:

  • Setuptools continues to grow in absolute numbers and remains around the 50% mark in relative distribution
  • Poetry maintains a 30% relative distribution, but the trend has been declining since 2024-Q3. Preliminary data for 2025-Q1 (not shown here) supports this, suggesting that Poetry might be surpassed by Hatch in 2025, which showed remarkable growth last year.
  • Flit is the only build backend in this analysis whose absolute and relative numbers decreased in 2024. With a 5% relative distribution, it underlines the dominance of Setuptools, Poetry, and Hatch over the remaining build backends.

The script for downloading and analyzing the data is available in my GitHub repository. If someone has insights or examples on implementing the Git v2 protocol to download the pyproject.toml file given the repository URL and its hash, I’d love to hear from you!

12 January, 2025 03:00PM by Bastian Venthur

Sahil Dhiman

Prosody Certificate Management With Nginx and Certbot

I have a self-hosted XMPP chat server through Prosody. Earlier, I struggled with certificate renewal and generation for Prosody because I have Nginx (and a bunch of other services) running on the same server which binds to Port 80. Due to this, Certbot wasn’t able to auto-renew (through HTTP validation) for domains managed by Prosody.

Now, I have cobbled together a solution to keep both Nginx and Prosody happy. This is how I did it:

  • Expose /.well-known/acme-challenge through Nginx for Prosody domain. Nginx config looked like this:
server {
      listen 80;
      listen [::]:80;
      server_name PROSODY.DOMAIN;
      root <ANY_NGINX_WRITABLE_LOCATION>;

      location ~ /.well-known/acme-challenge {
         allow all;
      }
}
  • Run certbot to get certificates for <PROSODY.DOMAIN> (a sample invocation is sketched after this list).
  • To use those in Prosody, add a cron entry for root user:
0 0 * * * prosodyctl --root cert import /etc/letsencrypt/live/PROSODY.DOMAIN

Explanation from Prosody docs:

Certificates and their keys are copied to /etc/prosody/certs (can be changed with the certificates option) and then it signals Prosody to reload itself. --root lets prosodyctl write to paths that may not be writable by the prosody user, as is common with /etc/prosody.

  • Certbot now manages auto-renewal as well, and we’re all set.
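
For reference, the certbot run in the second step might look like this (a sketch; substitute your own webroot path and domain):

certbot certonly --webroot -w <ANY_NGINX_WRITABLE_LOCATION> -d PROSODY.DOMAIN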

12 January, 2025 04:50AM

January 11, 2025

Andrew Cater

20250111 Release media testing for Debian 12.9

We're part way through the testing of release media: RattusRattus, Isy, Sledge, smcv and Helen in Cambridge; a new tester, Blew, in Manchester; another new tester, MerCury[m]; and also highvoltage in South Africa.

Everything is going well so far and we're chasing through the test schedule.

Sorry not to be there in Cambridgeshire with friends - but the room is fairly small and busy :) 


[UPDATE/EDIT - at 20250111 1701 - we're pretty much complete on the testing]

11 January, 2025 05:59PM by Andrew Cater

January 10, 2025

Dirk Eddelbuettel

nanotime 0.3.11 on CRAN: Polish

Another minor update 0.3.11 for our nanotime package is now on CRAN. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release covers two corner cases. Michael sent in a PR avoiding a clang warning on complex types. We fixed an issue that surfaced in a downstream package under sanitizer checks: R extends coverage of NA to types such as integer or character, which need special treatment in non-R library code as ‘they do not know’ about NA. We flagged (character) formatted values after we had called the corresponding CCTZ function, but that leaves potentially ‘undefined’ values (from R’s NA values for int, say, cast to double), so now we flag them, set a transient safe value for the call, and inject the (character) representation "NA" after the call in those spots. The end result is the same, but without a possible slap on the wrist from sanitizer checks.

The NEWS snippet below has the full details.

Changes in version 0.3.11 (2025-01-10)

  • Explicit Rcomplex assignment accommodates pickier compilers over newer R struct (Michael Chirico in #135 fixing #134)

  • When formatting, NA values are flagged before the CCTZ call so as not to trigger the sanitizer, and set to NA after the call (Dirk in #136)

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

10 January, 2025 10:50PM

Sergio Talens-Oliag

Testing New User Tools

On recent weeks I’ve had some time to scratch my own itch on matters related to tools I use daily on my computer, namely the desktop / window manager and my text editor of choice.

This post is a summary of what I tried, how it worked out and my short and medium-term plans related to them.

Desktop / WM

On the desktop / window manager front I’ve been using Cinnamon on Debian and Ubuntu systems since Gnome 3 was published (I never liked version 3, so I decided to move to something similar to Gnome 2, including the keyboard shortcuts).

In fact I’ve never been a fan of Desktop environments; before Gnome I used OpenBox and IceWM because they were a lot faster than desktop systems on my hardware at the time, and I was using them only to place one or two windows on multiple workspaces, using mainly the keyboard for my interactions (well, except for the web browsers and the image manipulation programs).

Although I was comfortable using Cinnamon, some years ago I tried to move to i3, a tiling window manager for X11 that looked like a good choice for me, but I didn’t have much time to play with it and never used it enough to make me productive with it (I didn’t prepare a complete configuration nor had enough time to learn the new shortcuts, so I went back to Cinnamon and never tried again).

Anyway, some weeks ago I updated my work machine OS (it was using Ubuntu 22.04 LTS and I updated it to the 24.04 LTS version) and the Cinnamon systray applet stopped working as it used to (in fact I still have to restart Cinnamon after starting a session to make it work). As I had some time, I decided to try a tiling window manager again, but this time I went for SwayWM, as it uses Wayland instead of X11.

Sway configuration

In my ~/.config/sway/config I tuned some things:

  • Set fuzzel as the application launcher.
  • Manually installed the shikane application and created a configuration to be executed whenever sway is started / reloaded (I adjusted my configuration with wdisplays and used shikanectl to save it).
  • Added support for running the xdg-desktop-portal-wlr service.
  • Enabled the swayidle command to lock the screen after some time of inactivity.
  • Adjusted the keyboard to use the es key map.
  • Added some keybindings to make my life easier, including the use of grim and swappy to take screenshots.
  • Configured waybar as the environment bar.
  • Added a shell script to start applications when sway is started (it uses swaymsg to execute background commands and the i3toolwait script to wait for the application windows to appear before continuing):

    #!/bin/sh
    
    # VARIABLES
    CHROMIUM_LOCAL_STATE="$HOME/.config/google-chrome/Local State"
    I3_TOOLWAIT="$HOME/.config/sway/scripts/i3-toolwait"
    
    # Functions
    chromium_profile_dir() {
      jq -r ".profile.info_cache|to_entries|map({(.value.name): .key})|add|.\"$1\" // \"\"" "$CHROMIUM_LOCAL_STATE"
    }
    
    # MAIN
    IGZ_PROFILE_DIR="$(chromium_profile_dir "sergio.talens@intelygenz.com")"
    OURO_PROFILE_DIR="$(chromium_profile_dir "sergio.talens@nxr.global")"
    PERSONAL_PROFILE_DIR="$(chromium_profile_dir "stalens@gmail.com")"
    
    # Common programs
    swaymsg "exec nextcloud --background"
    swaymsg "exec nm-applet"
    
    # Run spotify on the first workspace (it is mapped to the laptop screen)
    swaymsg -q "workspace 1"
    ${I3_TOOLWAIT} "spotify"
    
    # Run tmux on the second workspace
    swaymsg -q "workspace 2"
    ${I3_TOOLWAIT} -- foot tmux a -dt sto
    
    wp_num="3"
    if [ "$OURO_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m ouro-browser -- google-chrome --profile-directory="$OURO_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    
    if [ "$IGZ_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m igz-browser -- google-chrome --profile-directory="$IGZ_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    
    if [ "$PERSONAL_PROFILE_DIR" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} -m personal-browser -- google-chrome --profile-directory="$PERSONAL_PROFILE_DIR"
      wp_num="$((wp_num+1))"
    fi
    
    # Open the browser without setting the profile directory if none was found
    if [ "$wp_num" = "3" ]; then
      swaymsg -q "workspace $wp_num"
      ${I3_TOOLWAIT} google-chrome
      wp_num="$((wp_num+1))"
    fi
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} evolution
    wp_num="$((wp_num+1))"
    
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} slack
    wp_num="$((wp_num+1))"
    
    # Open a private browser and a console in the last workspace
    swaymsg -q "workspace $wp_num"
    ${I3_TOOLWAIT} -- google-chrome --incognito
    ${I3_TOOLWAIT} foot
    
    # Go back to the second workspace for keepassxc
    swaymsg "workspace 2"
    ${I3_TOOLWAIT} keepassxc

Conclusion

After using Sway for some days I can confirm that it is a good choice for me, but some of the components needed to make it work as I want are too new and not available on the Ubuntu 24.04 LTS repositories, so I decided to go back to Cinnamon and try Sway again in the future. In the meantime I have added more workspaces to my setup (now they are only available on the main monitor; the laptop screen is fixed while there is a big monitor connected), added some additional keyboard shortcuts and installed or updated some applets.

Text editor

When I started using Linux many years ago I used vi/vim and emacs as my text editors (vi for plain text and emacs for programming and editing HTML/XML), but eventually I moved to vim as my main text editor and I’ve been using it since (well, I moved to neovim some time ago, although I kept my old vim configuration).

To be fair I’m not as expert as I could be with vim, but I’m productive with it and it has many plugins that make my life easier on my machines, while keeping my ability to edit text and configurations on any system that has a vi compatible editor installed.

For work reasons I tried to use Visual Studio Code last year, but I’ve never really liked it, and almost everything I do with it I can do with neovim (e.g. I even use Copilot with it). Besides, I’m a heavy terminal user (I use tmux locally and via ssh) and I like to be able to use my text editor on my shell sessions, and Code does not work like that.

The only annoying thing about vim/neovim is its configuration (well, the problem is that I have a very old one and probably should spend some time fixing and updating it), but, as I said, it’s been working well for me for a long time, so I never really had the motivation to do it.

Anyway, after finishing my desktop tests I saw that I had the Helix editor installed for some time but I never tried it, so I decided to give it a try and see if it could be a good replacement for neovim on my environments (the only drawback is that as it is not vi compatible, I would need to switch back to vi mode when working on remote systems, but I guess I could live with that).

I ran the helix tutorial and I liked it, so I decided to install and configure the Language Servers I can probably take advantage of in my daily work on my personal and work machines and see how it works.

Language server installations

A lot of manual installations are needed to get the language servers working; what I did on my machines is more or less the following:

# AWK
sudo npm i -g 'awk-language-server@>=0.5.2'
# BASH
sudo apt-get install shellcheck shfmt
sudo npm i -g bash-language-server
# C/C++
sudo apt-get install clangd
# CSS, HTML, ESLint, JSON, SCSS
sudo npm i -g vscode-langservers-extracted
# Docker
sudo npm install -g dockerfile-language-server-nodejs
# Docker compose
sudo npm install -g @microsoft/compose-language-service
# Helm
app="helm_ls_linux_amd64"
url="$(
  curl -s https://api.github.com/repos/mrjosh/helm-ls/releases/latest |
    jq -r ".assets[] | select(.name == \"$app\") | .browser_download_url"
)"
curl -L "$url" --output /tmp/helm_ls
sudo install /tmp/helm_ls /usr/local/bin
rm /tmp/helm_ls
# Markdown
app="marksman-linux-x64"
url="$(
  curl -s https://api.github.com/repos/artempyanykh/marksman/releases/latest |
    jq -r ".assets[] | select(.name == \"$app\") | .browser_download_url"
)"
curl -L "$url" --output /tmp/marksman
sudo install /tmp/marksman /usr/local/bin
rm /tmp/marksman
# Python
sudo npm i -g pyright
# Rust
rustup component add rust-analyzer
# SQL
sudo npm i -g sql-language-server
# Terraform
sudo apt-get install terraform-ls
# TOML
cargo install taplo-cli --locked --features lsp
# YAML
sudo npm install --global yaml-language-server
# JavaScript, TypeScript
sudo npm install -g typescript-language-server typescript
sudo npm install -g --save-dev --save-exact @biomejs/biome
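
Once the servers are installed, helix can report which of them it actually finds on the PATH:

# overall status of languages and their language servers
hx --health
# status for a single language
hx --health yaml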

Helix configuration

The helix configuration is done in a couple of TOML files that are placed in the ~/.config/helix directory; the config.toml file I used is this one:

theme = "solarized_light"

[editor]
line-number = "relative"
mouse = false

[editor.statusline]
left = ["mode", "spinner"]
center = ["file-name"]
right = ["diagnostics", "selections", "position", "file-encoding", "file-line-ending", "file-type"]
separator = "│"
mode.normal = "NORMAL"
mode.insert = "INSERT"
mode.select = "SELECT"

[editor.cursor-shape]
insert = "bar"
normal = "block"
select = "underline"

[editor.file-picker]
hidden = false

[editor.whitespace]
render = "all"

[editor.indent-guides]
render = true
character = "╎" # Some characters that work well: "▏", "┆", "┊", "⸽"
skip-levels = 1

And to configure the language servers I used the following languages.toml file:

[[language]]
name = "go"
auto-format = true
formatter = { command = "goimports" }

[[language]]
name = "javascript"
language-servers = [
  "typescript-language-server", # optional
  "vscode-eslint-language-server",
]

[language-server.rust-analyzer.config.check]
command = "clippy"

[language-server.sql-language-server]
command = "sql-language-server"
args = ["up", "--method", "stdio"]

[[language]]
name = "sql"
language-servers = [ "sql-language-server" ]

[[language]]
name = "hcl"
language-servers = [ "terraform-ls" ]
language-id = "terraform"

[[language]]
name = "tfvars"
language-servers = [ "terraform-ls" ]
language-id = "terraform-vars"

[language-server.terraform-ls]
command = "terraform-ls"
args = ["serve"]

[[language]]
name = "toml"
formatter = { command = "taplo", args = ["fmt", "-"] }

[[language]]
name = "typescript"
language-servers = [
  "typescript-language-server",
  "vscode-eslint-language-server",
]

Neovim configuration

After a little while I noticed that I was going to need some time to get used to helix, and that the most interesting things for me were the easy configuration and the language server integrations; but as I am already comfortable with neovim and had just installed the language server support tools on my machines, I just need to configure them for neovim and I can keep using it for a while.

As I said, my configuration is old; to configure neovim I have the following init.vim file in my ~/.config/nvim folder:

set runtimepath^=~/.vim runtimepath+=~/.vim/after
let &packpath=&runtimepath
source ~/.vim/vimrc
" load lua configuration
lua require('config')

With that configuration I keep my old vimrc (it is a little bit messy, but it works) and I use a lua configuration file for the language servers and some additional neovim plugins in the ~/.config/nvim/lua/config.lua file:

-- -----------------------
-- BEG: LSP Configurations
-- -----------------------
-- AWK (awk_ls)
require'lspconfig'.awk_ls.setup{}
-- Bash (bashls)
require'lspconfig'.bashls.setup{}
-- C/C++ (clangd)
require'lspconfig'.clangd.setup{}
-- CSS (cssls)
require'lspconfig'.cssls.setup{}
-- Docker (dockerls)
require'lspconfig'.dockerls.setup{}
-- Docker Compose
require'lspconfig'.docker_compose_language_service.setup{}
-- Golang (gopls)
require'lspconfig'.gopls.setup{}
-- Helm (helm_ls)
require'lspconfig'.helm_ls.setup{}
-- Markdown
require'lspconfig'.marksman.setup{}
-- Python (pyright)
require'lspconfig'.pyright.setup{}
-- Rust (rust-analyzer)
require'lspconfig'.rust_analyzer.setup{}
-- SQL (sqlls)
require'lspconfig'.sqlls.setup{}
-- Terraform (terraformls)
require'lspconfig'.terraformls.setup{}
-- TOML (taplo)
require'lspconfig'.taplo.setup{}
-- Typescript (ts_ls)
require'lspconfig'.ts_ls.setup{}
-- YAML (yamlls)
require'lspconfig'.yamlls.setup{
  settings = {
    yaml = {
      customTags = { "!reference sequence" }
    }
  }
}
-- -----------------------
-- END: LSP Configurations
-- -----------------------
-- ---------------------------------
-- BEG: Autocompletion configuration
-- ---------------------------------
-- Ref: https://github.com/neovim/nvim-lspconfig/wiki/Autocompletion
--
-- Pre requisites:
--
--   # Packer
--   git clone --depth 1 https://github.com/wbthomason/packer.nvim \
--      ~/.local/share/nvim/site/pack/packer/start/packer.nvim
--
--   # Start nvim and run :PackerSync or :PackerUpdate
-- ---------------------------------
local use = require('packer').use
require('packer').startup(function()
  use 'wbthomason/packer.nvim' -- Packer, useful to avoid removing it with PackerSync / PackerUpdate
  use 'neovim/nvim-lspconfig' -- Collection of configurations for built-in LSP client
  use 'hrsh7th/nvim-cmp' -- Autocompletion plugin
  use 'hrsh7th/cmp-nvim-lsp' -- LSP source for nvim-cmp
  use 'saadparwaiz1/cmp_luasnip' -- Snippets source for nvim-cmp
  use 'L3MON4D3/LuaSnip' -- Snippets plugin
end)
-- Add additional capabilities supported by nvim-cmp
local capabilities = require("cmp_nvim_lsp").default_capabilities()
local lspconfig = require('lspconfig')
-- Enable some language servers with the additional completion capabilities offered by nvim-cmp
local servers = { 'clangd', 'rust_analyzer', 'pyright', 'ts_ls' }
for _, lsp in ipairs(servers) do
  lspconfig[lsp].setup {
    -- on_attach = my_custom_on_attach,
    capabilities = capabilities,
  }
end
-- luasnip setup
local luasnip = require 'luasnip'
-- nvim-cmp setup
local cmp = require 'cmp'
cmp.setup {
  snippet = {
    expand = function(args)
      luasnip.lsp_expand(args.body)
    end,
  },
  mapping = cmp.mapping.preset.insert({
    ['<C-u>'] = cmp.mapping.scroll_docs(-4), -- Up
    ['<C-d>'] = cmp.mapping.scroll_docs(4), -- Down
    -- C-b (back) C-f (forward) for snippet placeholder navigation.
    ['<C-Space>'] = cmp.mapping.complete(),
    ['<CR>'] = cmp.mapping.confirm {
      behavior = cmp.ConfirmBehavior.Replace,
      select = true,
    },
    ['<Tab>'] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_next_item()
      elseif luasnip.expand_or_jumpable() then
        luasnip.expand_or_jump()
      else
        fallback()
      end
    end, { 'i', 's' }),
    ['<S-Tab>'] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_prev_item()
      elseif luasnip.jumpable(-1) then
        luasnip.jump(-1)
      else
        fallback()
      end
    end, { 'i', 's' }),
  }),
  sources = {
    { name = 'nvim_lsp' },
    { name = 'luasnip' },
  },
}
-- ---------------------------------
-- END: Autocompletion configuration
-- ---------------------------------

Conclusion

I guess I’ll keep helix installed and try it again on some of my personal projects to see if I can get used to it, but for now I’ll stay with neovim as my main text editor and learn the shortcuts to use it with the language servers.

10 January, 2025 12:35PM

Valhalla's Things

Winter

Posted on January 10, 2025
Tags: madeof:atoms, craft:painting, medium:acrylic

An acrylic painting in blue and greenish grey vaguely resembling rocky slopes at the bottom right and a cloudy sky at the top left.

A few days ago1 I wanted to paint, but I didn’t know what to paint, so I did a few more colour tests to find out which green combinations I can get out of the available yellow and blue (and green) acrylics I have (from a cheap student grade line).

A sheet of colour tests with mixes of various yellows and blues.

I liked the cool grey tones in the second to last line, from 200 “Naples yellow” (PY3 PY83 PW6) and 410 Ultramarine blue (PB29), so I went up a step on the scale of “things to do when having a desire to paint, but the utter inability to actually paint something” and did a bit of a study with those two colours plus titanium white.

The study above over a sheet of scrap paper: at one of the edge the excess paint has connected the two together.

And btw, painting with acrylics on watercolour paper taped to a sheet of scrap paper works just fine for stuff like this that is not supposed to last, but the work will get glued to the paper below when (not if) the paint overflows :D


  1. i.e. almost three months, and then I wrote 95% of this post and forgot to finish and publish it.↩︎

10 January, 2025 12:00AM

January 09, 2025

Dirk Eddelbuettel

inline 0.3.21: Minor Polish

A fresh minor release of the inline package got to CRAN today, following on the November release which had marked the first release in three and half years. inline facilitates writing code in-line in simple string expressions or short files. The package was used quite extensively by Rcpp in the very early days before Rcpp Attributes arrived on the scene providing an even better alternative for its use cases. inline is still used by rstan and a number of other packages.

In the November release we accommodated upcoming R-devel changes on setting R_NO_REMAP by conditioning on the release version. It turns out that this does not cover the case where the #define is set independently, so a small refinement was needed, which this version brings. No other changes were made.

The NEWS extract follows and details the changes some more.

Changes in inline version 0.3.21 (2025-01-08)

  • Refine use of Rf_warning in cfunction setting -DR_NO_REMAP ourselves to get R version independent state

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bugs reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

09 January, 2025 10:52PM

Steinar H. Gunderson

RIP vorlon

I was very sad to hear that Steve Langasek, aka vorlon, has passed away from cancer. I hadn't talked to him in many years, but I did meet him at Debconf a couple of times, and more importantly: I was there when he was Release Manager for Debian.

Steve stepped up as one of the RMs at a point where Debian's releases were basically a hell march. Releases would drag on for years, freezes would be forever, at some point not a single package came through to testing over a glibc issue. In that kind of environment, and despite no small amount of toxicity surrounding it all, Steve pulled through and managed not only one, but two releases. If you've only seen the release status of Debian after this period, you won't really know how much must have happened in that decade.

The few times I met Steve, he struck me as not only knowledgeable, but also kind and not afraid to step up for people even if it went against the prevailing winds. I wish we could all learn from that. Rest in peace, Steve, your passing is a huge loss for our communities.

09 January, 2025 08:00PM

Scarlett Gately Moore

KDE: Snaps 24.12.1 Release, Kubuntu Plasma 5.27.12 Call for testers

I have released more core24 snaps to --edge for your testing pleasure. If you find any bugs please report them at bugs.kde.org and assign them to me. Thanks!

Kdenlive, our amazing video editor!

Haruna is a video player that also supports YouTube!

Kdevelop is our feature-rich development IDE

KDE applications 24.12.1 release https://kde.org/announcements/gear/24.12.1/

New qt6 ports

  • lokalize
  • isoimagewriter
  • parley
  • kteatime
  • ghostwriter
  • ktorrent
  • kanagram
  • marble

Kubuntu:

We have the Plasma 5.27.12 bugfix release in staging https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/staging-plasma for noble updates; please test! Do NOT do this on a production system. Thanks!

I hate asking but I am unemployable with this broken arm fiasco and 6 hours a day hospital runs for treatment. If you could spare anything it would be appreciated! https://gofund.me/573cc38e

09 January, 2025 01:24PM by sgmoore

Reproducible Builds

Reproducible Builds in December 2024

Welcome to the December 2024 report from the Reproducible Builds project!

Our monthly reports outline what we’ve been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As ever, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. reproduce.debian.net
  2. debian-repro-status
  3. On our mailing list
  4. Enhancing the Security of Software Supply Chains
  5. diffoscope
  6. Supply-chain attack in the Solana ecosystem
  7. Website updates
  8. Debian changes
  9. Other development news
  10. Upstream patches
  11. Reproducibility testing framework

reproduce.debian.net

Last month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. rebuilderd is our server software, designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

This month, however, we are pleased to announce that not only does the service now produce graphs, but the reproduce.debian.net homepage itself has become a “start page” of sorts, and the amd64.reproduce.debian.net and i386.reproduce.debian.net pages have emerged. The first of these rebuilds the amd64 architecture, naturally, but it is also building Debian packages that are marked with the ‘no architecture’ label, all. The second builder is, however, only rebuilding the i386 architecture.

Both of these services were also switched to reproduce the Debian trixie distribution instead of unstable, starting with 43% of the archive rebuilt and 79.3% of that reproduced successfully. This is very much a work in progress, and we’ll start reproducing Debian unstable soon.

Our i386 hosts are very kindly sponsored by Infomaniak whilst the amd64 node is sponsored by OSUOSL — thank you! Indeed, we are looking for more workers for more Debian architectures; please contact us if you are able to help.


debian-repro-status

Reproducible builds developer kpcyrd has published a client program for reproduce.debian.net (see above) that queries the status of the locally installed packages and rates the system with a percentage score. This tool works analogously to arch-repro-status for the Arch Linux Reproducible Builds setup.

The tool was packaged for Debian and is currently available in Debian trixie: it can be installed with apt install debian-repro-status.
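
As a hedged usage sketch, once installed the tool is run without arguments and prints per-package status plus the overall percentage score (the exact output format may differ):

apt install debian-repro-status
debian-repro-status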


On our mailing list

On our mailing list this month:

  • Bernhard M. Wiedemann wrote a detailed post on his “long journey towards a bit-reproducible Emacs package.” In his interesting message, Bernhard goes into depth about the tools that they used and the lower-level technical details of, for instance, compatibility with the version of glibc within openSUSE.

  • Shivanand Kunijadar posed a question pertaining to the reproducibility issues with encrypted images. Shivanand explains that they must “use a random IV for encryption with AES CBC. The resulting artifact is not reproducible due to the random IV used.” The message resulted in a handful of replies, hopefully helpful!

  • User Danilo posted an interesting question related to their attempts to achieve reproducible builds for Threema Desktop 2.0. The question resulted in a number of replies attempting to find the right combination of compiler and linker flags (for example).

  • Longstanding contributor David A. Wheeler wrote to our list announcing the release of the “Census III of Free and Open Source Software: Application Libraries” report written by Frank Nagle, Kate Powell, Richie Zitomer and David himself. As David writes in his message, the report attempts to “answer the question ‘what is the most popular Free and Open Source Software (FOSS)?’”.

  • Lastly, kpcyrd followed-up to a post from September 2024 which mentioned their desire for “someone” to implement “a hashset of allowed module hashes that is generated during the kernel build and then embedded in the kernel image”, thus enabling a deterministic and reproducible build. However, they are now reporting that “somebody implemented the hash-based allow list feature and submitted it to the Linux kernel mailing list”. Like kpcyrd, we hope it gets merged.


Enhancing the Security of Software Supply Chains: Methods and Practices

Mehdi Keshani of the Delft University of Technology in the Netherlands has published their thesis on “Enhancing the Security of Software Supply Chains: Methods and Practices”. Their introductory summary begins with an outline of software supply chains and the importance of the Maven ecosystem, before describing the issues that it faces “that threaten its security and effectiveness”. To address these:

First, we propose an automated approach for library reproducibility to enhance library security during the deployment phase. We then develop a scalable call graph generation technique to support various use cases, such as method-level vulnerability analysis and change impact analysis, which help mitigate security challenges within the ecosystem. Utilizing the generated call graphs, we explore the impact of libraries on their users. Finally, through empirical research and mining techniques, we investigate the current state of the Maven ecosystem, identify harmful practices, and propose recommendations to address them.

A PDF of Mehdi’s entire thesis is available to download.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 283 and 284 to Debian:

  • Update copyright years. []
  • Update tests to support file 5.46. [][]
  • Simplify tests_quines.py::test_{differences,differences_deb} to simply use assert_diff and not mangle the test fixture. []


Supply-chain attack in the Solana ecosystem

A significant supply-chain attack impacted Solana, an ecosystem for decentralised applications running on a blockchain.

Hackers targeted the @solana/web3.js JavaScript library and embedded malicious code that extracted private keys and drained funds from cryptocurrency wallets. According to some reports, about $160,000 worth of assets were stolen, not including SOL tokens and other crypto assets.


Website updates

Similar to last month, there was a large number of changes made to our website this month, including:

  • Chris Lamb:

    • Make the landing page hero look nicer when the vertical height component of the viewport is restricted, not just the horizontal width.
    • Rename the “Buy-in” page to “Why Reproducible Builds?” []
    • Removing the top black border. [][]
  • Holger Levsen:

  • hulkoba:

    • Remove the sidebar-type layout and move to a static navigation element. [][][][]
    • Create and merge a new Success stories page, which “highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds. These stories aim to enhance the technical resilience of the initiative by encouraging community involvement and inspiring new contributions.”. []
    • Further changes to the homepage. []
    • Remove the translation icon from the navigation bar. []
    • Remove unused CSS styles pertaining to the sidebar. []
    • Add sponsors to the global footer. []
    • Add extra space on large screens on the Who page. []
    • Hide the side navigation on small screens on the Documentation pages. []


Debian changes

There were a significant number of reproducibility-related changes within Debian this month, including:

  • Santiago Vila uploaded version 0.11+nmu4 of the dh-buildinfo package. In this release, the dh_buildinfo becomes a no-op — ie. it no longer does anything beyond warning the developer that the dh-buildinfo package is now obsolete. In his upload, Santiago wrote that “We still want packages to drop their [dependency] on dh-buildinfo, but now they will immediately benefit from this change after a simple rebuild.”

  • Holger Levsen filed Debian bug #1091550 requesting a rebuild of a number of packages that were built with a “very old version” of dpkg.

  • Fay Stegerman contributed to an extensive thread on the debian-devel development mailing list on the topic of “Supporting alternative zlib implementations”. In particular, Fay wrote about her results experimenting whether zlib-ng produces identical results or not.

  • kpcyrd uploaded a new rust-rebuilderd-worker, rust-derp, rust-in-toto and debian-repro-status to Debian, which passed successfully through the so-called NEW queue.

  • Gioele Barabucci filed a number of bugs against the debrebuild component/script of the devscripts package, including:

    • #1089087: Address a spurious extra subdirectory in the build path.
    • #1089201: Extra zero bytes added to .dynstr when rebuilding CMake projects.
    • #1089088: Some binNMUs have a 1-second offset in some timestamps.
  • Gioele Barabucci also filed a bug against the dh-r package to report that the Recommends and Suggests fields are missing from rebuilt R packages. At the time of writing, this bug has no patch and needs some help to make over 350 binary packages reproducible.

  • Lastly, 8 reviews of Debian packages were added, 11 were updated and 11 were removed this month adding to our knowledge about identified issues.


Other development news

In other ecosystem and distribution news:

  • Lastly, in openSUSE, Bernhard M. Wiedemann published another report for the distribution. There, Bernhard reports about the success of building ‘R-B-OS’, a partial fork of openSUSE with only 100% bit-reproducible packages. This effort was sponsored by the NLNet NGI0 initiative.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add a new i386.reproduce.debian.net rebuilder. [][][][][][]
    • Make a number of updates to the documentation. [][][][][]
    • Run i386.reproduce.debian.net on a public port to allow external workers. []
    • Add a link to the /api/v0/pkgs/list endpoint. []
    • Add support for a statistics page. [][][][][][]
    • Limit build logs to 20 MiB and diffoscope output to 10 MiB. []
    • Improve the frontpage. [][]
    • Explain that we’re testing arch:any and arch:all on the amd64 architecture, but only arch:any on i386. []
  • Misc:

    • Remove code for testing Arch Linux, which has moved to reproduce.archlinux.org. [][]
    • Don’t install dstat on Jenkins nodes anymore as it’s been removed from Debian trixie. []
    • Prepare the infom08-i386 node to become another rebuilder. []
    • Add debug date output for benchmarking the reproducible_pool_buildinfos.sh script. []
    • Install installation-birthday everywhere. []
    • Temporarily disable automatic updates of pool links on buildinfos.debian.net. []
    • Install Recommends by default on Jenkins nodes. []
    • Rename rebuilder_stats.py to rebuilderd_stats.py. []
    • r.d.n/stats: minor formatting changes. []
    • Install files under /etc/cron.d/ with the correct permissions. []

… and Jochen Sprickerhof made the following changes:

Lastly, Gioele Barabucci also classified packages affected by the 1-second offset issue filed as Debian bug #1089088 [][][][], Chris Hofstaedtler updated the URL for Grml’s dpkg.selections file [], Roland Clobus updated the Jenkins log parser to parse warnings from diffoscope [] and Mattia Rizzolo banned a number of bots and crawlers from the service [][].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

09 January, 2025 12:00PM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Tracker.debian.org updates, Salsa CI improvements, Coinstallable build-essential, Python 3.13 transition, Ruby 3.3 transition and more! (by Anupa Ann Joseph, Stefano Rivera)

Debian Contributions: 2024-12

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Tracker.debian.org updates, by Raphaël Hertzog

Taking advantage of the end-of-year vacations, Raphaël prepared for tracker.debian.org to be upgraded to Debian 12 bookworm by getting rid of the remnants of python3-django-jsonfield in the code (it was superseded by a Django-native field). Thanks to Philipp Kern from the Debian System Administrators team, the upgrade happened on December 23rd.

Raphaël also improved distro-tracker to better deal with invalid Maintainer fields, which recently caused multiple issues in the regular data updates (#1089985, MR 105). While working on this, he filed #1089648 asking dpkg tools to error out early when maintainers make such mistakes.

Finally he provided feedback to multiple issues and merge requests (MR 106, issues #21, #76, #77); there seems to be a surge of interest in distro-tracker lately. It would be nice if those new contributors could stick around and help out with the significant backlog of issues (in the Debian BTS, in Salsa).

Salsa CI improvements, by Santiago Ruano Rincón

Given that the Debian buildd network now relies on sbuild using the unshare backend, and that Salsa CI’s reproducibility testing needs to be reworked (#399), Santiago resumed the work for moving the build job to use sbuild. There was some related work a few months ago that was focused on sbuild with the schroot and the sudo backends, but those attempts were stalled for different reasons, including discussions around the convenience of the move (#296). However, using sbuild and unshare avoids all of the drawbacks that have been identified so far.

Santiago is preparing two merge requests: !568 to introduce a new build image, and !569 that moves all the extract-source related tasks to the build job. As mentioned in the previous reports, this change will make it possible for more projects to use the pipeline to build the packages (See #195).

Additional advantages of this change include a more optimal way to test if a package builds twice in a row: instead of actually building it twice, the Salsa CI pipeline will configure sbuild to check if the clean target of debian/rules correctly restores the source tree, saving some CPU cycles by avoiding one build. Also, the images related to Ubuntu won’t be needed anymore, since the build job will create chroots for different distributions and vendors from a single common build image. This will save space in the container registry.

More changes are to come, especially those related to handling projects that customize the pipeline and make use of the extract-source job.

Coinstallable build-essential, by Helmut Grohne

Building on the gcc-for-host work of last December, a notable patch turning build-essential Multi-Arch: same became feasible. Whilst the change is small, its implications and foundations are not. We still install crossbuild-essential-$ARCH for cross building and due to a britney2 limitation, we cannot have it depend on the host’s C library. As a result, there are workarounds in place for sbuild and pbuilder. In turning build-essential Multi-Arch: same, we may actually express these dependencies directly as we install build-essential:$ARCH instead. The crossbuild-essential-$ARCH packages will continue to be available as transitional dummy packages.

Python 3.13 transition, by Colin Watson and Stefano Rivera

Building on last month’s work, Colin, Stefano, and other members of the Debian Python team fixed 3.13 compatibility bugs in many more packages, allowing 3.13 to now be a supported but non-default version in testing. The next stage will be to switch to it as the default version, which will start soon. Stefano did some test-rebuilds of packages that only build for the default Python 3 version, to find issues that will block the transition. The default version transition typically shakes out some more issues in applications that (unlike libraries) only test with the default Python version.

Colin also fixed Sphinx 8.0 compatibility issues in many packages, which otherwise threatened to get in the way of this transition.

Ruby 3.3 transition, by Lucas Kanashiro

The Debian Ruby team decided to ship Ruby 3.3 in the next Debian release, and Lucas took the lead of the interpreter transition with the assistance of the rest of the team. In order to understand the impact of the new interpreter in the ruby ecosystem, ruby-defaults was uploaded to experimental adding ruby3.3 as an alternative interpreter, and a mass rebuild of reverse dependencies was done here. Initially, a couple of hundred packages were failing to build; after many rounds of rebuilds, adjustments, and many uploads we are down to 30 package build failures. Of those, 21 packages were asked to be removed from testing and for the other 9, bugs were filed. All the information to track this transition can be found here. Now, we are waiting for PHP 8.4 to finish to avoid any collision. Once it is done the Ruby 3.3 transition will start in unstable.

Miscellaneous contributions

  • Enrico Zini redesigned the way nm.debian.org stores historical audit logs and personal data backups.
  • Carles Pina submitted a new package (python-firebase-messaging) and prepared updates for python3-ring-doorbell.
  • Carles Pina developed further po-debconf-manager: better state transition, fixed bugs, automated assigning translators and reviewers on edit, updating po header files automatically, fixed bugs, etc.
  • Carles Pina reviewed, submitted and followed up the debconf templates translation (more than 20 packages) and translated some packages (about 5).
  • Santiago continued to work on DebConf 25 organization related tasks, including handling the logo survey and results. Stefano spent time on DebConf 25 too.
  • Santiago continued the exploratory work about linux livepatching with Emmanuel Arias. Santiago and Emmanuel found a challenge since kpatch won’t fully support linux in trixie and newer, so they are exploring alternatives such as klp-build.
  • Helmut maintained the /usr-move transition filing bugs in e.g. bubblewrap, e2fsprogs, libvpd-2.2-3, and pam-tmpdir and corresponding on related issues such as kexec-tools and live-build. The removal of the usrmerge package unfortunately broke debootstrap and was quickly reverted. Continued fallout is expected and will continue until trixie is released.
  • Helmut sent patches for 10 cross build failures and worked with Sandro Knauß on stuck Qt/KDE patches related to cross building.
  • Helmut continued to maintain rebootstrap removing the need to build gnu-efi in the process.
  • Helmut collaborated with Emanuele Rocca and Jochen Sprickerhof on an interesting adventure in diagnosing why gcc would FTBFS in recent sbuild.
  • Helmut proposed supporting build concurrency limits in coreutils’s nproc. As it turns out nproc is not a good place for this functionality.
  • Colin worked with Sandro Tosi and Andrej Shadura to finish resolving the multipart vs. python-multipart name conflict, as mentioned last month.
  • Colin upgraded 48 Python packages to new upstream versions, fixing four CVEs and a number of compatibility bugs with recent Python versions.
  • Colin issued an openssh bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange which had been quite broken in bookworm.
  • Stefano fixed a minor bug in debian-reimbursements that was disallowing combination PDFs containing JAL tickets, encoded in UTF-16.
  • Stefano uploaded a stable update to PyPy3 in bookworm, catching up with security issues resolved in cPython.
  • Stefano fixed a regression in the eventlet from his Python 3.13 porting patch.
  • Stefano continued discussing a forwarded patch (renaming the sysconfigdata module) with cPython upstream, ending in a decision to drop the patch from Debian. This will need some continued work.
  • Anupa participated in the Debian Publicity team meeting in December, which discussed the team activities done in 2024 and projects for 2025.

09 January, 2025 12:00AM by Anupa Ann Joseph, Stefano Rivera

Valhalla's Things

Poor Man Media Server

Posted on January 9, 2025
Tags: madeof:bits

Some time ago I installed minidlna on our media server: it was pretty easy to do, but quite limited in its support for the formats I use most, so I ended up using other solutions such as mounting the directory with sshfs.

Now, doing that from a phone, even a pinephone running debian, may not be as convenient as doing it from the laptop where I already have my ssh key :D and I needed to listen to music from the pinephone.

So, in anger, I decided to configure a web server to serve the files.

I installed lighttpd because I already had a role for this kind of configuration in my ansible directory, and configured it to serve the relevant directory in /etc/lighttpd/conf-available/20-music.conf:

$HTTP["host"] =~ "music.example.org" {
    server.name          = "music.example.org"
    server.document-root = "/path/to/music"
}

the domain was already configured in my local dns (since everything is only available to the local network), and I enabled both 20-music.conf and 10-dir-listing.conf.
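
For reference, on Debian those conf-available snippets are enabled by creating symlinks in conf-enabled, which lighty-enable-mod can do for you; a hedged sketch using the file names above:

lighty-enable-mod music dir-listing
systemctl reload lighttpd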

And. That’s it. It works. I can play my CD rips on a single flac exactly in the same way as I was used to (by ssh-ing to the media server and using alsaplayer).

Then this evening I was talking to normal people1, and they mentioned that they wouldn’t mind being able to skip tracks and fancy things like those :D and I’ve found one possible improvement.

For the directories with the generated single-track ogg files I’ve added some playlists with the command ls *.ogg > playlist.m3u, then in the directory above I’ve run ls */*.m3u > playlist.m3u and that also works.

With vlc I can now open http://music.example.org/band/album/playlist.m3u to listen to an album that I have in ogg, being able to move between tracks, or I can open http://music.example.org/band/playlist.m3u and in the playlist view I can browse between the different albums.

Left as an exercise to the reader2 are writing a bash script to generate all of the playlist.m3u files (and running it via some git hook when the files change) or writing a php script to generate them on the fly.
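
As one take on that exercise, here is a minimal sketch of such a bash script, assuming the band/album layout used above (MUSIC_ROOT is my own placeholder):

#!/bin/bash
# regenerate playlist.m3u files with relative entries, mirroring
# the two ls commands above
MUSIC_ROOT="${1:-/path/to/music}"

# album level: one playlist per band/album directory
for album in "$MUSIC_ROOT"/*/*/; do
    (cd "$album" && ls *.ogg > playlist.m3u 2>/dev/null)
done

# band level: one playlist pointing at the album playlists
for band in "$MUSIC_ROOT"/*/; do
    (cd "$band" && ls */playlist.m3u > playlist.m3u 2>/dev/null)
done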


Update 2025-01-10: another reader3 wrote the php script and has authorized me to post it here.

<?php
define("MUSIC_FOLDER", __DIR__);
define("ID3v2", false);


function dd() {
    echo "<pre>"; call_user_func_array("var_dump", func_get_args());
    die();
}

function getinfo($file) {
    $cmd = 'id3info "' . MUSIC_FOLDER . "/" . $file . '"';
    exec($cmd, $output);
    $res = [];
    foreach ($output as $line) {
        if (str_starts_with($line, "=== ")) {
            $key = explode(" ", $line)[1];
            // end() needs a variable, not a function result, in modern PHP
            $parts = explode(": ", $line, 2);
            $res[$key] = end($parts);
        }
    }
    if (isset($res['TPE1']) || isset($res['TIT2']))
        echo "#EXTINF: , " . ($res['TPE1'] ?? "Unk") . " - " . ($res['TIT2'] ?? "Untl") . "\r\n";
    if (isset($res['TALB']))
        echo "#EXTALB: " . $res['TALB'] . "\r\n";
}


function pathencode($path, $name) {
    $path = urlencode($path);
    $path =  str_replace("%2F", "/", $path);
    $name = urlencode($name);
    if ($path != "") $path = "/" . $path;
    return $path . "/" . $name;
}

function serve_playlist($path) {
    // emit the M3U header and path comment on their own lines (CRLF)
    echo "#EXTM3U\r\n";
    echo "# PATH: $path\r\n";
    foreach (glob(MUSIC_FOLDER . "/$path/*") as $filename) {
        $name = basename($filename);
        if (is_dir($filename)) {
            echo pathencode($path, $name) . ".m3u\r\n";
        }
        $t = explode(".", $filename);
        $ext = array_pop($t);
        if (in_array($ext, ["mp3", "ogg", "flac", "mp4", "m4a"])) {
            if (ID3v2) {
                getinfo($path . "/" . $name);
            } else {
                echo "#EXTINF: , " . $path . "/" . $name . "\r\n";
            }
            echo pathencode($path, $name) . "\r\n";
        }
    }
    die();
}



$path = $_SERVER["REQUEST_URI"];
$path = urldecode($path);
$path = trim($path, "/");

if (str_ends_with($path, ".m3u")) {
    $path = str_replace(".m3u", "", $path);

    serve_playlist($path);
}

$path = MUSIC_FOLDER . "/" . $path;
if (file_exists($path) && is_file($path)) {
    header('Content-Description: File Transfer');
    header('Content-Type: application/octet-stream');
    header('Expires: 0');
    header('Cache-Control: must-revalidate');
    header('Pragma: public');
    header('Content-Length: ' . filesize($path));
    readfile($path);
}

It’s php, so I assume no responsibility for it :D


  1. as much as the members of our LUG can be considered normal.↩︎

  2. i.e. the person in the LUG who wanted me to share what I had done.↩︎

  3. i.e. the other person in the LUG who was in that conversation and suggested the php script option.↩︎

09 January, 2025 12:00AM

January 08, 2025

John Goerzen

Censorship Is Complicated: What Internet History Says about Meta/Facebook

In light of this week’s announcement by Meta (Facebook, Instagram, Threads, etc), I have been pondering this question: Why am I, a person that has long been a staunch advocate of free speech and encryption, leery of sites that talk about being free speech-oriented? And, more to the point, why am I — a person that has been censored by Facebook for mentioning the Open Source social network Mastodon — not cheering a “lighter touch”?

The answers are complicated, and take me back to the early days of social networking. Yes, I mean the 1980s and 1990s.

Before digital communications, there were barriers to reaching a lot of people. Especially money. This led to a sort of self-censorship: it may be legal to write certain things, but would a newspaper publish a letter to the editor containing expletives? Probably not.

As digital communications started to happen, suddenly people could have their own communities. Not just free from the same kinds of monetary pressures, but free from outside oversight (parents, teachers, peers, community, etc.) When you have a community that the majority of people lack the equipment to access — and wouldn’t understand how to access even if they had the equipment — you have a place where self-expression can be unleashed.

And, as J. C. Herz covers in what is now an unintentional history (her book Surfing on the Internet was published in 1995), self-expression WAS unleashed. She enjoyed the wit and expression of everything from odd corners of Usenet to the text-based open world of MOOs and MUDs. She even talks about groups dedicated to insults (flaming) in positive terms.

But as I’ve seen time and again, if there are absolutely no rules, then whenever a group gets big enough — more than a few dozen people, say — there are troublemakers that ruin it for everyone. Maybe it’s trolling, maybe it’s vicious attacks, you name it — it will arrive and it will be poisonous.

I remember the debates within the Debian community about this. Debian is one of the pillars of the Internet today, a nonprofit project with free speech in its DNA. And yet there were inevitably the poisonous people. Debian took too long to learn that allowing those people to run rampant was causing more harm than good, because having a well-worn Delete key and a tolerance for insults became a requirement for being a Debian developer, and that drove away people that had no desire to deal with such things. (I should note that Debian strikes a much better balance today.)

But in reality, there were never absolutely no rules. If you joined a BBS, you used it at the whim of the owner (the “sysop” or system operator). The sysop may be a 16-yr-old running it from their bedroom, or a retired programmer, but in any case they were letting you use their resources for free and they could kick you off for any or no reason at all. So if you caused trouble, or perhaps insulted their cat, you’re banned. But, in all but the smallest towns, there were other options you could try.

On the other hand, sysops enjoyed having people call their BBSs and didn’t want to drive everyone off, so there was a natural balance at play. As networks like Fidonet developed, a sort of uneasy approach kicked in: don’t be excessively annoying, and don’t be easily annoyed. Like it or not, it seemed to generally work. A BBS that repeatedly failed to deal with troublemakers could risk removal from Fidonet.

On the more institutional Usenet, you generally got access through your university (or, in a few cases, employer). Most universities didn’t really even know they were running a Usenet server, and you were generally left alone. Until you did something that annoyed somebody enough that they tracked down the phone number for your dean, in which case real-world consequences would kick in. A site may face the Usenet Death Penalty — delinking from the network — if they repeatedly failed to prevent malicious content from flowing through their site.

Some BBSs let people from minority communities such as LGBTQ+ thrive in a place of peace from tormentors. A lot of them let people be themselves in a way they couldn’t be “in real life”. And yes, some harbored trolls and flamers.

The point I am trying to make here is that each BBS, or Usenet site, set their own policies about what their own users could do. These had to be harmonized to a certain extent with the global community, but in a certain sense, with BBSs especially, you could just use a different one if you didn’t like what the vibe was at a certain place.

That this free speech ethos survived was never inevitable. There were many attempts to regulate the Internet, and it was thanks to the advocacy of groups like the EFF that we have things like strong encryption and a degree of freedom online.

With the rise of the very large platforms — and here I mean CompuServe and AOL at first, and then Facebook, Twitter, and the like later — the low-friction option of just choosing a different place started to decline. You could participate on a Fidonet forum from any of thousands of BBSs, but you could only participate in an AOL forum from AOL. The same goes for Facebook, Twitter, and so forth. Not only that, but as social media became conceived of as very large sites, it became impossible for a person with enough skill, funds, and time to just start a site themselves. Instead of needing a few thousand dollars of equipment, you’d need tens or hundreds of millions of dollars of equipment and employees.

All that means you can’t really run Facebook as a nonprofit. It is a business. It should be absolutely clear to everyone that Facebook’s mission is not the one they say it is — “[to] give people the power to build community and bring the world closer together.” If that was their goal, they wouldn’t be creating AI users and AI spam and all the rest. Zuck isn’t showing courage; he’s sucking up to Trump and those that will pay the price are those that always do: women and minorities.

Really, the point of any large social network isn’t to build community. It’s to make the owners their next billion. They do that by convincing people to look at ads on their site. Zuck is as much a windsock as anyone else; he will adjust policies in whichever direction he thinks the wind is blowing so as to let him keep putting ads in front of eyeballs, and stomp all over principles — even free speech — doing it. Don’t expect anything different from any large commercial social network either. Bluesky is going to follow the same trajectory as all the others.

The problem with a one-size-fits-all content policy is that the world isn’t that kind of place. For instance, I am a pacifist. There is a place for a group where pacifists can hang out with each other, free from the noise of the debate about pacifism. And there is a place for the debate. Forcing everyone that signs up for the conversation to sign up for the debate is harmful. Preventing the debate is often also harmful. One company can’t square this circle.

Beyond that, the fact that we care so much about one company is a problem on two levels. First, it indicates how susceptible people are to misinformation and such. I don’t have much to offer on that point. Secondly, it indicates that we are too centralized.

We have a solution there: Mastodon. Mastodon is a modern, open source, decentralized social network. You can join any instance, easily migrate your account from one server to another, and so forth. You pick an instance that suits you. There are thousands of others you can choose from. Some aggressively defederate with instances known to harbor poisonous people; some don’t.

And, to harken back to the BBS era, if you have some time, some skill, and a few bucks, you can run your own Mastodon instance.

Personally, I still visit Facebook on occasion because some people I care about are mainly there. But it is such a terrible experience that I rarely do. Meta is becoming irrelevant to me. They are on a path to becoming irrelevant to many more as well. Maybe this is the moment to go “shrug, this sucks” and try something better.

(And when you do, feel free to say hi to me at @jgoerzen@floss.social on Mastodon.)

08 January, 2025 02:59PM by John Goerzen

Sandro Tosi

HOWTO remove Reddit (web) "Recent" list of communities

If you go on reddit.com via browser, on the left column you can see a section called "RECENT" with the list of the last 5 communities recently visited.

If you want to remove them, say for privacy reasons (shared device, etc.), there's no simple way to do so: there's no "X" button next to it, and your profile page doesn't offer a way to clear that out. You could clear all the data from the website, but that seems too extreme, no?

Enter Chrome's "Developer Tools"

While on reddit.com, open Menu > More Tools > Developer Tools, go to the Application tab, Storage > Local storage and select reddit.com; in the center panel you see a list of key-value pairs; look for the key "recent-subreddits-store" and you can see the list of the 5 communities in the JSON below.

If you wanna get rid of the recently viewed communities list, simply delete that key, refresh reddit.com and voila, empty list.

Note: I'm fairly sure I read about this method somewhere, I simply can't remember where, but it's definitely not me who came up with it. I just needed to use it recently and had to backtrack memories to figure it out again, so it's time to write it down.

08 January, 2025 03:25AM by Sandro Tosi (noreply@blogger.com)

January 07, 2025

Jonathan Wiltshire

Using TPM for Automatic Disk Decryption in Debian 12

These days it’s straightforward to have reasonably secure, automatic decryption of your root filesystem at boot time on Debian 12. Here’s how I did it on an existing system which already had a stock kernel, secure boot enabled, grub2 and an encrypted root filesystem with the passphrase in key slot 0.

There’s no need to switch to systemd-boot for this setup but you will use systemd-cryptenroll to manage the TPM-sealed key. If that offends you, there are other ways of doing this.

Caveat

The parameters I’ll seal a key against in the TPM include a hash of the initial ramdisk. This is essential to prevent an attacker from swapping the image for one which discloses the key. However, it also means the key has to be re-sealed every time the image is rebuilt. This can be frequent, for example when installing/upgrading/removing packages which include a kernel module. You won’t get locked out (as long as you still have a passphrase in another slot), but will need to re-seal the key to restore the automation.

You can also choose not to include this parameter for the seal, but that opens the door to such an attack.

Caution: these are the steps I took on my own system. You may need to adjust them to avoid ending up with a non-booting system.

Check for a usable TPM device

We’ll bind the secure boot state, kernel parameters, and other boot measurements to a decryption key. Then, we’ll seal it using the TPM. This prevents the disk being moved to another system, the boot chain being tampered with and various other attacks.

# apt install tpm2-tools
# systemd-cryptenroll --tpm2-device list
PATH        DEVICE     DRIVER 
/dev/tpmrm0 STM0125:00 tpm_tis

Clean up older kernels including leftover configurations

I found that previously-removed (but not purged) kernel packages sometimes cause dracut to try installing files to the wrong paths. Identify them with:

# apt install aptitude
# aptitude search '~c'

Change search to purge, or be more selective; this part is an exercise for the reader.
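
One hedged way to finish that exercise is to purge everything still in the removed-but-not-purged state (review the list from the search first):

# aptitude purge '~c'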

Switch to dracut for initramfs images

Unless you have a particular requirement for the default initramfs-tools, replace it with dracut and customise:

# mkdir /etc/dracut.conf.d
# echo 'add_dracutmodules+=" tpm2-tss crypt "' > /etc/dracut.conf.d/crypt.conf
# apt install dracut

Remove root device from crypttab, configure grub

Remove (or comment) the root device from /etc/crypttab and rebuild the initial ramdisk with dracut -f.

Edit /etc/default/grub and add 'rd.auto rd.luks=1' to GRUB_CMDLINE_LINUX. Re-generate the config with update-grub.

At this point it’s a good idea to sanity-check the initrd contents with lsinitrd. Then, reboot using the new image to ensure there are no issues. This will also have up-to-date TPM measurements ready for the next step.
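
For example, a quick hedged check that the crypt and TPM pieces made it into the image for the running kernel:

# lsinitrd | grep -E 'tpm2|crypt'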

Identify device and seal a decryption key

# lsblk -ip -o NAME,TYPE,MOUNTPOINTS
NAME                                                    TYPE  MOUNTPOINTS
/dev/nvme0n1p4                                          part  /boot
/dev/nvme0n1p5                                          part  
`-/dev/mapper/luks-deff56a9-8f00-4337-b34a-0dcda772e326 crypt 
  |-/dev/mapper/lv-var                                  lvm   /var
  |-/dev/mapper/lv-root                                 lvm   /
  `-/dev/mapper/lv-home                                 lvm   /home

In this example my root filesystem is in a container on /dev/nvme0n1p5. The existing passphrase key is in slot 0.

# systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7+8+9+14 /dev/nvme0n1p5
Please enter current passphrase for disk /dev/nvme0n1p5: ********
New TPM2 token enrolled as key slot 1.

The PCRs I chose (7, 8, 9 and 14) correspond to the secure boot policy, kernel command line (to prevent init=/bin/bash-style attacks), files read by grub including that crucial initrd measurement, and secure boot MOK certificates and hashes. You could also include PCR 5 for the partition table state, and any others appropriate for your setup.

Reboot

You should now be able to reboot and the root device will be unlocked automatically, provided the secure boot measurements remain consistent.

The key slot protected by a passphrase (mine is slot 0) is now your recovery key. Do not remove it!
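
When a rebuilt initrd changes the measurements (see the caveat above), re-sealing can be done in a single invocation that wipes the stale TPM2 slot and enrolls a fresh one; a hedged sketch using the same device and PCRs as above:

# systemd-cryptenroll --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=7+8+9+14 /dev/nvme0n1p5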


Please consider supporting my work in Debian and elsewhere through Liberapay.

07 January, 2025 11:03PM by Jonathan

Thorsten Alteholz

My Debian Activities in December 2024

Debian LTS

This was my hundred-twenty-sixth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

I worked on updates for ffmpeg and haproxy in all releases. Along the way I marked more CVEs as not-affected than I had to fix. So finally there was no upload needed for haproxy anymore. Unfortunately testing ffmpeg was not as easy, as the recommended “just look whether mpv can play random videos” is not really satisfying. So the upload will happen only in January.

I also wonder whether fixing glewlwyd is really worth the effort, as the software is already EOL upstream.

Debian ELTS

This month was the seventy-seventh ELTS month. During my allocated time I worked on ffmpeg, haproxy, amanda and kmail-account-wizard.

Like LTS, all CVEs of haproxy and some of ffmpeg could be marked as not-affected, and testing of the other packages was/is not really straightforward. So the final uploads will only happen in January as well.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Matomo

Thanks a lot to William Desportes for all fixes of my bad PHP packaging.

Debian Astro

This month I uploaded new packages or new upstream or bugfix versions of:

I again sponsored an upload of calceph.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:

I also sponsored uploads of emacs-lsp-docker, emacs-dape, emacs-oauth2, gpgmngr, libjs-jush.

FTP master

This month I accepted 330 and rejected 13 packages. The overall number of packages that got accepted was 335.

07 January, 2025 12:29PM by alteholz

Enrico Zini

Debugging printing to a remote printer

I upgraded to Debian testing/trixie, and my network printer stopped appearing in print dialogs. These are notes from the debugging session.

Check firewall configuration

I tried out kde, which installed plasma-firewall, which installed firewalld, which closed by default the ports used for printing.

For extra fun, appindicators are not working in Gnome and so firewall-applet is currently useless, although one can run firewall-config manually, or use the command line, which might be more user-friendly than the UI.

Step 1: change the zone for the home wifi to "Home":

firewall-cmd  --zone home --list-interfaces
firewall-cmd  --zone home --add-interface wlp1s0

Step 2: make sure the home zone can print:

firewall-cmd --zone home --list-services
firewall-cmd --zone home --add-service=ipp
firewall-cmd --zone home --add-service=ipp-client
firewall-cmd --zone home --add-service=mdns

I searched and searched but I could not find out whether ipp is needed, ipp-client is needed, or both are needed.
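
Note that firewall-cmd changes like those above are runtime-only by default; to keep them across reloads and reboots, they can be persisted in one go:

firewall-cmd --runtime-to-permanent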

Check if avahi can see the printer

Is the printer advertised correctly over mdns?

When it didn't work:

$ avahi-browse -avrt
= wlp1s0 IPv6 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [0]
   txt = []
= wlp1s0 IPv4 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [0]
   txt = []

$ avahi-browse -rt _ipp._tcp
[empty]

When it works:

$ avahi-browse -avrt
= wlp1s0 IPv6 Brother HL-2030 series @ server                Secure Internet Printer local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv6 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [0]
   txt = []
= wlp1s0 IPv4 Brother HL-2030 series @ server                Secure Internet Printer local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv4 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [0]
   txt = []

$ avahi-browse -rt _ipp._tcp
+ wlp1s0 IPv6 Brother HL-2030 series @ server                Internet Printer     local
+ wlp1s0 IPv4 Brother HL-2030 series @ server                Internet Printer     local
= wlp1s0 IPv4 Brother HL-2030 series @ server                Internet Printer     local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv6 Brother HL-2030 series @ server                Internet Printer     local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]

Check if cups can see the printer

From CUPS' Using Network Printers:

$ /usr/sbin/lpinfo --include-schemes dnssd -v

network dnssd://Brother%20HL-2030%20series%20%40%20server._ipp._tcp.local/cups?uuid=

Debugging session interrupted

At this point, the printer appeared.

It could be that:

In the end, debugging failed successfully, and this log now remains as a reference for possible further issues.

07 January, 2025 11:40AM

January 05, 2025

Dominique Dumont

hackergotchi for Jonathan McDowell

Jonathan McDowell

Free Software Activities for 2024

I tailed off on blog posts towards the end of the year; I blame a bunch of travel (personal + business), catching the ‘flu, then December being its usual busy self. Anyway, to try and start off the year a bit better I thought I’d do my annual recap of my Free Software activities.

For previous years see 2019, 2020, 2021, 2022 + 2023.

Conferences

In 2024 I managed to make it to FOSDEM again. It’s a hectic conference, and I know there are legitimate concerns about it being a super spreader event, but it has the advantage of being relatively close and having a lot of different groups of people I want to talk to / see talk at it. I’m already booked to go this year as well.

I spoke at All Systems Go in Berlin about Using TPMs at scale for protecting keys. It was nice to actually be able to talk publicly about some of the work stuff my team and I have been working on. I’d a talk submission in for FOSDEM about our use of attestation and why it’s not necessarily the evil some folk claim, but there were a lot of good talks submitted and I wasn’t selected. Maybe I’ll find somewhere else suitable to do it.

BSides Belfast may or may not count - it’s a security conference, but there’s a lot of overlap with various bits of Free software, so I feel it deserves a mention.

I skipped DebConf for 2024 for a variety of reasons, but I’m expecting to make DebConf25 in Brest, France in July.

Debian

Most of my contributions to Free software continue to happen within Debian.

In 2023 I’d done a bunch of work on retrogaming with Kodi on Debian, so I made an effort to try and keep those bits more up to date, even if I’m not actually regularly using them at present. RetroArch got 1.18.0+dfsg-1 and 1.19.1+dfsg-1 uploads. libretro-core-info got associated 1.18.0-1 and 1.19.0-1 uploads too. I note 1.20.0 has been released recently, so I’ll have to find some time to build the appropriate DFSG tarball and update it.

rcheevos saw 11.2.0-1, 11.5.0-1 + 11.6.0-1 uploaded.

kodi-game-libretro itself had 20.2.7-1 uploaded, then 21.0.7-1. Latest upstream is 22.1.0, but that’s tracking Kodi 22 and we’re still on Kodi 21 so I plan to follow the Omega branch for now. Which I’ve just noticed had a 21.0.8 release this week.

Finally in the games space I uploaded mgba 0.10.3+dfsg-1 and 0.10.3+dfsg-2 for Ryan Tandy, before realising he was already a Debian Maintainer and granting him the appropriate ACL access so he can upload it himself; I’ve had zero concerns about any of his packaging.

The Debian Electronics Packaging Team continues to be home for a bunch of packages I care about. There was nothing big there, for me, in 2024, but a few bits of cleanup here and there.

I seem to have become one of the main uploaders for sdcc - I have some interest in the space, and the sigrok firmware requires it to build, so I at least like to ensure it’s in half decent state. I uploaded 4.4.0+dfsg-1, 4.4.0+dfsg-2, and, just in time to count for 2024, 4.4.0+dfsg-3.

The sdcc 4.4 upload led to some compilation issues for sigrok-firmware-fx2lafw so I uploaded 0.1.7-2 fixing that, then 0.1.7-3 doing some further cleanups.

OpenOCD had 0.12.0-2 uploaded to disable the libgpiod backend thanks to incompatible changes upstream. There were some in-discussion patches with OpenOCD upstream at the time, but they didn’t seem to be ready yet so I held off on pulling them in. 0.12.0-3 fixed builds with more recent versions of jimtcl. It looks like the next upstream release is about a year away, so Trixie will in all probability ship with 0.12.0 as well.

libjaylink had a new upstream release, so 0.4.0-1 was uploaded. libserialport also had a new upstream release, leading to 0.1.2-1.

I finally cracked and uploaded sg3-utils 1.48-1 into experimental. I’m not the primary maintainer, but 1.46 is nearly 4 years old now and I wanted to get it updated in enough time to shake out any problems before we get to a Trixie freeze.

Outside of team owned packages, libcli had compilation issues with GCC 14, leading to 1.10.7-2. I also added a new package, sedutil 1.20.0-2 back in April; it looks fairly unmaintained upstream (there’s been some recent activity, but it doesn’t seem to be release quality), but there was an outstanding ITP and I’ve some familiarity with the space as we’ve been using it at work as part of investigating TCG OPAL encryption.

I continue to keep an eye on Debian New Members, even though I’m mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions.

Finally the 3 month rotation for Debian Keyring continues to operate smoothly. I dealt with 2024.03.24, 2024.06.24, 2024.09.22 + 2024.11.24.

Linux

I’d a single kernel contribution this year, to Clean up TPM space after command failure. That was based on some issues we saw at work. I’ve another fix in progress that I hope to submit in 2025, but it’s for an intermittent failure so confirming the fix is necessary + sufficient is taking a little while.

Personal projects

I didn’t end up doing much in the way of externally published personal project work in 2024.

Despite the release of OpenPGP v6 in RFC 9580 I did not manage to really work on onak. I started on the v6 support, but have not had sufficient time to complete anything worth pushing external yet.

listadmin3 got some minor updates based on external feedback / MRs. It’s nice to know it’s useful to other folk even in its basic state.

That wraps up 2024. I’ve got no particular goals for this year at present. Ideally I’d get v6 support into onak, and it would be nice to implement some of the wishlist items people have provided for listadmin3, but I’ll settle for making sure all my Debian packages are in reasonable state for Trixie.

05 January, 2025 04:10PM

Enrico Zini

ncdu on files to back up

I use borg and restic to back up files on my system. Sometimes I run a huge download or clone a large git repo and forget to mark it with CACHEDIR.TAG, and it gets picked up, slowing the backup process and wasting backup space uselessly.
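
For reference, marking such a directory is just a matter of dropping a tag file with the standard cachedir signature, which borg's --exclude-caches option honours (the path is a placeholder):

printf 'Signature: 8a477f597d28d172789f06886806bc55\n' > /path/to/bigdir/CACHEDIR.TAG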

I would like to occasionally audit the system to have an idea of what is a candidate for backup. ncdu would be great for this, but it doesn't know about backup exclusion filters.

Let's teach it then.

Here's a script that simulates a backup and feeds the results to ncdu:

#!/usr/bin/python3

import argparse
import os
import sys
import time
import stat
import json
import subprocess
import tempfile
from pathlib import Path
from typing import Any

FILTER_ARGS = [
    "--one-file-system",
    "--exclude-caches",
    "--exclude",
    "*/.cache",
]
BACKUP_PATHS = [
    "/home",
]


class Dir:
    """
    Dispatch borg output into a hierarchical directory structure.

    borg prints a flat file list, ncdu needs a hierarchical JSON.
    """

    def __init__(self, path: Path, name: str):
        self.path = path
        self.name = name
        self.subdirs: dict[str, "Dir"] = {}
        self.files: list[str] = []

    def print(self, indent: str = "") -> None:
        for name, subdir in self.subdirs.items():
            print(f"{indent}{name:}/")
            subdir.print(indent + " ")
        for name in self.files:
            print(f"{indent}{name}")

    def add(self, parts: tuple[str, ...]) -> None:
        if len(parts) == 1:
            self.files.append(parts[0])
            return

        subdir = self.subdirs.get(parts[0])
        if subdir is None:
            subdir = Dir(self.path / parts[0], parts[0])
            self.subdirs[parts[0]] = subdir

        subdir.add(parts[1:])

    def to_data(self) -> list[Any]:
        res: list[Any] = []
        st = self.path.stat()
        res.append(self.collect_stat(self.name, st))
        for name, subdir in self.subdirs.items():
            res.append(subdir.to_data())

        dir_fd = os.open(self.path, os.O_DIRECTORY)
        try:
            for name in self.files:
                try:
                    st = os.lstat(name, dir_fd=dir_fd)
                except FileNotFoundError:
                    print(
                        "Possibly broken encoding:",
                        self.path,
                        repr(name),
                        file=sys.stderr,
                    )
                    continue
                if stat.S_ISDIR(st.st_mode):
                    continue
                res.append(self.collect_stat(name, st))
        finally:
            os.close(dir_fd)

        return res

    def collect_stat(self, fname: str, st) -> dict[str, Any]:
        res = {
            "name": fname,
            "ino": st.st_ino,
            "asize": st.st_size,
            "dsize": st.st_blocks * 512,
        }
        if stat.S_ISDIR(st.st_mode):
            res["dev"] = st.st_dev
        return res


class Scanner:
    def __init__(self) -> None:
        self.root = Dir(Path("/"), "/")
        self.data = None

    def scan(self) -> None:
        with tempfile.TemporaryDirectory() as tmpdir_name:
            mock_backup_dir = Path(tmpdir_name) / "backup"
            subprocess.run(
                ["borg", "init", mock_backup_dir.as_posix(), "--encryption", "none"],
                cwd=Path.home(),
                check=True,
            )

            proc = subprocess.Popen(
                [
                    "borg",
                    "create",
                    "--list",
                    "--dry-run",
                ]
                + FILTER_ARGS
                + [
                    f"{mock_backup_dir}::test",
                ]
                + BACKUP_PATHS,
                cwd=Path.home(),
                stderr=subprocess.PIPE,
            )
            assert proc.stderr is not None
            for line in proc.stderr:
                match line[0:2]:
                    case b"- ":
                        path = Path(line[2:].strip().decode())
                    case b"x ":
                        continue
                    case _:
                        raise RuntimeError(f"Unparsable borg output: {line!r}")

                if path.parts[0] != "/":
                    raise RuntimeError(f"Unsupported path: {path.parts!r}")
                self.root.add(path.parts[1:])

    def to_json(self) -> list[Any]:
        return [
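            # Header required by ncdu's JSON export format: major version 1,
            # minor version 0, metadata, then the root directory tree.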
            1,
            0,
            {
                "progname": "backup-ncdu",
                "progver": "0.1",
                "timestamp": int(time.time()),
            },
            self.root.to_data(),
        ]

    def export(self):
        return json.dumps(self.to_json()).encode()


def main():
    parser = argparse.ArgumentParser(
        description="Run ncdu to estimate sizes of files to backup."
    )
    parser.parse_args()

    scanner = Scanner()
    scanner.scan()
    # scanner.root.print()
    res = subprocess.run(["ncdu", "-f-"], input=scanner.export())
    sys.exit(res.returncode)


if __name__ == "__main__":
    main()

05 January, 2025 03:09PM

January 04, 2025

Scarlett Gately Moore

KDE: Snap hotfixes and updates

Fixed okular pdf printing https://bugs.kde.org/show_bug.cgi?id=498065

Fixed kwave recording https://bugs.kde.org/show_bug.cgi?id=442085 please run sudo snap connect kwave:audio-record :audio-record until auto-connect gets approved here: https://forum.snapcraft.io/t/kde-auto-connect-our-two-recording-apps/44419

New qt6 snaps in --edge until 24.12.1 release

  • minuet
  • ksystemlog
  • kwordquiz
  • lokalize
  • ksirk
  • ksnakeduel
  • kturtle

I have begun the process of moving to core24, currently in --edge until the 24.12.1 release.

Some major improvements come with core24!

Tokodon is our wonderful Mastodon client

I hate asking but I am unemployable with this broken arm fiasco and 6 hours a day hospital runs for treatment. If you could spare anything it would be appreciated! https://gofund.me/573cc38e

04 January, 2025 01:36PM by sgmoore

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

Tips when building debian-installer

Recently, I'm trying to fix d-i Han-Unification issue for Japanese. This issue was not fixed for a long time since Debian 9 (stretch).

#1037256 - debian-installer: GUI font for Japanese was incorrectly rendered - Debian Bug report logs

To learn how Han-Unification can be harmful for Japanese in some cases, see "Your Code Displays Japanese Wrong" (heistak.github.io).

When building d-i (the GUI installer), you need to build the build_netboot-gtk target.

But note that you need a recent master branch, because older revisions have a nitpick issue with GNU Make 4.4.x (see bugs.debian.org).

After that, you need to prepare the required packages. See the README for details.

apt-get update
apt-get install -y myrepos git libgtk2.0-dev fakeroot
apt-get build-dep -y debian-installer
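
With the dependencies in place, the build itself looks roughly like this (a sketch from memory; the exact build directory path within the checkout may differ):

git clone https://salsa.debian.org/installer-team/debian-installer.git
cd debian-installer/build
make reallyclean
make build_netboot-gtk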

It seems that Bug#1037256 will be fixed by supporting compressed fonts. I don't know how to take it further myself, but I'm sure that Mr. Cyril Brulebois will handle this issue better. :-)

(As just an idea, I thought that creating a fake fontconfig cache when building the image, then decompressing the compressed font dynamically, might work, but it didn't.)

If you would like to tackle fixing d-i issues as a newbie, it might be better to execute "make reallyclean" before rebuilding the image, so as not to fall into pitfalls.

04 January, 2025 01:32PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - December 2024

Our Debian User Group met on December 22nd for our last meeting of 2024. I wasn't sure at first it was a good idea, but many people showed up and it was great!

Here's what we did:

pollo:

anarcat:

lelutin:

lavamind:

  • installed Debian on an oooollld (as in, with a modem) laptop
  • debugged a FTBFS on jruby

tvaz:

  • did some simple packaging QA
  • added basic salsa CI and some RFA for a bunch of packages (python-midiutil, antimony, python-pyo, rakarrack, python-pyknon, soundcraft-utils, cecilia, nasty, gnome-icon-theme-nuovo, gnome-extra-icons, gnome-subtitles, timgm6mb-soundfont)

mjeanson and joeDoe:

  • hung out and did some stuff :)

Some of us ended up grabbing a drink after the event at l'Isle de Garde, a pub right next to the venue.

Pictures

This time around, we were hosted by l'Espace des possibles, at their new location (they moved since our last visit). It was great! People liked the space so much we actually discussed going back there more often :)

Group photo at l'Espace des possibles

04 January, 2025 05:00AM by Louis-Philippe Véronneau

January 03, 2025

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

This is bits from the DPL for December.

Happy New Year 2025! Wishing everyone health, productivity, and a successful Debian release later in this year.

Strict ownership of packages

I'm glad my last bits sparked discussions about barriers between packages and contributors, summarized temporarily in some post on the debian-devel list. As one participant aptly put it, we need a way to visibly say, "I'll do the job until someone else steps up". Based on my experience with the Bug of the Day initiative, simplifying the process for engaging with packages would significantly help.

Currently we have

  1. NMU: The Developers Reference outlines several preconditions for NMUs, explicitly stating, "Fixing cosmetic issues or changing the packaging style in NMUs is discouraged." This makes NMUs unsuitable for addressing package smells. However, I've seen NMUs used for tasks like switching to source format 3.0 or bumping the debhelper compat level. While it's technically possible to file a bug and then address it in an NMU, the process inherently limits the NMUer's flexibility to reduce package smells.

  2. Package Salvaging: This is another approach for working on someone else's packages, aligning with the process we often follow in the Bug of the Day initiative. The criteria for selecting packages typically indicate that the maintainer either lacks time to address open bugs, has lost interest, or is generally MIA.

Both options have drawbacks, so I'd welcome continued discussion on criteria for lowering the barriers to moving packages to Salsa and modernizing their packaging. These steps could enhance Debian overall and are generally welcomed by active maintainers. The discussion also highlighted that packages on Salsa are often maintained collaboratively, fostering the team-oriented atmosphere already established in several Debian teams.

Salsa

Continuous Integration

As part of the ongoing discussion about package maintenance, I'm considering the suggestion to switch from the current opt-in model for Salsa CI to an opt-out approach. While I fully agree that human verification is necessary when the pipeline is activated, I believe the current option to enable CI is less visible than it should be. I'd welcome a more straightforward approach to improve access to better testing for what we push to Salsa.

Number of packages not on Salsa

In my campaign, I stated that I aimed to reduce the number of packages maintained outside Salsa to below 2,000. As of March 28, 2024, the count was 2,368. As of this writing, the count stands at 1,928 [1], so I consider this promise fulfilled. My thanks go out to everyone who contributed to this effort. Moving forward, I'd like to set a more ambitious goal for the remainder of my term and hope we can reduce the number to below 1,800.

[1] UDD query: SELECT DISTINCT count(*) FROM sources WHERE release = 'sid' and vcs_url not like '%salsa%' ;

Past and future events

Talk at MRI Together

In early December, I gave a short online talk, primarily focusing on my work with the Debian Med team. I also used my position as DPL to advocate for attracting more users and developers from the scientific research community.

FOSSASIA

I originally planned to attend FOSDEM this year. However, given the strong Debian presence there and the need for better representation at the FOSSASIA Summit, I decided to prioritize the latter. This aligns with my goal of improving geographic diversity. I also look forward to opportunities for inter-distribution discussions.

Debian team sprints

Debian Ruby Sprint

I approved the budget for the Debian Ruby Sprint, scheduled for January 2025 in Paris. If you're interested in contributing to the Ruby team, whether in person or online, consider reaching out to them. I'm sure any helping hand would be appreciated.

Debian Med sprint

There will also be a Debian Med sprint in Berlin in mid-February. As usual, you don't need to be an expert in biology or medicine: basic bug squashing skills are enough to contribute and enjoy the friendly atmosphere the Debian Med team fosters at their sprints. For those working in biology and medicine, we typically offer packaging support. Anyone interested in spending a weekend focused on impactful scientific work with Debian is warmly invited.

Again all the best for 2025

Andreas.

03 January, 2025 11:00PM by Andreas Tille

Taavi Väänänen

Automatically updating reverse DNS entries for my Hetzner servers

Some parts of my infrastructure run on Hetzner dedicated servers. Hetzner's management console has an interface to update reverse DNS entries, and I wanted to automate that. Unfortunately there's no option to just delegate the zones to my own authoritative DNS servers. So I did the next best thing, which is updating the Hetzner-managed records with data from my own authoritative DNS servers.

Generating DNS zones the hard way

The first step of automating DNS record provisioning is, well, figuring out which records need to be provisioned. I wanted to re-use my existing automation for generating the record data, instead of coming up with a new system for these records. The basic summary is that there's a Go program creatively named dnsgen that's in charge of generating zone file snippets from various sources (these include Netbox, Kubernetes, PuppetDB and my custom reverse web proxy setup).

Those snippets are combined with Jinja templates to generate full zone files to be loaded to a hidden primary running Bind9 (like all other DNS servers I run). The zone files are then transferred to a fleet of internal authoritative servers as well as my public authoritative DNS server, which in turn transfers them to various other authoritative DNS servers (like ns-global and Traficom anycast) for redundancy.

There's also a bunch of other smaller features, like using Bind views to serve different data to internal and external clients, and resolving external records during record generation time to be used on apex records that would use CNAME records if they could. (The latter is a workaround for Masto.host, the hosting provider we use for Wikis World, not having a stable IPv6 address.) Overall it's a really nice system, and I've spent quite a bit of time on it.

Updating records on Hetzner-managed space

As mentioned above, Hetzner unfortunately does not support custom DNS servers for reverse records on IP space rented from them. But I wanted to keep using my existing, perfectly working DNS record generation setup. So the obvious answer is to (ab)use DNS zone file transfers.

I quickly wrote a few hundred lines of Go to request the zone data and then use the Hetzner robot API to ensure the reverse entries are in sync. The main obstacle hit here was the Hetzner API somehow requiring an "update" call (instead of a "create" one) to create a new record, as the create endpoint was returning an HTTP 400 response no matter what. Once I sorted that out, the script started working fine and created the few dozen missing records. Finally I added a CronJob in my Kubernetes cluster to run the script once in a while.
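
Roughly, the zone-transfer half of such a script could look like the sketch below, using the github.com/miekg/dns library. The zone and server names are placeholders, and the Hetzner side is only hinted at in comments; this is an illustration, not the actual script:

package main

import (
    "fmt"
    "log"

    "github.com/miekg/dns"
)

// fetchPTRs requests a full zone transfer (AXFR) from the given server
// and collects all PTR records found in the zone.
func fetchPTRs(zone, server string) (map[string]string, error) {
    m := new(dns.Msg)
    m.SetAxfr(zone)

    tr := new(dns.Transfer)
    envelopes, err := tr.In(m, server)
    if err != nil {
        return nil, err
    }

    ptrs := map[string]string{}
    for env := range envelopes {
        if env.Error != nil {
            return nil, env.Error
        }
        for _, rr := range env.RR {
            if ptr, ok := rr.(*dns.PTR); ok {
                ptrs[rr.Header().Name] = ptr.Ptr
            }
        }
    }
    return ptrs, nil
}

func main() {
    // Placeholder zone and primary server; the real script would read
    // these from configuration.
    ptrs, err := fetchPTRs("0.0.192.in-addr.arpa.", "primary.example.org:53")
    if err != nil {
        log.Fatal(err)
    }
    for name, target := range ptrs {
        // Here the real script diffs against what the Hetzner robot API
        // reports and issues "update" calls for out-of-sync entries.
        fmt.Printf("%s -> %s\n", name, target)
    }
}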

Overall this is a big improvement over doing things by hand and didn't require that much effort. The obvious next step would be to expand the script into a tiny DNS server capable of receiving zone update NOTIFYs to make the updates happen in real time. Unfortunately there's now no hiding of the records revealing my ugly hacks, er, clever networking solutions :(

03 January, 2025 12:00AM by Taavi Väänänen (hi@taavi.wtf)

January 02, 2025

Paul Wise

FLOSS Activities December 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

02 January, 2025 10:54AM

hackergotchi for Martin-Éric Racine

Martin-Éric Racine

On the future of i386 on Debian

Before we proceed, let's emphasize a few things:

  • My Testing hardware is i386 simply because I have plenty of leftovers from older days. These are hosts that I can afford to see randomly break due to transitions.
  • Meanwhile, my desktop has been 64-bit for over 10 years, my laptop for a bit less. Basically, my daily activities don't depend on 32-bit hardware remaining supported.
  • I fully agree that there is no sense in making a fresh install on 32-bit hardware nowadays. I therefore support Debian dropping 32-bit architectures from debian-installer.

This being said, I still think that the current approach of keeping i386 among the supported architectures, all while no longer shipping kernels, is entirely the wrong decision. What should instead be done is to keep on shipping i386 kernels for Trixie, but clearly indicate in the Trixie Release Notes that i386 is supported for the last time and thereafter fully demoted to Ports.

02 January, 2025 08:02AM by Martin-Éric (noreply@blogger.com)

hackergotchi for Matthew Garrett

Matthew Garrett

The GPU, not the TPM, is the root of hardware DRM

As part of their "Defective by Design" anti-DRM campaign, the FSF recently made the following claim:
Today, most of the major streaming media platforms utilize the TPM to decrypt media streams, forcefully placing the decryption out of the user's control (from here).
This is part of an overall argument that Microsoft's insistence that only hardware with a TPM can run Windows 11 is with the goal of aiding streaming companies in their attempt to ensure media can only be played in tightly constrained environments.

I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff.

What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU.

Let's back up for a moment. There's multiple different DRM implementations, but the big three are Widevine (owned by Google, used on Android, Chromebooks, and some other embedded devices), Fairplay (Apple implementation, used for Mac and iOS), and Playready (Microsoft's implementation, used in Windows and some other hardware streaming devices and TVs). These generally implement several levels of functionality, depending on the capabilities of the device they're running on - this will range from all the DRM functionality being implemented in software up to the hardware path that will be discussed shortly. Streaming providers can choose what level of functionality and quality to provide based on the level implemented on the client device, and it's common for 4K and HDR content to be tied to hardware DRM. In any scenario, they stream encrypted content to the client and the DRM stack decrypts it before the compressed data can be decoded and played.

The "problem" with software DRM implementations is that the decrypted material is going to exist somewhere the OS can get at it at some point, making it possible for users to simply grab the decrypted stream, somewhat defeating the entire point. Vendors try to make this difficult by obfuscating their code as much as possible (and in some cases putting some of it in-kernel), but pretty much all software DRM is at least somewhat broken and copies of any new streaming media end up being available via Bittorrent pretty quickly after release. This is why higher quality media tends to be restricted to clients that implement hardware-based DRM.

The implementation of hardware-based DRM varies. On devices in the ARM world this is usually handled by performing the cryptography in a Trusted Execution Environment, or TEE. A TEE is an area where code can be executed without the OS having any insight into it at all, with ARM's TrustZone being an example of this. By putting the DRM code in TrustZone, the cryptography can be performed in RAM that the OS has no access to, making the scraping described earlier impossible. x86 has no well-specified TEE (Intel's SGX is an example, but is no longer implemented in consumer parts), so instead this tends to be handed off to the GPU. The exact details of this implementation are somewhat opaque - of the previously mentioned DRM implementations, only Playready does hardware DRM on x86, and I haven't found any public documentation of what drivers need to expose for this to work.

In any case, as part of the DRM handshake between the client and the streaming platform, encryption keys are negotiated with the key material being stored in the GPU or the TEE, inaccessible from the OS. Once decrypted, the material is decoded (again either on the GPU or in the TEE - even in implementations that use the TEE for the cryptography, the actual media decoding may happen on the GPU) and displayed. One key point is that the decoded video material is still stored in RAM that the OS has no access to, and the GPU composites it onto the outbound video stream (which is why if you take a screenshot of a browser playing a stream using hardware-based DRM you'll just see a black window - as far as the OS can see, there is only a black window there).

Now, TPMs are sometimes referred to as a TEE, and in a way they are. However, they're fixed function - you can't run arbitrary code on the TPM, you only have whatever functionality it provides. But TPMs do have the ability to decrypt data using keys that are tied to the TPM, so isn't this sufficient? Well, no. First, the TPM can't communicate with the GPU. The OS could push encrypted material to it, and it would get plaintext material back. But the entire point of this exercise was to avoid the decrypted version of the stream from ever being visible to the OS, so this would be pointless. And rather more fundamentally, TPMs are slow. I don't think there's a TPM on the market that could decrypt a 1080p stream in realtime, let alone a 4K one.

The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry. While the FSF has been focusing on TPMs, GPU vendors have quietly deployed all of this technology without the FSF complaining at all. Microsoft has enthusiastically participated in making hardware DRM on Windows possible, and user freedoms have suffered as a result, but Playready hardware-based DRM works just fine on hardware that doesn't have a TPM and will continue to do so.

02 January, 2025 01:14AM

hackergotchi for Colin Watson

Colin Watson

Free software activity in December 2024

Most of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via Liberapay (thanks!).

OpenSSH

I issued a bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange which was quite broken in bookworm.

base-passwd

A few months ago, the adduser maintainer started a discussion with me (as the base-passwd maintainer) and the shadow maintainer about bringing all three source packages under one team, since they often need to cooperate on things like user and group names. I agreed, but hadn’t got round to doing anything about it until recently. I’ve now officially moved it under team maintenance.

debconf

Gioele Barabucci has been working on eliminating duplicated code between debconf and cdebconf, ultimately with the goal of migrating to cdebconf (which I’m not sure I’m convinced of as a goal, but if we can make improvements to both packages as part of working towards it then there’s no harm in that). I finally got round to reviewing and merging confmodule changes in each of debconf and cdebconf. This caused an installer regression due to a weirdness in cdebconf-udeb’s packaging, which I fixed - sorry about that!

I’ve also been dealing with a few patch submissions that had been in my queue for a long time, but more on that next month if all goes well.

CI issues

I noticed and fixed a problem with Restrictions: needs-sudo in autopkgtest.

I fixed broken aptly images in the Salsa CI pipeline.

Python team

Last month, I mentioned some progress on sorting out the multipart vs. python-multipart name conflict in Debian (#1085728), and said that I thought we’d be able to finish it soon. I was right! We got it all done this month:

The Python 3.13 transition continues, and last month we were able to add it to the supported Python versions in testing. (The next step will be to make it the default.) I fixed lots of problems in aid of this, including:

Sphinx 8.0 removed some old intersphinx_mapping syntax which turned out to still be in use by many packages in Debian. The fixes for this were individually trivial, but there were a lot of them:
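
For context, the kind of fix involved is typically a one-line conf.py edit of roughly this shape (the project name and URL here are illustrative):

# Old shorthand rejected by Sphinx 8.0: a bare target string.
intersphinx_mapping = {"python": "https://docs.python.org/3/"}

# Current syntax: each entry maps a name to a (target, inventory) tuple.
intersphinx_mapping = {"python": ("https://docs.python.org/3/", None)}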

I found that twisted 24.11.0 broke tests in buildbot and wokkel, and fixed those.

I packaged python-flatdict, needed for a new upstream version of python-semantic-release.

I tracked down a test failure in vdirsyncer (which I’ve been using for some years, but had never previously needed to modify) and contributed a fix upstream.

I fixed some packages to tolerate future versions of dh-python that will drop their dependency on python3-setuptools:

I fixed django-cte to remove a build-dependency on the obsolete python3-nose package.

I added Django 5.1 support to django-polymorphic. (There are a number of other packages that still need work here.)

I fixed various other build/test failures:

I upgraded these packages to new upstream versions:

  • aioftp
  • alot
  • astroid
  • buildbot
  • cloudpickle (fixing a Python 3.13 failure)
  • django-countries
  • django-sass-processor
  • djoser (fixing CVE-2024-21543)
  • ipython
  • jsonpickle
  • lazr.delegates
  • loguru (fixing a Python 3.13 failure)
  • netmiko
  • pydantic
  • pydantic-core
  • pydantic-settings
  • pydoctor
  • pygresql
  • pylint (fixing Python 3.13 failures #1089758 and #1091029)
  • pypandoc (fixing a Python 3.12 warning)
  • python-aiohttp (fixing CVE-2024-52303 and CVE-2024-52304)
  • python-aiohttp-security
  • python-argcomplete
  • python-asyncssh
  • python-click
  • python-cytoolz
  • python-jira (fixing a Python 3.13 failure)
  • python-limits
  • python-line-profiler
  • python-mkdocs
  • python-model-bakery
  • python-pgspecial
  • python-pyramid (fixing CVE-2023-40587)
  • python-pythonjsonlogger
  • python-semantic-release
  • python-utils
  • python-venusian
  • pyupgrade
  • pyzmq
  • quart
  • six
  • sqlparse
  • twisted
  • vcr.py
  • vulture
  • yoyo
  • zope.configuration
  • zope.testrunner

I updated the team’s library style guide to remove material related to Python 2 and early versions of Python 3, which is no longer relevant to any current Python packaging work.

Other Python upstream work

I happened to notice a Twisted upstream issue requesting the removal of the deprecated twisted.internet.defer.returnValue, realized it was still used in many places in Debian, and went on a PR-filing spree informed by codesearch to try to reduce the future impact of such a change on Debian:
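
The mechanical change involved looks roughly like this (a sketch with an invented fetch function; on Python 3.3+ a generator can return a value directly, which inlineCallbacks supports):

from twisted.internet import defer

# Before: the deprecated spelling.
@defer.inlineCallbacks
def fetch(client):
    result = yield client.request()
    defer.returnValue(result)

# After: a plain return does the same thing on Python 3.
@defer.inlineCallbacks
def fetch(client):
    result = yield client.request()
    return result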

Other small fixes

Santiago Vila has been building the archive with make --shuffle (also see its author’s explanation). I fixed associated bugs in cccc (contributed upstream), groff, and spectemu.

I backported an upstream patch to putty to fix undefined behaviour that affected use of the “small keypad”.

I removed groff’s Recommends: libpaper1 (#1091375, #1091376), since it isn’t currently all that useful and was getting in the way of a transition to libpaper2. I filed an upstream bug suggesting better integration in this area.

02 January, 2025 12:16AM by Colin Watson

January 01, 2025

Tim Retout

Strauss as Pop Music

While watching the Vienna New Year’s Concert today, reading about its perhaps somewhat problematic origins, I was struck by the observation that the Strauss family’s polkas were seen as pop music during their lifetime, not as serious as proper classical composers, and so it took some time before the Vienna Philharmonic would actually play their work.

(Perhaps the space-themed interval today and the ballet dancers pretending to be a steam train were a continuation of the true spirit of this? It felt very Eurovision.)

I can’t decide if it’s remarkable that this year was the first time a female composer (Constanze Geiger) was represented at this concert, or if that is what you get when you set up a tradition of playing mainly Strauss?

01 January, 2025 11:36PM

Russ Allbery

2024 Book Reading in Review

In 2024, I finished and reviewed 46 books, not counting another three books I've finished but not yet reviewed and which will therefore roll over to 2025. This is slightly fewer books than the last couple of years, but more books than 2021. Reading was particularly spotty this year, with much of the year's reading packed into late November and December.

This was a year in which I figured out I was trying to do too much, but did not finish figuring out what to do about it. Reading and particularly reviewing reflected that, with long silent periods and then attempts to catch up. One of the goals for next year is to find a more sustainable balance for the hobbies in my life, including reading.

My favorite books I read this year were Ashley Herring Blake's Bright Falls sapphic romance trilogy: Delilah Green Doesn't Care, Astrid Parker Doesn't Fail, and Iris Kelly Doesn't Date. These are not perfect books, but they made me laugh, made me cry, and were impossible to put down. My thanks to a video from BookTuber Georgia Marie for the recommendation.

I Shall Wear Midnight was the best of the remaining Pratchett novels. It's the penultimate Tiffany Aching book and, in my opinion, the best. All of the elements of the previous books come together in snarky competence porn that was a delight to read.

The best book I read last year was Mark Lawrence's The Book That Wouldn't Burn, which much to my surprise did not make a single award list for its publication year of 2023. It was a tour de force of world-building that surprised me multiple times. Unfortunately, the sequel was not as good and I fear the series may be heading in the wrong direction. I am attempting to stay hopeful about the upcoming third and concluding book.

I didn't read much non-fiction this year, but the best of what I did read was Zeke Faux's Number Go Up about the cryptocurrency bubble. This book will not change anyone's mind, but it's a readable and entertaining summary of some of the more obvious cryptocurrency scams. I also had enough quibbles with it to write an extended review, which is a compliment of sorts.

The Discworld read-through is done, so I may either start or return to another series re-read in 2025. I have a huge backlog of all sorts of books, though, so we will see how the year goes. As always, I have no specific numeric goals, just a hope that I can make time for regular and varied reading and maintain a rhythm with writing reviews.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

01 January, 2025 08:11PM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities December 2024

Another short status update of what happened on my side last month. The larger blocks are the Phosh 0.44 release and landing the initial Cell Broadcast support in phosh. The rest is all just small bits of bug and fallout/regression fixing here and there.

phosh

  • Fix notification regression and release 0.43.1 (MR), 0.43.1
  • Make notification banner take less vertical space (MR)
  • Allow to unfullscreen apps from the overview (MR)
  • Fix a leak in the tests tripping up our ASAN CI (MR)
  • Use consistent prefix and portal name (MR). This allows us to properly name the portal
  • Undraft the initial Cell Broadcast implementation (MR)
  • Brush up and merge the 1y old background in overview MR (MR)
  • Monitor background file changes (MR)
  • Some style improvements prompted by the above MR plus some other cleanups (MR)
  • Release 0.44~rc1 and 0.44.0
  • Make new headers introduced in 0.44 private (MR)
  • Port prefs to GtkFileDialog (so we use the adaptive portal) (MR)
  • Make fake clock available in regular shell (MR)
  • Enable/disable autoconnect on wwan connection, otherwise they come back on after e.g. resume (MR)
  • Toggle top-bar transparency (MR)
  • Create thumbnails for screenshots (MR)

phoc

  • Don't crash on NULL output when using foreign-toplevel to fullscreen (MR)
  • Allow to force shell-reveal for debugging (MR)
  • Release 0.44~rc1 and 0.44.0
  • Don't forget to reset fullscreen state when tiling (MR)

phosh-mobile-settings

libphosh-rs

  • Update for 0.44~rc1 (MR)
  • Release 0.0.5 (MR)

phosh-osk-stub

  • Release 0.44~rc1
  • Drop experimental status, update screenshots and release 0.44.0

phosh-tour

pfs

  • Allow to sort by modification time (MR)
  • Allow to activate via <return> (MR)
  • Load thumbnails if they exist (MR)
  • Store sort-mode (MR)
  • Tweak file name display a bit (MR)

xdg-desktop-portal-phosh

  • Use phosh as portal name rather than pmp (which is confusing to users) (MR)
  • Update pfs subproject and adjust packaging (MR)
  • Release 0.44~rc1 and 0.44.0
  • Implement r/o mode (MR)

phog

  • Unbreak with recent phoc (MR)

Debian

git-buildpackage

  • Fix ci (MR)
  • Move upstream ci to separate pipeline and run type checks and collect test results (MR)
  • pristine-tar: handle upstream-signatures like import-orig (MR)
  • Run tests before salsa-ci pipeline and enable component tests (MR)
  • Run tests that need network access in CI, use ci-fairy, etc. (MR)
  • Bundle pipes module to avoid deprecation (MR)
  • Release 0.9.36
  • Fix --export-dir regression (MR)

wlr-randr

  • Document --toggle (MR)

python-dbusmock

  • Add mock for cell broadcast messages (MR)

livi

  • Use AdwAboutDialog (MR)
  • Release 0.3.0 (MR)

Chatty

  • Fix crash when saving attachments in Matrix chats (MR)

feedbackd

  • Add vibrate() API to allow e.g. games more haptic control (MR). This could also be used in browsers to implement the vibration API in e.g. Firefox.
  • Release 0.6.0 (MR)

libadwaita

  • Drop superfluous "makes" (MR)

phosh-ev

  • Add ci: (MR)

Reviews

This is not code by me but reviews of other people's code. The list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions!

  • phosh: Switch to AdwPreferencesDialog (MR)
  • phosh: Visual effect when swiping notification (MR)
  • phosh: Notification banner slide up animation (MR)
  • phosh: Slide down notifications when adding a new one (MR)
  • libphosh-rs: License symlinks (MR)
  • phosh-ev: Support for Nissan (MR) (got merged)
  • Debian: libvirt update (enabling nftables) (MR) (got merged)
  • Debian: libvirt update (disabling nftables again (among other things) (MR)
  • git-buildpackage: uscan --download-version (MR)
  • git-buildpackage: manpage improvements (MR)
  • git-buildpackage: improve intro (MR)
  • git-buildpackage: Add import-ref to gbp(1) (MR: https://salsa.debian.org/agx/git-buildpackage/-/merge_requests/31)

Help Development

Thanks a lot to all those who supported my work on this in 2024. Happy new year!

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 January, 2025 09:09AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

2024 — A Musical Retrospective

Another musical retrospective. If you enjoy this, I also did a 2022 and a 2023 one.

Albums

In 2024, I added 88 new albums to my collection — that's a lot!

This year again, I bought the vast majority of my music on Bandcamp. To be honest, I'm quite distraught by what's become of that website. Although it remains a wonderful place to buy underground music, Songtradr, the new owner of the platform, has been shown to be viciously anti-union.

Money continues to ruin the world, I guess.

Concerts

I continued to go to a lot of concerts in 2024 (25!). Over the past 3 years, I have been going to more and more concerts, and I think I've reached my "peak". An average of a concert every two weeks is quite a lot :)

If you also like music and concerts, but find yourself not going to as many as you would like, the real secret is not to be afraid to go to concerts alone. Going with friends is always fun, but if I restricted myself to only going to concerts in a group, I'd barely see a few each year.

Another good piece of advice is to bring a book or something else1 to pass the time between sets. It can often take 30-45 minutes between sets for the artists to get their instruments ready, which can get quite boring if you just stand there and wait.

Anyway, here are the concerts I went to in 2024:

  • February 22nd-23rd-24th (Montreal Madhouse 2024): Scorching Tomb, Bruiserweight, Scaramanga, Cloned Apparition, Chain Block, Freezerburn, Béton Armé, Mil-Spec, NUKE, Friction, Reality Denied, SOV, Deathnap, Glint, Mulch, Stigmatism, Plus Minus, Puffer, Deadbolt, Apes, Pale Ache, Total Nada, Verify, Cross Check
  • March 16th: Kavinsky
  • April 11th: Agriculture
  • April 26th-27th (Oi! Fest 2024): Bishops Green, The Partisans, Mess, Fuerza Bruta, Empire Down, Unwanted Noise, Lion's Law, The Oppressed, Ultra Sect, Reckless Upstarts, 21 Gun Salute, Jail
  • May 4th: MASTER BOOT RECORD
  • May 16th: Wayfarer, Valdrin, Sonja
  • May 25th: Union Thugs
  • June 15th: Ultra Razzia, Over the Hill, Street Code, Mortier
  • September 5th-6th (Droogs Fest 2024): Skarface, Inspecter 7, 2 Stone 2 Skank, Francbâtards, Les Happycuriens, Perkele, Blanks 77, Violent Way, La Gachette, Jenny Woo
  • September 16th: Too Many Zoos
  • September 27th: The Slads, Young Blades, New Release, Mortier
  • October 2nd: Amorphis, Dark Tranquility, Fires in the Distance
  • October 7th: Jordi Savall & Hespèrion XXI, accompanied by La Capella Reial de Catalunya
  • October 11th-12th (Revolution Fest 2024): René Binamé, Dirty Old Mat, Union Thugs, Gunh Twei, Vermine Kaos, Inner Terrestrials, Ultra Razzia, Battery March, Uzu, One Last Thread, Years of Lead
  • October 19th (Varning from Montreal XVI): Coupe Gorge, Flash, Imploders, Young Blades, Tenaz, Mötorwölf
  • November 2nd: Kon-Fusion, Union Thugs
  • November 12th: Chat Pile, Agriculture, Traindodge
  • November 25th: Godspeed You! Black Emperor
  • November 27th: Zeal & Ardour, Gaerea, Zetra
  • December 7th: Perestroïka, Priors, White Knuckles, Tenaz

Shout out to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there.

See you all in 2025!


  1. I bought a Miyoo Mini Plus, a handheld Linux console running OnionOS, for that express reason. So far it's been great and I've been very happy to revisit some childhood classics. 

01 January, 2025 05:00AM by Louis-Philippe Véronneau

hackergotchi for Junichi Uekawa

Junichi Uekawa

Happy New Year.

Happy New Year. Spending most of my time on work and family. Kids are taking my time.

01 January, 2025 03:55AM by Junichi Uekawa

Russ Allbery

Review: Driving the Deep

Review: Driving the Deep, by Suzanne Palmer

Series: Finder Chronicles #2
Publisher: DAW
Copyright: 2020
ISBN: 0-7564-1512-8
Format: Kindle
Pages: 426

Driving the Deep is science fiction, a sequel to Finder (not to be confused with Finders, Emma Bull's Finder, or the many other books and manga with the same title). It stands alone and you could start reading here, although there will be spoilers for the first book of the series. It's Suzanne Palmer's second novel.

When Fergus Ferguson was fifteen, he stole his cousin's motorcycle to escape an abusive home, stashed it in a storage locker, and got the hell off of Earth. Nineteen years later, he's still paying for the storage locker and it's still bothering him that he never returned the motorcycle. His friends in the Shipyard orbiting Pluto convince him to go to Earth and resolve this ghost of his past, once and for all.

Nothing for Fergus is ever that simple. When the key he's been carrying all these years fails to open the storage unit, he hacks it open, only to find no sign of his cousin's motorcycle. Instead, the unit is full of expensive storage crates containing paintings by artists like Van Gogh. They're obviously stolen. Presumably the paintings also explain the irate retired police officer who knocks him out and tries to arrest him, slightly after the urgent message from the Shipyard AI telling him his friends are under attack.

Fergus does not stay arrested, a development that will not surprise readers of the previous book. He does end up with an obsessed and increasingly angry ex-cop named Zacker as an unwanted passenger. Fergus reluctantly cuts a deal with Zacker: assist him in finding out what happened to his friends, and Fergus will then go back to Earth and help track down the art thieves who shot Zacker's daughter.

It will be some time before they get back to Earth. Fergus's friends have been abducted by skilled professionals. What faint clues he can track down point to Enceladus, a moon of Saturn with a vast subsurface ocean. One simulation test with a desperate and untrustworthy employer later, Fergus is now a newly-hired pilot of an underwater hauler.

The trend in recent SFF genre novels has been towards big feelings and character-centric stories. Sometimes this comes in the form of found family, sometimes as melodrama, and often now as romance. I am in general a fan of this trend, particularly as a corrective to the endless engineer-with-a-wrench stories, wooden protagonists, and cardboard characters that plagued classic science fiction. But sometimes I want to read a twisty and intelligent plot navigated by a competent but understated protagonist and built around nifty science fiction ideas. That is exactly what Driving the Deep is, and I suspect this series is going to become my go-to recommendation for people who "just want a science fiction novel."

I don't want to overstate this. Fergus is not a blank slate; he gets the benefit of the dramatic improvement in writing standards and characterization in SFF over the past thirty years. He's still struggling with what happened to him in Finder, and the ending of this book is rather emotional. But the overall plot structure is more like a thriller or a detective novel: there are places to go, people to investigate, bases to infiltrate, and captives to find, so the amount of time spent on emotional processing is necessarily limited. Fergus's emotions and characterization are grace notes around the edges of the plot, not its center.

I thoroughly enjoyed this. Palmer has a light but effective touch with characterization and populates the story with interesting and distinguishable characters. The plot has a layered complexity that allows Fergus to keep making forward progress without running out of twists or getting repetitive. The motivations of the villains were not the most original, but they didn't need to be; the fun of the story is figuring out who the villains are and watching Fergus get out of impossible situations with the help of new friends. Finder was a solid first novel, but I thought Driving the Deep was a substantial improvement in both pacing and plot coherence.

If I say a novel is standard science fiction, that sounds like criticism of lack of originality, but sometimes standard science fiction is exactly what I want to read. Not every book needs to do something wildly original or upend my understanding of story. I started reading science fiction because I loved tense adventures on moons of Saturn with intelligent spaceships and neat bits of technology, and they're even better with polished writing, quietly competent characterization, and an understated sense of humor.

This is great stuff, and there are two more books already published that I'm now looking forward to. Highly recommended when you just want a science fiction novel.

Followed by The Scavenger Door.

Rating: 8 out of 10

01 January, 2025 02:36AM

December 31, 2024

hackergotchi for Chris Lamb

Chris Lamb

Favourites of 2024

Here are my favourite books and movies that I read and watched throughout 2024.

It wasn't quite as stellar a year for books as previous years: few of the kind of books that make you want to recommend and/or buy them for all your friends. In subconscious compensation, perhaps, I reread a few classics (e.g. True Grit, Solaris), and I've almost finished my second read of War and Peace.

§

Books

  • Elif Batuman: Either/Or (2022)
  • Stella Gibbons: Cold Comfort Farm (1932)
  • Michel Faber: Under The Skin (2000)
  • Wallace Stegner: Crossing to Safety (1987)
  • Gustave Flaubert: Madame Bovary (1857)
  • Rachel Cusk: Outline (2014)
  • Sara Gran: The Book of the Most Precious Substance (2022)
  • Anonymous: The Railway Traveller’s Handy Book (1862)
  • Natalie Hodges: Uncommon Measure: A Journey Through Music, Performance, and the Science of Time (2022)
  • Gary K. Wolf: Who Censored Roger Rabbit? (1981)

§

Films

Recent releases

     † Seen at a 2023 festival.

Disappointments this year included Blitz (Steve McQueen), Love Lies Bleeding (Rose Glass), The Room Next Door (Pedro Almodóvar) and Emilia Pérez (Jacques Audiard), whilst the worst new film this year was likely The Substance (Coralie Fargeat), followed by Megalopolis (Francis Ford Coppola), Unfrosted (Jerry Seinfeld) and Joker: Folie à Deux (Todd Phillips).


Older releases

i.e. films released before 2023, and not including rewatches from previous years.

Distinctly unenjoyable watches included The Island of Dr. Moreau (John Frankenheimer, 1996), Southland Tales (Richard Kelly, 2006), Any Given Sunday (Oliver Stone, 1999) & The Hairdresser’s Husband (Patrice Leconte, 1990).

On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Solaris (Andrei Tarkovsky, 1972), Blade Runner (Ridley Scott, 1982), Apocalypse Now (Francis Ford Coppola, 1979) and Die Hard (John McTiernan, 1988).


31 December, 2024 03:58PM

Scarlett Gately Moore

KDE: Application snaps 24.12.0 release and more

https://kde.org/announcements/gear/24.12.0

I hope everyone had a wonderful holiday! Your present from me is shiny new application snaps! There are several new qt6 ports in this release. Please visit https://snapcraft.io/store?q=kde

I have also fixed the Krita snap bug where it was unable to open/save files. Please test --edge!

I am continuing work on core24 support and hope to be done before next release.

I do look forward to 2025! Begone 2024!

If you can help with gas, I still have 3 weeks of treatments to go. Thank you for your continued support.

https://gofund.me/573cc38e

31 December, 2024 02:34PM by sgmoore

Russell Coker

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

git.sesse.net goes IPv6-only

Following Dennis Schubert's post on how LLM bots are scraping the Internet continuously at full speed, I decided to take a look at my own server. If you exclude my chess site, which naturally has a lot of unusual organic traffic right now (due to the World Rapid and Blitz Chess Championship; every site update is a long-poll request), I am at a neat 86% of requests matching “crawler” or “bot” in the UA, and then more crawler traffic with spoofed UAs on top of that. (GPTBot at the top, of course, with 25x as many requests as Googlebot. MJ12Bot, which I'd never even heard of, is in second place.)

This is unsustainable, but also perhaps not a big surprise; these companies have tons of VC money (so they don't need to care much about being efficient), an insatiable lust for data, questionable ethics, and lack the sophistication in crawling that the traditional web search companies have built up over the last twenty years. So of course they will just follow every link and never really back down. Including, say, try to fetch every single tree object from all of my Git mirrors, including large repositories like an old VLC fork.

I could add robots.txt, but there are tons of them, and it's honestly not a game I want to spend energy on. So I went for a simple solution: Remove the A record. git.sesse.net is now IPv6-only; if your infrastructure administrator is remotely competent, you'll have IPv6 by now, and if not, interested programmers (the main target for Git repositories, obviously) are probably able to fix that easily enough. (Curiously enough, there are still some sites coming in with bogus “Chrome” user agents over IPv4. Not even respecting DNS timeouts, of course…)

We'll see how it turns out; perhaps I'll need to reenable it if there's an influx of legitimate users wanting my software. But TBH, as we go into 2025 and IPv6 turns 30, enough is enough anyway. It will soon be more common to have IPv6 than not to have it (47% and counting), so it's time to move on.

Happy new year!

31 December, 2024 08:48AM

Russ Allbery

Review: Metal from Heaven

Review: Metal from Heaven, by August Clarke

Publisher: Erewhon
Copyright: November 2024
ISBN: 1-64566-099-0
Format: Kindle
Pages: 443

Metal from Heaven is industrial-era secondary-world fantasy with a literary bent. It is a complete story in one book, and I would be very surprised by a sequel. Clarke previously wrote the Scapegracers young-adult trilogy, which got excellent reviews and a few award nominations, as H.A. Clarke. This is his first adult novel.

Know I adore you. Look out over the glow. The cities sundered, their machines inverted, mountains split and prairies blazing, that long foreseen Hereafter crowning fast. This calamity is a promise made to you. A prayer to you, and to your shadow which has become my second self, tucked behind my eye and growing in tandem with me, pressing outwards through the pupil, the smarter, truer, almost bursting reason for our wrath. Do not doubt me. Just look. Watch us rise as the sun comes up over the beauty. The future stains the bleakness so pink. When my violence subsides, we will have nothing, and be champions.

Marney Honeycutt is twelve years old, a factory worker, and lustertouched. She works in the Yann I. Chauncey Ichorite Foundry in Ignavia City, alongside her family and her best friend, shaping the magical metal ichorite into the valuable industrial products of a new age of commerce and industry. She is the oldest of the lustertouched, the children born to factory workers and poisoned by the metal. It has made her allergic, prone to fits at any contact with ichorite, but also able to exert a strange control over the metal if she's willing to pay the price of spasms and hallucinations for hours afterwards.

As Metal from Heaven opens, the workers have declared a strike. Her older sister is the spokesperson, demanding shorter hours, safer working conditions, and an investigation into the health of the lustertouched children. Chauncey's response is to send enforcer snipers to kill the workers, including the entirety of her family.

The girl sang, "Unalone toward dawn we go, toward the glory of the new morning."

An enforcer shot her in the belly, and when she did not fall, her head.

Marney survives, fleeing into the city, swearing an impossible personal revenge against Yann Chauncey. An act of charity gets her a ticket on a train into the countryside. The woman who bought her ticket is a bandit who is on the train to rob it. Marney's ability to control ichorite allows her to help the bandits in return, winning her a place with the Highwayman's Choir who have been preying on the shipments of the rich and powerful and then disappearing into the hills.

The Choir's secret is that the agoraphobic and paranoid Baron of the Fingerbluffs is dead and has been for years. He was killed by his staff, Hereafterist idealists, who have turned his remote territory into an anarchist commune and haven for pirates and bandits. This becomes Marney's home and the Choir becomes her family, but she never forgets her oath of revenge or the childhood friend she left behind in the piles of bodies and to whom this story is narrated.

First, Clarke's writing is absolutely gorgeous.

We scaled the viny mountain jags at Montrose Barony's legal edge, the place where land was and wasn't Ignavia, Royston, and Drustland alike. There was a border but it was diffuse and hallucinatory, even more so than most. On legal papers and state maps there were harsh lines that squashed topography and sanded down the mountains into even hills in planter's rows, but here among the jutting rocks and craggy heather, the ground was lineless.

The rhythm of it, the grasp of contrast and metaphor, the word choice! That climactic word "lineless," with its echo of limitless. So good.

Second, this is the rarest of books: a political fantasy that takes class and religion seriously and uses them for more than plot drivers. This is not at all our world, and the technology level is somewhat ambiguous, but the parallels to the Gilded Age and Progressive Era are unmistakable. The Hereafterists that Marney joins are political anarchists, not in the sense of alternative governance structures and political theory sanitized for middle-class liberals, but in the sense of Emma Goldman and Peter Kropotkin. The society they have built in the Fingerbluffs is temporary, threatened, and contingent, but it is sincere and wildly popular among the people who already lived there.

Even beyond politics, class is a tangible force in this book. Marney is a factory worker and the child of factory workers. She barely knows how to read and doesn't magically learn over the course of the book. She has friends who are clever in the sense rewarded by politics and nobility, who navigate bureaucracies and political nuance, but that is not Marney's world. When, towards the end of the book, she has to deal with a gathering of high-class women, the contrast is stark, and she navigates that gathering only by being entirely unexpected.

Perhaps the best illustration of the subtlety of this is the terminology in the book for lesbian. Marney is a crawly, which is a slur thrown at people like her (and one of the rare fictional slurs that work exactly as the author intended) but is also simply what she calls herself. Whether or not it functions as a slur depends on context, and the context is never hard to understand. The high-class lesbians she meets later are Lunarists, and react to crawly as a vile and insulting word. They use language to separate themselves from both the insult and from the social class that uses it. Language is an indication of culture and manners and therefore of morality, unlike deeds, which admit endless justifications.

Conversation was fleeting. Perdita managed with whomever stood near her, chipper about every prettiness she saw, the flitting butterflies, the dappled light between the leaves, the lushness and the fragrance of untamed land, and her walking companions took turns sharing in her delight. It was infectious, how happy she was. She was going to slaughter millions. She was going to skip like this all the while.

The handling of religion is perhaps even better. Marney was raised a Tullian, which sits alongside two other fleshed-out fictional religions and sketches of several more. Tullians tend to be conservative and patriarchal, and Marney has a realistically complicated relationship with faith: sticking with some Tullian worship practices and gestures because they're part of who she is, feeling a kinship to other Tullians, discarding beliefs that don't fit her, and revising others.

Every major religion has a Hereafterist spin or reinterpretation that upends or reverses the parts of the religion that were used to prop up the existing social order and brings it more in line with Hereafterist ideals. We see the Tullian Hereafterist variation in detail, and as someone who has studied a lot of methods of reinterpreting Christianity, I was impressed by how well Clarke invents both a belief system and its revisionist rewrite. This is exactly how religions work in human history, but one almost never sees this subtlety in fantasy novels.

Marney's allergy to ichorite causes her internal dialogue to dissolve into hallucinatory synesthesia when she's manipulating or exposed to it. Since that's most of the book, substantial portions read like drug trips with growing body horror. I normally hate this type of narration, so it's a sign of just how good Clarke's writing is that I tolerated it and even enjoyed parts. It helps that the descriptions are irreverent and often surprising, full of unexpected metaphors and sudden turns. It's very hard not to quote paragraph after paragraph of this book.

Clarke is also doing a lot with gender that I don't feel qualified to comment in detail on, but it would not surprise me to see this book in the Otherwise Award recommendation list. I can think of three significant male characters, all of whom are well-done, but every other major character is female by at least some gender definition. Within that group, though, is huge gender diversity of the complicated and personal type that doesn't force people into defined boxes. Marney's sexuality is similarly unclassified and sometimes surprising. My one complaint is that I thought the sex scenes (which, to warn, are often graphic) fell into the literary fiction trap of being described so closely and physically that it didn't feel like anyone involved was actually enjoying themselves. (This is almost certainly a matter of personal taste.)

I had absolutely no idea how Clarke was going to end this book, and the last couple of chapters caught me by surprise. I'm still not sure what I think about the climax. It's not the ending that I wanted, but one of the merits of this book is that it never did what I thought I wanted and yet made me enjoy the journey anyway. It is, at least, a genre ending, not a literary ending: The reader gets a full explanation of what is going on, and the setting is not static the way that it so often is in literary fiction. The characters can change the world, for good or for ill. The story felt frustrating and incomplete when I first finished it, but I haven't stopped thinking about this book and I think I like the shape of it a bit more now. It was certainly unexpected, at least by me.

Clarke names Dhalgren as one of their influences in the acknowledgments, and yes, Metal from Heaven is that kind of book. This is the first 2024 novel I've read that felt like the kind of book that should be on award shortlists. I'm not sure it was entirely successful, and there are parts of it that I didn't like or that weren't for me, but it's trying to do something different and challenging and uncomfortable, and I think it mostly worked. And the writing is so good.

She looked like a mythic princess from the old woodcuts, who ruled nature by force of goodness and faith and had no legal power.

Metal from Heaven is not going to be everyone's taste. If you do not like literary fantasy, there is a real chance that you will hate this. I am very glad that I read it, and also am going to take a significant break from difficult books before I tackle another one. But then I'm probably going to try the Scapegracers series, because Clarke is an author I want to follow.

Content notes: Explicit sex, including sadomasochistic sex. Political violence, mostly by authorities. Murdered children, some body horror, and a lot of serious injuries and death.

Rating: 8 out of 10

31 December, 2024 03:12AM

kpcyrd

2024 wrapped

Dear blog. This post is inspired by an old friend of mine who has been writing these for the past few years. I've been meaning to do this for a while now, but ended up not preparing anything, so this post is me writing it from memory. There’s likely stuff I forgot; being gentle with myself, I’ll probably just permit myself to complete this list over the next couple of days.

I hate bragging, I try to not depend on external validation as much as possible, and being the anarcho-communist anti-capitalist that I am, I try to be content with knowing I’m “doing good in the background”. I don’t think people owe me for the work I did, I don’t expect anything in return, and it’s my way of giving back to the community and the people around me. Consider us even.

That being said, I:

  • Uploaded 689 packages to Arch Linux
    • Most of them reproducible, meaning I provably didn’t abuse my position as the one compiling the binaries
    • 59 of those are signal-desktop
    • 34 of those are metasploit
  • Made 28 commits in Alpine Linux’s aports
    • 24 of those being package releases
  • Made 43 uploads to Debian
    • All of them related to my work in the debian-rust team, which I’ve been a part of since 2018
  • Made 5 commits in NixOS’ nixpkgs
  • Made 1 commit in homebrew-core
  • Was one of the people involved in rolling out _FORTIFY_SOURCE=3 compiler hardening in Arch Linux, for the entire operating system. I wrote lists, tools, and patches, and my work got me quoted in an “Additional Considerations” section of the OpenSSF compiler hardening guide for C and C++. There are now more, stricter buffer-overflow checks at runtime that hopefully make your computer harder to exploit in 2025.
  • Was one of the people behind the launch of reproduce.debian.net, which is analogous to reproducible.archlinux.org, which I also helped create 5 years ago. Reproducing these packages (and allowing anybody else to do the same) proves the binaries have not been backdoored by the build server (or whoever compiled them), and if there’s a backdoor, you can likely find it in the source code.
  • Integrated librustls, a memory safe TLS implementation, into Arch Linux’ C dynamic linking ecosystem and became one of the authors of the rustls curl TLS backend
  • In response to the XZ Jia Tan incident I created whatsrc.org, a source code indexing project. It doesn’t solve anything by itself, but it frames the concept of source code inputs, and how to reason about them, in a way that I consider promising. It also documents and makes very apparent which source code, specifically, we’re putting into our computers and would benefit from code review.
  • Contributed to the Reproducible Builds mailing list 33 times
  • Volunteered at a soldering workshop for beginners for the 3rd year in a row, with people describing me as a good teacher, giving very calm vibes and having endless patience
  • Reverse engineered the signal username and QR-code feature
  • Rewrote my tooling for apt.vulns.xyz to use repro-env; the .deb files can now be verified through reproducible builds, and I switched to static Rust binaries because I had trouble targeting multiple Debian/Ubuntu releases with my previous tooling
  • Wrote 0 blog posts (besides this one)
  • Wrote 5,937 messages in IRC channels
  • Got mentioned 1,664 times on IRC
  • Attended FOSDEM, Fusion, the Reproducible Builds summit, Hackjunta 2024#2 and 38c3
  • Made and printed 8 new sticker designs, and a custom hoodie
  • Mastered the art of pragmatic zaza cultivation and processing
  • Got 2 new piercings and 2-3 new tattoos (depending on how you count them)

Thanks to everybody who has been part of my human experience, past or present. Especially those who’ve been closest.

cheers,
kpcyrd ✨

31 December, 2024 12:00AM

December 30, 2024

hackergotchi for Steve Kemp

Steve Kemp

The CP/M emulator runs on Windows, maybe!

Today I made a new release of my CP/M emulator and I think that maybe now it will run on Microsoft Windows. Unfortunately I cannot test it!

A working CP/M implementation needs to provide facilities for reading input from the console, both reading a complete line of text and individual keystrokes. These input functions need to handle several different types of input:

  • Blocking, waiting for input to become available.
  • Non-blocking, returning any pending input if it is available otherwise nothing.
  • With echo, so the user can see what they typed.
  • Without echo, so the keys are returned but not displayed to the user.

In the past we used a Unix-specific approach to handle the enabling and disabling of keyboard echoing (specifically, we executed the stty binary to enable/disable echo), but this release adds a more portable solution based around termbox-go, which is the new default and should allow our emulator to work on Microsoft Windows systems.

We always had the ability to select between a number of different output drivers, and as of this release we can now select between multiple input drivers too - with the new portable option being the default. This has been tested on MacOS X systems, as well as GNU/Linux, but sadly I don't have access to Windows to test that.

Fingers crossed it's all good now though, happy new year!

30 December, 2024 08:45PM

Russ Allbery

Review: House in Hiding

Review: House in Hiding, by Jenny Schwartz

Series: Uncertain Sanctuary #2
Publisher: Jenny Schwartz
Copyright: October 2020
Printing: September 2024
ASIN: B0DBX6GP8Z
Format: Kindle
Pages: 196

House in Hiding is the second book of a self-published space fantasy trilogy that started with The House That Walked Between Worlds. I read it as part of the Uncertain Sanctuary omnibus, which is reflected in the sidebar metadata.

At the end of the previous book, Kira had gathered a motley crew for her house and discovered that she had drawn the attention of some rather significant galactic powers. Now, with the help of her new (hopefully) friends, she has to decide what role she's going to play in the galaxy.

Or she can dither a lot, ruminate repeatedly on the same topics, and flail about randomly. That's also an option.

This is slightly unfair. By the second half of the book, the series plot is beginning to cohere around two major problems: what is happening to the magic flows in the universe, and who killed Kira's parents. But apparently there was a limit to my enjoyment of Kira's chaotic decisiveness, which I praised in my review of the last book, and I hit that limit around the middle of this book. I am interested in the questions of ethics, responsibility, and public image that this series is raising. I'm just not convinced that Schwartz is going to provide satisfying answers.

One thing I do appreciate about this book is that it acknowledges that politics exist and that taking powerful people at face value is a bad idea. You would think that this would be a low bar, and yet it's depressing how many fantasy novels signal the trustworthiness of a character via some variation of "I looked into his eyes and shook his hand," or at least expect readers to be surprised by the inevitable betrayals. Schwartz does not make that mistake; after getting a call from a powerful player in galactic politics, the characters take apart everything that was said while assuming it could be attempted manipulation, which is the correct initial response.

My problem comes after that. I like reading about competent characters with a plan, and these are absurdly powerful but very naive characters with no plan. This is realistic for the situation Kira has been thrust into, but it's not that entertaining to read about.

I think the root of my problem is that there are some fundamental storytelling problems here that Schwartz is struggling to fix. The basic theory of story says that you need a protagonist, a setting, a conflict, and a plot. Schwartz has a good protagonist, one great supporting character and several adequate ones, and an enjoyably weird setting. I think she's working her way up to having a plot, although usually it's best for the plot to show up before the middle book of the series. What she doesn't have is a meaningful conflict. It's not entirely clear to either the reader or to Kira why Kira cares about what's happening.

You would not think this would be a problem given that Kira's parents were murdered before the start of the first book. That's a classic conflict that's driven more books than I think anyone could count. It's not what Kira has cared about up to this point, however; she got away from Earth and has shown no sign of wanting to go back or identify the people who killed her parents, perhaps because she mostly blames herself. Instead, she's stumbling across other problems in the universe that other people would like her to care about. She occasionally feels like she ought to care about them because they involve her new friends or because she wants to be a good person, but they have very little dramatic oomph. "I'm a sorcerer and vaguely want the universe to be a better place" turns out to not work that well as a source of dramatic tension.

This lack of conflict is somewhat fascinating because it's so different than most fantasy novels. If Schwartz were more aware of how oddly disconnected her protagonist is from the story conflict, I think there could be a thoughtful, if odd, psychological novel in here about one's ethical responsibilities if one suddenly had vast power and no strong attachments to the world. Kira does gesture occasionally in that direction, but there's no real meat to her musings. Instead, her lack of motivation is solved through one of the hoariest tropes in fiction: children in danger.

I really want to like this series, and I still love the House, but this book was not good. The romance that I was delighted to not be subjected to in the first book appears to be starting (sigh), the political maneuvering that happens here is only mildly interesting and not believably competent, and the book concludes with Kira making an egregiously and blatantly stupid mistake that should have resulted in one of her friends asking her what the hell she was doing. Some setup happens, and it seems likely that the final book will have a clear conflict and plot, but this middle book was a disappointing mess.

These books are fast to read and lightly entertaining between other things, and the House still has me invested enough in this universe that I'll read the last book in the omnibus. Be warned, though, that the middle book is more a collection of anecdotes than a story, and there's only so much of Kira showing off her power I can take without a conflict and a plot.

Followed by The House That Fought.

Rating: 5 out of 10

30 December, 2024 03:54AM

December 29, 2024

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Accessing Atari ST disk images on Linux

This post leverages support for Atari Hard Disk Interface Partition (AHDI) partition tables in the Linux kernel, activated by default in Debian, and in the parted partition editor.

Accessing the content of a partition using a user mounted loop device

This is the easiest procedure and should be tried first. Depending on whether your Linux kernel has support for AHDI partition tables (a quick check is shown below), and on the size of the FAT filesystem on the partition, this procedure might not work. In that case, try the procedure using mtools further below.
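A quick way to check whether your kernel has AHDI support is to look for the CONFIG_ATARI_PARTITION option in its build configuration. This is a sketch assuming a Debian-style kernel that ships its config under /boot; the option name is the upstream kernel symbol for Atari partition table support:

$ grep ATARI_PARTITION /boot/config-$(uname -r)
CONFIG_ATARI_PARTITION=y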

Attach a disk image called hd80mb.image to a loop device:

$ udisksctl loop-setup --file hd80mb.image
Mapped file hd80mb.image as /dev/loop0

Notice how the kernel detected the partition table:

$ dmesg | grep loop0
[160892.151941] loop0: detected capacity change from 0 to 164138
[160892.171061]  loop0: AHDI p1 p2 p3 p4

Inspect the block devices created for each partition:

$ lsblk | grep loop0

If the partitions are not already mounted by udisks2 under /media/, mount them manually:

$ sudo mount /dev/loop0p1 /mnt/
$ ls /mnt/
SHDRIVER.SYS

When you are finished copying data, unmount the partition, and detach the loop device.

$ sudo umount /mnt
$ udisksctl loop-delete --block-device /dev/loop0

Accessing the content of a partition using mtools and parted

This procedure uses the mtools package and the support for the AHDI partition scheme in the parted partition editor.

Display the partition table, with partitions offsets in bytes:

$ parted st_mint-1.5.img -- unit B print
...
Partition Table: atari
Disk Flags: 

Number  Start       End         Size        Type     File system  Flags
 1      1024B       133170175B  133169152B  primary               boot
 2      133170176B  266339327B  133169152B  primary
 3      266339328B  399508479B  133169152B  primary
 4      399508480B  532676607B  133168128B  primary

Set some Atari-friendly mtools options:

$ export MTOOLS_SKIP_CHECK=1
$ export MTOOLS_NO_VFAT=1

List the contents of the partition, passing the partition's offset in bytes as a parameter. For instance, here we are interested in the second partition, and the parted output above indicates that it starts at byte offset 133170176 in the disk image:

$ mdir -s -i st_mint-1.5.img@@133170176
 Volume in drive : has no label
Directory for ::/

demodata          2024-08-27  11:43 
        1 file                    0 bytes

Directory for ::/demodata

We can also use the mcopy command, with a similar syntax, to copy data to and from the disk image. For instance, to copy a file named file.zip to the root directory of the second partition:

$ mcopy -s -i st_mint-1.5.img@@133170176 file.zip ::

Recompiling mtools to access large partitions

With disk images containing large AHDI partitions (well, considered large in 1992 …), you might encounter this error:

$ mdir -s -i cecile-falcon-singlepart-1GB.img@@1024
init: sector size too big
Cannot initialize '::'

This error is caused by the non-standard large logical sectors that TOS uses for large FAT partitions (see the Atari Hard Disk Filesystem reference, page 41, TOS partitions size).

We can inspect the logical sector size using fsck tools:

$ udisksctl loop-setup --file cecile-falcon-singlepart-1GB.img
$ sudo fsck.fat -Anv /dev/loop0p1
fsck.fat 4.2 (2021-01-31)
...
Media byte 0xf8 (hard disk)
16384 bytes per logical sector

To access the partition, you need to patch mtools so that it supports a logical sector size of 16384 bytes. For this, change the MAX_SECTOR macro from 8192 to 16384 in msdos.h in the mtools distribution and recompile (a sketch of the rebuild is shown after the example below). A rebuilt mtools is then able to access the partition:

$ /usr/local/bin/mdir -s -i cecile-falcon-singlepart-1GB.img@@1024
 Volume in drive : has no label
Directory for ::/

CECILE   SYS      8462 1998-03-27  22:42 
NEWDESK  INF       804 2024-09-09   9:23 
        2 files               9 266 bytes
                      1 072 463 872 bytes free
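For reference, here is a minimal sketch of that rebuild on a Debian system. It assumes the macro is spelled #define MAX_SECTOR 8192 in msdos.h (the exact line may differ between mtools versions, so check the header first), and it assumes deb-src entries in your APT sources so that apt source and apt build-dep work:

$ sudo apt build-dep mtools   # pull in the build dependencies
$ apt source mtools
$ cd mtools-*/
# raise the maximum supported logical sector size from 8192 to 16384 bytes
$ sed -i 's/#define MAX_SECTOR 8192/#define MAX_SECTOR 16384/' msdos.h
$ ./configure && make
# install under /usr/local, leaving the packaged mtools untouched
$ sudo make install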

29 December, 2024 08:26PM by Manu

Russ Allbery

Review: The Last Hour Between Worlds

Review: The Last Hour Between Worlds, by Melissa Caruso

Series: The Echo Archives #1
Publisher: Orbit
Copyright: November 2024
ISBN: 0-316-30364-X
Format: Kindle
Pages: 388

The Last Hour Between Worlds is urban, somewhat political high fantasy with strong fae vibes. It is the first book of a series, but it stands alone quite well.

Kembral Thorne is a Hound, a member of the guild that serves as guards, investigators, and protectors. Kembral's specialty is Echo retrieval: rescues of people and animals who have fallen through a weak spot in reality into one of the strange, dangerous, and malleable layers called Echoes. Kem once rescued a dog from six layers down, an almost unheard-of feat.

Kem is also a new single mother, which means her past two months have been spent in a sleep-deprived haze revolving exclusively around her much-beloved infant. Dona Marjorie Swift's year-turning party is the first time she's been out without Emmi since she gave birth, and she's only there because her sister took the child and practically shoved her out the door. Now, she's desperately trying to remember how to be social and normal, which is not made easier by the unexpected presence of Rika at the party.

Rika Nonesuch is not a Hound. She's a Cat, a member of the guild of thieves and occasional assassins. They are the nemesis of the Hounds, but in a stylized and formalized way in which certain courtesies are expected. (The politics of this don't really make sense; you just have to go with it.) Kem has complicated feelings about Rika's grace, banter, and intoxicating perfume, feelings that she thought might be reciprocated until Rika drugged her during an apparent date and left her buried under a pile of garbage. She was not expecting Rika to be at this party and is definitely not ready to have a conversation with her.

This emotional turmoil is rudely interrupted by the death of nearly everyone at the party via an Echo poison, the appearance of a dark figure driving a black sword into someone, and the descent of the entire party into an Echo.

This was one of those books that kept getting better the farther into the book I read. I was a bit leery at first because the publisher's blurb made it sound more like horror than I prefer, but this is more the disturbing strangeness of fae creatures than the sort of gruesomeness, disgust, or body horror that I find off-putting. Most importantly, the point of this book is not to torture the characters or scare the reader. It's instead structured a bit like a murder mystery, but one whose resolution requires working out obscure fantasy rules and hidden political agendas. One of the currencies in the world of Echoes is blood, but another is emotion, revelation, and the stories that bring both, and Caruso focuses the story more on that aspect than on horrifying imagery.

Rika frowned. "Resolve it? How?"

"I have no idea." I couldn't keep my frustration from leaking through. "Might be that we have to delve deep into our own hearts to confront the unhealed wounds we've carried with us in secret. Might be that we have to say their names backward, or just close our eyes and they'll go away. Echoes never make any damned sense."

Rika made a face. "We'd better not have to confront our unhealed wounds, or I'm leaving you to die."

All of The Last Hour Between Worlds is told in the first person from Kem's perspective, but Rika is the best character in this book. Kem is a rather straightforward, dogged, stubborn protector; Rika is complicated, selfish, conflicted, and considerably more dynamic. The first obvious twist in her background I spotted so long before Kem found out that it was a bit frustrating, but there were multiple satisfying twists after that. As advertised in the blurb, there's a sapphic romance angle here, but it's the sort that comes from a complicated friendship and a lot of mutual respect rather than love at first sight. Some of their relationship conflict is driven by misunderstanding, but the misunderstanding happens before the novel begins, which means the reader doesn't have to sit through the bit where one yells at the characters for being stupid.

It helps that the characters have something concrete to do, and that driving plot problem is multi-layered and satisfying. Each time the party falls through a layer of reality, it's mostly reset to the start of the book, but the word "mostly" is hiding a lot of subtlety. Given the clock at the start of each chapter and the blurb (if one read it), the reader can make a good guess that the plot problem will not be fully resolved until the characters fall quite deep into the Echoes, but the story never felt repetitive the way that some time loop stories can. As the characters gain more understanding, the problems change, the players change, and they have to make several excursions into the surrounding world.

This is the sort of fantasy that feels a bit like science fiction. You're thrown into a world with a different culture and different rules that are foreign to the reader and natural to the characters. Part of the fun of reading is figuring out the rules, history, and backstory while watching the characters try to solve the puzzles they're faced with.

The writing is good but not great. Characterization was good enough for a story primarily focused on action and puzzle-solving, but it was a bit lacking in subtlety. I think Caruso's strengths showed most in the world design, particularly the magic system and the rules followed by the Echo creatures. The excursions outside of the somewhat-protected house struck a balance between eeriness and comprehensibility that reminded me of T. Kingfisher or Sandman. The human politics were unfortunately less successful and rested on some tired centrist cliches. Thankfully, this was not the main point of the story.

I should also warn that there is a lot of talk about babies. Kem's entire identity at the start of the novel, to the point of incessant monologue, is "new mother." This is not a perspective we get very often in fantasy, and Kem eventually finds a steadier balance between her bond with her daughter and the other parts of her life. I think some readers will feel very seen. But Caruso leans hard into maternal bonding. So hard. If you don't want to read about someone who is deliriously obsessed with their new child, you may want to skip this one.

Right after I finished this book, I thought it was amazing. Now that I've had a few days to think about it, the lack of subtlety and the facile human politics brought it down a notch. I'm a science fiction reader at heart, so I loved the slow revelation of mechanics; the reader starts the story by knowing that Kem can "blink step" but not knowing what that means, and by the end of the story one not only knows but has opinions about its limitations, political implications, and interactions with other forms of magic. The Echo worlds are treated similarly, and this type of world-building is my jam. But the cost is that the human characters, particularly the supporting cast, don't get the same focus and therefore are a bit straightforward and obvious. The subplot with Dona Vandelle was particularly annoying.

Ah well. Kem and Rika's relationship did work, and it's the center of the book.

If you like fantasy mechanics but are a bit leery of fae stories because they feel too symbolic or arbitrary, give this a try. It's the most satisfyingly constructed fae story that I've read in a long time. It's not great literary fiction, but it's also not trying to be; it's a puzzle adventure, and a well-executed one. Recommended, and I will definitely be reading the sequel.

Content notes: Lots of violent death and other physical damage, creepy dream worlds with implied but not explicit horror, and rather a lot of blood.

Followed by The Last Soul Among Wolves, not yet published at the time I wrote this review.

Rating: 8 out of 10

29 December, 2024 03:40AM

December 28, 2024

hackergotchi for Thomas Goirand

Thomas Goirand

Running a Lenovo Legion pro 7 laptop under Debian

As I was tired of long build times, I convinced my boss to buy me a Lenovo Legion pro 7. The reason: this laptop has an AMD Ryzen 9 7945HX with 16 cores (32 threads). This greatly reduces the time I spend just waiting for my laptop to compile or run unit tests, especially for big packages like Ceph, Open vSwitch, and so on.

When buying it, I knew it would not be a good fit for Debian, as this type of laptop is aimed at gaming, and the support under Linux is rather bad. I wish Lenovo had other policies, but that is the way it is: if you’re a Linux user, you’re not supposed to need a big CPU, apparently.

Anyways, over this year I have slowly been able to fix all the issues. In this blog post I’ll explain how I fixed them, in the hope it can be useful to others, and I’ll explain what the src:lenovolegionlinux package (which I now maintain in Debian) does.

Video

The laptop comes with an nVidia RTX 4080 and a Radeon. I quickly tried the Radeon, but couldn’t make it work with an external monitor, so I gave up on it, disabled it, and now use the proprietary nVidia driver from non-free. I don’t like it: the nVidia card drains too much power, and I don’t care at all about 3D acceleration. I would have preferred an Intel board, but there was no choice: all laptops with this kind of CPU come with a gamer’s 3D card. Anyways, apart from the power issue, it works out well.

Fan control

This sounds like a non-issue, but it is a huge one. Indeed, without controlling the fans, it is impossible to get the full potential of the CPU, which otherwise throttles: one may end up running the laptop at a few hundred MHz instead of 5 GHz+. More on this later.

Sound

It took me a really long time to figure out what to do. While the sound card works out of the box, the issue was that my laptop came with TI (Texas Instruments) speaker firmware that isn’t enabled by default; I suppose the purpose is to save power when the speakers aren’t in use. Anyways, to have sound working on Debian, one needs to run at least kernel 6.10, which for me means running the Bookworm backport, so that there’s a kernel module for the speakers. But that’s not all: the speakers also need a proprietary firmware in /lib/firmware/TAS2XXX38*.bin. I was able to find that in the ti.com forum, though as I tried so many packages, I wouldn’t be able to tell which one was the correct one. Once that was done, the firmware needs to be initialized through the i2c interface. I could find a script that did that, which I pushed into my lenovolegionlinux package (see below).

WiFi

WiFi worked out of the box for me, except that it wouldn’t wake up after I closed the laptop lid. The following in /etc/modprobe.d/rtw8852be.conf fixed it for me:

options rtw89_pci disable_aspm_l1=y disable_aspm_l1ss=y
options rtw89_core disable_ps_mode=y
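Note that options in /etc/modprobe.d/ only take effect when the module is (re)loaded, so either reboot or reload the driver by hand. A minimal sketch, assuming the device module for this chip is rtw89_8852be (check with lsmod | grep rtw89):

# unload the device module and its now-unused dependencies...
$ sudo modprobe -r rtw89_8852be
# ...then load it again so the new options are applied
$ sudo modprobe rtw89_8852be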

lenovolegionlinux package

I came across https://github.com/johnfanv2/LenovoLegionLinux, which I packaged. The result is now 4 binary packages: lenovolegionlinux-dkms, which provides the kernel module for accessing the fan control; python3-legion-linux, which provides legion_cli and legion_gui, written in Python, that make it possible to control the kernel module (I often run sudo legion_gui, click on “Other options” and then switch the power profile from quiet to balanced; many things in this GUI do not work for me, like the fancurve thingy, but they should work for other flavors of Legion laptops, so please feel free to contribute); legiond, which provides a daemon for setting up the fan curve on wake-up; and finally lenovolegionlinux-sound, a new Debian binary package containing my i2c speaker script, which I have just uploaded today in the hope it may be useful to others.

Conclusion

Finally, almost everything is (almost) working as expected. Only my webcam (lsusb says it’s a Luxvisions Innotech Limited Integrated Camera) went dark at some point (it did work previously): it now behaves as if it were working, but transmits only a black picture. If anyone knows how to fix this, please tell me. Also, I only get 40 minutes of battery time if I’m lucky; I hope this can be fixed. But overall, I’m happy with the laptop.

Thanks to Ding Shenghao for his support of many people in the ti.com forum. Thanks to the people maintaining LenovoLegionLinux, who helped me a lot in writing this Debian package.

Please try lenovolegionlinux in Debian, report issues, and help me improve it. It is in Salsa’s debian namespace in the hope that others may push contributions.

28 December, 2024 02:55PM by Goirand Thomas

Enrico Zini

Disable spellchecker popup on Android

On Android, there's a spellchecker popup that occasionally appears over the keyboard and gets in the way very annoyingly. See for example this unanswered question with screenshots.

It looks like a feature of the keyboard, but it's not, and so I looked and I looked and I could not find how to turn it off.

The answer is to look for how to disable the spellchecker in the keyboard section of the Android system settings, not in the Android keyboard app's settings.

See for example this answer on stackexchange.

28 December, 2024 12:47PM