May 17, 2025

hackergotchi for Daniel Lange

Daniel Lange

Polkitd (Policy Kit Daemon) in Trixie ... getting rid of "Authentication is required to create a color profile"

On the way to Trixie, polkitd (Policy Kit Daemon) has lost the functionality to evaluate its .pkla (Polkit Local Authority) files.

$ zcat /usr/share/doc/polkitd/NEWS.Debian.gz 
policykit-1 (121+compat0.1-2) experimental; urgency=medium

  This version of polkit changes the syntax used for local policy rules:
  it is now the same JavaScript-based format used by the upstream polkit
  project and by other Linux distributions.

  System administrators can override the default security policy by
  installing local policy overrides into /etc/polkit-1/rules.d/*.rules,
  which can either make the policy more restrictive or more
  permissive. Some sample policy rules can be found in the
  /usr/share/doc/polkitd/examples directory. Please see polkit(8) for
  more details.

  Some Debian packages include security policy overrides, typically to
  allow members of the sudo group to carry out limited administrative
  actions without re-authenticating. These packages should install their
  rules as /usr/share/polkit-1/rules.d/*.rules. Typical examples can be
  found in packages like flatpak, network-manager and systemd.

  Older Debian releases used the "local authority" rules format from
  upstream version 0.105 (.pkla files with an .desktop-like syntax,
  installed into subdirectories of /etc/polkit-1/localauthority
  or /var/lib/polkit-1/localauthority). The polkitd-pkla package
  provides compatibility with these files: if it is installed, they
  will be processed at a higher priority than most .rules files. If the
  polkitd-pkla package is removed, .pkla files will no longer be used.

 -- Simon McVittie   Wed, 14 Sep 2022 21:33:22 +0100

This now applies to polkitd version 126-2, destined for Trixie.

The most prominent issue is that you will get an error message: "Authentication is required to create a color profile" asking for the root(!) password every time you remotely log into a Debian Trixie system via RDP, x2go or the like.

This used to be mendable with a .pkla file dropped into /etc/polkit-1/localauthority/50-local.d/ ... but these .pkla files are void now and need to be replaced with a JavaScript "rules" file.
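
For reference, the old-style .pkla files for this looked roughly like the following (a sketch of the legacy format for illustration only, not something to deploy on Trixie):

[Allow colord for all users]
Identity=unix-user:*
Action=org.freedesktop.color-manager.create-device;org.freedesktop.color-manager.create-profile;org.freedesktop.color-manager.delete-device;org.freedesktop.color-manager.delete-profile;org.freedesktop.color-manager.modify-device;org.freedesktop.color-manager.modify-profile
ResultAny=yes
ResultInactive=yes
ResultActive=yes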

The background to this is quite a fascinating read ... 13 years later:
https://davidz25.blogspot.com/2012/06/authorization-rules-in-polkit.html

The solution has been listed in DevAnswers as other distros (Fedora, ArchLinux, OpenSuse) have been faster to deprecate the .pkla files and require .rules files.

So, create a 50-color-manager.rules file in /etc/polkit-1/rules.d/:

polkit.addRule(function(action, subject) {
    if (action.id.startsWith("org.freedesktop.color-manager.") && subject.isInGroup("users")) {
        return polkit.Result.YES;
    }
});

and run systemctl restart polkit.
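
To verify the new rule without logging in remotely again, one quick check (a sketch; run it as a user that is in the users group) is to ask polkit directly whether the action would be authorized:

pkcheck --action-id org.freedesktop.color-manager.create-profile --process $$
echo $?   # 0 means the action is authorized without prompting for a password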

You should be good until polkit is rewritten in Rust.

17 May, 2025 11:26PM by Daniel Lange

Andrew Cater

Debian 12.11 - testing completed, images being signed and we'll be back for the next point release on ???

 All finished and wrapping up. The bug I thought was fixed has been identified on two distinct sets of hardware. There are workarounds: the most sensible is *not* to use i386 without a modeset parameter but to just use amd64 instead. amd64 works on the identical problematic hardware in question - just use 64 bit.

17 May, 2025 06:00PM by Andrew Cater (noreply@blogger.com)

Debian 12.11 testing - and we're nearly there

 Almost finished the testing we're going to do at 15:29 UTC. It's all been good - we've found that at least one of the major bug reports from 12.10 is not reproducible now. All good - and many thanks to all testers: Sledge, rattusrattus, egw, smcv (and me).

17 May, 2025 03:31PM by Andrew Cater (noreply@blogger.com)

Russell Coker

DDR4 RAM Size

I’ve been looking at computer hardware on AliExpress a lot recently and I saw an advert for a motherboard which can take 256G DDR4 RDIMMs (presumably LRDIMMs). Most web pages about DDR4 state that 128G is the largest possible. The Wikipedia page for DDR4 doesn’t state that 128G is the maximum but does have 128G as the largest size mentioned on the page.

Recently I’ve been buying 32G DDR4 RDIMMs for between $25 and $30 each. A friend can get me 64G modules for about $70 at the lowest price. If I hadn’t already bought a heap of 32G modules I’d buy some 64G modules right now at that price as it’s worth paying 40% extra to allow better options for future expansion.

Apparently the going rate for 128G modules is $300 each which is within the range for a hobbyist who has a real need for RAM. 256G modules are around $1200 each which is starting to get a bit expensive. But at that price I could buy 2TB of RAM for $9600 and the computer containing it still wouldn’t be the most expensive computer I’ve bought – the laptop that cost $5800 in 1998 takes that honour when inflation is taken into account.

DDR5 RDIMMs are currently around $10/GB compared to DDR4 for $1/GB for 32G modules and DDR3 for $0.50/GB. DDR6 is supposed to be released late this year or early next year so hopefully enterprise grade systems with DDR5 RAM and DDR5 RDIMMs will be getting cheaper on ebay by the end of next year.

17 May, 2025 03:29PM by etbe

Andrew Cater

Debian 12.11 images testing - progress

 We're now well under way: Been joined by Simon McVittie (smcv) and we're almost through testing most of the standard images. Live image testing is being worked through. All good so far without identifying problems other than mistyping :)

17 May, 2025 01:10PM by Andrew Cater (noreply@blogger.com)

John Goerzen

How to Use SSH with FIDO2/U2F Security Keys

For many years now, I’ve been using an old YubiKey along with the free tier of Duo Security to add a second factor to my SSH logins. This is klunky, and has a number of drawbacks (dependency on a cloud service and Internet among them).

I decided it was time to upgrade, so I recently bought a couple of YubiKey 5 series security keys. These support FIDO2/U2F, which make it so much easier to integrate with ssh.

But in researching how to do this, I found a lot of pages online with poor instructions. Either they didn’t explain what was going on very well, or suggested what I came to learn were insecure practices, or — most often — both.

It turns out this whole process is quite easy. But I wanted to understand how it worked.

So, I figured it out, set it up myself, and then put up a new, comprehensive page on my website: https://www.complete.org/easily-using-ssh-with-fido2-u2f-hardware-security-keys/. I hope it helps!
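
If you just want a taste of what is involved before reading the full page: with a FIDO2-capable key and a reasonably recent OpenSSH, the core of it is generating one of the hardware-backed key types. A minimal sketch (the page above covers resident keys, touch/PIN requirements and the other details):

# generate a key pair whose private half can only be used with the token plugged in
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
# then install the public key on the server as usual
ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@example.org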

17 May, 2025 12:53PM by John Goerzen

Andrew Cater

20250517 - Debian point release - Bookworm 12.11 today

In Cottenham with Andy and the usual suspects. The point release update files are already on the servers - anyone can do an "apt-get update ; apt-get dist-upgrade" and update any running machine. This machine has just been upgraded and "just worked".

Here to do release testing for the images that we will end up publishing later in the day.

Expecting one more of us to turn up a bit later. Team will be working on IRC on #debian-cd

17 May, 2025 11:51AM by Andrew Cater (noreply@blogger.com)

May 16, 2025

hackergotchi for Michael Prokop

Michael Prokop

Grml 2025.05 – codename Nudlaug

Debian hard freeze on 2025-05-15? We bring you a new Grml release on top of that! 2025.05 🚀 – codename Nudlaug.

There’s plenty of new stuff, check out our official release announcement for all the details. But I’d like to highlight one feature that I particularly like: SSH service announcement with Avahi. The grml-full flavor ships Avahi, and when you enable SSH, it automatically announces the SSH service on your local network. So when e.g. booting Grml with boot option `ssh=debian`, you should be able to log in to your Grml live system with `ssh grml@grml.local` and password ‘debian’:

% insecssh grml@grml.local
Warning: Permanently added 'grml.local' (ED25519) to the list of known hosts.
grml@grml.local's password: 
Linux grml 6.12.27-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.27-1 (2025-05-06) x86_64
Grml - Linux for geeks

grml@grml ~ %

Hint: grml-zshrc provides that useful shell alias `insecssh`, which is aliased to `ssh -o "StrictHostKeyChecking=no" -o "UserKnownHostsFile=/dev/null"`. Using those options, you aren’t storing the SSH host key of the (temporary) Grml live system (permanently) in your UserKnownHostsFile.
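
If you want the same convenience outside of Grml, the alias boils down to something like this (a sketch based on the description above):

alias insecssh='ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'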

BTW, you can run `avahi-browse -d local _ssh._tcp --resolve -t` to discover the SSH services on your local network. 🤓

Happy Grml-ing!

16 May, 2025 04:42PM by mika

hackergotchi for Freexian Collaborators

Freexian Collaborators

Monthly report about Debian Long Term Support, April 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In April, 22 contributors have been paid to work on Debian LTS, their reports are available:

  • Adrian Bunk did 56.25h (out of 56.25h assigned).
  • Andreas Henriksson did 15.0h (out of 20.0h assigned), thus carrying over 5.0h to the next month.
  • Andrej Shadura did 10.0h (out of 6.0h assigned and 4.0h from previous period).
  • Bastien Roucariès did 31.5h (out of 31.5h assigned).
  • Ben Hutchings did 8.0h (out of 0.0h assigned and 12.0h from previous period), thus carrying over 4.0h to the next month.
  • Carlos Henrique Lima Melara did 11.0h (out of 12.0h assigned), thus carrying over 1.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 26.0h (out of 26.0h assigned).
  • Emilio Pozuelo Monfort did 30.0h (out of 39.25h assigned and 0.25h from previous period), thus carrying over 9.5h to the next month.
  • Guilhem Moulin did 8.5h (out of 3.25h assigned and 11.75h from previous period), thus carrying over 6.5h to the next month.
  • Jochen Sprickerhof did 12.5h (out of 20.75h assigned and 9.25h from previous period), thus carrying over 17.5h to the next month.
  • Lee Garrett did 26.25h (out of 7.75h assigned and 31.75h from previous period), thus carrying over 13.25h to the next month.
  • Lucas Kanashiro did 50.0h (out of 0.0h assigned and 52.0h from previous period), thus carrying over 2.0h to the next month.
  • Markus Koschany did 39.5h (out of 39.5h assigned).
  • Roberto C. Sánchez did 9.0h (out of 0.0h assigned and 12.0h from previous period), thus carrying over 3.0h to the next month.
  • Santiago Ruano Rincón did 12.5h (out of 7.5h assigned and 7.5h from previous period), thus carrying over 2.5h to the next month.
  • Sean Whitton did 7.0h (out of 7.0h assigned).
  • Stefano Rivera did 0.5h (out of 0.0h assigned and 10.0h from previous period), thus carrying over 9.5h to the next month.
  • Sylvain Beucler did 39.5h (out of 39.25h assigned and 0.25h from previous period).
  • Thorsten Alteholz did 15.0h (out of 15.0h assigned).
  • Tobias Frost did 12.0h (out of 7.75h assigned and 4.25h from previous period).
  • Utkarsh Gupta did 2.0h (out of 2.0h assigned).

Evolution of the situation

In April, we released 46 DLAs.

  • Notable security updates:
    • jetty9, prepared by Markus Koschany, fixes an information disclosure and potential remote code execution vulnerability
    • zabbix, prepared by Tobias Frost, fixes several vulnerabilities, encompassing denial of service, information disclosure or remote code inclusion
    • glibc, prepared by Sean Whitton, fixes a buffer overflow vulnerability
  • Notable non-security updates:
    • tzdata, prepared by Emilio Pozuelo Monfort, brings the latest timezone database release
    • php-horde-editor and php-horde-imp, prepared by Sylvain Beucler, have been updated to switch from CKEditor v3, which is EOL, to CKEditor v4; this builds upon work done last month by Sylvain and Bastien for the complete removal of ckeditor3
    • distro-info-data, prepared by Stefano Rivera, adds information concerning future Debian and Ubuntu releases

The LTS team continues to welcome the collaboration of maintainers and other interested parties from outside the regular team. In April, we had external updates contributed by: Yadd (lemonldap-ng) and Moritz Schlarb (libapache2-mod-auth-openidc)

A point release of the current stable Debian 12 (codename “bookworm”) is planned for mid-May and several LTS contributors have prepared packages for this update, many of them prepared in conjunction with related LTS updates of the same packages:

  • glib2.0, haproxy, imagemagick, poppler, and python-h11, prepared by Adrian Bunk
  • rubygems, prepared by Lucas Kanashiro
  • ruby3.1 (in collaboration with Lucas Kanashiro), twitter-bootstrap3, twitter-bootstrap4, wpa, and erlang, prepared by Bastien Roucariès (corresponding updates of twitter-bootstrap3 and twitter-bootstrap4 were also uploaded to Debian unstable)
  • abseil, prepared by Tobias Frost (a corresponding update was also uploaded to Debian unstable)
  • vips, prepared by Guilhem Moulin

Additional updates of ruby3.3 and rubygems were prepared for Debian unstable by Lucas Kanashiro.

And finally, a highlight of our continued commitment to enhancing long term support efforts in upstream projects. Freexian, as the primary entity behind the management and execution of the LTS project, has partnered with Invisible Things Lab to extend the upstream security support of Xen 4.17, which is shipped in Debian 12 “bookworm” (the current stable release). This partnership will result in significantly improved lifecycle support for users of Xen on bookworm, and members of the LTS team will play a part in this endeavour. The Freexian announcement has additional details.

Thanks to our sponsors

Sponsors that joined recently are in bold.

16 May, 2025 12:00AM by Roberto C. Sánchez

Reproducible Builds (diffoscope)

diffoscope 296 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 296. This version includes the following changes:

[ Chris Lamb ]
* Don't rely on zipdetails' --walk functionality to be available; only add
  that argument after testing for a new enough version.
  (Closes: reproducible-builds/diffoscope#408)
* Disable and then re-enable failing on stable-bpo.
* Update copyright years.

[ Omair Majid ]
* Add NuGet package support.

You can find out more by visiting the project homepage.

16 May, 2025 12:00AM

May 15, 2025

hackergotchi for Yves-Alexis Perez

Yves-Alexis Perez

New laptop: Lenovo Thinkpad X13 Gen 5

After more than ten years on my trusted X250, and with a lot of financial help from Debian (which I really thank, more on that later), I finally jumped on a new ThinkPad, an X13 Gen 5.

The migration path was really easy: I'm doing daily backups with borg of the whole filesystems on an encrypted USB drive, so I just had to boot a live USB key on the new laptop, plug in the USB drive, create the partitioning (encryption, LVM etc.) and then run borg extract. Since I'm using LABEL in the various fstab entries I didn't have much to change.
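
In practice the restore step boils down to something like the following (a rough sketch with made-up device, repository and archive names; the actual partitioning and encryption setup depends on your layout):

# from the live system, after recreating and mounting the target filesystems under /mnt/target
cryptsetup luksOpen /dev/sdX1 backup        # unlock the encrypted backup drive
mount /dev/mapper/backup /media/backup
cd /mnt/target                              # borg extract restores into the current directory
borg extract /media/backup/borg-repo::x250-2025-05-14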

I actually had a small hiccup because my daily backup scripts used ProtectKernelModules which, besides preventing modules from being loaded into the running kernel, also prevents access to /usr/lib/modules. So when restoring I didn't have any modules for the installed kernels. No big deal, I reinstalled the kernel package from the chroot and it did work just fine.

All in all it was pretty smooth.

I've started a page for the X13G5 similar to the X250 one, but honestly I don't think I'll have to document a lot of stuff because everything basically works out of the box. It's not really a surprise because we've come a long way since 2015 and Linux kernels are really well tested on a lot of hardware these days, including laptops, and Intel laptops are the most standard stuff you can find. I guess it's still rocky for ARM64 laptops (and especially Apple hardware) but the point was less to do porting work for Debian and more to be efficient with the current stuff I maintain (and sometimes struggle with).

As said above, the laptop has been funded by Debian and I really thank the DPL and the Debian France treasurer for authorizing it and being really fast on the reimbursement.

I had already posted a long time ago about hardware funding for Debian developers. It took me quite a while but I finally managed to ask for help because I couldn't afford the hardware at this point and it was becoming problematic. This is not something which should be done lightly (Debian wouldn't have the funds) but this is definitely something which should be done if needed. Don't hesitate to ask your fellow Debian developers about advice on this.

15 May, 2025 08:19PM by Yves-Alexis (corsac@debian.org)

May 14, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

Local Voice Assistant Step 3: A Detour into Tensorflow

To build our local voice satellite on a Debian system rather than using the ATOM Echo device we need something that can handle the wake word component; the piece that means we only send audio to the Home Assistant server for processing by whisper.cpp when we’ve detected someone is trying to talk to us.

openWakeWord seems to be one of the better ways to do this, and is well supported. However. It relies on TensorFlow Lite (now LiteRT) which is a complicated mess of machine learning code. tflite-runtime is available from PyPI, but that’s prebuilt and we’re trying to avoid that.

Despite initial impressions that dealing with building TensorFlow would be quite complicated - Bazel is an immediate warning - it turns out to be incredibly simple to build your own .deb:

$ wget -O tensorflow-v2.15.1.tar.gz https://github.com/tensorflow/tensorflow/archive/refs/tags/v2.15.1.tar.gz
…
$ tar -axf tensorflow-v2.15.1.tar.gz
$ cd tensorflow-2.15.1/
$ BUILD_NUM_JOBS=$(nproc) BUILD_DEB=y tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh
…
$ find . -name *.deb
./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime-dbgsym_2.15.1-1_amd64.deb
./tensorflow/lite/tools/pip_package/gen/tflite_pip/python3-tflite-runtime_2.15.1-1_amd64.deb

This is hiding an awful lot of complexity, however. In particular the number of 3rd party projects that are being downloaded in the background (and compiled, to be fair, rather than using binary artefacts).

We can build the main C++ wrapper .so directly with cmake, allowing us to investigate a bit more:

mkdir tf-build
cd tf-build/
cmake \
    -DCMAKE_C_FLAGS="-I/usr/include/python3.11" \
    -DCMAKE_CXX_FLAGS="-I/usr/include/python3.11" \
    ../tensorflow-2.15.1/tensorflow/lite/
cmake --build . -t _pywrap_tensorflow_interpreter_wrapper
…
[100%] Built target _pywrap_tensorflow_interpreter_wrapper
$ ldd _pywrap_tensorflow_interpreter_wrapper.so
    linux-vdso.so.1 (0x00007ffec9588000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f22d00d0000)
    libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f22cf600000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f22d00b0000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f22cf81f000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f22d01d1000)

Looking at the output we can see that pthreadpool, FXdiv, FP16 + PSimd are all downloaded, and seem to have ways to point to a local copy. That seems positive.
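
For dependencies that are pulled in via CMake's FetchContent there is a generic escape hatch: the FETCHCONTENT_SOURCE_DIR_<NAME> cache variables point a dependency at a local checkout instead of letting the build download it. Whether the TensorFlow Lite build honours this for all of the projects above would need checking, so treat this as an untested sketch:

cmake \
    -DCMAKE_C_FLAGS="-I/usr/include/python3.11" \
    -DCMAKE_CXX_FLAGS="-I/usr/include/python3.11" \
    -DFETCHCONTENT_SOURCE_DIR_PTHREADPOOL=/path/to/local/pthreadpool \
    ../tensorflow-2.15.1/tensorflow/lite/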

However, there are even more hidden dependencies, which we can see if we look in the _deps/ subdirectory of the build tree. These don’t appear to be as easy to override, and not all of them have packages already in Debian.

First, the ones that seem to be available: abseil-cpp, cpuinfo, eigen, farmhash, flatbuffers, gemmlowp, ruy + xnnpack

(lots of credit to the Debian Deep Learning Team for these, and in particular Mo Zhou)

Dependencies I couldn’t see existing packages for are: OouraFFT, ml_dtypes & neon2sse.

At this point I just used the package I built with the initial steps above. I live in hope someone will eventually package this properly for Debian, or that I’ll find the time to try and help out, but that’s not going to be today.

I wish upstream developers made it easier to use system copies of their library dependencies. I wish library developers made it easier to build and install system copies of their work. pkgconf is not new tech these days (pkg-config appears to date back to 2000), and has decent support in CMake. I get that there can be issues with incompatibilities even in minor releases, or awkwardness in doing builds of multiple connected projects, but at least give me the option to do so.

14 May, 2025 05:39PM

Sven Hoexter

Disable Firefox DRM Plugin Infobar

... or how I spent my lunch break today.

An increasing amount of news outlets (hello heise.de) start to embed bullshit which requires DRM playback. Since I keep that disabled I now get an infobar that tells me that I need to enable it for this page. Pretty useless and a pain in the back because it takes up screen space. Here's the quick way how to get rid of it:

  1. Go to about:config and turn on toolkit.legacyUserProfileCustomizations.stylesheets.
  2. Go to your Firefox profile folder (e.g. ~/.mozilla/firefox/<random-value>.default/) and mkdir chrome && touch chrome/userChrome.css.
  3. Add the following to your userChrome.css file:

     .infobar[value="drmContentDisabled"] {
       display: none !important;
     }
    
  4. Restart Firefox and read news again with full screen space.

14 May, 2025 10:59AM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Orbital

Orbital at NX, Newcastle in 2023

I'm on a bit of an Orbital kick at the moment. Last year they re-issued their 1991 debut album with 43 extra tracks. Later this month they're doing the same for their 1993 sophomore album.

I thought I'd try to narrow down some tracks to recommend. I seem to have settled on roughly 5 in previous posts (for Underworld, The Cure, Coil and Gazelle Twin). This time I've done 6 (I borrowed one from Underworld)

As always it's a hard choice. I've tried to select some tracks I really enjoy that don't often come up on best-of compilation albums. For a more conventional choice of best-of tracks, I recommend the recent-ish 30 something "compilation" (of sorts, previously written about)


  1. The Naked and the Dead (1992)

    The Naked and the Dead by Orbital

    From an early EP Radiccio, which is being re-issued this month. Digital versions of the re-issue will feature a new recording "Deepest" featuring Tilda Swinton. Sadly this isn't making it onto the pressed version. She performed with them live at Glastonbury 2024. That entire performance was a real pick-me-up during my convalescence, and is recommended.

    Anyway I've now written more about a song I haven't recommended than the one I did…

  2. Remind (1993)

    Remind by Orbital

    From the Brown Album, I first heard this as the Encore from their "final show", for John Peel, when they split up in 2004. "Remind" wasn't broadcast, but an audience recording was circulated on fan site Loopz. Remarkably, 21 years on, it's still there.

    In writing this I discovered that it's a re-working of a remix Orbital did for Meat Beat Manifesto: MindStream (Mind The Bend The Mind)

  3. You Lot (2004)

    From the unfairly-maligned "final" Blue album. Featuring a sample of pre-Doctor Who Christopher Eccleston, from another Russell T Davies production, Second Coming.

  4. Beached (2000)

    Beached (Long version) by Orbital, Angelo Badalamenti

    Co-written by Angelo Badalamenti, it's built around a sample of Badalamenti's score for the movie "The Beach". Orbital's re-work adds some grit to the orchestral instrumentation and opens with a monologue, delivered by Leonardo Di Caprio, sampled from the movie.

  5. Spare Parts Express (1999)

    Spare Parts Express by Orbital

    Critics had started to be quite unfair to Orbital by this point. The band themselves said that they'd run out of ideas (pointing at album closer "Style", built around a Stylophone melody, as proof). Their malaise continued right up to the Blue Album, at which point they split up, ostensibly for good, before regrouping 8 years later.

    Spare Parts Express is a hatchet job of various bits that they didn't develop into full songs on their own. Despite this I think it works. I love long-form electronica, and this clocks in at 10:07. My favourite segment (06:37) is adjacent to a reference (05:05) to John Baker's theme for the BBC children's program Newsround (sadly they aren't using it today. Here's a rundown of Newsround themes over time)

  6. Attached (1994)

    Attached by Orbital

    This originally debuted on a Peel session before appearing on the subsequent album Snivilisation a few months later. An album closer, and a good come-down song to close this list.

14 May, 2025 10:41AM

hackergotchi for Evgeni Golov

Evgeni Golov

running modified containers with podman

Everybody (who runs containers) knows this situation: you've been running happycontainer:stable for a while and it's been great but now something external changed and you need to adjust the code while there is still no release with the patch.

I've encountered exactly this when our Home-Assistant stopped showing the presence of our cat correctly, but we've also been discussing this at work recently.

Now the most obvious (to me?) solution would be to build a new container, based on the original one, and perform the modifications at build time. Something like this:

FROM happycontainer:stable
RUN curl … | patch -p1

But that's not interactive, and if you don't have a patch readily available, that's not what you want. (And I'll save you the idea of RUNing sed and friends to alter files!)

You could run vim inside the container, but that requires vim to be installed there in the first place. And a reasonable configuration. And…

Well, turns out podman can mount the root fs of a running container.

[root@sai ~]# podman mount homeassistant
/var/lib/containers/storage/overlay/f3ac502d97b5681989dff

And if you're running as non-root, you'll get an error:

[container@sai ~]$ podman mount homeassistant
Error: cannot run command "podman mount" in rootless mode, must execute `podman unshare` first

Luckily the solution is in the error message - use podman unshare

[container@sai ~]$ podman unshare
[root@sai ~]# podman mount homeassistant
/home/container/.local/share/containers/storage/overlay/95d3809d53125e4d40ad05e52efaa5a48e6e61fe9b8a8478416ff44459c7eb31/merged

So in both cases (root and rootless) we get a path, which is the mounted root fs and we can edit things in there as we like.

[root@sai ~]# vi /home/container/.local/share/containers/storage/overlay/95d3809d53125e4d40ad05e52efaa5a48e6e61fe9b8a8478416ff44459c7eb31/merged/usr/src/homeassistant/homeassistant/components/surepetcare/binary_sensor.py

Once done, the container can be unmounted again, and the namespace left

[root@sai ~]# podman umount homeassistant
homeassistant
[root@sai ~]# exit
[container@sai ~]$

At this point we have modified the code inside the container, but the running process is still using the old code. If we restart the container now to restart the process, our changes will be lost.

Instead, we can commit the changes as a new layer and tag the result.

[container@sai ~]$ podman commit homeassistant docker.io/homeassistant/home-assistant:stable

And now, when we restart the container, it will use the new code with our changes 🎉

[container@sai ~]$ systemctl --user restart homeassistant

Is this the best workflow you can get? Probably not. Does it work? Hell yeah!

14 May, 2025 08:54AM by evgeni

May 13, 2025

hackergotchi for Ben Hutchings

Ben Hutchings

Report for Debian BSP near Leuven in April 2025

On 26th and 27th April we held a Debian bug-squashing party near Leuven, Belgium. Several longstanding and new Debian contributors gathered to work through some of the highest priority bugs affecting the upcoming release of Debian 13 “trixie”.

We were hosted by the Familia community centre in Tildonk. As this venue currently does not have an Internet connection, we brought a mobile hotspot and a local Debian mirror.

In attendance were:

  • Debian Developers: Ben Hutchings, Nattie Mayer-Hutchings, Kurt Roeckx, and Geert Stappers
  • New contributors: Yüce Kürüm, Louis Renuart, Arnout Vandecappelle

The new contributors were variously using Arch, Fedora, and Ubuntu, and the DDs spent some time setting them up with Debian development environments.

The bugs we worked on included:

13 May, 2025 08:19PM by Ben Hutchings

Ravi Dwivedi

KDE India Conference 2025

Last month, I attended the KDE India conference in Gandhinagar, Gujarat from the 4th to the 6th of April. I made up my mind to attend when Sahil told me about his plans to attend and give a talk.

A day after my talk submission, the organizer Bhushan contacted me on Matrix and informed me that my talk had been accepted. I was also informed that KDE will cover my travel and accommodation expenses. So, I planned to attend the conference at this point. I am a longtime KDE user, so why not ;)

I arrived in Ahmedabad, the twin city of Gandhinagar, a day before the conference. The first thing that struck me as soon as I came out of the Ahmedabad airport was the heat. I felt as if I was being cooked—exactly how Bhushan put it earlier in the group chat. I took a taxi to get to my hotel, which was close to the conference venue.

Later that afternoon, I met Bhushan and Joseph. Joseph lived in Germany. Bhushan was taking him to get a SIM card, so I tagged along and got to roam around. Joseph was unsure about where to go after the conference, so I asked him what he wanted out of his trip and had conversations along that line.

Later, Vishal convinced him to go to Lucknow. Since he was adamant about taking the train, I booked a Tatkal train ticket for him to Lucknow. He was curious about how Tatkal booking works and watched me in amusement while I was booking the ticket.

The 4th of April marked the first day of the conference, with around 25 attendees. Bhushan started the day with an overview of KDE conferences in India, followed by Vishal, who discussed FOSS United’s activities. After the lunch, Joseph gave an overview of his campaign to help people switch from Windows to GNU/Linux due to environmental and security reasons. He continued his session in detail the next day.

Conference hall

A key takeaway for me from Joseph’s session was the idea pointed out by Adwaith: marketing GNU/Linux as a cheap alternative may not attract as much attention as marketing it as a status symbol. He gave the example of how the Tata Nano didn’t do well in the Indian market due to being perceived as a poor person’s car.

My talk was scheduled for the evening of the first day. I hadn’t prepared any slides because I wanted to make my session interactive. During my talk, I did an activity with the attendees to demonstrate the federated nature of XMPP messaging, of which Prav is a part. After the talk, I got a lot of questions, signalling engagement. The audience was cooperative (just like Prav ;)), contrary to my expectations (I thought they will be tired and sleepy).

On the third day, I did a demo on editing OpenStreetMap (referred to as “OSM” in short) using the iD editor. It involved adding points to OSM based on the students’ suggestions. Since my computer didn’t have an HDMI port, I used Subin’s computer, and he logged into his OSM account for my session. Therefore, any mistakes I made will be under Subin’s name. :)

On the third day, I attended Aaruni’s talk about backing up a GNU/Linux system. This was the talk that resonated with me the most. He suggested formatting the system with the btrfs file system during the installation, which helps in taking snapshots of the system and provides an easy way to roll back to a previous version if, for example, a file is accidentally deleted. I have tried many backup techniques, including this one, but I never tried backing up on the internal disk. I’ll certainly give this a try.

A conference is not only about the talks, that’s why we had a Prav table as well ;) Just kidding. What I really mean is that a conference is more about interactions than talks. Since the conference was a three-day affair, attendees got plenty of time to bond and share ideas.

Prav stall at the conference

Conference group photo

After the conference, Bhushan took us to Adalaj Stepwell, an attraction near Gandhinagar. Upon entering the complex, we saw a park where there were many langurs. Going further, there were stairs that led down to a well. I guess this is why it is called a stepwell.

Adalaj Stepwell

Adalaj Stepwell

Later that day, we had Gujarati Thali for dinner. It was an all-you-can-eat buffet and was reasonably priced at 300 rupees per plate. Aamras (Mango juice) was the highlight for me. This was the only time we had Gujarati food during this visit. After the dinner, Aaruni dropped Sahil and me off at the airport. The hospitality was superb - for instance, in addition to Aaruni dropping us off, Bhushan also picked up some of the attendees from the airport.

Finally, I would like to thank KDE for sponsoring my travel and accommodation costs.

Let’s wrap up this post here and meet you in the next one.

Thanks to contrapunctus and Joseph for proofreading.

13 May, 2025 05:58PM

hackergotchi for Sergio Talens-Oliag

Sergio Talens-Oliag

Running dind with sysbox

When I configured forgejo-actions I used a docker-compose.yaml file to execute the runner and a dind container configured to run using privileged mode to be able to build images with it; as mentioned on my post about my setup, the use of the privileged mode is not a big issue for my use case, but reduces the overall security of the installation.

On a work chat the other day someone mentioned that the GitLab documentation about using kaniko says it is no longer maintained (see the kaniko issue #3348) so we should look into alternatives for kubernetes clusters.

I never liked kaniko too much, but it works without privileged mode and does not need a daemon, which is a good reason to use it, but if it is deprecated it makes sense to look into alternatives, and today I looked into some of them to use with my forgejo-actions setup.

I was going to try buildah and podman but it seems that they need to adjust things on the systems running them:

  • When I tried to use buildah inside a docker container in Ubuntu I found the problems described on the buildah issue #1901 so I moved on.
  • Reading the podman documentation I saw that I need to export the fuse device to run it inside a container and, as I found other option, I also skipped it.

As my runner was already configured to use dind I decided to look into sysbox as a way of removing the privileged flag to make things more secure but have the same functionality.

Installing the sysbox package

As I use Debian and Ubuntu systems I used the .deb packages distributed from the sysbox release page to install it (in my case I used the one from the 0.6.7 version).

On the machine running forgejo (a Debian 12 server) I downloaded the package, stopped the running containers (this is needed to install the package, and the only ones running were the ones started by the docker-compose.yaml file) and installed the sysbox-ce_0.6.7.linux_amd64.deb package using dpkg.
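
The installation itself is nothing fancy; as a sketch (run from the directory that holds the docker-compose.yaml file, with the .deb already downloaded):

# stop the containers started by the docker-compose.yaml file
docker compose down
# install the package (as root)
dpkg -i sysbox-ce_0.6.7.linux_amd64.deb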

Updating the docker-compose.yaml file

To run the dind container without setting the privileged mode we set sysbox-runc as the runtime on the dind container definition and set the privileged flag to false (it is the same as removing the key, as it defaults to false):

--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -2,7 +2,9 @@ services:
   dind:
     image: docker:dind
     container_name: 'dind'
-    privileged: 'true'
+    # use sysbox-runc instead of using privileged mode
+    runtime: 'sysbox-runc'
+    privileged: 'false'
     command: ['dockerd', '-H', 'unix:///dind/docker.sock', '-G', '$RUNNER_GID']
     restart: 'unless-stopped'
     volumes:

Testing the changes

After applying the changes to the docker-compose.yaml file we start the containers again and, to test things, we re-run previously executed jobs to see if they work as before.
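
A quick way to confirm that the dind container is really using the new runtime is to inspect it (sketch):

docker inspect dind --format '{{ .HostConfig.Runtime }}'   # should print sysbox-runc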

In my case I re-executed the build-image-from-tag workflow #18 from the oci project and everything worked as expected.

Conclusion

For my current use case (docker + dind) it seems that sysbox is a good solution, but I’m not sure if I’ll be installing it on kubernetes anytime soon unless I find a valid reason to do it (the last time we talked about it my co-workers said that they are evaluating buildah and podman for kubernetes; we will probably use them to replace kaniko in our gitlab-ci pipelines, and for those tools the use of sysbox seems like overkill).

13 May, 2025 05:45PM

May 12, 2025

Reproducible Builds

Reproducible Builds in April 2025

Welcome to our fourth report from the Reproducible Builds project in 2025. These monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. Lastly, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. reproduce.debian.net
  2. Fifty Years of Open Source Software Supply Chain Security
  3. 4th CHAINS Software Supply Chain Workshop
  4. Mailing list updates
  5. Canonicalization for Unreproducible Builds in Java
  6. OSS Rebuild adds new TUI features
  7. Distribution roundup
  8. diffoscope & strip-nondeterminism
  9. Website updates
  10. Reproducibility testing framework
  11. Upstream patches

reproduce.debian.net

The last few months have seen the introduction, development and deployment of reproduce.debian.net. In technical terms, this is an instance of rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

This month, however, we are pleased to announce that reproduce.debian.net now tests all the Debian trixie architectures except s390x and mips64el.

The ppc64el architecture was added through the generous support of Oregon State University Open Source Laboratory (OSUOSL), and we can support the armel architecture thanks to CodeThink.


Fifty Years of Open Source Software Supply Chain Security

Russ Cox has published a must-read article in ACM Queue on Fifty Years of Open Source Software Supply Chain Security. Subtitled, “For decades, software reuse was only a lofty goal. Now it’s very real.”, Russ’ article goes on to outline the history and original goals of software supply-chain security in the US military in the early 1970s, all the way to the XZ Utils backdoor of 2024. Through that lens, Russ explores the problem and how it has changed, and hasn’t changed, over time.

He concludes as follows:

We are all struggling with a massive shift that has happened in the past 10 or 20 years in the software industry. For decades, software reuse was only a lofty goal. Now it’s very real. Modern programming environments such as Go, Node and Rust have made it trivial to reuse work by others, but our instincts about responsible behaviors have not yet adapted to this new reality.

We all have more work to do.


4th CHAINS Software Supply Chain Workshop

Convened as part of the CHAINS research project at the KTH Royal Institute of Technology in Stockholm, Sweden, the 4th CHAINS Software Supply Chain Workshop occurred during April. During the workshop, there were a number of relevant talks, including:

The full listing of the agenda is available on the workshop’s website.


Mailing list updates

On our mailing list this month:

  • Luca DiMaio of Chainguard posted to the list reporting that they had successfully implemented reproducible filesystem images with both ext4 and an EFI system partition. They go on to list the various methods, and the thread generated at least fifteen replies.

  • David Wheeler announced that the OpenSSF is building a “glossary” of sorts in order that they “consistently use the same meaning for the same term” and, moreover, that they have drafted a definition for ‘reproducible build’. The thread generated a significant number of replies on the definition, leading to a potential update to the Reproducible Builds project’s own definition.

  • Lastly, kpcyrd posted to the list with a timely reminder and update on their “repro-env” tool. As first reported in our July 2023 report, kpcyrd mentions that:

    My initial interest in reproducible builds was “how do I distribute pre-compiled binaries on GitHub without people raising security concerns about them”. I’ve cycled back to this original problem about 5 years later and built a tool that is meant to address this. []


Canonicalization for Unreproducible Builds in Java

Aman Sharma, Benoit Baudry and Martin Monperrus have published a new scholarly study related to reproducible builds within Java. Titled Canonicalization for Unreproducible Builds in Java, the article’s abstract is as follows:

[…] Achieving reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central and we develop a novel taxonomy of six root causes of unreproducibility. We study actionable mitigations: artifact and bytecode canonicalization using OSS-Rebuild and jNorm respectively. Finally, we present Chains-Rebuild, a tool that raises reproducibility success from 9.48% to 26.89% on 12,283 unreproducible artifacts. To sum up, our contributions are the first large-scale taxonomy of build unreproducibility causes in Java, a publicly available dataset of unreproducible builds, and Chains-Rebuild, a canonicalization tool for mitigating unreproducible builds in Java.

A full PDF of their article is available from arXiv.


OSS Rebuild adds new TUI features

OSS Rebuild aims to automate rebuilding upstream language packages (e.g. from PyPI, crates.io and npm registries) and publish signed attestations and build definitions for public use.

OSS Rebuild ships a text-based user interface (TUI) for viewing, launching, and debugging rebuilds. While previously requiring ownership of a full instance of OSS Rebuild’s hosted infrastructure, the TUI now supports a fully local mode of build execution and artifact storage. Thanks to Giacomo Benedetti for his usage feedback and work to extend the local-only development toolkit.

Another feature added to the TUI was an experimental chatbot integration that provides interactive feedback on rebuild failure root causes and suggests fixes.


Distribution roundup

In Debian this month:

  • Roland Clobus posted another status report on reproducible ISO images on our mailing list this month, with the summary that “all live images build reproducibly from the online Debian archive”.

  • Debian developer Simon Josefsson published another two reproducibility-related blog posts this month, the first on the topic of Verified Reproducible Tarballs. Simon sardonically challenges the reader as follows: “Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days?” After that, they also published a blog post on Building Debian in a GitLab Pipeline using their multi-stage rebuild approach.

  • Roland also posted to our mailing list to highlight that “there is now another tool in Debian that generates reproducible output, equivs”. This is a tool to create trivial Debian packages that might Depend on other packages. As Roland writes, “building the [equivs] package has been reproducible for a while, [but] now the output of the [tool] has become reproducible as well”.

  • Lastly, 9 reviews of Debian packages were added, 10 were updated and 10 were removed this month adding to our extensive knowledge about identified issues.

The IzzyOnDroid Android APK repository made more progress in April. Thanks to funding by NLnet and Mobifree, the project was also able to put more time into their tooling. For instance, developers can now easily run their own verification builder in “less than 5 minutes”. This currently supports Debian-based systems, but support for RPM-based systems is incoming.

  • The rbuilder_setup tool can now set up the entire framework within less than five minutes. The process is configurable, too, so everything from “just the basics to verify builds” up to a fully-fledged RB environment is also possible.

  • This tool works on Debian, RedHat and Arch Linux, as well as their derivatives. The project has received successful reports from Debian, Ubuntu, Fedora and some Arch Linux derivatives so far.

  • Documentation on how to work with reproducible builds (making apps reproducible, debugging unreproducible packages, etc) is available in the project’s wiki page.

  • Future work is also in the pipeline, including documentation, guidelines and helpers for debugging.

NixOS defined an Outreachy project for improving build reproducibility. In the application phase, NixOS saw some strong candidates providing contributions, both on the NixOS side and upstream: guider-le-ecit analyzed a libpinyin issue. Tessy James fixed an issue in arandr and helped analyze one in libvlc that led to a proposed upstream fix. Finally, 3pleX fixed an issue which was accepted in upstream kitty, one in upstream maturin, one in upstream python-sip and one in the Nix packaging of python-libbytesize. Sadly, the funding for this internship fell through, so NixOS were forced to abandon their search.

Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.


diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading a number of versions to Debian:

  • Use the --walk argument over the potentially dangerous alternative --scan when calling out to zipdetails(1). []
  • Correct a longstanding issue where many >-based version tests used in conditional fixtures were broken. This was used to ensure that specific tests were only run when the version on the system was newer than a particular number. Thanks to Colin Watson for the report (Debian bug #1102658) []
  • Address a long-hidden issue in the test_versions testsuite as well, where we weren’t actually testing the greater-than comparisons mentioned above, as it was masked by the tests for equality. []
  • Update copyright years. []

In strip-nondeterminism, however, Holger Levsen updated the Continuous Integration (CI) configuration in order to use the standard Debian pipelines via debian/salsa-ci.yml instead of using .gitlab-ci.yml. []


Website updates

Once again, there were a number of improvements made to our website this month including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In April, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add armel.reproduce.debian.net to support the armel architecture. [][]
    • Add a new ARM node, codethink05. [][]
    • Add ppc64el.reproduce.debian.net to support testing of the ppc64el architecture. [][][]
    • Improve the reproduce.debian.net front page. [][]
    • Make various changes to the ppc64el nodes. [][][][]
    • Make various changes to the arm64 and armhf nodes. [][][][]
    • Various changes related to the rebuilderd-worker entry point. [][][]
    • Create and deploy a pkgsync script. [][][][][][][][]
    • Fix the monitoring of the riscv64 architecture. [][]
    • Make a number of changes related to starting the rebuilderd service. [][][][]
  • Backup-related:

    • Backup the rebuilder databases every week. [][][][]
    • Improve the node health checks. [][]
  • Misc:

    • Re-use existing connections to the SSH proxy node on the riscv64 nodes. [][]
    • Node maintenance. [][][]

In addition:

  • Jochen Sprickerhof fixed the riscv64 host names [] and requested access to all the rebuilderd nodes [].

  • Mattia Rizzolo updated the self-serve rebuild scheduling tool, replacing the deprecated “SSO”-style authentication with OpenIDC which authenticates against salsa.debian.org. [][][]

  • Roland Clobus updated the configuration for the osuosl3 node to designate 4 workers for bigger builds. []


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

12 May, 2025 07:00PM

hackergotchi for Sergio Talens-Oliag

Sergio Talens-Oliag

Playing with vCluster

After my previous posts related to Argo CD (one about argocd-autopilot and another with some usage examples) I started to look into Kluctl (I also plan to review Flux, but I’m more interested in the kluctl approach right now).

While reading an entry on the project blog about Cluster API somehow I ended up on the vCluster site and decided to give it a try, as it can be a valid way of providing developers with on-demand clusters for debugging or running CI/CD tests before deploying things on common clusters, or even to have multiple debugging virtual clusters on a local machine with only one of them running at any given time.

On this post I will deploy a vcluster using the k3d_argocd kubernetes cluster (the one we created on the posts about argocd) as the host and will show how to:

  • use its ingress (in our case traefik) to access the API of the virtual one (this removes the need to use the vcluster connect command to access it with kubectl),
  • publish the ingress objects deployed on the virtual cluster on the host ingress, and
  • use the sealed-secrets of the host cluster to manage the virtual cluster secrets.

Creating the virtual cluster

Installing the vcluster application

To create the virtual clusters we need the vcluster command, we can install it with arkade:

❯ arkade get vcluster

The vcluster.yaml file

To create the cluster we are going to use the following vcluster.yaml file (you can find the documentation about all its options here):

controlPlane:
  proxy:
    # Extra hostnames to sign the vCluster proxy certificate for
    extraSANs:
    - my-vcluster-api.lo.mixinet.net
exportKubeConfig:
  context: my-vcluster_k3d-argocd
  server: https://my-vcluster-api.lo.mixinet.net:8443
  secret:
    name: my-vcluster-kubeconfig
sync:
  toHost:
    ingresses:
      enabled: true
    serviceAccounts:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
    nodes:
      enabled: true
      clearImageStatus: true
    secrets:
      enabled: true
      mappings:
        byName:
          # sync all Secrets from the 'my-vcluster-default' namespace to the
          # virtual "default" namespace.
          "my-vcluster-default/*": "default/*"
          # We could add other namespace mappings if needed, i.e.:
          # "my-vcluster-kube-system/*": "kube-system/*"

On the controlPlane section we’ve added the proxy.extraSANs entry to add an extra host name to make sure it is added to the cluster certificates if we use it from an ingress.

The exportKubeConfig section creates a kubeconfig secret on the virtual cluster namespace using the provided host name; the secret can be used by GitOps tools or we can dump it to a file to connect from our machine.

On the sync section we enable the synchronization of Ingress objects and ServiceAccounts from the virtual to the host cluster:

  • We copy the ingress definitions to use the ingress server that runs on the host to make them work from the outside world.
  • The service account synchronization is not really needed, but we enable it because if we test this configuration with EKS it would be useful if we use IAM roles for the service accounts.

On the opposite direction (from the host to the virtual cluster) we synchronize:

  • The IngressClass objects, to be able to use the host ingress server(s).
  • The Nodes (we are not using the info right now, but it could be interesting if we want to have the real information of the nodes running pods of the virtual cluster).
  • The Secrets from the my-vcluster-default host namespace to the default namespace of the virtual cluster; that synchronization allows us to deploy SealedSecrets on the host that generate secrets that are copied automatically to the virtual one. Initially we only copy secrets for one namespace but if the virtual cluster needs others we can add namespaces on the host and their mappings to the virtual one on the vcluster.yaml file (see the sketch right after this list).
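
As an illustration of that last mapping, once the virtual cluster is running (we create it in the next section) a SealedSecret deployed on the host ends up as a plain Secret inside the virtual cluster. A sketch with a hypothetical secret name, assuming kubeseal and the sealed-secrets controller are already available on the host cluster:

❯ kubectl create secret generic demo-secret -n my-vcluster-default \
    --from-literal=SECRET_VAR="Vcluster Secret Value" --dry-run=client -o yaml \
  | kubeseal -o yaml | kubectl apply -f -

Once the controller unseals it on the host, the resulting demo-secret is synchronized automatically into the default namespace of the virtual cluster.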

Creating the virtual cluster

To create the virtual cluster we run the following command:

vcluster create my-vcluster --namespace my-vcluster --upgrade --connect=false \
  --values vcluster.yaml

It creates the virtual cluster on the my-vcluster namespace using the vcluster.yaml file shown before without connecting to the cluster from our local machine (if we don’t pass that option the command adds an entry on our kubeconfig and launches a proxy to connect to the virtual cluster that we don’t plan to use).

Adding an ingress TCP route to connect to the vcluster api

As explained before, we need to create an IngressRouteTCP object to be able to connect to the vcluster API; we use the following definition:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: my-vcluster-api
  namespace: my-vcluster
spec:
  entryPoints:
    - websecure
  routes:
    - match: HostSNI(`my-vcluster-api.lo.mixinet.net`)
      services:
        - name: my-vcluster
          port: 443
  tls:
    passthrough: true

Once we apply those changes the cluster API will be available on the https://my-vcluster-api.lo.mixinet.net:8443 URL using its own self signed certificate (we have enabled TLS passthrough) that includes the hostname we use (we adjusted it on the vcluster.yaml file, as explained before).
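
A quick way to double check that the extra SAN from the vcluster.yaml file made it into that certificate (a sketch; the hostname should show up among the DNS entries of the subjectAltName extension):

❯ openssl s_client -connect my-vcluster-api.lo.mixinet.net:8443 \
    -servername my-vcluster-api.lo.mixinet.net </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'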

Getting the kubeconfig for the vcluster

Once the vcluster is running we will have its kubeconfig available on the my-vcluster-kubeconfig secret on its namespace on the host cluster.

To dump it to the ~/.kube/my-vcluster-config we can do the following:

❯ kubectl get -n my-vcluster secret/my-vcluster-kubeconfig \
    --template="{{.data.config}}" | base64 -d > ~/.kube/my-vcluster-config

Once available we can define the vkubectl alias to adjust the KUBECONFIG variable to access it:

alias vkubectl="KUBECONFIG=~/.kube/my-vcluster-config kubectl"

Or we can merge the configuration with the one on the KUBECONFIG variable and use kubectx or a similar tool to change the context (for our vcluster the context will be my-vcluster_k3d-argocd). If the KUBECONFIG variable is defined and only has the path to a single file, the merge can be done by running the following:

KUBECONFIG="$KUBECONFIG:~/.kube/my-vcluster-config" kubectl config view \
  --flatten >"$KUBECONFIG.new"
mv "$KUBECONFIG.new" "$KUBECONFIG"

On the rest of this post we will use the vkubectl alias when connecting to the virtual cluster, i.e. to check that it works we can run the cluster-info subcommand:

❯ vkubectl cluster-info
Kubernetes control plane is running at https://my-vcluster-api.lo.mixinet.net:8443
CoreDNS is running at https://my-vcluster-api.lo.mixinet.net:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Installing the dummyhttpd application

To test the virtual cluster we are going to install the dummyhttpd application using the following kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0
# Add the config map
configMapGenerator:
  - name: dummyhttp-configmap
    literals:
      - CM_VAR="Vcluster Test Value"
    behavior: create
    options:
      disableNameSuffixHash: true
patches:
  # Change the ingress host name
  - target:
      kind: Ingress
      name: dummyhttp
    patch: |-
      - op: replace
        path: /spec/rules/0/host
        value: vcluster-dummyhttp.lo.mixinet.net
  # Add reloader annotations -- it will only work if we install reloader on the
  # virtual cluster, as the one on the host cluster doesn't see the vcluster
  # deployment objects
  - target:
      kind: Deployment
      name: dummyhttp
    patch: |-
      - op: add
        path: /metadata/annotations
        value:
          reloader.stakater.com/auto: "true"
          reloader.stakater.com/rollout-strategy: "restart"

It is quite similar to the one we used on the Argo CD examples but uses a different DNS entry; to deploy it we run kustomize and vkubectl:

❯ kustomize build . | vkubectl apply -f -
configmap/dummyhttp-configmap created
service/dummyhttp created
deployment.apps/dummyhttp created
ingress.networking.k8s.io/dummyhttp created

We can check that everything worked using curl:

❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value","s": ""}

The objects available on the vcluster now are:

❯ vkubectl get all,configmap,ingress
NAME                             READY   STATUS    RESTARTS   AGE
pod/dummyhttp-55569589bc-9zl7t   1/1     Running   0          24s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/dummyhttp    ClusterIP   10.43.51.39    <none>        80/TCP    24s
service/kubernetes   ClusterIP   10.43.153.12   <none>        443/TCP   14m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dummyhttp   1/1     1            1           24s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dummyhttp-55569589bc   1         1         1       24s

NAME                            DATA   AGE
configmap/dummyhttp-configmap   1      24s
configmap/kube-root-ca.crt      1      14m

NAME                                CLASS   HOSTS                             ADDRESS                          PORTS AGE
ingress.networking.k8s.io/dummyhttp traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80    24s

While we have the following ones on the my-vcluster namespace of the host cluster:

❯ kubectl get all,configmap,ingress -n my-vcluster
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster   1/1     Running   0          18m
pod/dummyhttp-55569589bc-9zl7t-x-default-x-my-vcluster    1/1     Running   0          45s
pod/my-vcluster-0                                         1/1     Running   0          19m

NAME                                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
service/dummyhttp-x-default-x-my-vcluster      ClusterIP   10.43.51.39     <none>        80/TCP                   45s
service/kube-dns-x-kube-system-x-my-vcluster   ClusterIP   10.43.91.198    <none>        53/UDP,53/TCP,9153/TCP   18m
service/my-vcluster                            ClusterIP   10.43.153.12    <none>        443/TCP,10250/TCP        19m
service/my-vcluster-headless                   ClusterIP   None            <none>        443/TCP                  19m
service/my-vcluster-node-k3d-argocd-agent-1    ClusterIP   10.43.189.188   <none>        10250/TCP                18m

NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     19m

NAME                                                     DATA   AGE
configmap/coredns-x-kube-system-x-my-vcluster            2      18m
configmap/dummyhttp-configmap-x-default-x-my-vcluster    1      45s
configmap/kube-root-ca.crt                               1      19m
configmap/kube-root-ca.crt-x-default-x-my-vcluster       1      11m
configmap/kube-root-ca.crt-x-kube-system-x-my-vcluster   1      18m
configmap/vc-coredns-my-vcluster                         1      19m

NAME                                                        CLASS   HOSTS                             ADDRESS                          PORTS AGE
ingress.networking.k8s.io/dummyhttp-x-default-x-my-vcluster traefik vcluster-dummyhttp.lo.mixinet.net 172.20.0.2,172.20.0.3,172.20.0.4 80    45s

As shown, we have copies of the Service, Pod, Configmap and Ingress objects, but there is no copy of the Deployment or ReplicaSet.

Creating a sealed secret for dummyhttpd

To use the host’s sealed secrets controller with the virtual cluster we will create the my-vcluster-default namespace and add there the sealed secrets we want to have available as secrets on the default namespace of the virtual cluster:

❯ kubectl create namespace my-vcluster-default
❯ echo -n "Vcluster Boo" | kubectl create secret generic "dummyhttp-secret" \
    --namespace "my-vcluster-default" --dry-run=client \
    --from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml

After running the previous commands we have the following objects available on the host cluster:

❯ kubectl get sealedsecrets.bitnami.com,secrets -n my-vcluster-default
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     34s

NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      34s

And we can see that the secret is also available on the virtual cluster with the content we expected:

❯ vkubectl get secrets
NAME               TYPE     DATA   AGE
dummyhttp-secret   Opaque   1      34s
❯ vkubectl get secret/dummyhttp-secret --template="{{.data.SECRET_VAR}}" \
  | base64 -d
Vcluster Boo

But the output of the curl command has not changed because, although we have the reloader controller deployed on the host cluster, it does not see the Deployment object of the virtual one and the pods are not touched:

❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c": "Vcluster Test Value","s": ""}

Installing the reloader application

To make reloader work on the virtual cluster we just need to install it as we did on the host using the following kustomization.yaml file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
- github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
        - '--reload-on-create=true'
        - '--reload-on-delete=true'
        - '--reload-strategy=annotations'

We deploy it with kustomize and vkubectl:

❯ kustomize build . | vkubectl apply -f -
serviceaccount/reloader-reloader created
clusterrole.rbac.authorization.k8s.io/reloader-reloader-role created
clusterrolebinding.rbac.authorization.k8s.io/reloader-reloader-role-binding created
deployment.apps/reloader-reloader created

As the controller was not available when the secret was created, the pods linked to the Deployment are not updated, but we can force things by removing the secret on the host system; after we do that, the secret is re-created from the sealed version and copied to the virtual cluster, where the reloader controller updates the pod and the curl command shows the new output:

❯ kubectl delete -n my-vcluster-default secrets dummyhttp-secret
secret "dummyhttp-secret" deleted
❯ sleep 2
❯ vkubectl get pods
NAME                         READY   STATUS        RESTARTS   AGE
dummyhttp-78bf5fb885-fmsvs   1/1     Terminating   0          6m33s
dummyhttp-c68684bbf-nx8f9    1/1     Running       0          6s
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"Vcluster Boo"}

If we change the secret on the host system things now get updated pretty quickly:

❯ echo -n "New secret" | kubectl create secret generic "dummyhttp-secret" \
    --namespace "my-vcluster-default" --dry-run=client \
    --from-file=SECRET_VAR=/dev/stdin -o yaml >dummyhttp-secret.yaml
❯ kubeseal -f dummyhttp-secret.yaml -w dummyhttp-sealed-secret.yaml
❯ kubectl apply -f dummyhttp-sealed-secret.yaml
❯ rm -f dummyhttp-secret.yaml dummyhttp-sealed-secret.yaml
❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443/ | jq -cM .
{"c":"Vcluster Test Value","s":"New secret"}

Pause and restore the vcluster

The status of pods and statefulsets while the virtual cluster is active can be seen using kubectl:

❯ kubectl get pods,statefulsets -n my-vcluster
NAME                                                                 READY   STATUS    RESTARTS   AGE
pod/coredns-bbb5b66cc-snwpn-x-kube-system-x-my-vcluster              1/1     Running   0          127m
pod/dummyhttp-587c7855d7-pt9b8-x-default-x-my-vcluster               1/1     Running   0          4m39s
pod/my-vcluster-0                                                    1/1     Running   0          128m
pod/reloader-reloader-7f56c54d75-544gd-x-kube-system-x-my-vcluster   1/1     Running   0          60m

NAME                           READY   AGE
statefulset.apps/my-vcluster   1/1     128m

Pausing the vcluster

If we don’t need to use the virtual cluster we can pause it; after a small amount of time all Pods are gone because the statefulSet is scaled down to 0. Note that other resources like volumes are not removed, but nothing that has to be scheduled and consume CPU cycles keeps running, which can translate into significant savings when running on clusters from cloud platforms or, in a local cluster like the one we are using, frees resources like CPU and memory that can now be used for other things:

❯ vcluster pause my-vcluster
11:20:47 info Scale down statefulSet my-vcluster/my-vcluster...
11:20:48 done Successfully paused vcluster my-vcluster/my-vcluster
❯ kubectl get pods,statefulsets -n my-vcluster
NAME                           READY   AGE
statefulset.apps/my-vcluster   0/0     130m

Now the curl command fails:

❯ curl -s https://vcluster-dummyhttp.lo.mixinet.net:8443
404 page not found

Although the ingress is still available (it returns a 404 because there is no pod behind the service):

❯ kubectl get ingress -n my-vcluster
NAME                                CLASS     HOSTS                               ADDRESS                            PORTS   AGE
dummyhttp-x-default-x-my-vcluster   traefik   vcluster-dummyhttp.lo.mixinet.net   172.20.0.2,172.20.0.3,172.20.0.4   80      120m

In fact, the same problem happens when we try to connect to the vcluster API; the error shown by kubectl is related to the TLS certificate, because the 404 page uses the wildcard certificate instead of the self-signed one:

❯ vkubectl get pods
Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
❯ curl -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/
404 page not found
❯ curl -v -s https://my-vcluster-api.lo.mixinet.net:8443/api/v1/ 2>&1 | grep subject
*  subject: CN=lo.mixinet.net
*  subjectAltName: host "my-vcluster-api.lo.mixinet.net" matched cert's "*.lo.mixinet.net"

Resuming the vcluster

When we want to use the virtual cluster again we just need to use the resume command:

❯ vcluster resume my-vcluster
12:03:14 done Successfully resumed vcluster my-vcluster in namespace my-vcluster

Once all the pods are running the virtual cluster goes back to its previous state, although of course all of them have been restarted.

Cleaning up

The virtual cluster can be removed using the delete command:

❯ vcluster delete my-vcluster
12:09:18 info Delete vcluster my-vcluster...
12:09:18 done Successfully deleted virtual cluster my-vcluster in namespace my-vcluster
12:09:18 done Successfully deleted virtual cluster namespace my-vcluster
12:09:18 info Waiting for virtual cluster to be deleted...
12:09:50 done Virtual Cluster is deleted

That removes everything we used in this post except the sealed secrets and secrets that we put in the my-vcluster-default namespace, because that namespace was created by us and not by vcluster.

If we delete the namespace all the secrets and sealed secrets on it are also removed:

❯ kubectl delete namespace my-vcluster-default
namespace "my-vcluster-default" deleted

Conclusions

I believe that the use of virtual clusters can be a good option for two use cases that I’ve encountered in real projects in the past:

  • the need for short-lived clusters for developers or teams,
  • the execution of integration tests from CI pipelines that require a complete cluster (the tests can be run on virtual clusters that are created on demand or paused and resumed when needed).

For both cases things can be set up using the Apache-licensed product, although evaluating the vCluster Platform offering could also be interesting.

In any case, when not everything is done inside Kubernetes we will also have to figure out how to manage the external services (i.e. if we use databases or message buses as SaaS instead of deploying them inside our clusters, we need a way of creating, deleting, or pausing and resuming those services).

12 May, 2025 11:00AM

Taavi Väänänen

lua entry thread aborted: runtime error: bad request

The Wikimedia Cloud VPS shared web proxy has an interesting architecture: the management API writes an entry for each proxy to a Redis database, and the web server in use (Nginx with Lua support from ngx_http_lua_module) looks up the backend server URL from Redis for each request. This is maybe not how I would design this today, but the basic design dates back to 2013 and has served us well ever since.
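
Just to illustrate the shape of that per-request lookup (the key naming below is purely an assumption, not the actual schema used by the proxy), from the shell it would look something like this:

# Hypothetical example: fetch the backend URL for a given hostname from
# the local Redis instance (the key layout is an assumption).
redis-cli -h 127.0.0.1 -p 6379 GET "proxy:codesearch.wmcloud.org"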

However, with a recent operating system upgrade to Debian 12 (we run Nginx from the packages in Debian's repositories), we started seeing mysterious errors that looked like this:

2025/04/30 07:24:25 [error] 82656#82656: *5612 lua entry thread aborted: runtime error: /etc/nginx/lua/domainproxy.lua:32: bad request
stack traceback:
coroutine 0:
[C]: in function 'set_keepalive'
/etc/nginx/lua/domainproxy.lua:32: in function 'redis_shutdown'
/etc/nginx/lua/domainproxy.lua:48: in main chunk, client: [redacted], server: *.wmcloud.org, request: "GET [redacted] HTTP/2.0", host: "codesearch.wmcloud.org", referrer: "https://codesearch.wmcloud.org/search/"

The code in question seems straightforward enough:

function redis_shutdown()
 -- Use a connection pool of 256 connections with a 32s idle timeout
 -- This also closes the current redis connection.
 red:set_keepalive(1000 * 32, 256) -- line 32
end

When searching for this error online, you'll end up finding advice like "the resty.redis object instance cannot be stored in a Lua variable at the Lua module level". However, our code already stores it as a local variable:

local redis = require 'nginx.redis'
local red = redis:new()
red:set_timeout(1000)
red:connect('127.0.0.1', 6379)

Turns out the issue was with the function definition: functions can also be defined as local. Without that, something somewhere in some situations seems to reference the variables from other requests, instead of using the Redis connection for the current request. (Don't ask me what changed between Debian 11 and 12 to make this only break now.) So we needed to change our function definition to this instead:

local function redis_shutdown()
 -- Use a connection pool of 256 connections with a 32s idle timeout
 -- This also closes the current redis connection.
 red:set_keepalive(1000 * 32, 256)
end

I spent almost an entire workday looking for this, ultimately making a two-line patch to fix the issue. Hopefully by publishing this post I can save that time for everyone else who stumbles upon the same problem after me.

12 May, 2025 12:00AM by Taavi Väänänen (hi@taavi.wtf)

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: DebConf 25 preparations, PyPA tools updates, Removing libcrypt-dev from build-essential and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-04

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

DebConf 25 Preparations, by Stefano Rivera and Santiago Ruano Rincón

DebConf 25 preparations continue. In April, the bursary team reviewed and ranked bursary applications. Santiago Ruano Rincón examined the current state of the conference’s finances, to see if we could allocate any more money to bursaries. Stefano Rivera supported the bursary team’s work with infrastructure and advice and added some metrics to assist Santiago’s budget review. Santiago was also involved in different parts of the organization, including Content team matters (such as reviewing the first batch of proposals and preparing public information about the new Academic Track) and coordinating different aspects of the Day trip activities and the Conference Dinner.

PyPA tools updates, by Stefano Rivera

Around the beginning of the freeze (in retrospect, definitely too late), Stefano looked at updating setuptools in the archive to 78.1.0. This brings support for more comprehensive license expressions (PEP-639), which people are expected to adopt soon upstream. While the reverse-autopkgtests all passed, it came with some unexpected complications and turned into a mini-transition. The new setuptools broke shebangs for scripts (pypa/setuptools#4952).

It also required a bump of wheel to 0.46, and wheel 0.46 now has a dependency outside the standard library (it de-vendored packaging). This meant it was no longer suitable to distribute a standalone wheel.whl file to seed into new virtualenvs, as virtualenv does by default. The good news here is that setuptools doesn't need wheel any more: it has included its own implementation of the bdist_wheel command since 70.1. But the world hadn't adapted to take advantage of this yet. Stefano scrambled to get all of these issues resolved upstream and in Debian:

We’re now at the point where python3-wheel-whl is no longer needed in Debian unstable, and it should migrate to trixie.

Removing libcrypt-dev from build-essential, by Helmut Grohne

The crypt function was originally part of glibc, but it got separated to libxcrypt. As a result, libc6-dev now depends on libcrypt-dev. This poses a cycle during architecture cross bootstrap. As the number of packages actually using crypt is relatively small, Helmut proposed removing the dependency. He analyzed an archive rebuild kindly performed by Santiago Vila (not affiliated with Freexian) and estimated the necessary changes. It looks like we may complete this with modifications to less than 300 source packages in the forky cycle. Half of the bugs have been filed at this time. They are tracked with libcrypt-* usertags.

Miscellaneous contributions

  • Carles uploaded a new version of simplemonitor.
  • Carles improved the documentation of salsa-ci-team/pipeline regarding piuparts arguments.
  • Carles closed an FTBFS on gcc-15 on qnetload.
  • Carles worked on Catalan translations using po-debconf-manager: reviewed 57 translations and created their merge requests in salsa, and created 59 bug reports for packages that hadn't merged in more than 30 days. Followed up on merge requests and comments in bug reports. Managed some translations manually for packages that are not in Salsa.
  • Lucas did some work on the DebConf Content and Bursary teams.
  • Lucas fixed multiple CVEs and bugs involving the upgrade from bookworm to trixie in ruby3.3.
  • Lucas fixed a CVE in valkey in unstable.
  • Stefano updated beautifulsoup4, python-authlib, python-html2text, python-packaging, python-pip, python-soupsieve, and unidecode.
  • Stefano packaged python-dependency-groups, a new vendored library in python-pip.
  • During an afternoon Bug Squashing Party in Montevideo, Santiago uploaded a couple of packages fixing RC bugs #1057226 and #1102487. The latter was a sponsored upload.
  • Thorsten uploaded new upstream versions of brlaser, ptouch-driver and sane-airscan to get the latest upstream bug fixes into Trixie.
  • Raphaël filed an upstream bug on zim for a graphical glitch that he has been experiencing.
  • Colin Watson upgraded openssh to 10.0p1 (also known as 10.0p2), and debugged various follow-up bugs. This included adding riscv64 support to vmdb2 in passing, and enabling native wtmpdb support so that wtmpdb last now reports the correct tty for SSH connections.
  • Colin fixed dput-ng's --override option, which had never previously worked.
  • Colin fixed a security bug in debmirror.
  • Colin did his usual routine work on the Python team: 21 packages upgraded to new upstream versions, 8 CVEs fixed, and about 25 release-critical bugs fixed.
  • Helmut filed patches for 21 cross build failures.
  • Helmut uploaded a new version of debvm featuring a new tool debefivm-create to generate EFI-bootable disk images compatible with other tools such as libvirt or VirtualBox. Much of the work was prototyped in earlier months. This generalizes mmdebstrap-autopkgtest-build-qemu.
  • Helmut continued reporting undeclared file conflicts and suggested package removals from unstable.
  • Helmut proposed build profiles for libftdi1 and gnupg2 to deal with recently added dependencies in the architecture cross bootstrap package set.
  • Helmut managed the /usr-move transition. He worked on ensuring that systemd would comply with Debian’s policy. Dumat continues to locate problems here and there yielding discussion occasionally. He sent a patch for an upgrade problem in zutils.
  • Anupa worked with the Debian publicity team to publish Micronews and Bits posts.
  • Anupa worked with the DebConf 25 content team to review talk and event proposals for DebConf 25.

12 May, 2025 12:00AM by Anupa Ann Joseph

May 11, 2025

Sergio Durigan Junior

Debian Bug Squashing Party Brazil 2025

With the trixie release approaching, I had the idea back in April to organize a bug squashing party with the Debian Brasil community. I believe the outcome was very positive, and we were able to tackle and fix quite a number of release-critical bugs. This is a brief report of what we did.

A remote BSP

It’s not the first time I’ve organized a BSP: back in 2019, I helped throw another similar party in Toronto. The difference this time is that, because Brazil is a big country and (perhaps most importantly) because I’m not currently living there, the BSP had to be done online.

I’m a fan of social interactions (especially with the Brazilian community), and in my experience we usually can achieve much more when we get together in a physical place, but hey, you gotta do what you gotta do…

Most (if not all) of the folks interested in participating had busy weekdays, so it was decided that we would meet during the weekends and try to work on a few bugs over Jitsi. Nothing stopped people from working on bugs during the week as well, of course.

A tag to rule them all

We used the bsp-2025-04-brazil usertag to mark those bugs that were touched by us. You can see the full list of bugs here, although the current list (as of 2025-05-11) is smaller than the one we had by the end of April. I don’t know what happened; maybe it’s some glitch with the BTS, or maybe someone removed the usertag by mistake.

Stats

In total, we had:

  • 7 participants
  • 37 bugs handled. Of those,
  • 35 bugs fixed

The BSP officially started on 04 April 2025, and ended on 30 April 2025. I was able to attend meetings during two weekends; other people participated more sporadically.

Outcome

As I said above, the Debian Brasil community is great and very engaged in the project. Speaking more specifically about the Debian Brasil Devel group, I can say that we have contributors with strong technical skills, and I really love that we have this inclusive, extremely technical culture where debugging and understanding things is really core to pretty much all our discussions.

We already meet weekly on Thursdays to talk shop and help newcomers, so having a remote BSP with this group seemed like a logical thing to do. I’m really glad to see our results and even happier to hear positive feedback from the community during the last MiniDebConf in Maceió.

There’s some interest in organizing another BSP, this time face-to-face and during the next DebConf. I’m all for it, as I love fixing bugs and having a great time with friends. If you’re interested in attending, let me know.

Thanks, and until next time.

11 May, 2025 10:00PM

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

This is bits from the DPL for April.

End of 10

I am sure I was speaking in the interest of the whole project when joining the "End of 10" campaign. Here is what I wrote to the initiators:

Hi Joseph and all drivers of the "End of 10" campaign,

On behalf of the entire Debian project, I would like to say that we proudly join your great campaign. We stand with you in promoting Free Software, defending users' freedoms, and protecting our planet by avoiding unnecessary hardware waste. Thank you for leading this important initiative.

Andreas Tille
Debian Project Leader

I have some goals I would like to share with you for my second term.

Ftpmaster delegation

This splits up into tasks that can be done before and after Trixie release.

Before Trixie:

1. Reducing Barriers to DFSG Compliance Checks

Back in 2002, Debian established a way to distribute cryptographic software in the main archive, whereas such software had previously been restricted to the non-US archive. One result of this arrangement which influences our workflow is that all packages uploaded to the NEW queue must remain on the server that hosts it. This requirement means that members of the ftpmaster team must log in to that specific machine, where they are limited to a restricted set of tools for reviewing uploaded code.

This setup may act as a barrier to participation--particularly for contributors who might otherwise assist with reviewing packages for DFSG compliance. I believe it is time to reassess this limitation and work toward removing such hurdles.

In October last year, we had some initial contact with SPI's legal counsel, who noted that US regulations around cryptography have been relaxed somewhat in recent years (as of 2021). This suggests it may now be possible to revisit and potentially revise the conditions under which we manage cryptographic software in the NEW queue.

I plan to investigate this further. If you have expertise in software or export control law and are interested in helping with this topic, please get in touch with me.

The ultimate goal is to make it easier for more people to contribute to ensuring that code in the NEW queue complies with the DFSG.

2. Discussing Alternatives

My chances to reach out to other distributions remained limited. However, regarding the processing of new software, I learned that OpenSUSE uses a Git-based workflow that requires five "LGTM" approvals from a group of trusted developers. As far as I know, Fedora follows a similar approach.

Inspired by this, a recent community initiative--the Gateway to NEW project--enables peer review of new packages for DFSG compliance before they enter the NEW queue. This effort allows anyone to contribute by reviewing packages and flagging potential issues in advance via Git. I particularly appreciate that the DFSG review is coupled with CI, allowing for both license and technical evaluation.

While this process currently results in some duplication of work--since final reviews are still performed by the ftpmaster team--it offers a valuable opportunity to catch issues early and improve the overall quality of uploads. If the community sees long-term value in this approach, it could serve as a basis for evolving our workflows. Integrating it more closely into DAK could streamline the process, and we've recently seen that merge requests reflecting community suggestions can be accepted promptly.

For now, I would like to gather opinions about how such initiatives could best complement the current NEW processing, and whether greater consensus on trusted peer review could help reduce the burden on the team doing DFSG compliance checks. Submitting packages for review and automated testing before uploading can improve quality and encourage broader participation in safeguarding Debian's Free Software principles.

My explicit thanks go out to the Gateway to NEW team for their valuable and forward-looking contribution to Debian.

3. Documenting Critical Workflows

Past ftpmaster trainees have told me that understanding the full set of ftpmaster workflows can be quite difficult. While there is some useful documentation – thanks in particular to Sean Whitton for his work on documenting NEW processing rules – many other important tasks carried out by the ftpmaster team remain undocumented or only partially so.

Comprehensive and accessible documentation would greatly benefit current and future team members, especially those onboarding or assisting in specific workflows. It would also help ensure continuity and transparency in how critical parts of the archive are managed.

If such documentation already exists and I have simply overlooked it, I would be happy to be corrected. Otherwise, I believe this is an area where we need to improve significantly. Volunteers with a talent for writing technical documentation are warmly invited to contact me--I'd be happy to help establish connections with ftpmaster team members who are willing to share their knowledge so that it can be written down and preserved.

Once Trixie is released (hopefully before DebConf):

4. Split of the Ftpmaster Team into DFSG and Archive Teams

As discussed during the "Meet the ftpteam" BoF at DebConf24, I would like to propose a structural refinement of the current Ftpmaster team by introducing two different delegated teams:

  1. DFSG Team
  2. Archive Team (responsible for DAK maintenance and process tooling, including releases)

(Alternative name suggestions are, of course, welcome.) The primary task of the DFSG team would be the processing of the NEW queue and ensuring that packages comply with the DFSG. The Archive team would focus on maintaining DAK and handling the technical aspects of archive management.

I am aware that, in the recent past, the ftpmaster team has decided not to actively seek new members. While I respect the autonomy of each team, the resulting lack of a recruitment pipeline has led to some friction and concern within the wider community, including myself. As Debian Project Leader, it is my responsibility to ensure the long-term sustainability and resilience of our project, which includes fostering an environment where new contributors can join and existing teams remain effective and well-supported. Therefore, even if the current team does not prioritize recruitment, I will actively seek and encourage new contributors for both teams, with the aim of supporting openness and collaboration.

This proposal is not intended as criticism of the current team's dedication or achievements--on the contrary, I am grateful for the hard work and commitment shown, often under challenging circumstances. My intention is to help address the structural issues that have made onboarding and specialization difficult and to ensure that both teams are well-supported for the future.

I also believe that both teams should regularly inform the Debian community about the policies and procedures they apply. I welcome any suggestions for a more detailed description of the tasks involved, as well as feedback on how best to implement this change in a way that supports collaboration and transparency.

My intention with this proposal is to foster a more open and effective working environment, and I am committed to working with all involved to ensure that any changes are made collaboratively and with respect for the important work already being done.

I'm aware that the ideas outlined above touch on core parts of how Debian operates and involve responsibilities across multiple teams. These are not small changes, and implementing them will require thoughtful discussion and collaboration.

To move this forward, I've registered a dedicated BoF for DebConf. To make the most of that opportunity, I'm looking for volunteers who feel committed to improving our workflows and processes. With your help, we can prepare concrete and sensible proposals in advance--so the limited time of the BoF can be used effectively for decision-making and consensus-building.

In short: I need your help to bring these changes to life. From my experience in my last term, I know that when it truly matters, the Debian community comes together--and I trust that spirit will guide us again.

Please also note: we had a "Call for volunteers" five years ago, and much of what was written there still holds true today. I've been told that the response back then was overwhelming--but that training such a large number of volunteers didn't scale well. This time, I hope we can find a more sustainable approach: training a few dedicated people first, and then enabling them to pass on their knowledge. This will also be a topic at the DebCamp sprint.

Dealing with Dormant Packages

Debian was founded on the principle that each piece of software should be maintained by someone with expertise in it--typically a single, responsible maintainer. This model formed the historical foundation of Debian's packaging system and helped establish high standards of quality and accountability. However, as the project has grown and the number of packages has expanded, this model no longer scales well in all areas. Team maintenance has since emerged as a practical complement, allowing multiple contributors to share responsibility and reduce bottlenecks--depending on each team's internal policy.

While working on the Bug of the Day initiative, I observed a significant number of packages that have not been updated in a long time. In the case of team-maintained packages, addressing this is often straightforward: team uploads can be made, or the team can be asked whether the package should be removed. We've also identified many packages that would fit well under the umbrella of active teams, such as language teams like Debian Perl and Debian Python, or blends like Debian Games and Debian Multimedia. Often, no one has taken action--not because of disagreement, but simply due to inattention or a lack of initiative.

In addition, we've found several packages that probably should be removed entirely. In those cases, we've filed bugs with pre-removal warnings, which can later be escalated to removal requests.

When a package is still formally maintained by an individual, but shows signs of neglect (e.g., no uploads for years, unfixed RC bugs, failing autopkgtests), we currently have three main tools:

  1. The MIA process, which handles inactive or unreachable maintainers.
  2. Package Salvaging, which allows contributors to take over maintenance if conditions are met.
  3. Non-Maintainer Uploads (NMUs), which are limited to specific, well-defined fixes (which do not include things like migration to Salsa).

These mechanisms are important and valuable, but they don't always allow us to react swiftly or comprehensively enough. Our tools for identifying packages that are effectively unmaintained are relatively weak, and the thresholds for taking action are often high.

The Package Salvage team is currently trialing a process we've provisionally called "Intend to NMU" (ITN). The name is admittedly questionable--some have suggested alternatives like "Intent to Orphan"--and discussion about this is ongoing on debian-devel. The mechanism is intended for situations where packages appear inactive but aren't yet formally orphaned, introducing a clear 21-day notice period before NMUs, similar in spirit to the existing ITS process. The discussion has sparked suggestions for expanding NMU rules.

While it is crucial not to undermine the autonomy of maintainers who remain actively involved, we also must not allow a strict interpretation of this autonomy to block needed improvements to obviously neglected packages.

To be clear: I do not propose to change the rights of maintainers who are clearly active and invested in their packages. That model has served us well. However, we must also be honest that, in some cases, maintainers stop contributing--quietly and without transition plans. In those situations, we need more agile and scalable procedures to uphold Debian's high standards.

To that end, I've registered a BoF session for DebConf25 to discuss potential improvements in how we handle dormant packages. These discussions will be prepared during a sprint at DebCamp, where I hope to work with others on concrete ideas.

Among the topics I want to revisit is my proposal from last November on debian-devel, titled "Barriers between packages and other people". While the thread prompted substantial discussion, it understandably didn't lead to consensus. I intend to ensure the various viewpoints are fairly summarised--ideally by someone with a more neutral stance than myself--and, if possible, work toward a formal proposal during the DebCamp sprint to present at the DebConf BoF.

My hope is that we can agree on mechanisms that allow us to act more effectively in situations where formerly very active volunteers have, for whatever reason, moved on. That way, we can protect both Debian's quality and its collaborative spirit.

Building Sustainable Funding for Debian

Debian incurs ongoing expenses to support its infrastructure--particularly hardware maintenance and upgrades--as well as to fund in-person meetings like sprints and mini-DebConfs. These investments are essential to our continued success: they enable productive collaboration and ensure the robustness of the operating system we provide to users and derivative distributions around the world.

While DebConf benefits from generous sponsorship, and we regularly receive donated hardware, there is still considerable room to grow our financial base--especially to support less visible but equally critical activities. One key goal is to establish a more constant and predictable stream of income, helping Debian plan ahead and respond more flexibly to emerging needs.

This presents an excellent opportunity for contributors who may not be involved in packaging or technical development. Many of us in Debian are engineers first--and fundraising is not something we've been trained to do. But just like technical work, building sustainable funding requires expertise and long-term engagement.

If you're someone who's passionate about Free Software and has experience with fundraising, donor outreach, sponsorship acquisition, or nonprofit development strategy, we would deeply value your help. Supporting Debian doesn't have to mean writing code. Helping us build a steady and reliable financial foundation is just as important--and could make a lasting impact.

Kind regards Andreas.

PS: In April I also planted my 5000th tree and while this is off-topic here I'm proud to share this information with my fellow Debian friends.

11 May, 2025 10:00PM by Andreas Tille

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSMC 0.2.8 on CRAN: Maintenance

Release 0.2.8 of our RcppSMC package arrived at CRAN yesterday. RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts. The package now also features the Google Summer of Code work by Leah South in 2017, and by Ilya Zarubin in 2021.

This release is somewhat procedural and contains solely maintenance, either for items now highlighted by the R and CRAN package checks, or to package internals. We had made those changes at the GitHub repo over time since the last release two years ago, and it seemed like a good time to get them to CRAN now.

The release is summarized below.

Changes in RcppSMC version 0.2.8 (2025-05-10)

  • Updated continuous integration script

  • Updated package metadata now using Authors@R

  • Corrected use of itemized list in one manual page

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More information is on the RcppSMC page and the repo. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 May, 2025 04:25PM

May 10, 2025

Taavi Väänänen

Wikimedia Hackathon Istanbul 2025

It's that time of the year again: the Wikimedia Hackathon 2025 happened last weekend in Istanbul. This year was my third time attending what has quickly become one of my favourite events of the year simply due to the concentration of friends and other like-minded nerds in a single location.1

Valerio, Lucas, me and a shark.

Image by Chlod Alejandro is licensed under CC BY-SA 4.0.

This year I did a short presentation about the MediaWiki packages in Debian (slides), which is something I do but I suspect is fairly obscure to most people in the MediaWiki community. I was hoping to do some work on reproducibility of MediaWiki releases, but other interests (plus lack of people involved in the release process at the hackathon) meant that I didn't end up getting any work done on that (assuming this does not count).

Other long-standing projects did end up getting some work done! MusikAnimal and I ended up fixing the Commons deletion notification bot, which had been broken for well over two years at that point (and was at some point in the hackathon plans for last year for both of us). Other projects that I made progress on include supporting multiple types of two-factor devices, and LibraryUpgrader which gained support for rebasing and updating existing patches2.

In addition to hacking, the other highlight of these events is the hallway track. Some of the crowd is people who I've seen at previous events and/or interact very frequently with, but there are also significant parts of the community and the Foundation that I don't usually get to interact with outside of these events. (Although it still feels extremely weird to hear from various mostly-WMF people with whom I haven't spoken before that they've heard various (usually positive) rumours and stories about me.)

Unfortunately we did not end up having a Cuteness Association meetup this year, but we had an impromptu PGP key signing party which is basically almost as good, right?

However, I did continue a tradition from last year: I ended up nominating Chlod, a friend of mine, to receive +2 access to mediawiki/* during the hackathon. The request is due to be closed sometime tomorrow.

(Usual disclosure: My travel was funded by the Wikimedia Foundation. Thank you! This is my personal blog and these are my own opinions.)

Now that you've read this post, maybe check out posts from others?


  1. Unfortunately you can never have absolutely everyone attending :( ↩︎

  2. Amir, I still have not forgiven you about this. ↩︎

10 May, 2025 12:00AM by Taavi Väänänen (hi@taavi.wtf)

May 09, 2025

Uwe Kleine-König

The Linux kernel's PGP Web of Trust

The Linux kernel's development process makes use of PGP. The most relevant part here is that subsystem maintainers are supposed to use signed tags in their pull requests to Linus Torvalds. As the concept of keyservers is considered broken, Konstantin Ryabitsev maintains a collection of relevant keys in a git repository.

As of today (at commit a0bc65fb27f5033beddf9d1ad97d67c353849be2) there are 602 valid keys tracked in that repository. The requirement for a key to be added there is that there must be at least one trust path from Linus Torvalds' key to this key of length at most 5 within that keyring.

Occasionally it happens that a key loses its trust paths because someone in these paths replaced their key, or keys expired. Currently this affects 2 keys.

However there is a problem on the horizon: GnuPG 2.4.x started to reject third-party key signatures using the SHA-1 hash algorithm. In general that's good: SHA-1 hasn't been considered secure for more than 20 years. This doesn't directly affect the kernel-pgpkeys repo, because the trust path checking doesn't rely on GnuPG trusting the signatures; there is a dedicated tool that parses the keyring contents and currently accepts signatures using SHA-1. Also signatures are usually not thrown away, but there are exceptions: recently Theodore Ts'o asked to update his certificate. When Konstantin imported the updated certificate, GnuPG's "cleaning" was applied, which dropped all SHA-1 signatures. So Theodore Ts'o's key lost 168 signatures, among them one by Linus Torvalds on his primary UID.

That made me wonder what would be the effect on the web of trust if all SHA-1 signatures were dropped. Here are the facts:

  • There are 7976 signatures tracked in the korg-pgpkeys repo that are considered valid; 6045 of them use SHA-1.

  • Only considering the primary UID, Linus Torvalds directly signed 40 public keys, 38 of these using SHA-1. One of the two keys that is still "properly" signed doesn't sign any other key, so nearly all trust paths go through a single key.

  • When not considering SHA-1 signatures there are 485 public keys without a trust path from Linus Torvalds of length 5 or less. So today these 485 public keys would not qualify to be added to the pgpkeys git repository. Among the people being dropped are Andrew Morton, Greg Kroah-Hartman, H. Peter Anvin, Ingo Molnar, Junio C Hamano, Konstantin Ryabitsev, Peter Zijlstra, Stephen Rothwell and Thomas Gleixner.

  • The size of the kernel strong set is reduced from 358 to 94.

If you attend Embedded Recipes 2025 next week, there is an opportunity to improve the situation: Together with Ahmad Fatoum I'm organizing a keysigning session. If you want to participate, send your public key to er2025-keysigning@baylibre.com before 2025-05-12 08:00 UTC.
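
If you have never mailed a key for a keysigning before, a minimal way to produce an ASCII-armored copy of your public key to attach to that mail is the following (replace the key ID with your own):

# Export your public key in ASCII-armored form so it can be attached
# to the e-mail (adjust YOUR_KEY_ID to your own key ID or address).
gpg --armor --export YOUR_KEY_ID > mykey.asc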

09 May, 2025 07:29PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.22 on CRAN: New Upstream

Version 0.0.22 of RcppSpdlog arrived on CRAN today and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.3 of spdlog, which was released this morning, and includes version 11.2.0 of fmt.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.22 (2025-05-09)

  • Upgraded to upstream release spdlog 1.15.3 (including fmt 11.2.0)

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

09 May, 2025 06:55PM

Abhijith PA

Bug squashing party, Kochi

Last weekend, 4 people (3 DDs and 1 soon to be, hopefully in the coming months) sat together for a Bug squashing party in Kochi. We fixed a lot of things, including my broken autopkgtest setup.

BSP-Kochi

It all began with a discussion in #debian-in about never having had any BSPs in India, which then twisted into me hosting one. I fixed the dates as 3rd & 4th May so that packages fixed with NMUs could migrate naturally to testing before the hard freeze on 15th May.

Finding a venue was a huge challenge. Unlike other places, we have very limited options for hackerspaces. We also had some company spaces (if we asked), but we would have had to follow their office timings, and finding accommodation nearby was also a challenge.

Later we decided to go with a rental apartment where we could hack all night and sleep. We booked a very bare minimal apartment for 3 nights and 3 days. I updated the wiki page and sent the announcement.

There wasn't even Wi-Fi in the apartment, so we set up everything by ourselves (DebConf style :p ). I shortlisted some newbie bugs, just in case newcomers joined the party. But in the end it was only the 4 of us, plus Kathara who joined remotely.

We started on the night of May 2nd, stocked our cabin with snacks, instant noodles and drinks, arranged beds and tables, and started hacking and having discussions. My autopkgtest-lxc setup was broken. I think it's related to #1017753, which got fixed magically, and I have now started using autopkgtest-podman.
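
For reference, running a package's tests with the podman backend looks roughly like this (a sketch only; the .dsc path and the image name are examples, any suitable Debian sid image prepared for autopkgtest should work):

# Run the autopkgtests of a source package inside a podman container.
autopkgtest ./mypackage_1.0-1.dsc -- podman debian:sid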

stack

I learned

  • the reportbug tool can use its own SMTP server by default
  • autoremovals can be extended if we ping the bug report

On the last day, we went to a nice restaurant and had food. There was a church festival nearby, so we were able to watch a wonderful procession and fireworks at night.

food

All in all we managed to touch 46 bugs, of which 35 are now fixed/done and 11 remain open; some of these will get the status done when the fixes reach testing. It was a fun and productive weekend. More importantly, we had fun.

09 May, 2025 04:46PM

Reproducible Builds (diffoscope)

diffoscope 295 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 295. This version includes the following changes:

[ Chris Lamb ]
* Use --walk over the potentially dangerous --scan argument of zipdetails(1).
  (Closes: reproducible-builds/diffoscope#406)

You find out more by visiting the project homepage.

09 May, 2025 12:00AM

May 08, 2025

Thorsten Alteholz

My Debian Activities in April 2025

Debian LTS

This was my hundred-thirtieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4145-1] expat security update of one CVE related to a crash within XML_ResumeParser() because XML_StopParser() can stop/suspend an unstarted parser.
  • [DLA 4146-1] libxml2 security update to fix two CVEs related to an out-of-bounds memory access in the Python API and a heap-buffer-overflow.
  • [debdiff] sent libxml2 debdiff to maintainer for update of two CVEs in Bookworm.
  • [debdiff] sent libxml2 debdiff to maintainer for update of two CVEs in Unstable.

This month I did a week of FD duties. I also started to work on libxmltok. Adrian suggested also checking the CVEs that might affect the embedded version of expat. Unfortunately that is quite a bunch of CVEs to check and the month ended before the upload; I hope to finish this in May. Last but not least I continued to work on the second batch of fixes for suricata CVEs.

Debian ELTS

This month was the eighty-first ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1411-1] expat security update to fix one CVE in Stretch and Buster related to a crash within XML_ResumeParser() because XML_StopParser() can stop/suspend an unstarted parser.
  • [ELA-1412-1] libxml2 security update to fix two CVEs in Jessie, Stretch and Buster related to an out-of-bounds memory access in the Python API and a heap-buffer-overflow.

This month I did a week of FD duties.
I also started to work on libxmltok. Normally I work on machines running Bullseye or Bookworm. As the Stretch version of libxmltok needs debhelper version 5, which is no longer supported on Bullseye, I had to create a separate Buster VM. Yes, Stretch is becoming old. As with LTS, I also need to check the CVEs that might affect the embedded version of expat.
Last but not least I started to work on the second batch of fixes for suricata CVEs.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

misc

This month I uploaded new packages or new upstream or bugfix versions of:

bottlerocket was my first upload via debusine. It is a really cool tool and I can only recommend everybody to give it at least a try.
I finally filed an RM bug for siggen. I don’t think that fixing all the gcc-14 issues is really worth the hassle.

FTP master

This month I accepted 307 and rejected 55 packages. The overall number of packages that got accepted was 308.

08 May, 2025 12:05PM by alteholz

May 07, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

procmail versus exim filters

I’ve been using Procmail to filter mail for a long time. Reading Antoine’s blog post procmail considered harmful, I felt motivated (and shamed) into migrating to something else. Luckily, Enrico has shared a detailed roadmap for moving to Sieve, in particular Dovecot's Sieve implementation (which provides "pipe" and "filter" extensions).

My MTA is Exim, and for my first foray into this, I didn't want to change that1. Exim provides two filtering languages for users: an implementation of Sieve, and its own filter language.

Requirements

A good first step is to look at what I'm using Procmail for:

  1. I invoke external mail filters: processes which read the mail and emit a possibly altered mail (headers added, etc.). In particular, crm114 (which has worked remarkably well for me) to classify mail as spam or not, and dsafilter, to mark up Debian Security Advisories

  2. I file messages into different folders depending on the outcome of the above filters

  3. I drop ("killfile") mail from some sender addresses (persistent pests on mailing lists); mails containing certain hosts in the References header (as an imperfect way of dropping mailing list threads which are replies to someone I've killfiled); mail encoded in a character set for a language I can't read (Russian, Korean, etc.); and several other simple static rules

  4. I move mailing list mail into folders, semi-automatically (see list filtering)

  5. I strip "tagged" subjects for some mailing lists: i.e., incoming mail has subjects like "[cs-historic-committee] help moving several tons of IBM360", and I don't want the "[cs-historic-committee]" bit.

  6. I file a copy of some messages, the name of which is partly derived from the current calendar year

Exim Filters

I want to continue to do (1), which rules out Exim's implementation of Sieve, which does not support invoking external programs. Exim's own filter language has a pipe function that might do what I need, so let's look at how to achieve the above with Exim Filters.

autolists

Here's an autolist recipe for Debian's mailing lists, in Exim filter language. Contrast with the Procmail in list filtering:

if $header_list-id matches "(debian.*)\.lists\.debian\.org"
then
  save Maildir/l/$1/
  finish
endif

Hands down, the exim filter is nicer (although some of the rules on escape characters in exim filters, not demonstrated here, are byzantine).

killfile

An ideal chunk of configuration for kill-filing a list of addresses is light on boiler plate, and easy to add more addresses to in the future. This is the best I could come up with:

if foranyaddress "someone@example.org,\
                  another@example.net,\
                  especially-bad.example.com,\
                 "
   ($reply_address contains $thisaddress
    or $header_references contains $thisaddress)
then finish endif

I won't bother sharing the equivalent Procmail but it's pretty comparable: the exim filter is no great improvement.

It would be lovely if the list of addresses could be stored elsewhere, such as a simple text file, one line per address, or even a database. Exim's own configuration language (distinct from this filter language) has some nice mechanisms for reading lists of things like addresses from files or databases. Sadly it seems the filter language lacks anything similar.

external filters

With Procmail, I pass the mail to an external program, and then read the output of that program back, as the new content of the mail, which continues to be filtered: subsequent filter rules inspect the headers to see what the outcome of the filter was (is it spam?) and to decide what to do accordingly. Crucially, we also check the return status of the filter, to handle the case when it fails.

With Exim filters, we can use pipe to invoke an external program:

pipe "$home/mail/mailreaver.crm -u $home/mail/"

However, this is not a filter: the mail is sent to the external program, and the exim filter's job is complete. We can't write further filter rules to continue to process the mail: the external program would have to do that; and we have no way of handling errors.

Here's Exim's documentation on what happens when the external command fails:

Most non-zero codes are treated by Exim as indicating a failure of the pipe. This is treated as a delivery failure, causing the message to be returned to its sender.

That is definitely not what I want: if the filter broke (even temporarily), Exim would seemingly generate a bounce to the sender address, which could be anything, and I wouldn't have a copy of the message.

The documentation goes on to say that some shell return codes (defaulting to 73 and 75) cause Exim to treat it as a temporary error, spool the mail and retry later on. That's a much better behaviour for my use-case. Having said that, on the rare occasions I've broken the filter, the thing which made me notice most quickly was spam hitting my inbox, which is exactly the behaviour my Procmail recipe gives me when that happens.

removing subject tagging

Here, Exim's filter language comes unstuck. There is no way to add or alter headers for a message in a user filter. Exim uses the same filter language for system-wide message filtering, and in that context it has some extra functions: headers add <string>, headers remove <string>, but (for reasons I don't know) these are not available in user filters.

copy mail to archive folder

I can't see a way to derive a folder name from the calendar year.

next steps

Exim's Sieve implementation and its own filter language are ruled out as Procmail replacements because they can't do at least two of the things I need to do.

However, based on Enrico's write-up, it looks like Dovecot's Sieve implementation probably can. I was also recommended maildrop, which I might look at if Dovecot Sieve doesn't pan out.


  1. I should revisit this requirement because I could probably reconfigure exim to run my spam classifier at the system level, obviating the need to do it in a user filter, and also raising the opportunity to do smtp-time rejection based on the outcome

07 May, 2025 10:16AM

May 06, 2025

Enrico Zini

Python-like abspath for C++

Python's os.path.abspath and Path.absolute are great: you give them a path, which might not exist, and you get back a path you can use regardless of the current directory. os.path.abspath will also normalize it, while Path.absolute will not by default, because with Path objects a normal form is less often needed.

This is great for normalizing input, regardless of whether it's an existing file you need to open or a new file you need to create.

In C++17, there is a filesystem library with methods with enticingly similar names, but which are almost, but not quite, totally unlike Python's abspath.
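As a quick illustration of the mismatch, here is a minimal sketch (not from the original post; it assumes a POSIX system and that missing/ does not exist under the current directory):

// A minimal sketch (not from the post) of why the standard building blocks
// differ from Python's abspath; assumes a POSIX system and that
// "missing/file.txt" does not exist under the current directory.
#include <filesystem>
#include <iostream>

int main()
{
    namespace fs = std::filesystem;
    fs::path p{"missing/../file.txt"};

    // absolute() anchors the path at the current directory but does not
    // normalize it: the ".." stays in the result.
    std::cout << fs::absolute(p) << '\n';

    // canonical() normalizes and resolves symlinks, but throws a
    // filesystem_error because the path does not exist.
    try {
        std::cout << fs::canonical(p) << '\n';
    } catch (const fs::filesystem_error& e) {
        std::cout << "canonical failed: " << e.what() << '\n';
    }
}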

Because in my C++ code I need to normalize input, regardless of whether it's an existing file I need to open or a new file I need to create, here's an apparently working Python-like abspath for C++ implemented on top of the std::filesystem library:

#include <filesystem>
#include <system_error>

std::filesystem::path abspath(const std::filesystem::path& path)
{
    // weakly_canonical is defined as "the result of calling canonical() with a
    // path argument composed of the leading elements of p that exist (as
    // determined by status(p) or status(p, ec)), if any, followed by the
    // elements of p that do not exist."
    //
    // This means that if no lead components of the path exist then the
    // resulting path is not made absolute, and we need to work around that.
    if (!path.is_absolute())
        return abspath(std::filesystem::current_path() / path);

    // This is further and needlessly complicated because we need to work
    // around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118733
    unsigned retry = 0;
    while (true)
    {
        std::error_code code;
        auto result = std::filesystem::weakly_canonical(path, code);
        if (!code)
        {
            // fprintf(stderr, "%s: ok in %u tries\n", path.c_str(), retry+1);
            return result;
        }

        if (code == std::errc::no_such_file_or_directory)
        {
            ++retry;
            if (retry > 50)
                throw std::system_error(code);
        }
        else
            throw std::system_error(code);
    }

    // Alternative implementation that however may not work on all platforms
    // since, formally, "[std::filesystem::absolute] Implementations are
    // encouraged to not consider p not existing to be an error", but they do
    // not mandate it, and if they did, they might still be affected by the
    // undefined behaviour outlined in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118733
    //
    // return std::filesystem::absolute(path).lexically_normal();
}

I added it to my wobble code repository, which is the thin repository of components I use to ease my C++ systems programming.
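For completeness, a small usage sketch of the function above (the "abspath.h" header name is my own placeholder, not part of the post; the actual results depend on the current working directory):

// Usage sketch for the abspath() shown above; "abspath.h" is a hypothetical
// header assumed to declare it.
#include <iostream>
#include "abspath.h"

int main()
{
    // Works for paths that do not exist yet, mirroring Python's os.path.abspath:
    // the result is absolute and normalized, anchored at the current directory.
    std::cout << abspath("new-output/../report.txt") << '\n';

    // Already-absolute paths are returned in normalized form.
    std::cout << abspath("/tmp/./some-dir/../file") << '\n';
}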

06 May, 2025 09:51AM

May 05, 2025

Ravi Dwivedi

A visit to Paris

After attending the 2024 LibreOffice conference in Luxembourg, I visited Paris in October 2024.

If you are wondering whether I needed another visa to cross the border into France - I didn't! Further, they are both EU members, which means you don't need to go through customs either. Thus, crossing the Luxembourg-France border is no different from crossing Indian state borders - like going from Rajasthan to Uttar Pradesh.

I took a TGV train from Luxembourg Central Station, which was within walking distance of my hostel. The train took only 2 hours and 20 minutes to cover the 300 km distance to Paris. It departed from Luxembourg at 10:00 AM and reached Paris at 12:20 PM. The ride was smooth and comfortable, and the train arrived on time. It gave me an opportunity to see the French countryside. I had booked the ticket online a couple of days earlier through the Omio website.

A train standing on a platform

TGV train I rode from Luxembourg to Paris

I planned the first day with my friend Joenio, whom I met upon arriving at Paris' Gare de l'Est station, along with his wife Mari. We went to my hostel (which was within walking distance of the station) to store my luggage, but we were informed that we needed to wait a couple of hours before I could check in. Consequently, we went to an Italian restaurant nearby for lunch, where I ordered pasta. My hostel was so unbelievably cheap by French standards (25 euros per night) that Joenio was shocked when he learned about it.

Pasta on a plate topped with Ricotta cheese

Pasta I had in Paris

Walking in the city, I noticed it had separate cycling tracks and wide footpaths, just like Luxembourg. The traffic was also organized. For instance, there were traffic lights even for pedestrian crossings, unlike India, where crossing roads can be a nightmare. Car drivers stopping for pedestrians is a big improvement over what I am used to in India. The weather was also pleasant. It was a bit on the cooler side - around 15 degrees Celsius - and I had to wear a jacket.

A cycling track in Paris

A cycling track in Paris

After lunch, we returned to my hostel for my check-in at around 3 o’clock. Then, we went to the Luxembourg Museum (Musée du Luxembourg in French), as Joenio had booked tickets for an exhibition of paintings by the Brazilian painter Tarsila do Amaral. To reach it, we took a subway train from Gare du Nord station. The Paris subway charges 2.15 euros irrespective of the distance (or number of stations) traveled, unlike other metro systems I have used.

We reached the museum at around 4 o’clock. I found the paintings beautiful, but I would have appreciated them much more if the descriptions were in English.

A building with trees on the left and right side of it and sky in the background. People can be seen in front of the building.

Luxembourg Museum

Afterward, we went to a beautiful garden just behind the museum. It served as a great spot to relax and take pictures. Following this, we walked to the Pantheon - a well-known attraction in the city. It is a church built a couple of centuries ago. It has a dome-shaped structure at the top, recognizable from far away.

A building with a garden in front of it and people sitting closer to us. Sky can be seen in the background.

A shot of the park near to the Luxembourg Museum

A building with a dome shaped structure on top. Closer to camera, roads can be seen. In the background is blue colored cloudy sky.

Pantheon, one of the attractions of Paris.

Then we went to Notre Dame after having evening snacks and coffee at a nearby bakery. Notre Dame was just over a kilometer from the Pantheon, so we walked. We also crossed the beautiful Seine river. On the way, I sampled a crêpe, a signature dish of France. The shop was named Crêperie and had many varieties of crêpe. I took the one with eggs and Emmental cheese. It was savory and delicious.

Photo with Joenio and Mari

Photo with Joenio and Mari

Notre Dame, another tourist attraction of Paris.

Notre Dame, another tourist attraction of Paris.

By the time we reached Notre Dame, it was 07:30 PM. I learned from Joenio that Notre Dame was closed and being renovated due to a fire a couple of years ago, so we just sat around and clicked photos. It is a Catholic cathedral built in the French Gothic style (I read that on Wikipedia ;)). I also read on Wikipedia that it is located on an island named Île de la Cité - I didn’t even realize we were on an island.

At night, we visited the most well-known attraction of Paris, The Eiffel Tower. We again took the subway, alighting at the Bir-Hakeim station, followed by a short walk. We reached the Eiffel Tower at 9 o’clock. It was lit bright yellow. There was not much to do there, so we just clicked photos and hung out. After that, I came back to my hostel.

The Eiffel Tower lit with bright yellow

My photo with Eiffel Tower in the background

The next day, I roamed around the city, mostly on foot. France is known for its bakeries, so I checked out a couple of local ones. I had espresso a couple of times and sampled a croissant, a pain au chocolat and a lemon meringue tartlet.

Items from left to right are: Chocolate Twist, Sugar briochette, Pain au Chocolat, Croissant with almonds, Croissant, Croissant with chocolate hazelnut filling.

Items at a bakery in Paris. Items from left to right are: Chocolate Twist, Sugar briochette, Pain au Chocolat, Croissant with almonds, Croissant, Croissant with chocolate hazelnut filling.

Here are some random shots:

The Paris subway

The Paris subway

Inside a Paris metro train

Inside a Paris subway

A random building and road in Paris

A random building and road in Paris

A shot near Seine river

A shot near Seine river

A view of Seine river

A view of Seine river

On the third day, I had my flight to India. Thus, I checked out of the hostel early in the morning and took an RER train from Gare du Nord station to the airport. It cost 11.80 euros.

I had heard that some of my friends had bad experiences in France, so I had the impression that I would not feel welcomed. Furthermore, I had encountered language problems on my previous Europe trip to Albania and Kosovo. Therefore, I learned a couple of French words, like how to say thank you and good morning, which went a long way.

However, I didn’t have bad experiences in Paris, except for one instance in which I asked my hostel’s reception about my misplaced watch and the person at the reception asked me to be “polite” by being rude. She said, “Excuse me! You don’t know how to say Good Morning?”

Overall, I enjoyed my time in Paris and would like to thank Joenio and Mari for joining me. I would also like to thank Sophie, who gave me a map of Paris.

Let’s end this post here. I’ll meet you in the next one!

Credits: Thanks to contrapunctus for reviewing this post before publishing

05 May, 2025 08:02PM

hackergotchi for Daniel Lange

Daniel Lange

Make `apt` shut up about "modernize-sources" in Trixie

Apt in Trixie (Debian 13) has the annoying habit of telling you "Notice: Some sources can be modernized. Run 'apt modernize-sources' to do so." ... every single time you run apt update. Not cool for logs and log monitoring.

And - of course - if converting were an option for you, you would have run the indicated apt modernize-sources command to convert your sources.list to "deb822 .sources format" files already. So an informational message once or twice would have done.

Well, luckily you can help yourself:

apt -o APT::Get::Update::SourceListWarnings=false will keep apt shut up. This could go into an alias or your systems management tool / update script.

Alternatively add

# Keep apt shut about preferring the "deb822" sources file format
APT::Get::Update::SourceListWarnings "false";

to /etc/apt/apt.conf.d/10quellsourceformatwarnings .

This silences the notices about sources file formats (not only the deb822 one) system-wide. That way you can decide when you can / want to migrate to the new, more verbose, apt sources format yourself.

05 May, 2025 02:14PM by Daniel Lange

hackergotchi for Sergio Talens-Oliag

Sergio Talens-Oliag

Argo CD Usage Examples

As a follow-up to my post about the use of argocd-autopilot, I’m going to deploy various applications to the cluster using Argo CD from the same repository we used in the previous post.

For our examples we are going to test a solution to the problem we had when we updated a ConfigMap used by the argocd-server (the resource was updated but the application Pod was not, because there was no change to the argocd-server Deployment); our original fix was to kill the Pod manually, but that is a manual operation we want to avoid.

The solution proposed in the Helm documentation for this kind of issue is to add annotations to the Deployments whose values are a hash of the ConfigMaps or Secrets they use; that way, if a file is updated the annotation also changes, and when the Deployment changes are applied a rollout of the Pods is triggered.

In this post we will install a couple of controllers and an application to show how we can handle Secrets with argocd and solve the issue of updates to ConfigMaps and Secrets. To do that we will execute the following tasks:

  1. Deploy the Reloader controller to our cluster. It is a tool that watches changes in ConfigMaps and Secrets and does rolling upgrades on the Pods that use them from Deployment, StatefulSet, DaemonSet or DeploymentConfig objects when they are updated (by default we have to add some annotations to the objects to make things work).
  2. Deploy a simple application that can use ConfigMaps and Secrets and test that the Reloader controller does its job when we add or update a ConfigMap.
  3. Install the Sealed Secrets controller to manage secrets inside our cluster, use it to add a secret to our sample application and see that the application is reloaded automatically.

Creating the test project for argocd-autopilot

As we did our installation using argocd-autopilot we will use its structure to manage the applications.

The first thing to do is to create a project (we will name it test) as follows:

❯ argocd-autopilot project create test
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Compressing objects: 100% (16/16), done.
Total 18 (delta 1), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO pushing new project manifest to repo
INFO project created: 'test'

Now that the test project is available we will use it on our argocd-autopilot invocations when creating applications.

Installing the reloader controller

To add the reloader application to the test project as a kustomize application and deploy it on the tools namespace with argocd-autopilot we do the following:

❯ argocd-autopilot app create reloader \
    --app 'github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2' \
    --project test --type kustomize --dest-namespace tools
INFO cloning git repository: https://forgejo.mixinet.net/blogops/argocd.git
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Compressing objects: 100% (18/18), done.
Total 19 (delta 2), reused 0 (delta 0), pack-reused 0
INFO using revision: "", installation path: "/"
INFO created 'application namespace' file at '/bootstrap/cluster-resources/in-cluster/tools-ns.yaml'
INFO committing changes to gitops repo...
INFO installed application: reloader

That command creates four files on the argocd repository:

  1. One to create the tools namespace:

    bootstrap/cluster-resources/in-cluster/tools-ns.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      annotations:
        argocd.argoproj.io/sync-options: Prune=false
      creationTimestamp: null
      name: tools
    spec: {}
    status: {}
  2. Another to include the reloader base application from the upstream repository:

    apps/reloader/base/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - github.com/stakater/Reloader/deployments/kubernetes/?ref=v1.4.2
  3. The kustomization.yaml file for the test project (by default it includes the same configuration used on the base definition, but we could make other changes if needed):

    apps/reloader/overlays/test/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: tools
    resources:
    - ../../base
  4. The config.json file used to define the application on argocd for the test project (it points to the folder that includes the previous kustomization.yaml file):

    apps/reloader/overlays/test/config.json
    {
      "appName": "reloader",
      "userGivenName": "reloader",
      "destNamespace": "tools",
      "destServer": "https://kubernetes.default.svc",
      "srcPath": "apps/reloader/overlays/test",
      "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
      "srcTargetRevision": "",
      "labels": null,
      "annotations": null
    }

We can check that the application is working using the argocd command line application:

❯ argocd app get argocd/test-reloader -o tree
Name:               argocd/test-reloader
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          tools
URL:                https://argocd.lo.mixinet.net:8443/applications/test-reloader
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/reloader/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (2893b56)
Health Status:      Healthy

KIND/NAME                                          STATUS  HEALTH   MESSAGE
ClusterRole/reloader-reloader-role                 Synced
ClusterRoleBinding/reloader-reloader-role-binding  Synced
ServiceAccount/reloader-reloader                   Synced           serviceaccount/reloader-reloader created
Deployment/reloader-reloader                       Synced  Healthy  deployment.apps/reloader-reloader created
└─ReplicaSet/reloader-reloader-5b6dcc7b6f                  Healthy
  └─Pod/reloader-reloader-5b6dcc7b6f-vwjcx                 Healthy

Adding flags to the reloader server

The runtime configuration flags for the reloader server are described in the project README.md file; in our case we want to adjust three values:

  • We want to enable the option to reload a workload when a ConfigMap or Secret is created;
  • We want to enable the option to reload a workload when a ConfigMap or Secret is deleted;
  • We want to use the annotations strategy for reloads, as it is the recommended mode of operation when using argocd.

To pass them we edit the apps/reloader/overlays/test/kustomization.yaml file to patch the Pod container template; the text added is the following:

patches:
# Add flags to reload workloads when ConfigMaps or Secrets are created or deleted
- target:
    kind: Deployment
    name: reloader-reloader
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/args
      value:
        - '--reload-on-create=true'
        - '--reload-on-delete=true'
        - '--reload-strategy=annotations'

After committing and pushing the updated file the system launches the application with the new options.

The dummyhttp application

To do a quick test we are going to deploy the dummyhttp web server using an image built from the following Dockerfile:

# Image to run the dummyhttp application <https://github.com/svenstaro/dummyhttp>

# This arg could be passed by the container build command (used with mirrors)
ARG OCI_REGISTRY_PREFIX

# Latest tested version of alpine
FROM ${OCI_REGISTRY_PREFIX}alpine:3.21.3

# Tool versions
ARG DUMMYHTTP_VERS=1.1.1

# Download binary
RUN ARCH="$(apk --print-arch)" && \
  VERS="$DUMMYHTTP_VERS" && \
  URL="https://github.com/svenstaro/dummyhttp/releases/download/v$VERS/dummyhttp-$VERS-$ARCH-unknown-linux-musl" && \
  wget "$URL" -O "/tmp/dummyhttp" && \
  install /tmp/dummyhttp /usr/local/bin && \
  rm -f /tmp/dummyhttp

# Set the entrypoint to /usr/local/bin/dummyhttp
ENTRYPOINT [ "/usr/local/bin/dummyhttp" ]

The kustomize base application is available on a monorepo that contains the following files:

  1. A Deployment definition that uses the previous image but uses /bin/sh -c as its entrypoint (command in the k8s Pod terminology) and passes as its argument a string that runs the eval command to be able to expand environment variables passed to the pod (the definition includes two optional variables, one taken from a ConfigMap and another one from a Secret):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: dummyhttp
      labels:
        app: dummyhttp
    spec:
      selector:
        matchLabels:
          app: dummyhttp
      template:
        metadata:
          labels:
            app: dummyhttp
        spec:
          containers:
          - name: dummyhttp
            image: forgejo.mixinet.net/oci/dummyhttp:1.0.0
            command: [ "/bin/sh", "-c" ]
            args:
            - 'eval dummyhttp -b \"{\\\"c\\\": \\\"$CM_VAR\\\", \\\"s\\\": \\\"$SECRET_VAR\\\"}\"'
            ports:
            - containerPort: 8080
            env:
            - name: CM_VAR
              valueFrom:
                configMapKeyRef:
                  name: dummyhttp-configmap
                  key: CM_VAR
                  optional: true
            - name: SECRET_VAR
              valueFrom:
                secretKeyRef:
                  name: dummyhttp-secret
                  key: SECRET_VAR
                  optional: true
  2. A Service that publishes the previous Deployment (the only relevant thing to mention is that the web server uses the port 8080 by default):

    apiVersion: v1
    kind: Service
    metadata:
      name: dummyhttp
    spec:
      selector:
        app: dummyhttp
      ports:
      - name: http
        port: 80
        targetPort: 8080
  3. An Ingress definition to allow access to the application from the outside:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: dummyhttp
      annotations:
        traefik.ingress.kubernetes.io/router.tls: "true"
    spec:
      rules:
        - host: dummyhttp.localhost.mixinet.net
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: dummyhttp
                    port:
                      number: 80
  4. And the kustomization.yaml file that includes the previous files:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    
    resources:
    - deployment.yaml
    - service.yaml
    - ingress.yaml

Deploying the dummyhttp application from argocd

We could create the dummyhttp application using the argocd-autopilot command as we did for the reloader, but we are going to do it manually to show how simple it is.

First we’ve created the apps/dummyhttp/base/kustomization.yaml file to include the application from the previous repository:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://forgejo.mixinet.net/blogops/argocd-applications.git//dummyhttp/?ref=dummyhttp-v1.0.0

As a second step we create the apps/dummyhttp/overlays/test/kustomization.yaml file to include the previous file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base

And finally we add the apps/dummyhttp/overlays/test/config.json file to configure the application as the ApplicationSet defined by argocd-autopilot expects:

{
  "appName": "dummyhttp",
  "userGivenName": "dummyhttp",
  "destNamespace": "default",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/dummyhttp/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}

Once we have the three files we commit and push the changes and argocd deploys the application; we can check that things are working using curl:

❯ curl -s https://dummyhttp.lo.mixinet.net:8443/ | jq -M .
{
  "c": "",
  "s": ""
}

Patching the application

Now we will add patches to the apps/dummyhttp/overlays/test/kustomization.yaml file:

  • One to add annotations for reloader (one annotation to enable it and another to set the rollout strategy to restart, to avoid touching the Deployments, as that can generate issues with argocd).
  • Another to change the ingress hostname (not really needed, but something quite reasonable for a specific project).

The file diff is as follows:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,3 +2,22 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+patches:
+# Add reloader annotations
+- target:
+    kind: Deployment
+    name: dummyhttp
+  patch: |-
+    - op: add
+      path: /metadata/annotations
+      value:
+        reloader.stakater.com/auto: "true"
+        reloader.stakater.com/rollout-strategy: "restart"
+# Change the ingress host name
+- target:
+    kind: Ingress
+    name: dummyhttp
+  patch: |-
+    - op: replace
+      path: /spec/rules/0/host
+      value: test-dummyhttp.lo.mixinet.net

After committing and pushing the changes we can use the argocd cli to check the status of the application:

❯ argocd app get argocd/test-dummyhttp -o tree
Name:               argocd/test-dummyhttp
Project:            test
Server:             https://kubernetes.default.svc
Namespace:          default
URL:                https://argocd.lo.mixinet.net:8443/applications/test-dummyhttp
Source:
- Repo:             https://forgejo.mixinet.net/blogops/argocd.git
  Target:
  Path:             apps/dummyhttp/overlays/test
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to  (fbc6031)
Health Status:      Healthy

KIND/NAME                           STATUS  HEALTH   MESSAGE
Deployment/dummyhttp                Synced  Healthy  deployment.apps/dummyhttp configured
└─ReplicaSet/dummyhttp-55569589bc           Healthy
  └─Pod/dummyhttp-55569589bc-qhnfk          Healthy
Ingress/dummyhttp                   Synced  Healthy  ingress.networking.k8s.io/dummyhttp configured
Service/dummyhttp                   Synced  Healthy  service/dummyhttp unchanged
├─Endpoints/dummyhttp
└─EndpointSlice/dummyhttp-x57bl

As we can see, the Deployment and Ingress were updated, but the Service is unchanged.

To validate that the ingress is using the new hostname we can use curl:

❯ curl -s https://dummyhttp.lo.mixinet.net:8443/
404 page not found
❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443/
{"c": "", "s": ""}

Adding a ConfigMap

Now that the system is adjusted to reload the application when the ConfigMap or Secret is created, deleted or updated we are ready to add one file and see how the system reacts.

We modify the apps/dummyhttp/overlays/test/kustomization.yaml file to create the ConfigMap using the configMapGenerator as follows:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,14 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+# Add the config map
+configMapGenerator:
+- name: dummyhttp-configmap
+  literals:
+  - CM_VAR="Default Test Value"
+  behavior: create
+  options:
+    disableNameSuffixHash: true
 patches:
 # Add reloader annotations
 - target:

After committing and pushing the changes we can see that the ConfigMap is available, the pod has been deleted and started again and the curl output includes the new value:

❯ kubectl get configmaps,pods
NAME                            DATA   AGE
configmap/dummyhttp-configmap   1      11s
configmap/kube-root-ca.crt      1      4d7h

NAME                             READY   STATUS        RESTARTS   AGE
pod/dummyhttp-779c96c44b-pjq4d   1/1     Running       0          11s
pod/dummyhttp-fc964557f-jvpkx    1/1     Terminating   0          2m42s
❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}

Using helm with argocd-autopilot

Right now there is no direct support in argocd-autopilot to manage applications using helm (see the issue #38 on the project), but we want to use a chart in our next example.

There are multiple ways to add the support, but the simplest one that allows us to keep using argocd-autopilot is to use kustomize applications that call helm as described here.

The only thing needed before we can use this approach is to add the kustomize.buildOptions flag to the argocd-cm entry on the bootstrap/argo-cd/kustomization.yaml file; its contents are now as follows:

bootstrap/argo-cd/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
  literals:
  # Enable helm usage from kustomize (see https://github.com/argoproj/argo-cd/issues/2789#issuecomment-960271294)
  - kustomize.buildOptions="--enable-helm"
  - |
    repository.credentials=- passwordSecret:
        key: git_token
        name: autopilot-secret
      url: https://forgejo.mixinet.net/
      usernameSecret:
        key: git_username
        name: autopilot-secret
  name: argocd-cm
  # Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
- behavior: merge
  literals:
  - "server.insecure=true"
  name: argocd-cmd-params-cm
kind: Kustomization
namespace: argocd
resources:
- github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
- ingress_route.yaml

On the following section we will explain how the application is defined to make things work.

Installing the sealed-secrets controller

To manage secrets in our cluster we are going to use the sealed-secrets controller and to install it we are going to use its chart.

As we mentioned in the previous section, the idea is to create a kustomize application and use that to deploy the chart, but we are going to create the files manually, as we are not going to import the base kustomization files from a remote repository.

As there is no clear way to override Helm chart values using overlays, we are going to use a generator to create the Helm configuration from an external resource and include it from our overlays (the idea is taken from this repository, which was referenced in a comment on the issue #38 mentioned earlier).

The sealed-secrets application

We have created the following files and folders manually:

apps/sealed-secrets/
├── helm
│   ├── chart.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        ├── config.json
        ├── kustomization.yaml
        └── values.yaml

The helm folder contains the generator template that will be included from our overlays.

The kustomization.yaml includes the chart.yaml as a resource:

apps/sealed-secrets/helm/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- chart.yaml

And the chart.yaml file defines the HelmChartInflationGenerator:

apps/sealed-secrets/helm/chart.yaml
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: sealed-secrets
releaseName: sealed-secrets
name: sealed-secrets
namespace: kube-system
repo: https://bitnami-labs.github.io/sealed-secrets
version: 2.17.2
includeCRDs: true
# Add common values to all argo-cd projects inline
valuesInline:
  fullnameOverride: sealed-secrets-controller
# Load a values.yaml file from the same directory that uses this generator
valuesFile: values.yaml

For this chart the template adjusts the namespace to kube-system and adds the fullnameOverride on the valuesInline key because we want to use those settings on all the projects (they are the values expected by the kubeseal command line application, so we adjust them to avoid the need to add additional parameters to it).

We set the global values inline so that we can use the valuesFile from our overlays; as we are using a generator, the path is relative to the folder that contains the kustomization.yaml file that calls it, so in our case we need a values.yaml file in each overlay folder (if we don’t want to override any values for a project we can create an empty file, but it has to exist).

Finally, our overlay folder contains three files, a kustomization.yaml file that includes the generator from the helm folder, the values.yaml file needed by the chart and the config.json file used by argocd-autopilot to install the application.

The kustomization.yaml file contents are:

apps/sealed-secrets/overlays/test/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Uncomment if you want to add additional resources using kustomize
#resources:
#- ../../base
generators:
- ../../helm

The values.yaml file enables the ingress for the application and adjusts its hostname:

apps/sealed-secrets/overlays/test/values.yaml
ingress:
  enabled: true
  hostname: test-sealed-secrets.lo.mixinet.net

And the config.json file is similar to the ones used with the other applications we have installed:

apps/sealed-secrets/overlays/test/config.json
{
  "appName": "sealed-secrets",
  "userGivenName": "sealed-secrets",
  "destNamespace": "kube-system",
  "destServer": "https://kubernetes.default.svc",
  "srcPath": "apps/sealed-secrets/overlays/test",
  "srcRepoURL": "https://forgejo.mixinet.net/blogops/argocd.git",
  "srcTargetRevision": "",
  "labels": null,
  "annotations": null
}

Once we commit and push the files the sealed-secrets application is installed in our cluster; we can check it using curl to fetch the public certificate it uses:

❯ curl -s https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----

The dummyhttp-secret

To create sealed secrets we need to install the kubeseal tool:

❯ arkade get kubeseal

Now we create a local version of the dummyhttp-secret that contains some value in the SECRET_VAR key (the easiest way to do it is to use kubectl):

❯ echo -n "Boo" | kubectl create secret generic dummyhttp-secret \
    --dry-run=client --from-file=SECRET_VAR=/dev/stdin -o yaml \
    >/tmp/dummyhttp-secret.yaml

The secret definition in yaml format is:

apiVersion: v1
data:
  SECRET_VAR: Qm9v
kind: Secret
metadata:
  creationTimestamp: null
  name: dummyhttp-secret

To create a sealed version using the kubeseal tool we can do the following:

❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml

That invocation needs to have access to the cluster to do its job and in our case it works because we modified the chart to use the kube-system namespace and set the controller name to sealed-secrets-controller as the tool expects.

If we need to create the secrets without credentials we can connect to the ingress address we added to retrieve the public key:

❯ kubeseal -f /tmp/dummyhttp-secret.yaml -w /tmp/dummyhttp-sealed-secret.yaml \
    --cert https://test-sealed-secrets.lo.mixinet.net:8443/v1/cert.pem

Or, if we don’t have access to the ingress address, we can save the certificate on a file and use it instead of the URL.

The sealed version of the secret looks like this:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: dummyhttp-secret
  namespace: default
spec:
  encryptedData:
    SECRET_VAR: [...]
  template:
    metadata:
      creationTimestamp: null
      name: dummyhttp-secret
      namespace: default

This file can be deployed to the cluster to create the secret (in our case we will add it to the argocd application), but before doing that we are going to check the output of our dummyhttp service and get the list of Secrets and SealedSecrets in the default namespace:

❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": ""
}
❯ kubectl get sealedsecrets,secrets
No resources found in default namespace.

Now we add the SealedSecret to the dummyhttp application by copying the file and adding it to the kustomization.yaml file:

--- a/apps/dummyhttp/overlays/test/kustomization.yaml
+++ b/apps/dummyhttp/overlays/test/kustomization.yaml
@@ -2,6 +2,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
 - ../../base
+- dummyhttp-sealed-secret.yaml
 # Create the config map value
 configMapGenerator:
 - name: dummyhttp-configmap

Once we commit and push the files Argo CD creates the SealedSecret and the controller generates the Secret:

❯ kubectl apply -f /tmp/dummyhttp-sealed-secret.yaml
sealedsecret.bitnami.com/dummyhttp-secret created
❯ kubectl get sealedsecrets,secrets
NAME                                        STATUS   SYNCED   AGE
sealedsecret.bitnami.com/dummyhttp-secret            True     3s

NAME                      TYPE     DATA   AGE
secret/dummyhttp-secret   Opaque   1      3s

If we check the dummyhttp output again we can see the new value of the secret:

❯ curl -s https://test-dummyhttp.lo.mixinet.net:8443 | jq -M .
{
  "c": "Default Test Value",
  "s": "Boo"
}

Using sealed-secrets in production clusters

If you plan to use sealed-secrets look into its documentation to understand how it manages the private keys, how to back things up, and keep in mind that, as the documentation explains, you can rotate the sealed version of your secrets, but that doesn’t change the actual secrets.

If you want to rotate your secrets you have to update them and commit the sealed version of the updates (as the controller also rotates the encryption keys your new sealed version will also be using a newer key, so you will be doing both things at the same time).

Final remarks

In this post we have seen how to deploy applications using the argocd-autopilot model, including the use of Helm charts inside kustomize applications, and how to install and use the sealed-secrets controller.

It has been interesting and I’ve learnt a lot about argocd in the process, but I believe that if I ever want to use it in production I will also review the native helm support in argocd using a separate repository to manage the applications, at least to be able to compare it to the model explained here.

05 May, 2025 05:50AM

May 04, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#47: r2u at its Third Birthday

Welcome to post 47 in the $R^4 series!

r2u provides Ubuntu binaries for all CRAN packages for the R system. It started three years ago, and offers Linux users on Ubuntu what Windows and macOS users already experience: fast, easy and reliable installation of binary packages. But by integrating with the system package manager (which is something that cannot be done on those other operating systems) we can integrate it fully and completely with the underlying system. External libraries are resolved as shared libraries and handled by the system package manager. This offers fully automatic installation both at the initial installation and at all subsequent upgrades. R users just say, e.g., install.packages("sf") and the spatial libraries proj, gdal, geotiff (as well as several others) are automatically installed as dependencies in the correct versions. And they remain installed along with sf as the system manager now knows of the dependency.

Work on r2u began as a quick weekend experiment in March 2022, and by May 4 a first release was marked in the NEWS file after a few brave alpha testers had kicked the tires quite happily. This makes today the third anniversary of that first release, and a good time to review where we are. This short post does that, stressing three aspects: overall usage, current versions, and new developments.

Steadily Growing Usage at 42 Million Packages Shipped

r2u ships from two sites. Its main repository is on the University of Illinois campus, providing ample and heavily redundant bandwidth. We remain very grateful for the sponsorship from Atlas. It also still ships from my own server, though that may be discontinued or could be spotty as it is on retail fiber connectivity. As we have access to both sets of server logs, we can tabulate and chart usage. As of yesterday, total downloads were north of 42 million with current weekly averages around 500 thousand. These are quite staggering numbers for what started as a small hobby project, and are quite humbling.

Usage is driven by deployment in continuous integration (the use of Ubuntu runners at GitHub makes this both an easy and an obvious choice), cloud computing (as it is easy to spin up Ubuntu instances, it is just as easy to add r2u via four simple commands or one short script), explorative use (for example on Google Colab), and of course general laptop, desktop, or server settings.

Current Versions

Since r2u began, we have added two Ubuntu LTS releases, three annual R releases, and multiple BioConductor releases. BioConductor support is on a ‘best-efforts’ basis, motivated primarily by supporting the CRAN packages that depend on it. It has grown to around 500 packages and includes the top 250 by usage.

Right now, current versions R 4.5.0 and BioConductor 3.21, both released last month, are supported.

New Development: arm64

A recent change is the support of the arm64 platform. As discussed in the introductory post, it is a popular and increasingly common CPU choice, seen anywhere from the Raspberry Pi 5 and its Cortex CPU to in-house cloud computing platforms (called, respectively, Graviton at AWS and Axion at GCP), general server use via Ampere CPUs, the Cortex-based laptops that are starting to appear, and last but not least the popular M1 to M4-based macOS machines. (For macOS, one key appeal is lighter-weight Docker use, as these M1 to M4 CPUs can run arm64-based containers without a translation layer, making it an attractive choice.)

This is currently supported only for the ‘noble’ aka 24.04 release. GitHub Actions, where we compile these packages, now also supports ‘jammy’ aka 22.04, but it may not be worth expanding there as the current ‘latest’ release is already covered. We have not yet added BioConductor support but may do so. Drop us a line (maybe via an issue) if this is of interest.

With the provision of arm64 binaries, we also started to make heavier use of GitHub Actions. The BioConductor 3.21 release binaries were also created there. This makes the provision more transparent, as the configuration repo as well as the two builder repos (arm64, bioc) are public, as is of course the main r2u repo.

Summing Up

This short post summarised the current state of r2u along with some recent news. If you are curious, head over to the r2u site and try it, for example in a rocker/r2u container.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

04 May, 2025 09:04PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in April 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Request for OpenSSH debugging help

Following the OpenSSH work described below, I have an open report about the sshd server sometimes crashing when clients try to connect to it. I can’t reproduce this myself, and arm’s-length debugging is very difficult, but three different users have reported it. For the time being I can’t pass it upstream, as it’s entirely possible it’s due to a Debian patch.

Is there anyone reading this who can reproduce this bug and is capable of doing some independent debugging work, most likely involving bisecting changes to OpenSSH? I’d suggest first seeing whether a build of the unmodified upstream 10.0p2 release exhibits the same bug. If it does, then bisect between 9.9p2 and 10.0p2; if not, then bisect the list of Debian patches. This would be extremely helpful, since at the moment it’s a bit like trying to look for a needle in a haystack from the next field over by sending instructions to somebody with a magnifying glass.

OpenSSH

I upgraded the Debian packaging to OpenSSH 10.0p1 (now designated 10.0p2 by upstream due to a mistake in the release process, but they’re the same thing), fixing CVE-2025-32728. This also involved a diffoscope bug report due to the version number change.

I enabled the new --with-linux-memlock-onfault configure option to protect sshd against being swapped out, but this turned out to cause test failures on riscv64, so I disabled it again there. Debugging this took some time since I needed to do it under emulation, and in the process of setting up a testbed I added riscv64 support to vmdb2.

In coordination with the wtmpdb maintainer, I enabled the new Y2038-safe native wtmpdb support in OpenSSH, so wtmpdb last now reports the correct tty.

I fixed a couple of packaging bugs:

I reviewed and merged several packaging contributions from others:

dput-ng

Since we added dput-ng integration to Debusine recently, I wanted to make sure that it was in good condition in trixie, so I fixed dput-ng: will FTBFS during trixie support period. Previously a similar bug had been fixed by just using different Ubuntu release names in tests; this time I made the tests independent of the current supported release data returned by distro_info, so this shouldn’t come up again.

We also ran into dput-ng: --override doesn’t override profile parameters, which needed somewhat more extensive changes since it turned out that that option had never worked. I fixed this after some discussion with Paul Tagliamonte to make sure I understood the background properly.

man-db

I released man-db 2.13.1. This just included various small fixes and a number of translation updates, but I wanted to get it into trixie in order to include a contribution to increase the MAX_NAME constant, since that was now causing problems for some pathological cases of manual pages in the wild that documented a very large number of terms.

debmirror

I fixed one security bug: debmirror prints credentials with --progress.

Python team

I upgraded these packages to new upstream versions:

In bookworm-backports, I updated these packages:

  • python-django to 3:4.2.20-1 (issuing BSA-123)
  • python-django-pgtrigger to 4.13.3

I dropped a stale build-dependency from python-aiohttp-security that kept it out of testing (though unfortunately too late for the trixie freeze).

I fixed or helped to fix various other build/test failures:

I packaged python-typing-inspection, needed for a new upstream version of pydantic.

I documented the architecture field in debian/tests/autopkgtest-pkg-pybuild.conf files.

I fixed other odds and ends of bugs:

Science team

I fixed various build/test failures:

04 May, 2025 03:38PM by Colin Watson

Russ Allbery

Review: The Book That Held Her Heart

Review: The Book That Held Her Heart, by Mark Lawrence

Series: Library Trilogy #3
Publisher: ACE
Copyright: 2025
ISBN: 0-593-43799-3
Format: Kindle
Pages: 367

The Book That Held Her Heart is the third and final book of the Library fantasy trilogy and a direct sequel to The Book That Broke the World. Lawrence provides a much-needed summary of the previous volumes at the start of this book (thank you to every author who does this!), but I was still struggling a bit with the blizzard of character names. I recommend reading this series entry in relatively close proximity to the other two.

At the end of the previous book, and following some rather horrific violence, the cast split into four groups. Three of those are pursuing different resolutions to the moral problem of the Library's existence. The fourth group opens the book still stuck with the series villains, who were responsible for the over-the-top morality that undermined my enjoyment of The Book That Broke the World. Lawrence follows all four groups in interwoven chapters, maintaining that complex structure through most of this book. I thought this was a questionable structural decision that made this book feel choppy, disconnected, and unnecessarily confusing.

The larger problem, though, is that this is the payoff book, the book where we find out if Lawrence is equal to the tricky ethical questions he's raised and the world-building masterpiece that The Book That Wouldn't Burn kicked off. The answer, unfortunately, is "not really." This is not a total failure; there are some excellent set pieces and world-building twists, and the characters remain likable and enjoyable to read about (although the regrettable sidelining of Livira continues). But the grand finale is weirdly conservative and not particularly grand, and Lawrence's answer to the moral questions he raised is cliched and wholly unsatisfying.

I was really hoping Lawrence was going somewhere more interesting than "Nazis bad." I am entirely sympathetic to this moral position, but so is every other likely reader of this series, and we all know how that story goes. What a waste of a compelling setup.

Sadly, "Nazis bad" isn't even a metaphor for the black-and-white morality that Lawrence first introduced at the end of the previous book. It's a literal description of the main moral thrust of this book. Lawrence introduces yet another new character and timeline so that he can write about thinly-disguised Nazis persecuting even more thinly-disguised Jews, and this conflict is roughly half this book. It's also integral to the ending, which uses obvious, stock secular sainthood as a sort of trump card to resolve ideological conflicts at the heart of the series.

This is one of the things I was worried about after I read the short stories that Lawrence published between the volumes of this series. All of them were thuddingly trite, which did not make me optimistic that Lawrence would find a sufficiently interesting answer to his moral trilemma to satisfy the high expectations created by the build-up. That is, I am sad to report, precisely the failure mode of this book. The resolution of the moral question of the series is arguably radical within the context of the prior world-building, but in a way that effectively reduces it to the boring, small-c conservative bromides of everyday reality. This is precisely the opposite of why I read fantasy, and I did not find Lawrence's arguments for it at all convincing. Neither, I think, did Lawrence, given that the critical debate takes place off camera so that he could avoid having to present the argument.

This is, unfortunately, another series where the author's reach exceeded their grasp. The world-building of The Book That Wouldn't Burn is a masterpiece that created one of the most original and compelling settings that I have read in fantasy for a long time, but unfortunately Lawrence did not have an equally original plan for how to use the setting. This is a common problem and I'm not going to judge it too harshly; it's much harder to end a series than it is to start one. I thought the occasional flashes of brilliance were worth the journey, and they continue into this book with some elaborations on the Library's mythic structure that are going to stick in my mind.

You can sense the story slipping away from the hoped-for conclusion as you read, though. The story shifts more and more away from the setting and the world-building and towards character stories, and while Lawrence's characters are fine, they're not that novel. I am happy to read about Clovis and Arpix, but I can read variations of that story in a lot of places. Livira never recovers her dynamism and drive from the first book, and there is much less beneath Yute's thoughtful calm than I was hoping to find. I think Lawrence knows that the story was not entirely working because the narrative voice becomes more strident as the morality becomes less interesting. I know of only one fantasy author who can make this type of overbearing and freighted narrative style work, and Lawrence is sadly not Guy Gavriel Kay.

This is not a bad book. It is an enjoyable adventure story on its own terms, with some moments of real beauty and awe and a handful of memorable characters, somewhat undermined by a painfully obvious and unoriginal moral frame. It's only a disappointment in the context of what came before it, and it is far from the first series conclusion that doesn't quite live up to the earlier volumes. I'm glad that I read it, and the series as a whole, and I do appreciate that Lawrence brought the whole series to a firm and at least somewhat satisfying conclusion in the promised number of volumes. But I do wish the series as a whole had been as special as the first book.

Rating: 6 out of 10

04 May, 2025 04:48AM

May 03, 2025

Russell Coker

Silly Job Titles

Many years ago I was on a programming project porting code from OS/2 1.x to NT. When I was there they suddenly decided to make a database of all people and get job titles for everyone – apparently the position description used when advertising the jobs wasn’t sufficient. When I was given a clipboard with a form to write my details on, I looked at what everyone else had done. It was a heap of ridiculous propaganda, with everyone trying to put in synonyms for “senior” or “skillful” and listing things that they were allegedly in charge of. There were even some people trying to create impressive titles for their managers to try and suck up.

I chose the job title “coder” as the shortest and most accurate description of what I was doing. I had to confirm that yes I really did want to put a one word title and not a paragraph of frippery. Part of my intent was to mock the ridiculously long job titles used by others but I don’t think anyone realised that.

I was reminded of that company when watching a video of a Trump cabinet meeting where everyone had to tell Trump how great he is. I think that a programmer who wants to be known as a “Principal Solutions Architect of Advanced Algorithmic Systems and Digital Innovation Strategy” (suggested by ChatGPT because I can’t write such ridiculous things) is showing a Trump level of lack of self esteem.

When job titles are discussed there’s always someone who will say “what if my title isn’t impressive enough and I don’t get a pay rise”. If a company bases salaries on how impressive job titles are and not on whether people actually do good work then it’s a very dysfunctional workplace. But dysfunctional companies aren’t uncommon so it’s something you might reasonably have to do. In the company in question I could have described my work as “lead debugger” as I ended up doing most of the debugging on that project (as on many programming projects). The title “lead debugger” accurately described a significant part of my work and it’s work that is essential to project completion.

What do you think are the worst job titles?

03 May, 2025 07:40AM by etbe

Russ Allbery

Review: Paper Soldiers

Review: Paper Soldiers, by Saleha Mohsin

Publisher: Portfolio
Copyright: 2024
ISBN: 0-593-53912-5
Format: Kindle
Pages: 250

The subtitle of Paper Soldiers is "How the Weaponization of the Dollar Changed the World Order," which may give you the impression that this book is about US use of the dollar system for political purposes such as sanctions. Do not be fooled like I was; this subtitle is, at best, deceptive. Coverage of the weaponization of the dollar is superficial and limited to a few chapters. This book is, instead, a history of the strong dollar policy told via a collection of hagiographies of US Treasury Secretaries and written with all of the skeptical cynicism of a poleaxed fawn.

There is going to be some grumbling about the state of journalism in this review.

Per the author's note, Saleha Mohsin is the Bloomberg News beat reporter for the US Department of the Treasury. That is, sadly, exactly what this book reads like: routine beat reporting. Mohsin asked current and former Treasury officials what they were thinking at various points in history and then wrote down their answers without, so far as I can tell, considering any contradictory evidence or wondering whether they were telling the truth. Paper Soldiers does contain extensive notes (those plus the index fill about forty pages), so I guess you could do the cross-checking yourself, although apparently most of the interviews for this book were "on background" and are therefore unattributed. (Is this weird? I feel like this is weird.) Mohsin adds a bit of utterly conventional and uncritical economic framing and casts the whole project in the sort of slightly breathless and dramatized prose style that infests routine news stories in the US.

I find this style of book unbelievably frustrating because it represents such a wasted opportunity. To me, the point of book-length journalism is precisely to not write in this style. When you're trying to crank out two or three articles a week covering current events, I understand why there isn't always space or time to go deep into background, skepticism, and contrary opinions. But when you expand that material into a book, surely the whole point is to take the time to do some real reporting. Dig into what people told you, see if they're lying, talk to the people who disagree with them, question the conventional assumptions, and show your work on the page so that the reader is smarter after finishing your book than they were before they started. International political economics is not a sequence of objective facts. It's a set of decisions made in pursuit of economic and political theories that are disputed and arguable, and I think you owe the reader some sense of the argument and, ideally, some defensible position on the merits that is more than a transcription of your interviews.

This is... not that.

It's a power loop that the United States still enjoys today: trust in America's dollar (and its democratic government) allows for cheap debt financing, which buys health care built on the most advanced research and development and inventions like airplanes and the iPhone. All of this is propelled by free market innovation and the superpowered strength to keep the nation safe from foreign threats. That investment boosts the nation's economic, military, and technological prowess, making its economy (and the dollar) even more attractive.

Let me be precise about my criticism. I am not saying that every contention in the above excerpt is wrong. Some of them are probably correct; more of them are at least arguable. This book is strictly about the era after Bretton Woods, so using airplanes as an example invention is a bizarre choice, but sure, whatever, I get the point. My criticism is that paragraphs like this, as written in this book, are not introductions to deeper discussions that question or defend that model of economic and political power. They are simple assertions that stand entirely unsupported. Mohsin routinely writes paragraphs like the above as if they are self-evident, and then immediately moves on to the next anecdote about Treasury dollar policy.

Take, for example, the role of the US dollar as the world's reserve currency, which roughly means that most international transactions are conducted in dollars and numerous countries and organizations around the world hold large deposits in dollars instead of in their native currency. The conventional wisdom holds that this is a great boon to the US economy, but there are also substantive critiques and questions about that conventional wisdom. You would never know that from this book; Mohsin asserts the conventional wisdom about reserve currencies without so much as a hint that anyone might disagree.

For example, one common argument, repeated several times by Mohsin, is that the US can only get away with the amount of deficit spending and cheap borrowing that it does because the dollar is the world's reserve currency. Consider two other countries whose currencies are clearly not the international reserve currency: Japan and the United Kingdom. The current US debt to GDP ratio is about 125% and the current interest rate on US 10-year bonds is about 4.2%. The current Japanese debt to GDP ratio is about 260% and the current interest rate on Japanese 10-year bonds is about 1.2%. The current UK debt to GDP ratio is 160% and the current interest rate on UK 10-year bonds is 4.5%. Are you seeing the dramatic effects of the role of the dollar as reserve currency? Me either.

Again, I am not saying that this is a decisive counter-argument. I am not an economist; I'm just some random guy on the Internet who finds macroeconomics interesting and reads a few newsletters. I know the Japanese bond market is unusual in ways I'm not accounting for. There may well be compelling arguments for why reserve currency status matters immensely for US borrowing capacity. My point is not that Mohsin is wrong; my point is that you have to convince me and she doesn't even try.

Nowhere in this book is a serious effort to view conventional wisdom with skepticism or confront it with opposing arguments. Instead, this book is full of blithe assertions that happen to support the narrative the author was fed by a bunch of former Treasury officials and does not appear to question in any way. I want books like this to increase my understanding of the world. To do that, they need to show me multiple sides of debates and teach me how to evaluate evidence, not simply reinforce a superficial conventional wisdom.

It doesn't help that whatever fact-checking process this book went through left some glaring errors. For example, on the Plaza Accord:

With their central banks working in concert, enough dollars were purchased on the open market to weaken the currency, making American goods more affordable for foreign buyers.

I don't know what happened after the Plaza Accord (I read books like this to find out!), but clearly it wasn't that. This is utter nonsense. Buying dollars on the open market would increase the value of the dollar, not weaken it; this is basic supply and demand that you learn in the first week of a college economics class. This is the type of error that makes me question all the other claims in the book that I can't easily check.

Mohsin does offer a more credible explanation of the importance of a reserve currency late in the book, although it's not clear to me that she realizes it: The widespread use of the US dollar gives US government sanctions vast international reach, allowing the US to punish and coerce its enemies through the threat of denying them access to the international financial system. Now we're getting somewhere! This is a more believable argument than a small and possibly imaginary effect on government borrowing costs. It is clear why a bellicose US government, particularly one led by advocates of a unitary executive theory that elevates the US president to a status of near-emperor, wants to turn the dollar into a weapon of international control. It's much less obvious how comfortable the rest of the world should be with that concentration of power.

This would be a fascinating topic for a journalistic non-fiction book. Some reporter should dive deep into the mechanics of sanctions and ask serious questions about the moral, practical, and diplomatic consequences of this aggressive wielding of US power. One could give it a title like Paper Soldiers that reflected the use of banks and paper currency as foot soldiers enforcing imperious dictates on the rest of the world. Alas, apart from a brief section in which the US scared other countries away from questioning the dollar, Mohsin does not tug at this thread. Maybe someone should write that book someday.

As you will have gathered by now, I think this is a bad book and I do not recommend that you read it. Its worst flaw is one that it shares with far too much mainstream US print and TV journalism: the utter credulity of the author. I have the old-fashioned belief that a journalist should be more than a transcriptionist for powerful people. They should be skeptical, they should assume public figures may be lying, they should look for ulterior motives, and they should try to bring the reader closer to some objective truths about the world, wherever they may lie.

I have no solution for this degradation of journalism. I'm not even sure that it's a change. There were always reporters eager to transcribe the voice of power into the newspaper, and if we remember the history of journalism differently, that may be because we have elevated the rare exceptions and forgotten the average. But after watching too many journalists I once respected start parroting every piece of nonsense someone tells them, from NFTs to UFOs to the existential threat of AI, I've concluded that the least I can do as a reader is to stop rewarding reporters who cannot approach powerful subjects with skepticism, suspicion, and critical research.

I failed in this case, but perhaps I can serve as a warning to others.

Rating: 3 out of 10

03 May, 2025 03:56AM

May 02, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Korg Minilogue XD

I didn't buy the Arturia Microfreak or the Behringer Model-D; I bought a Korg Minilogue XD.

Korg Minilogue XD, and Zoom R8

I wanted an all-in-one unit which meant a built-in keyboard. I was keen on analogue oscillators, partly for the sound, but mostly to ensure that most of the controls were immediately accessible. The Minilogue-XD has two analogue oscillators and an analogue filter. It also has some useful, pure digital stuff: post-effects (chorus, flanger, echo, etc.); and a third, digital oscillator.

The digital oscillator is programmable. There's an SDK, shared between the Minilogue-XD and some other Korg synths (at least the Prologue and NTS-1). There's a cottage industry of independent musicians writing and selling digital patches, e.g. STRING User Oscillator. Here's an example of a drone programmed using the SDK for the NTS-1:

Eventually I expect to have fun exploring the SDK, but for now I'm keeping it firmly away from computers (hence the Zoom R8 multitrack recorder in the above image: more on that in a future blog post). The Korg has been gathering dust whilst I was writing up, but now I hope to find some time to play.

02 May, 2025 08:04PM

hackergotchi for Daniel Lange

Daniel Lange

Compiling and installing the Gentoo Linux kernel on emerge without genkernel (part 2)

The first install of a Gentoo kernel needs to be somewhat manual if you want to optimize the kernel for the (virtual) system it boots on.

In part 1 I laid out how to improve the subsequent emerges of sys-kernel/gentoo-sources with a small drop in script to build the kernel as part of the ebuild.

Since the end of last year Gentoo also supports a less manual way of emerging a kernel:

The following kernel blends are available:

  • sys-kernel/gentoo-kernel (the Gentoo kernel you can configure and compile locally - typically this is what you want if you run Gentoo)
  • sys-kernel/gentoo-kernel-bin (a pre-compiled Gentoo kernel similar to what genkernel would get you)
  • sys-kernel/vanilla-kernel (the upstream Linux kernel, again configurable and locally compiled)

So a quick walk-through for the gentoo-kernel variant:

1. Set up the correct package USE flags

We do not want an initrd and we want our own config to be re-used so:

echo "sys-kernel/gentoo-kernel -initramfs savedconfig" >> /etc/portage/package.use/gentoo-kernel

2. Preseed the saved config

The current kernel config needs to be saved as the initial savedconfig so it is found and applied for our emerge below:

mkdir -p /etc/portage/savedconfig/sys-kernel
cp -n "/usr/src/linux-$(uname -r)/.config" /etc/portage/savedconfig/sys-kernel/gentoo-kernel

3. Emerge the new kernel

emerge sys-kernel/gentoo-kernel

4. Update grub and reboot

Unfortunately this ebuild does not update grub, so we have to run grub-mkconfig manually. This can again be automated via a post_pkg_postinst() script; see step 7 below.

But for now, let's do it manually:

grub-mkconfig -o /boot/grub/grub.cfg
# All fine? Time to reboot the machine:
reboot

5. (Optional) Prepare for the next kernel build

Run etc-update and merge the new kernel config entries into your savedconfig.

Screenshot of etc-update

The kernel should auto-build once new versions become available via portage.

Again the etc-update can be automated if you feel that is sufficiently safe to do in your environment. See step 7 below for details.

6. (Optional) Remove the old kernel sources

If you want to switch from the method based on gentoo-sources to the gentoo-kernel one, you can remove the kernel sources:

emerge -C "=sys-kernel/gentoo-sources-5*"

Be sure to update the /usr/src/linux symlink to the new kernel sources directory from gentoo-kernel, e.g.:

rm /usr/src/linux; ln -s "/usr/src/$(uname -r)" /usr/src/linux

This may be a good time for a bit more house-keeping: Clean up a bit in /usr/src/ to remove old build artefacts, /boot/ to remove old kernels and /lib/modules/ to get rid of old kernel modules.
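
For illustration, such a cleanup could look roughly like the sketch below. The kernel version used here is a hypothetical placeholder; check what is actually installed (and what you are currently running) before deleting anything:

# Check what is lying around before removing anything
ls /usr/src/ /boot/ /lib/modules/
uname -r   # make sure you do not delete the running kernel

# Hypothetical example: 6.6.30-gentoo is an old kernel that is no longer needed
rm -rf /usr/src/linux-6.6.30-gentoo
rm -v /boot/vmlinuz-6.6.30-gentoo /boot/System.map-6.6.30-gentoo /boot/config-6.6.30-gentoo
rm -rf /lib/modules/6.6.30-gentoo

# Regenerate the grub config so the boot menu matches what is left
grub-mkconfig -o /boot/grub/grub.cfg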

7. (Optional) Further automate the ebuild

In part 1 we automated the kernel compile, install and a bit more via a helper function for post_pkg_postinst().

We can do the same for what is (currently) missing from the gentoo-kernel ebuilds:

Create /etc/portage/env/sys-kernel/gentoo-kernel with the following:

post_pkg_postinst() {
        etc-update --automode -5 /etc/portage/savedconfig/sys-kernel
        grub-mkconfig -o /boot/grub/grub.cfg
}

The upside of gentoo-kernel over gentoo-sources is that you can put "config override files" in /etc/kernel/config.d/. That way you theoretically profit from config improvements made by the upstream developers. See the Gentoo distribution kernel documentation for a sample snippet. I am fine with savedconfig for now but it is nice that Gentoo provides the flexibility to support both approaches.
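
If you want to try the config.d/ route, a minimal sketch follows: override files are plain kernel config fragments. The file name and the options toggled here are made-up examples to show the shape, not the documented snippet and not a recommendation:

mkdir -p /etc/kernel/config.d
cat > /etc/kernel/config.d/99-local-example.config <<'EOF'
# Hypothetical example fragment: enable one option, disable another
CONFIG_IKCONFIG=y
# CONFIG_DEBUG_INFO is not set
EOF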

02 May, 2025 05:41PM by Daniel Lange

Netatalk 3.1.9 .debs for Debian Jessie available (Apple Timemachine backup to Linux servers)

Netatalk 3.1.9 has been released with two interesting fixes / amendments:

  • FIX: afpd: fix "admin group" option
  • NEW: afpd: new options "force user" and "force group"

Here are the full release notes for 3.1.9 for your reading pleasure.

Due to upstream now differentiating between SysVinit and systemd packages I've followed that for simplicity's sake and built libgcrypt-only builds. If you need the openssl-based tools continue to use the 3.1.8 openssl build until you have finished your migration to a safer password storage.

Warning: Be sure to read the original blog post if you are new to Netatalk3 on Debian Jessie!
You'll get nowhere if you install the .debs below and don't know about the upgrade path. So RTFA.

Now with that out of the way:

Continue reading "Netatalk 3.1.9 .debs for Debian Jessie available (Apple Timemachine backup to Linux servers)"

02 May, 2025 05:40PM by Daniel Lange

Creating iPhone/iPod/iPad notes from the shell

I found a very nice script to create Notes on the iPhone from the command line by hossman over at Perlmonks.

For some weird reason Perlmonks does not allow me to reply with amendments even after I created an account. I can "preview" a reply at Perlmonks but after "create" I get "Permission Denied". Duh. vroom, if you want screenshots, contact me on IRC :-).

As I wrote everything up for the Perlmonks reply anyways, I'll post it here instead.

Against hossman's version 32 from 2011-02-22 I changed the following:

  • removed .pl from filename and documentation
  • added --list to list existing notes
  • added --hosteurope for Hosteurope mail account preferences and with it a sample how to add username and password into the script for unattended use
  • made the "Notes" folder the default (so -f Notes becomes obsolete)
  • added some UTF-8 conversions to make Umlauts work better (this is a mess in perl, see Jeremy Zawodny's writeup and Ivan Kurmanov's blog entry for some further solutions). Please try combinations of utf8::encode and ::decode, binmode utf8 for STDIN and/or STDOUT and the other hints from these linked blog entries in your local setup to get Umlauts and other non-7bit ASCII characters working. Be patient. There's more than one way to do it :-).

I /msg'd hossman the URL of this blog entry.

Continue reading "Creating iPhone/iPod/iPad notes from the shell"

02 May, 2025 05:39PM by Daniel Lange

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in April 2025

I also co-organised a Debian BSP (Bug-Squashing Party) last weekend, for which I will post a separate report later.

02 May, 2025 04:06PM by Ben Hutchings

Russ Allbery

Review: Sixteen Ways to Defend a Walled City

Review: Sixteen Ways to Defend a Walled City, by K.J. Parker

Series: Siege #1
Publisher: Orbit
Copyright: April 2019
ISBN: 0-316-27080-6
Format: Kindle
Pages: 349

Sixteen Ways to Defend a Walled City is... hm, honestly, I'm not sure what the genre of this novel is. It is a story about medieval engineering and siege weapons in a Rome-inspired secondary world that so far as I can tell is not meant to match ours. There is not a hint of magic. It's not technically a fantasy, but it's marketed like a fantasy, and it's not historical fiction nor is it attempting to be alternate history. The most common description is a fantasy of logistics, so I guess I'll go with that, as long as you understand that the fantasy here is of the non-magical sort.

K.J. Parker is a pen name for Tom Holt.

Orhan is Colonel-in-Chief of the Engineers for the Robur empire, even though he's a milkface, not a blueskin like a proper Robur. (Both of those racial terms are quite offensive.) He started out as a slave, learned a trade, joined the navy as a shipwright, and worked his way up the ranks through luck and enemy action. He's canny, practical, highly respected by his men, happy to cheat and steal to get material for his projects and wages for his people, and just wants to build literal bridges. Nice, sturdy bridges that let people get from one place to another the short way.

When this book opens, Orhan is in Classis trying to requisition some rope. He is saved from discovery of his forged paperwork by pirates burning down the warehouse that held all of the rope, and then saved from the pirates by the sorts of coincidences that seem to happen to Orhan all the time. A few subsequent discoveries about what the pirates were after, and news of another unexpected attack on the empire, make Orhan nervous enough that he takes his men to do a job as far away from the City at the heart of the empire as possible. It's just his luck to return in time to find slaughtered troops and to have to sneak his men into a City already under siege.

Sixteen Ways to Defend a Walled City is told in the first person by Orhan, with an internal justification that the reader only discovers at the end of the book. That means your enjoyment of this book is going to depend a lot on how much you like Orhan's voice. This mostly worked for me; his voice is an odd combination of chatty, self-deprecating, and brusque, and it took a bit for me to get used to it, but I came around. This book is clearly competence porn — nearly all the fun of this book is seeing what desperate plan Orhan will come up with next — so it helps that Orhan does indeed come across as competent.

The part that did not work for me was the morality. You would think from the title that would be straightforward: The City is under siege, people want to capture it and kill everyone, Orhan is on the inside, and his job is to keep them out. That would have been the morality of simplistic military fiction, but most of the appeal was in watching the problem-solving anyway.

That's how the story starts, but then Parker started dropping hints of more complexity. Orhan is a disfavored minority and the Robur who run the empire are racist assholes, even though Orhan mostly gets along with the ones who work with him closely. Orhan says a few things that make the reader wonder whether the City warrants defending, and it becomes less clear whether Orhan's loyalties were as solid as they appeared to be. Parker then offers a few moral dilemmas and has Orhan not follow them in the expected directions, making me wonder where Parker was going with the morality of this story.

And then we find out that the answer is nowhere. Parker is going nowhere. None of that setup has a payoff, and the ending is deeply unsatisfying and arguably pointless.

I am not sure this is an objective analysis. This is one of those books where I would not be surprised to see someone else praise its realism. Orhan is in some ways a more likely figure than the typical hero of a book. He likes accomplishing things, he's a cheat and a liar when that serves his purposes, he's loyal to the people he considers friends in a way that often doesn't involve consulting them about what they want, and he makes decisions mostly on vibes and stubbornness. Both his cynicism and his idealism are different types of masks; beneath both, he's an incoherent muddle. You could argue that we're all that sort of muddle, deep down, and the consistent idealists are the unrealistic (and frightening) ones, and I think Parker may be attempting exactly that argument. I know some readers like this sort of fallibly human incoherence.

But wow did I ever loathe this ending because I was not reading this book for a realistic psychological profile of an average guy. I was here for the competence porn, for the fantasy of logistics, for the experience of watching someone have a plan and get shit done. Apparently that extends to needing him to be competent at morality as well, or at least think about it as hard as he thinks about siege weapons.

One of the reasons why I am primarily a genre reader is that I don't read books for depressing psychological profiles. There are enough of those in the news. I read books to spend some time in a world better than mine, where things work out the way that they are supposed to, or at least in a way that's satisfying.

The other place where this book interfered with my vibes is that it's about a war, and a lot of Orhan's projects are finding more efficient ways to kill people. Parker takes a "war is hell" perspective, and Orhan gets deeply upset at the graphic sights of mangled human bodies that are the frequent results of his plans. I feel weird complaining about this because yes, it's good to be aware of the horrific things that we do to other people in wars, but man, I just wanted to watch some effective project management. I want to enjoy unexpected lateral thinking, appreciate the friendly psychological manipulation involved in getting a project to deliver on deadline, and watch someone solve logistical problems. Battlefields provide an endless supply of interesting challenges, but then Parker feels compelled to linger on the brutal consequences of Orhan's ideas and now I'm depressed and sickened rather than enjoying myself.

I really wanted to like this book, and for a lot of the book I did, but that ending was a bottomless pit that sucked away all my enjoyment and retroactively made the rest of the book feel worse. I so wanted Parker to be going somewhere clever and surprising, and the disappointment when none of that happened was intense. This is probably an excessively negative reaction, and I will not be surprised when other people get along with this book better than I did, but not only will I not be recommending it, I'm now rather dubious about reading any more Parker.

Followed by How to Rule an Empire and Get Away With It.

Rating: 5 out of 10

02 May, 2025 04:30AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Spending my Golden Week in boredom.

Spending my Golden Week in boredom. That's nice.

02 May, 2025 01:53AM by Junichi Uekawa

May 01, 2025

Ian Jackson

Free Software, internal politics, and governance

There is a thread of opinion in some Free Software communities, that we shouldn’t be doing “politics”, and instead should just focus on technology.

But that’s impossible. This approach is naive, harmful, and, ultimately, self-defeating, even on its own narrow terms.

Today I’m talking about small-p politics

In this article I’m using “politics” in the very wide sense: us humans managing our disagreements with each other.

I’m not going to talk about culture wars, woke, racism, trans rights, and so on. I am not going to talk about how Free Software has always had explicitly political goals; or how it’s impossible to be neutral because choosing not to take a stand is itself to take a stand.

Those issues are all important and Free Software definitely must engage with them. Many of the points I make are applicable there too. But those are not my focus today.

Today I’m talking in more general terms about politics, power, and governance.

Many people working together always entails politics

Computers are incredibly complicated nowadays. Making software is a joint enterprise. Even if an individual program has only a single maintainer, it fits into an ecosystem of other software, maintained by countless other developers. Larger projects can have thousands of maintainers and hundreds of thousands of contributors.

Humans don’t always agree about everything. This is natural. Indeed, it’s healthy: to write the best code, we need a wide range of knowledge and experience.

When we can’t come to agreement, we need a way to deal with that: a way that lets us still make progress, but also leaves us able to work together afterwards. A way that feels OK for everyone.

Providing a framework for disagreement is the job of a governance system. The rules say which people make which decisions, who must be consulted, how the decisions are made, and how, if at all, they can be reviewed.

This is all politics.

Consensus is great but always requiring it is harmful

Ideally a discussion will converge to a synthesis that satisfies everyone, or at least a consensus.

When consensus can’t be achieved, we can hope for compromise: something everyone can live with. Compromise is achieved through negotiation.

If every decision requires consensus, then the proponents of any wide-ranging improvement have an almost insurmountable hurdle: those who are favoured by the status quo and find it convenient can always object. So there will never be consensus for change. If there is any objection at all, no matter how ill-founded, the status quo will always win.

This is where governance comes in.

Governance is like backups: we need to practice it

Governance processes are the backstop for when discussions, and then negotiations, fail, and people still don’t see eye to eye.

In a healthy community, everyone needs to know how the governance works and what the rules are. The participants need to accept the system’s legitimacy. Everyone, including the losing side, must be prepared to accept and implement (or, at least not obstruct) whatever the decision is, and hopefully live with it and stay around.

That means we need to practice our governance processes. We can’t just leave them for the day we have a huge and controversial decision to make. If we do that, then when it comes to the crunch we’ll have toxic rows where no-one can agree the rules; where determined people bend the rules to fit their outcome; and where afterwards people feel like the whole thing was horrible and unfair.

So our decisionmaking bodies and roles need to be making decisions, as a matter of routine, and we need to get used to that.

First-line decisionmaking bodies should be making decisions frequently. Last-line appeal mechanisms (large-scale votes, for example) are naturally going to be exercised more rarely, but they must happen, be seen as legitimate, and their outcomes must be implemented in full.

Governance should usually be routine and boring

When governance is working well it’s quite boring.

People offer their input, and are heard. Angles are debated, and concerns are addressed. If agreement still isn’t reached, the committee, or elected leader, makes a decision.

Hopefully everyone thinks the leadership is legitimate, and that it properly considered and heard their arguments, and made the decision for good reasons.

Hopefully the losing side can still get their work done (and make their own computer work the way they want); so while they will be disappointed, they can live with the outcome.

Many human institutions manage this most of the time. It does take some knowledge about principles of governance, and ideally some experience.

Governance means deciding, not just mediating

By making decisions I mean exercising their authority to rule on an actual disagreement: one that wasn’t resolved by debate or negotiation. Governance processes by definition involve deciding, not just mediating. It’s not governance if we’re advising or cajoling: in that case, we’re back to demanding consensus. Governance is necessary precisely when consensus is not achieved.

If the governance systems are to mean anything, they must be able to (over)rule; that means (over)ruling must be normal and accepted.

Otherwise, when we need to overrule, we'll find that we can't, because we lack the collective practice.

To be legitimate (and seen as legitimate) decisions must usually be made based on the merits, not on participants’ status, and not only on process questions.

On the autonomy of the programmer

Many programmers seem to find the very concept of governance, and binding decisionmaking, deeply uncomfortable.

Ultimately, it means sometimes overruling someone’s technical decision. As programmers and maintainers we naturally see how this erodes our autonomy.

But we have all seen projects where the maintainers are unpleasant, obstinate, or destructive. We have all found this frustrating. Software is all interconnected, and one programmer’s bad decisions can cause problems for many of the rest of us. We exasperate, “why won’t they just do the right thing”. This is futile. People have never “just”ed and they’re not going to start “just”ing now. So often the boot is on the other foot.

More broadly, as software developers, we have a responsibility to our users, and a duty to write code that does good rather than ill in the world. We ought to be accountable. (And not just to capitalist bosses!)

Governance mechanisms are the answer.

(No, forking anything but the smallest project is very rarely a practical answer.)

Mitigate the consequences of decisions — retain flexibility

In software, it is often possible to soften the bad social effects of a controversial decision, by retaining flexibility. With a bit of extra work, we can often provide hooks, non-default configuration options, or plugin arrangements.

If we can convert the question from “how will the software always behave” into merely “what should the default be”, we can often save ourselves a lot of drama.

So it is often worth keeping even suboptimal or untidy features or options, if people want to use them and are willing to maintain them.

There is a tradeoff here, of course. But Free Software projects often significantly under-value the social benefits of keeping everyone happy. Wrestling software — even crusty or buggy software — is a lot more fun than having unpleasant arguments.

But don’t do decisionmaking like a corporation

Many programmers’ experience of formal decisionmaking is from their boss at work. But corporations are often a very bad example.

They typically don’t have as much trouble actually making decisions, but the actual decisions are often terrible, and not just because corporations’ goals are often bad.

You get to be a decisionmaker in a corporation by spouting plausible nonsense, sounding confident, buttering up the even-more-vacuous people further up the chain, and sometimes by sabotaging your rivals. Corporate senior managers are hardly ever held accountable — typically the effects of their tenure are only properly felt well after they’ve left to mess up somewhere else.

We should select our leaders more wisely, and base decisions on substance.

If you won’t do politics, politics will do you

As a participant in a project, or a society, you can of course opt out of getting involved in politics.

You can opt out of learning how to do politics generally, and opt out of understanding your project’s governance structures. You can opt out of making judgements about disputed questions, and tell yourself “there’s merit on both sides”.

You can hate politicians indiscriminately, and criticise anyone you see doing politics.

If you do this, then you are abdicating your decisionmaking authority, to those who are the most effective manipulators, or the most committed to getting their way. You’re tacitly supporting the existing power bases. You’re ceding power to the best liars, to those with the least scruples, and to the people who are most motivated by dominance. This is precisely the opposite of what you wanted.

If enough people won’t do politics, and hate anyone who does, your discussion spaces will be reduced to a battleground of only the hardiest and the most toxic.

If you don’t see the politics, it’s still happening

If your governance systems don’t work, then there is no effective redress against bad or even malicious decisions. Your roleholders and subteams are unaccountable power centres.

Power radically distorts every human relationship, and it takes great strength of character for an unaccountable power centre not to eventually become an unaccountable toxic cabal.

So if you have a reasonable sized community, but don’t see your formal governance systems working — people debating things, votes, leadership making explicit decisions — that doesn’t mean everything is fine, and all the decisions are great, and there’s no politics happening.

It just means that most of your community have given up on the official process. It also probably means that some parts of your project have formed toxic and unaccountable cabals. Those who won’t put up with that will leave.

The same is true if the only governance actions that ever happen are massive drama. That means that only the most determined victim of a bad decision, will even consider using such a process.

Conclusions

  • Respect and support the people who are trying to fix things with politics.

  • Be informed, and, where appropriate, involved.

  • If you are in a position of authority, be willing to exercise that authority. Do more than just mediating to try to get consensus.



comment count unavailable comments

01 May, 2025 10:15PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Local Voice Assistant Step 2: Speech to Text and back

Having set up an ATOM Echo Voice Satellite and hooked it up to Home Assistant we now need to actually do something with the captured audio. Home Assistant largely deals with voice assistants using the Wyoming Protocol, which describes itself as essentially JSONL + PCM audio. It works nicely in that everything can exist as separate modules that just communicate over network sockets, and there are a whole bunch of Python implementations of the necessary pieces.

The first bit I looked at was speech to text; how do I get what I say to the voice satellite into something that Home Assistant can try and parse? There is a nice self-contained speech recognition tool called whisper.cpp, which is a low-dependency implementation of inference using OpenAI's Whisper model. This is wrapped up for Wyoming as part of wyoming-whisper-cpp. Here we get into something that unfortunately seems common in this space: the repo contains a forked copy of whisper.cpp with enough differences that I couldn't trivially make it work with regular whisper.cpp. That means missing out on new development and potential improvements (the fork appears to be at v1.5.4, while upstream is at v1.7.5 at the time of writing). However it was possible to get up and running easily enough.

[I note there is a Wyoming Whisper API client that can use the whisper.cpp server, and that might be a cleaner way to go in the future, especially if whisper.cpp ends up in Debian.]

I stated previously that I wanted all of this to be as clean an install on Debian stable as possible. Given most of this isn't packaged, that's meant I've packaged things up as I go. I'm not at the stage where anything is suitable for upload to Debian proper, but equally I've tried to make them a reasonable starting point. No pre-built binaries are available, just Salsa git repos: https://salsa.debian.org/noodles/wyoming-whisper-cpp in this case. You need python3-wyoming from trixie if you're building for bookworm, but it doesn't need to be rebuilt.
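
As a rough sketch (not an officially documented procedure), building a local .deb from that Salsa repo could go something like this; I'm assuming the usual dpkg-buildpackage workflow and that the binary package name matches the source name:

# Grab the packaging repo mentioned above and build an unsigned .deb locally
git clone https://salsa.debian.org/noodles/wyoming-whisper-cpp.git
cd wyoming-whisper-cpp

# Install the declared build dependencies from the local debian/ directory,
# then build without signing
sudo apt-get build-dep ./
dpkg-buildpackage -us -uc

# The resulting package lands in the parent directory; the exact file name
# depends on the version that was built
sudo apt-get install ../wyoming-whisper-cpp_*.deb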

You need a Whisper model that's been converted to ggml format; they can be found on Hugging Face. I've ended up using the base.en model. In random testing I found small.en gave more accurate results but took a little longer; it doesn't seem to make much of a difference for voice control as opposed to plain transcription.
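
For example, the ggml conversions published alongside whisper.cpp on Hugging Face can be fetched with a plain download; the URL and the destination directory below are assumptions on my part, so adjust them to wherever your wyoming-whisper-cpp setup expects models:

# Assumed model source: the ggml conversions in the ggerganov/whisper.cpp space
MODEL_DIR=/var/lib/wyoming-whisper-cpp   # hypothetical destination, pick your own
sudo mkdir -p "$MODEL_DIR"
sudo wget -O "$MODEL_DIR/ggml-base.en.bin" \
    "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin"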

[One of the open questions about uploading this to Debian is around the use of a prebuilt AI model. I don’t know what the right answer is here, and whether the voice infrastructure could ever be part of Debian proper, but the current discussion on the interpretation of the DFSG on AI models is very relevant.]

I run this in the same container as my Home Assistant install, using a systemd unit file dropped in /etc/systemd/system/wyoming-whisper-cpp.service:

[Unit]
Description=Wyoming whisper.cpp server
After=network.target

[Service]
Type=simple
DynamicUser=yes
ExecStart=wyoming-whisper-cpp --uri tcp://localhost:10030 --model base.en

MemoryDenyWriteExecute=false
ProtectControlGroups=true
PrivateDevices=false
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

It needs the Wyoming Protocol integration enabled in Home Assistant; you can “Add Entry” and enter localhost + 10030 for host + port and it’ll get added. Then in the Voice Assistant configuration there’ll be a whisper.cpp option available.

Text to speech turns out to be weirdly harder. The right answer is something like Wyoming Piper, but that turns out to be hard on bookworm. I’ll come back to that in a future post. For now I took the easy option and used the built in “Google Translate” option in Home Assistant. That needed an extra stanza in configuration.yaml that wasn’t entirely obvious:

media_source:

With this, and the ATOM voice satellite, I could now do basic voice control of my Home Assistant setup, with everything except the text-to-speech piece happening locally! Things such as “Hey Jarvis, turn on the study light” work out of the box. I haven’t yet got into defining my own phrases, partly because I know some of the things I want (“What time is it?”) are already added in later Home Assistant versions than the one I’m running.

Overall I found this initially complicated to set up given my self-imposed constraints about actually understanding the building blocks and compiling them myself, but I've been pretty impressed with the work that's gone into it all. Next step: running a voice satellite on a Debian box.

01 May, 2025 06:05PM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities April 2025

Another short status update of what happened on my side last month. Notable might be the Cell Broadcast support for Qualcomm SoCs, the rest is smaller fixes and QoL improvements.

phosh

  • Fix splash spinner icon regression with newer GTK >= 3.24.49 (MR)
  • Update adaptive app list (MR)
  • Fix missing icon when editing folders (MR)
  • Use StartupWMClass for better app-id matching (MR)
  • Fix failing CI tests, fix inverted logic, and add tests (MR)
  • Fix a sporadic test failure (MR)
  • Add support for "do not disturb" by adding a status page to feedback quick settings (MR)
  • monitor: Don't track make/model (MR)
  • Wi-Fi status page: Correctly show tick mark with multiple access points (MR)
  • Avoid broken icon in polkit prompts (MR)
  • Lockscreen auth cleanups (MR)
  • Sync mobile data toggle to sim lock too (MR)
  • Don't let the OSD display cover whole output with a transparent window (MR)

phoc

  • Allow to specify listening socket (MR)
  • Continue to catch up with wlroots git (MR)
  • Disconnect input-method signals on destroy (MR)
  • Disconnect gtk-shell and output signals on destroy (MR)
  • Don't init decorations too early (MR)
  • Allow to disable XWayland on the command line (MR)

phosh-mobile-settings

  • Allow to set overview wallpaper (MR)
  • Ask for confirmation before resetting favorites (MR)
  • Add separate volume controls for notifications, multimedia and alerts (MR)
  • Tweak warnings (MR)

pfs

  • Fix build on a single CPU (MR)

feedbackd

  • Move to fdo (MR)
  • Allow to set media-role (MR)
  • Doc updates (MR)
  • Sort LEDs by "usefulness" (MR)
  • Ensure multicolor LEDs have multiple components (MR)
  • Add example wireplumber config (MR)

feedbackd-device-themes

  • Release 0.8.2
  • Move to fdo (MR)
  • Override notification-missed-generic on fajita (MR)
  • Run ci-fairy here too (MR)
  • fajita: Add notification-missed-generic (MR)

gmobile

  • Build Vala support (vapi files) too (MR)
  • Add support for timers that can take the system out of suspend (MR)

Debian

git-buildpackage

  • Don't suppress dch errors (MR)
  • Release 0.9.38

wlroots

  • Get text-input-v3 a bit more in line with other protocols (MR)

ModemManager

  • Cell broadcast support for QMI modems (MR)

Libqmi

  • QMI channel setting (MR)
  • Switch to gi-docgen (MR)
  • loc: Fix since annotations (MR)

gnome-clocks

  • Add wakeup timer to take device out of suspend (MR)

gnome-calls

  • CallBox: Switch between text entry (for SIP) and dialpad (MR)

qmi-parse-kernel-dump

  • Allow to filter on message types and some other small improvements (MR)

xwayland-run

  • Support phoc (MR)

osmo-cbc

  • Small error handling improvements to osmo-cbc (MR)

phosh-nightly

  • Handle feedbackd fdo move (MR)

Blog posts

Bugs

  • Resuming of video streams fails with newer gstreamer (MR)

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 May, 2025 12:30PM

Paul Wise

FLOSS Activities April 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Patches: notmuch-mutt patchset

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

01 May, 2025 04:00AM

Russ Allbery

Review: Beyond Pain

Review: Beyond Pain, by Kit Rocha

Series: Beyond #3
Publisher: Kit Rocha
Copyright: December 2013
ASIN: B00GIA4GN8
Format: Kindle
Pages: 328

Beyond Pain is a science fiction dystopian erotic romance novel and a direct sequel to Beyond Control. Following the romance series convention, each book features new protagonists who were supporting characters in the previous book. You could probably start here if you wanted, but there are significant spoilers here for earlier books in the series. I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for.

Six has had a brutally hard life. She was rescued from an awful situation in a previous book and is now lurking around the edges of the Sector Four gang, oddly fascinated (as are we all) with their constant sexuality and trying to decide if she wants to, and can, be part of their world. Bren is one of the few people she lets get close: a huge bruiser who likes cage fights and pain but treats Six with a protective, careful respect that she finds comforting. This book is the story of Six and Bren getting to the bottom of each other's psychological hangups while the O'Kanes start taking over Six's former sector.

Yes, as threatened, I read another entry in the dystopian erotica series because I keep wondering how these people will fuck their way into a revolution. This is not happening very quickly, but it seems obvious that is the direction the series is going.

It's been a while since I've reviewed one of these, so here's another variation of the massive disclaimer: I think erotica is harder to review than any other genre because what people like is so intensely personal and individual. This is not even an attempt at an erotica review. I'm both wholly unqualified and also less interested in that part of the book, which should lead you to question my reading choices since that's a good half of the book.

Rather, I'm reading these somewhat for the plot and mostly for the vibes. This is not the most competent collection of individuals, and to the extent that they are, it's mostly because the men (who are, as a rule, charismatic but rather dim) are willing to listen to the women. What they are good at is communication, or rather, they're good about banging their heads (and other parts) against communication barriers until they figure out a way around them. Part of this is an obsession with consent that goes quite a bit deeper than the normal simplistic treatment. When you spend this much time trying to understand what other people want, you have to spend a lot of time communicating about sex, and in these books that means spending a lot of time communicating about everything else as well.

They are also obsessively loyal and understand the merits of both collective action and in making space for people to do the things that they are the best at, while still insisting that people contribute when they can. On the surface, the O'Kanes are a dictatorship, but they're run more like a high-functioning collaboration. Dallas leads because Dallas is good at playing the role of leader (and listening to Lex), which is refreshingly contrary to how things work in the real world right now.

I want to be clear that not only is this erotica, this is not the sort of erotica where there's a stand-alone plot that is periodically interrupted by vaguely-motivated sex scenes that you can skim past. These people use sex to communicate, and therefore most of the important exchanges in the book are in the middle of a sex scene. This is going to make this novel, and this series, very much not to the taste of a lot of people, and I cannot be emphatic enough about that warning.

But, also, this is such a fascinating inversion. It's common in media for the surface plot of the story to be full of sexual tension, sometimes to the extent that the story is just a metaphor for the sex that the characters want to have. This is the exact opposite of that: The sex is a metaphor for everything else that's going on in the story. These people quite literally fuck their way out of their communication problems, and not in an obvious or cringy way. It's weirdly fascinating?

It's also possible that my reaction to this series is so unusual as to not be shared by a single other reader.

Anyway, the setup in this story is that Six has major trust issues and Bren is slowly and carefully trying to win her trust. It's a classic hurt/comfort setup, and if that had played out in the way that this story often does, Bren would have taken the role of the gentle hero and Six the role of the person he rescued. That is not at all where this story goes. Six doesn't need comfort; Six needs self-confidence and the ability to demand what she wants, and although the way Beyond Pain gets her there is a little ham-handed, it mostly worked for me. As with Beyond Shame, I felt like the moral of the story is that the O'Kane men are just bright enough to stop doing stupid things at the last possible moment. I think Beyond Pain worked a bit better than the previous book because Bren is not quite as dim as Dallas, so the reader doesn't have to suffer through quite as many stupid decisions.

The erotica continues to mostly (although not entirely) follow traditional gender roles, with dangerous men and women who like attention. Presumably most people are reading these books for the sex, which I am wholly unqualified to review. For whatever it's worth, the physical descriptions are too mechanical for me, too obsessed with the precise structural assemblage of parts in novel configurations. I am not recommending (or disrecommending) these books, for a whole host of reasons. But I think the authors deserve to be rewarded for understanding that sex can be communication and that good communication about difficult topics is inherently interesting in a way that (at least for me) transcends the erotica.

I bet I'm going to pick up another one of these about a year from now because I'm still thinking about these people and am still curious about how they are going to succeed.

Followed by Beyond Temptation, an interstitial novella. The next novel is Beyond Jealousy.

Rating: 6 out of 10

01 May, 2025 03:46AM

April 30, 2025

Russell Coker

Simon Josefsson

Building Debian in a GitLab Pipeline

After thinking about multi-stage Debian rebuilds I wanted to implement the idea. Recall my illustration:

Earlier I rebuilt all packages that make up the difference between Ubuntu and Trisquel. It turned out that 42% of them were bit-by-bit identical. To check the generality of my approach, I rebuilt the difference between Debian and Devuan too. That was the debdistreproduce project. It "only" had to orchestrate building up to around 500 packages for each distribution and per architecture.

Differential reproducible rebuilds don't give you the full picture: they ignore the packages shared between the distributions, which make up over 90% of the packages. So I felt a desire to do full archive rebuilds. The motivation is that in order to trust Trisquel binary packages, I need to trust Ubuntu binary packages (because they make up 90% of the Trisquel packages), and many of those Ubuntu binaries are derived from Debian source packages. How to approach all of this? Last year I created the debdistrebuild project, and did top-50 popcon package rebuilds of Debian bullseye, bookworm, trixie, and Ubuntu noble and jammy, on a mix of amd64 and arm64. The amount of reproducibility was lower. Primarily the differences were caused by using different build inputs.

Last year I spent (too much) time creating a mirror of snapshot.debian.org, to be able to have older packages available for use as build inputs. I have two copies hosted at different datacentres for reliability and archival safety. At the time, snapshot.d.o had serious rate-limiting making it pretty unusable for massive rebuild usage or even basic downloads. Watching the multi-month download complete last year had a meditative effect. The completion of my snapshot download coincided with me realizing something about the nature of rebuilding packages. Let me give a recap of the idempotent rebuilds idea below, because it motivates my work to build all of Debian from a GitLab pipeline.

One purpose for my effort is to be able to trust the binaries that I use on my laptop. I believe that without building binaries from source code, there is no practically feasible way to trust binaries. To trust any binary you receive, you can disassemble the bits and audit the assembler instructions for the CPU you will execute it on. Doing that at an OS-wide level is impractical. A more practical approach is to audit the source code, and then confirm that the binary is 100% bit-by-bit identical to one that you can build yourself (from the same source) on your own trusted toolchain. This is similar to a reproducible build.

My initial goal with debdistrebuild was to get to 100% bit-by-bit identical rebuilds, and then I would have trustworthy binaries. Or so I thought. This also appears to be the goal of reproduce.debian.net. They want to reproduce the official Debian binaries. That is a worthy and important goal. They achieve this by building packages using the build inputs that were used to build the binaries. The build inputs are earlier versions of Debian packages (not necessarily from any public Debian release), archived at snapshot.debian.org.

I realized that these rebuilds would not be sufficient for me: they don't solve the problem of how to trust the toolchain. Let's assume the reproduce.debian.net effort succeeds and is able to 100% bit-by-bit identically reproduce the official Debian binaries. Which appears to be within reach. To have trusted binaries we would "only" have to audit the source code for the latest version of the packages AND audit the tool chain used. There is no escaping from auditing all the source code — that's what I think we all would prefer to focus on, to be able to improve upstream source code.

The trouble is auditing the tool chain. With the Reproduce.debian.net approach, that is a recursive problem back to really ancient Debian packages, some of which may no longer build or work, or even be legally distributable. Auditing all those old packages is a LARGER effort than auditing all current packages! Auditing old packages is also of less use for making contributions: those releases are old, and chances are any improvements have already been implemented and released, or are no longer applicable because the projects have evolved since the earlier version.

See where this is going now? I reached the conclusion that reproducing official binaries using the same build inputs is not what I'm interested in. I want to be able to build the binaries that I use from source using a toolchain that I can also build from source. And preferably all of this should use the latest version of all packages, so that I can contribute and send patches for them, to improve matters.

The toolchain that Reproduce.Debian.Net is using is not trustworthy unless all those ancient packages are audited or rebuilt bit-by-bit identically, and I don't see any practical way forward to achieve that goal. Nor have I seen anyone working on that problem. It is possible to do, but I think there are simpler ways to achieve the same goal.

My approach to reach trusted binaries on my laptop appears to be a three-step effort:

  • Encourage an idempotently rebuildable Debian archive, i.e., a Debian archive that can be 100% bit-by-bit identically rebuilt using Debian itself.
  • Construct a smaller number of binary *.deb packages based on Guix binaries that when used as build inputs (potentially iteratively) leads to 100% bit-by-bit identical packages as in step 1.
  • Encourage a freedom respecting distribution, similar to Trisquel, from this idempotently rebuildable Debian.

How to go about achieving this? Today's Debian build architecture is something that lacks transparency and end-user control. The build environment and signing keys are managed by, or influenced by, unidentified people following undocumented (or at least not public) security procedures, under unknown legal jurisdictions. I always wondered why none of the Debian derivatives has adopted a modern GitDevOps-style approach as a method to improve binary build transparency; maybe I missed some project?

If you want to contribute to some GitHub or GitLab project, you click the 'Fork' button and get a CI/CD pipeline running which rebuilds artifacts for the project. This makes it easy for people to contribute, and you get good QA control because the entire chain up to the released artifacts is produced and tested. At least in theory. Many projects are behind on this, but it seems like a useful goal for all projects. This is also liberating: all users are able to reproduce artifacts. There is no longer any magic involved in preparing release artifacts. As we've seen with many software supply-chain security incidents over the past years, wherever "magic" is involved is a good place to introduce malicious code.

To allow me to continue with my experiment, I thought the simplest way forward was to setup a GitDevOps-centric and user-controllable way to build the entire Debian archive. Let me introduce the debdistbuild project.

Debdistbuild is a re-usable GitLab CI/CD pipeline, similar to the Salsa CI pipeline. It provides one "build" job definition and one "deploy" job definition. The pipeline can run on GitLab.org Shared Runners or you can set up your own runners, like my GitLab riscv64 runner setup. I have concerns about relying on GitLab (both as software and as a service), but my ideas are easy to transfer to some other GitDevSecOps setup such as Codeberg.org. Self-hosting GitLab, including self-hosted runners, is common today, and Debian relies increasingly on Salsa for this. All of the build infrastructure could be hosted on Salsa eventually.

The build job is simple. From within an official Debian container image, it builds packages using dpkg-buildpackage, essentially by invoking the following commands.

# Enable source repositories and bring the container up to date
sed -i 's/ deb$/ deb deb-src/' /etc/apt/sources.list.d/*.sources
apt-get -o Acquire::Check-Valid-Until=false update
apt-get dist-upgrade -q -y
apt-get install -q -y --no-install-recommends build-essential fakeroot
# Install the build dependencies of the package to be built
env DEBIAN_FRONTEND=noninteractive \
    apt-get build-dep -y --only-source $PACKAGE=$VERSION
# Build as an unprivileged user inside the reproducible build path
useradd -m build
DDB_BUILDDIR=/build/reproducible-path
mkdir -p $DDB_BUILDDIR
chgrp build $DDB_BUILDDIR
chmod g+w $DDB_BUILDDIR
cd $DDB_BUILDDIR
su build -c "apt-get source --only-source $PACKAGE=$VERSION" > ../${PACKAGE}_${VERSION}.build
cd $DDB_BUILDDIR/$PACKAGE-*
su build -c "dpkg-buildpackage"
cd ..
# Collect the produced artifacts
mkdir out
mv -v $(find $DDB_BUILDDIR -maxdepth 1 -type f) out/

The deploy job is also simple. It commits artifacts to a Git project, using Git-LFS to handle large objects, essentially something like this:

if ! grep -q '^pool/**' .gitattributes; then
    git lfs track 'pool/**'
    git add .gitattributes
    git commit -m"Track pool/* with Git-LFS." .gitattributes
fi
# Debian pool layout: lib* packages use a four character prefix (e.g. libf), others just the first letter
POOLDIR=$(if test "$(echo "$PACKAGE" | cut -c1-3)" = "lib"; then C=4; else C=1; fi; echo "$PACKAGE" | cut -c1-$C)
mkdir -pv pool/main/$POOLDIR/
rm -rfv pool/main/$POOLDIR/$PACKAGE
mv -v out pool/main/$POOLDIR/$PACKAGE
git add pool
git commit -m"Add $PACKAGE." -m "$CI_JOB_URL" -m "$VERSION" -a
if test "${DDB_GIT_TOKEN:-}" = ""; then
    echo "SKIP: Skipping git push due to missing DDB_GIT_TOKEN (see README)."
else
    git push -o ci.skip
fi

That’s it! The actual implementation is a bit longer, but the major differences are in log and error handling.

You may review the source code of the base Debdistbuild pipeline definition, the base Debdistbuild script and the rc.d/-style scripts implementing the build.d/ process and the deploy.d/ commands.

There was one complication related to artifact size. GitLab.org job artifacts are limited to 1GB. Several packages in Debian produce artifacts larger than this. What to do? GitLab supports up to 5GB for files stored in its package registry, but this limit is too close for my comfort, having seen some multi-GB artifacts already. I made the build job optionally upload artifacts to an S3 bucket using a SHA256-hashed file hierarchy. I’m using Hetzner Object Storage but there are many S3 providers around, including self-hosting options. This hierarchy is compatible with the Git-LFS .git/lfs/objects/ hierarchy, and it is easy to set up a separate Git-LFS object URL to allow Git-LFS object downloads from the S3 bucket. In this mode, only Git-LFS stubs are pushed to the git repository. Git-LFS should have no trouble handling the large number of files, since I have earlier experience with Apt mirrors in Git-LFS.
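
To illustrate the layout, here is a minimal sketch of uploading one artifact into such a hierarchy; the aws CLI and the S3_ENDPOINT/S3_BUCKET values are placeholders of mine, not the actual Debdistbuild code:

#!/bin/sh
# Sketch: store a file under <first 2 hex chars>/<next 2 hex chars>/<full sha256>,
# mirroring the local .git/lfs/objects/ layout used by Git-LFS.
FILE="$1"
OID=$(sha256sum "$FILE" | cut -d' ' -f1)
P1=$(echo "$OID" | cut -c1-2)
P2=$(echo "$OID" | cut -c3-4)
# S3_ENDPOINT and S3_BUCKET are placeholder environment variables
aws --endpoint-url "$S3_ENDPOINT" s3 cp "$FILE" "s3://$S3_BUCKET/$P1/$P2/$OID"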

To speed up job execution, and to guarantee a stable build environment, instead of installing build-essential packages on every build job execution, I prepare some build container images. The project responsible for this is tentatively called stage-N-containers. Right now it creates containers suitable for rolling builds of trixie on amd64, arm64, and riscv64, and a container intended for use as the stage-0, based on the 20250407 docker images of bookworm on amd64 and arm64 using the snapshot.d.o 20250407 archive. Or actually, I’m using snapshot-cloudflare.d.o because of download speed and reliability. I would have preferred to use my own snapshot mirror with Hetzner bandwidth, alas the Debian snapshot team has concerns about me publishing the list of (SHA1 hash) filenames publicly and I haven’t bothered to set up non-public access.

Debdistbuild has built around 2,500 packages for bookworm on amd64 and bookworm on arm64. To confirm the generality of my approach, it also builds trixie on amd64, trixie on arm64 and trixie on riscv64. The riscv64 builds all run on my own hosted runners. For amd64 and arm64 my own runners are only used for large packages where the GitLab.com shared runners run into the 3 hour time limit.

What’s next in this venture? Some ideas include:

  • Optimize the stage-N build process by identifying the transitive closure of build dependencies from some initial set of packages.
  • Create a build orchestrator that launches pipelines based on the previous list of packages, as needed to fill the archive with the required packages. Currently I’m using a basic /bin/sh for loop around curl to trigger GitLab CI/CD pipelines with names derived from https://popcon.debian.org/ (a rough sketch follows after this list).
  • Create and publish a dists/ sub-directory, so that it is possible to use the newly built packages in the stage-1 build phase.
  • Produce diffoscope-style differences of built packages, both stage0 against official binaries and between stage0 and stage1.
  • Create the stage-1 build containers and stage-1 archive.
  • Review build failures. On amd64 and arm64 the list is small (below 10 out of ~5000 builds), but on riscv64 there is an icache-related problem that affects the Java JVM and triggers build failures.
  • Provide GitLab pipeline based builds of the Debian docker container images, cloud-images, debian-live CD and debian-installer ISO’s.
  • Provide integration with Sigstore and Sigsum for signing of Debian binaries with transparency-safe properties.
  • Implement a simple replacement for dpkg and apt using /bin/sh, for use during bootstrapping when neither packaging tool is available.

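For reference, the /bin/sh loop around curl mentioned above could look roughly like the following, using the GitLab pipeline trigger API; the project ID, trigger token, ref and variable name are placeholders rather than the actual Debdistbuild setup:

#!/bin/sh
# Sketch: trigger one GitLab CI/CD pipeline per package name read from a file.
PROJECT_ID=12345678   # placeholder GitLab project ID
while read -r PACKAGE; do
  curl -s -X POST \
    -F "token=$TRIGGER_TOKEN" \
    -F "ref=main" \
    -F "variables[PACKAGE]=$PACKAGE" \
    "https://gitlab.com/api/v4/projects/$PROJECT_ID/trigger/pipeline"
done < popcon-packages.txt
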
What do you think?

30 April, 2025 09:25AM by simon

Utkarsh Gupta

FOSS Activities in April 2025

Here’s my 67th monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 76th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I do, both, technical and non-technical. Here’s what I did:

  • Updating Matomo to v5.3.1.
  • Lots of bursary stuff for DC25. We rolled out the results for the first batch.
  • Helping Andreas Tille with and around FTP team bits.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 51st month of actively contributing to Ubuntu. I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did (there’s so much and some of it might not be public…yet!), here’s a quick TL;DR of what I did:

  • Released 25.04 Plucky Puffin! \o/
  • Helped open the 25.10 Questing Quokka archive. Let the development begin!
  • Jon, VP of Engineering, asked me to lead the Canonical Release team - that was definitely not something I saw coming. :)
  • We’re now doing Ubuntu monthly releases for the devel releases - I’ll be the tech lead for the project.
  • Preparing for the May sprints - too many new things and new responsibilities. :)

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support).

This was my 67th month as a Debian LTS and 54th month as a Debian ELTS paid contributor.
Due to DC25 bursary work, Ubuntu 25.04 release, and other travel bits, I only worked for 2.00 hours for LTS and 4.50 hours for ELTS.

I did the following things:

  • [ELTS] Had already backported patches for adminer for the following CVEs:
    • CVE-2023-45195: an SSRF attack.
    • CVE-2023-45196: a denial of service attack.
    • Salsa repository: https://salsa.debian.org/lts-team/packages/adminer.
    • As the same CVEs also affect LTS, we decided to release for LTS first and then for ELTS, but since I had no hours for LTS, I decided to do a bit more testing for ELTS to make sure things don’t regress in buster.
    • Will prepare LTS (and also s-p-u, sigh) updates this month and get back to ELTS thereafter.
  • [LTS] Started to prepare the LTS update for adminer for the same CVEs as for ELTS:
    • CVE-2023-45195: an SSRF attack.
    • CVE-2023-45196: a denial of service attack.
    • Haven’t fully backported the patch yet but this is what I intend to do for this month (now that I have hours :D).
  • [LTS] Partially attended the LTS meeting on Jitsi. Summary here.
    • “Partially” because I was fighting SSO auth issues with Jitsi. Looks like there were some upstream issues/activity and it was resulting in gateway crashes but all good now.
    • I was following the running notes and keeping up with things as much as I could. :)

Until next time.
:wq for today.

30 April, 2025 05:41AM

April 29, 2025

Petter Reinholdtsen

OpenSnitch 1.6.8 is now in Trixie

After some days of effort, I am happy to report that the great interactive application firewall OpenSnitch got a new version into Trixie, now with the Linux kernel based eBPF sniffer included for better accuracy. This new version made it possible for me to finally track down the rule required to avoid a deadlock when using it on a machine with the user home directory on NFS. The problematic connection originated from the Linux kernel itself, causing the /proc based version in Debian 12 to fail to properly attribute the connection and the OpenSnitch daemon to block while waiting for the Python GUI, which in turn was unable to continue because the home directory was blocked waiting for the OpenSnitch daemon. A classic deadlock, reported upstream so a more permanent solution can be found.

I really love the control over all the programs and web pages calling home that OpenSnitch gives me. Just today I discovered a strange connection to sb-ssl.google.com when I pulled up a PDF passed on to me via a Mattermost installation. It is sometimes hard to know which connections to block and which to let through, but after running it for a few months, the default rule set starts to handle most regular network traffic and I only have to have a look at the more unusual connections.

If you would like to know more about what your machine’s programs are doing, install OpenSnitch today. It is only an apt install opensnitch away. :)

I hope to get the 1.6.9 version in experimental into Trixie before the archive enters hard freeze. This new version should have no relevant changes not already in the 1.6.8-11 edition, as it mostly contains Debian patches, but I will give it a few days of testing to see if there are any surprises. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

29 April, 2025 02:30PM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Freexian partners with Invisible Things Lab to extend security support for Xen hypervisor

Freexian is pleased to announce a partnership with Invisible Things Lab to extend the security support of the Xen type-1 hypervisor version 4.17. Three years after its initial release, Xen 4.17, the version available in Debian 12 “bookworm”, will reach end-of-security-support status upstream in December 2025. The aim of our partnership with Invisible Things is to extend the security support until, at least, July 2027. We may also explore the possibility of extending the support until June 2028, to coincide with the end of the Debian 12 LTS support period.

From Debian 8 “jessie” until Debian 11 “bullseye”, the security support of Xen in Debian ended before the end of each release’s life cycle. We therefore aim to significantly improve the situation of Xen in Debian 12. As with similar efforts, we would like to mention that this is an experiment and that we will do our best to make it a success. We also aim to extend the security support for Xen versions included in future Debian releases, including Debian 13 “trixie”.

In the long term, we hope that this effort will ultimately allow the Xen Project to increase the official security support period for Xen releases from the current three years to at least five years, with the extra work being funded by the community of companies benefiting from the longer support period.

If your company relies on Xen and wants to help sustain LTS versions of Xen, please reach out to us. For companies using Debian, the simplest way is to subscribe to Freexian’s Debian LTS offer at a gold level (or above) and let us know that you want to contribute to Xen LTS when you send in your subscription form. For others, please reach out to us at sales@freexian.com and we will figure out a way to help you contribute.

In the mean time, this initiative has been made possible thanks to the current LTS sponsors and ELTS customers. We hope the entire community of Debian and Xen users will benefit from this initiative.

For any queries you might have, please don’t hesitate to contact us at sales@freexian.com.

About Invisible Things Lab

Invisible Things Lab (ITL) offers low-level security consulting and auditing services for x86 virtualization technologies; C, C++, and assembly codebases; Intel SGX; binary exploitation and mitigations; and more. ITL also specializes in Qubes OS and Gramine consulting, including deployment, debugging, and feature development.

29 April, 2025 12:00AM

April 28, 2025

Scarlett Gately Moore

KDE Snaps and life. Spirits are up, but I need a little help please

I was just released from the hospital after a 3 day stay for my ( hopefully ) last surgery. There was concern with massive blood loss and low heart rate. I have stabilized and have come home. Unfortunately, they had to prescribe many medications this round and they are extremely expensive and used up all my funds. I need gas money to get to my post-op doctors appointments, and food would be cool. I would appreciate any help, even just a dollar!

I am already back to work, and have continued work on the crashy KDE snaps in a non-KDE environment (this also affects anyone using kde-neon extensions, such as FreeCAD). I hope to have a fix in the next day or so.

Fixed kate bug https://bugs.kde.org/show_bug.cgi?id=503285

Thanks for stopping by.

28 April, 2025 01:04PM by sgmoore

hackergotchi for Sergio Talens-Oliag

Sergio Talens-Oliag

ArgoCD Autopilot

For a long time I’ve been wanting to try GitOps tools, but I haven’t had the chance to try them for real on the projects I was working on.

As I now have some spare time I’ve decided to play a little with Argo CD, Flux and Kluctl to test them and be able to use one of them in a real project in the future if it looks appropriate.

In this post I will use Argo-CD Autopilot to install argocd on a local k3d cluster created using OpenTofu, to test the autopilot approach to managing argocd and to evaluate the tool (as it manages argocd using a git repository, it can be used to test argocd as well).

Installing tools locally with arkade

Recently I’ve been using the arkade tool to install kubernetes related applications on Linux servers and containers; I usually get the applications with it and install them into the /usr/local/bin folder.

For this post I’ve created a simple script that checks if the tools I’ll be using are available and installs them on the $HOME/.arkade/bin folder if missing (I’m assuming that docker is already available, as it is not installable with arkade):

#!/bin/sh

# TOOLS LIST
ARKADE_APPS="argocd argocd-autopilot k3d kubectl sops tofu"

# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Install or update arkade
if command -v arkade >/dev/null; then
  echo "Trying to update the arkade application"
  sudo arkade update
else
  echo "Installing the arkade application"
  curl -sLS https://get.arkade.dev | sudo sh
fi

echo ""
echo "Installing tools with arkade"
echo ""
for app in $ARKADE_APPS; do
  app_path="$(command -v $app)" || true
  if [ "$app_path" ]; then
    echo "The application '$app' already available on '$app_path'"
  else
    arkade get "$app"
  fi
done

cat <<EOF

Add the ~/.arkade/bin directory to your PATH if tools have been installed there

EOF

The rest of the scripts add the binary directory to the PATH if it is missing, to make sure things work if something was installed there.

Creating a k3d cluster with opentofu

Although using k3d directly would be a good choice for creating the cluster, I’m using tofu to do it because that would probably be the tool used if we were working with cloud platforms like AWS or Google.

The main.tf file is as follows:

terraform {
  required_providers {
    k3d = {
      source  = "moio/k3d"
      version = "0.0.12"
    }
    sops = {
      source = "carlpett/sops"
      version = "1.2.0"
    }
  }
}

data "sops_file" "secrets" {
    source_file = "secrets.yaml"
}

resource "k3d_cluster" "argocd_cluster" {
  name    = "argocd"
  servers = 1
  agents  = 2

  image   = "rancher/k3s:v1.31.5-k3s1"
  network = "argocd"
  token   = data.sops_file.secrets.data["token"]

  port {
    host_port      = 8443
    container_port = 443
    node_filters = [
      "loadbalancer",
    ]
  }

  k3d {
    disable_load_balancer     = false
    disable_image_volume      = false
  }

  kubeconfig {
    update_default_kubeconfig = true
    switch_current_context    = true
  }

  runtime {
    gpu_request = "all"
  }
}

The k3d configuration is quite simple. As I plan to use the default traefik ingress controller with TLS, I publish the cluster’s port 443 on the host’s port 8443; I’ll explain how I add a valid certificate in the next step.

I’ve prepared the following script to initialize and apply the changes:

#!/bin/sh

set -e

# VARIABLES
# Default token for the argocd cluster
K3D_CLUSTER_TOKEN="argocdToken"
# Relative PATH to install the k3d cluster using terraform
K3D_TF_RELPATH="k3d-tf"
# Secrets yaml file
SECRETS_YAML="secrets.yaml"
# Relative PATH to the workdir from the script directory
WORK_DIR_RELPATH=".."

# Compute WORKDIR
SCRIPT="$(readlink -f "$0")"
SCRIPT_DIR="$(dirname "$SCRIPT")"
WORK_DIR="$(readlink -f "$SCRIPT_DIR/$WORK_DIR_RELPATH")"

# Update the PATH to add the arkade bin directory
# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Go to the k3d-tf dir
cd "$WORK_DIR/$K3D_TF_RELPATH" || exit 1

# Create secrets.yaml file and encode it with sops if missing
if [ ! -f "$SECRETS_YAML" ]; then
  echo "token: $K3D_CLUSTER_TOKEN" >"$SECRETS_YAML"
  sops encrypt -i "$SECRETS_YAML"
fi

# Initialize terraform
tofu init

# Apply the configuration
tofu apply

Adding a wildcard certificate to the k3d ingress

As an optional step, after creating the k3d cluster I’m going to add a default wildcard certificate for the traefik ingress server to be able to use everything with HTTPS without certificate issues.

As I manage my own DNS domain I’ve created the lo.mixinet.net and *.lo.mixinet.net DNS entries on my public and private DNS servers (both return 127.0.0.1 and ::1) and I’ve created a TLS certificate for both entries using Let’s Encrypt with Certbot.
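
For reference, a wildcard certificate like this can be requested with certbot using a DNS-01 challenge; a minimal sketch (my actual setup may use a DNS plugin to automate the challenge, so the exact invocation can differ) looks like this:

certbot certonly --manual --preferred-challenges dns \
  -d lo.mixinet.net -d '*.lo.mixinet.net'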

The certificate is updated automatically on one of my servers and when I need it I copy the contents of the fullchain.pem and privkey.pem files from the /etc/letsencrypt/live/lo.mixinet.net server directory to the local files lo.mixinet.net.crt and lo.mixinet.net.key.
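
Copying them boils down to something like the following (the server hostname is a placeholder, and the private key is usually only readable by root):

scp root@certbot-server:/etc/letsencrypt/live/lo.mixinet.net/fullchain.pem lo.mixinet.net.crt
scp root@certbot-server:/etc/letsencrypt/live/lo.mixinet.net/privkey.pem lo.mixinet.net.key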

After copying the files I run the following script to install or update the certificate and configure it as the default for traefik:

#!/bin/sh
# Script to install or update the default TLS certificate used by traefik
secret="lo-mixinet-net-ingress-cert"
cert="${1:-lo.mixinet.net.crt}"
key="${2:-lo.mixinet.net.key}"
if [ -f "$cert" ] && [ -f "$key" ]; then
  kubectl -n kube-system create secret tls $secret \
    --key=$key \
    --cert=$cert \
    --dry-run=client --save-config -o yaml  | kubectl apply -f -
  kubectl apply -f - << EOF
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
  namespace: kube-system

spec:
  defaultCertificate:
    secretName: $secret
EOF
else
  cat <<EOF
To add or update the traefik TLS certificate the following files are needed:

- cert: '$cert'
- key: '$key'

Note: you can pass the paths as arguments to this script.
EOF
fi

Once it is installed, if I connect to https://foo.lo.mixinet.net:8443/ I get a 404, but the certificate is valid.
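
That can also be verified from the command line; without the right certificate curl would abort with a TLS error instead of returning the 404 headers (any host name under the wildcard works):

❯ curl -I https://foo.lo.mixinet.net:8443/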

Installing argocd with argocd-autopilot

Creating a repository and a token for autopilot

I’ll be using a project on my forgejo instance to manage argocd; the repository I’ve created is at the URL https://forgejo.mixinet.net/blogops/argocd and I’ve created a private user named argocd that only has write access to that repository.

Logging in as the argocd user on forgejo, I’ve created a token with permission to read and write repositories, which I’ve saved in my pass password store under the mixinet.net/argocd@forgejo/repository-write entry.

Bootstrapping the installation

To bootstrap the installation I’ve used the following script (it uses the previous GIT_REPO and GIT_TOKEN values):

#!/bin/sh

set -e

# VARIABLES
# Relative PATH to the workdir from the script directory
WORK_DIR_RELPATH=".."

# Compute WORKDIR
SCRIPT="$(readlink -f "$0")"
SCRIPT_DIR="$(dirname "$SCRIPT")"
WORK_DIR="$(readlink -f "$SCRIPT_DIR/$WORK_DIR_RELPATH")"

# Update the PATH to add the arkade bin directory
# Add the arkade binary directory to the path if missing
case ":${PATH}:" in
  *:"${HOME}/.arkade/bin":*) ;;
  *) export PATH="${PATH}:${HOME}/.arkade/bin" ;;
esac

# Go to the working directory
cd "$WORK_DIR" || exit 1

# Set GIT variables
if [ -z "$GIT_REPO" ]; then
  export GIT_REPO="https://forgejo.mixinet.net/blogops/argocd.git"
fi
if [ -z "$GIT_TOKEN" ]; then
  GIT_TOKEN="$(pass mixinet.net/argocd@forgejo/repository-write)"
  export GIT_TOKEN
fi

argocd-autopilot repo bootstrap --provider gitea

The output of the execution is as follows:

❯ bin/argocd-bootstrap.sh
INFO cloning repo: https://forgejo.mixinet.net/blogops/argocd.git
INFO empty repository, initializing a new one with specified remote
INFO using revision: "", installation path: ""
INFO using context: "k3d-argocd", namespace: "argocd"
INFO applying bootstrap manifests to cluster...
namespace/argocd created
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrole.rbac.authorization.k8s.io/argocd-server created
rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
rolebinding.rbac.authorization.k8s.io/argocd-redis created
rolebinding.rbac.authorization.k8s.io/argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
configmap/argocd-cm created
configmap/argocd-cmd-params-cm created
configmap/argocd-gpg-keys-cm created
configmap/argocd-notifications-cm created
configmap/argocd-rbac-cm created
configmap/argocd-ssh-known-hosts-cm created
configmap/argocd-tls-certs-cm created
secret/argocd-notifications-secret created
secret/argocd-secret created
service/argocd-applicationset-controller created
service/argocd-dex-server created
service/argocd-metrics created
service/argocd-notifications-controller-metrics created
service/argocd-redis created
service/argocd-repo-server created
service/argocd-server created
service/argocd-server-metrics created
deployment.apps/argocd-applicationset-controller created
deployment.apps/argocd-dex-server created
deployment.apps/argocd-notifications-controller created
deployment.apps/argocd-redis created
deployment.apps/argocd-repo-server created
deployment.apps/argocd-server created
statefulset.apps/argocd-application-controller created
networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-redis-network-policy created
networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
networkpolicy.networking.k8s.io/argocd-server-network-policy created
secret/autopilot-secret created

INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
INFO pushing bootstrap manifests to repo
INFO applying argo-cd bootstrap application
application.argoproj.io/autopilot-bootstrap created
INFO running argocd login to initialize argocd config
Context 'autopilot' updated

INFO argocd initialized. password: XXXXXXX-XXXXXXXX
INFO run:

    kubectl port-forward -n argocd svc/argocd-server 8080:80

Now we have argocd installed and running; it can be checked using the port-forward command above and connecting to https://localhost:8080/ (the certificate will be wrong, we are going to fix that in the next step).

Updating the argocd installation in git

Now that we have the application deployed we can clone the argocd repository and edit the deployment to disable TLS for the argocd server (we are going to use TLS termination with traefik, and that needs the server running as insecure; see the Argo CD documentation).

❯ git clone ssh://git@forgejo.mixinet.net/blogops/argocd.git
❯ cd argocd
❯ edit bootstrap/argo-cd/kustomization.yaml
❯ git commit -a -m 'Disable TLS for the argocd-server'
❯ git push

The changes made to the kustomization.yaml file are the following:

--- a/bootstrap/argo-cd/kustomization.yaml
+++ b/bootstrap/argo-cd/kustomization.yaml
@@ -11,6 +11,11 @@ configMapGenerator:
         key: git_username
         name: autopilot-secret
   name: argocd-cm
+  # Disable TLS for the Argo Server (see https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v30)
+- behavior: merge
+  literals:
+  - "server.insecure=true"
+  name: argocd-cmd-params-cm
 kind: Kustomization
 namespace: argocd
 resources:

Once the changes are pushed we sync the argo-cd application manually to make sure they are applied:

argocd app sync argo-cd

As a test we can retrieve the argocd-cmd-params-cm ConfigMap to make sure everything is OK.
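
One way to do it, assuming the current kubectl context points at the k3d cluster, is:

❯ kubectl get configmap argocd-cmd-params-cm -n argocd -o yaml

The retrieved object looks like this: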

apiVersion: v1
data:
  server.insecure: "true"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"server.insecure":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"argo-cd","app.kubernetes.io/name":"argocd-cmd-params-cm","app.kubernetes.io/part-of":"argocd"},"name":"argocd-cmd-params-cm","namespace":"argocd"}}
  creationTimestamp: "2025-04-27T17:31:54Z"
  labels:
    app.kubernetes.io/instance: argo-cd
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cmd-params-cm
  namespace: argocd
  resourceVersion: "16731"
  uid: a460638f-1d82-47f6-982c-3017699d5f14

As this simply changes the ConfigMap we have to restart the argocd-server to read it again; to do it we delete the server pods so they are re-created using the updated resource:

❯ kubectl delete pods -n argocd -l app.kubernetes.io/name=argocd-server

After doing this the port-forward command is killed automatically; if we run it again the connection to the argocd-server has to be done using HTTP instead of HTTPS.

Instead of testing that, we are going to add an ingress definition to be able to connect to the server using HTTPS and GRPC against the address argocd.lo.mixinet.net, using the wildcard TLS certificate we installed earlier.

To do it we edit the bootstrap/argo-cd/kustomization.yaml file to add the ingress_route.yaml file to the deployment:

--- a/bootstrap/argo-cd/kustomization.yaml
+++ b/bootstrap/argo-cd/kustomization.yaml
@@ -20,3 +20,4 @@ kind: Kustomization
 namespace: argocd
 resources:
 - github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.19
+- ingress_route.yaml

The ingress_route.yaml file contents are the following:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.lo.mixinet.net`)
      priority: 10
      services:
        - name: argocd-server
          port: 80
    - kind: Rule
      match: Host(`argocd.lo.mixinet.net`) && Header(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argocd-server
          port: 80
          scheme: h2c
  tls:
    certResolver: default

After pushing the changes and waiting a little bit the change is applied and we can access the server using HTTPS and GRPC; the first can be tested from a browser and the GRPC access using the command line interface:

❯ argocd --grpc-web login argocd.lo.mixinet.net:8443
Username: admin
Password:
'admin:login' logged in successfully
Context 'argocd.lo.mixinet.net:8443' updated
❯ argocd app list -o name
argocd/argo-cd
argocd/autopilot-bootstrap
argocd/cluster-resources-in-cluster
argocd/root

So things are working fine … and that is all for this post, folks!

28 April, 2025 07:50AM