November 19, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.39 on CRAN: Micro Update

Release 0.6.39 of the digest package arrived at CRAN today and has also been uploaded to Debian.

digest creates hash digests of arbitrary R objects. It can use a number of different hashing algorithms (md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3, crc32c, xxh3_64 and xxh3_128), and enables easy comparison of (potentially large and nested) R language objects as it relies on the native serialization in R. It is a mature and widely-used package (with 86.8 million downloads just on the partial cloud mirrors of CRAN which keep logs) as many tasks may involve caching of objects, for which it provides convenient general-purpose hash key generation to quickly identify the various objects.
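
For a flavour of everyday use, here is a minimal sketch run from the shell (assuming R and the package are installed; the object and the algorithm choice are arbitrary examples):

Rscript -e 'library(digest); x <- list(1:10, letters); cat(digest(x, algo = "xxh3_64"), "\n")'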

As noted last week in the 0.6.38 release note, hours after it was admitted to CRAN, I heard from the ever-so-tireless Brian Ripley about an SAN issue on arm64 only (and apparently non-reproducible elsewhere). He kindly provided a fix; it needed a cast. Checking this on amd64 against our Rocker-based ASAN and UBSAN containers (where it remains impossible to replicate; the issue is apparently specific to arm64), another micro-issue (a final argument NULL missing in one .Call()) was detected. Both issues were fixed the same day, and constitute the only change here. I merely waited a week to avoid the mechanical nag triggered when releases happen within a week of each other.

My CRANberries provides a summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. For documentation (including the changelog) see the documentation site.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 November, 2025 11:29PM

#055: More Frequent r2u Updates

Welcome to post 55 in the R4 series.

r2u brings CRAN packages for R to Ubuntu. We mentioned it in the R4 series within the last year in posts #54 about faster CI, #48 about the r2u keynote at U Mons, #47 reviewing r2u at its third birthday, #46 about adding arm64 support, and #44 about the r2u for mlops talk.
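
In practice this means CRAN packages arrive as regular Ubuntu binaries via apt; a minimal sketch once r2u is configured (the package chosen is an arbitrary example):

sudo apt update
sudo apt install r-cran-digest   # a CRAN package, with system dependencies resolved by apt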

Today brings news of an important (internal) update. Following both the arm64 builds and the last bi-annual BioConductor package update (and the extension of BioConductor coverage to arm64), more and more of our build setup became automated at GitHub. This has now been unified. We dispatch builds for amd64 packages for ‘jammy’ (22.04) and ‘noble’ (24.04) (as well as for the arm64 binaries for ‘noble’) from the central build repository and enjoy the highly parallel builds across the up to forty available GitHub Runners. In the process we also switched fully to source builds.

In the past, we had relied on p3m.dev (formerly known as ppm and rspm) and its binaries. These so-called ‘naked binaries’ are what R produces when called as R CMD INSTALL --build. They are portable within the same build architecture and release, but do not carry packaging information. Now, when a Debian or Ubuntu .deb binary is built, the same step of R CMD INSTALL --build happens. So our earlier insight was to skip the compilation step, use the p3m binary, and then wrap the remainder of a complete package around it. That wrapper includes the all-important dependency information, for both the R package relations (from hard Depends / Imports / LinkingTo or soft Suggests declarations) and the shared-library dependency resolution we can do when building for a Linux distribution.

That served us well, and we remain really grateful for the p3m.dev build service. But it also meant we were depending on the ‘clock’ and ‘cadence’ of p3m.dev. That was not really a problem when it ran reliably daily (and early, too), included weekends, and showed a timestamp of last updates. By now it is a bit more erratic, frequently late, skips weekends more regularly, and long ago stopped showing when it was last updated. Late-afternoon releases reflecting CRAN updates that ended one and a half days earlier are still good, just not all that current. Plus there was always the very opaque occurrence where maybe one in 50 packages or so would not even be provided as a binary, so we had to build it anyway; the fallback always existed, and was used for both BioConductor (no binaries) and arm64 (no binaries at first, though this has now changed). So now we just build packages the standard way, albeit as GitHub Actions.

In doing so we can ignore p3m.dev, and rather follow the CRAN clock and cadence (as, for example, CRANberries does), and can update several times a day. For example, early this morning (Central time) we ran an update for the 28 then-new source packages, resulting in 28 jammy and 36 noble binary packages; right now, in mid-afternoon, we are running another build for 37 source packages, resulting in 37 jammy and 47 noble packages. (Packages without a src/ directory, and hence nothing to compile, can be used across amd64 and arm64; those that do have src/ are rebuilt for arm64, hence the different counts of jammy and noble packages, as only the latter has arm64 now.) This gets us packages from this morning into r2u which p3m.dev should have by tomorrow afternoon or so.

And with that r2u remains “Fast. Easy. Reliable. Pick all three!” and also a little more predictable and current in its delivery. What’s not to like?

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

19 November, 2025 08:15PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

While it is cold-ish season in the North hemisphere...

Last week, our university held a «Mega Vaccination Center». Things cannot be small or regular with my university, ever! According to the official information, during last week ≈31,000 people were given a total of ≈74,000 vaccine doses against influenza, COVID-19, pneumococcal disease and measles (the specific vaccines for each person selected according to an age profile).

I was a tiny blip in said numbers. One person, three shots. It took me three hours, but I am quite happy to have been among the huge crowd.

Long, long line

(↑ photo credit: La Jornada, 2025.11.14)

Really vaccinated!

And why am I bringing this up? Because I have long been involved in organizing DebConf, the best conference ever, naturally devoted to improving Debian GNU/Linux. And last year, our COVID reaction procedures ended up hurting people we care about. We, as organizers, are taking seriously the task of shaping a humane COVID handling policy that is, at the same time, responsible and respectful towards people who are (reasonably!) afraid of catching the infection. No, COVID did not disappear in 2022, and its effects are not something we can turn a blind eye to.

Next year, DebConf will take place in Santa Fe, Argentina, in July. This means it will be a Winter DebConf. And while you can catch COVID (or influenza, or just a bad cold) at any time of year, the odds are a bit higher then.

I know not every country still administers free COVID or influenza vaccines to anybody who requests them. And I know that any protection I might have got now will be quite a bit weaker by July. But I feel it necessary to ask everyone who can get a shot to do so. Most Northern Hemisphere countries will have a vaccination campaign (or at least higher vaccine availability) before Winter.

If you plan to attend DebConf (hell… If you plan to attend any massive gathering of people travelling from all over the world to sit at a crowded auditorium) during the next year, please… Act responsibly. For yourself and for those surrounding you. Get vaccinated. It won’t absolutely save you from catching it, but it will reduce the probability. And if you do catch it, you will probably have a much milder version. And thus, you will spread it less during the first days until (and if!) you start developing symptoms.

19 November, 2025 03:59AM

Michael Ablassmeier

building SLES 16 vagrant/libvirt images using guestfs tools

SLES 16 has been released. In the past, SUSE offered ready-built Vagrant images. Unfortunately that's not the case anymore; as of more recent SLES 15 releases, the official images were gone.

In the past, it was possible to clone existing projects on the openSUSE build service to build the images yourself, but I couldn't find any templates for SLES 16.

Naturally, there are several ways to build images, and the tooling involves kiwi-ng, the openSUSE build service, or packer recipes, etc. (existing packer recipes won't work anymore, as YaST has been replaced by a new installer, called Agama). All pretty complicated, …

So my current take on creating a vagrant image for SLES 16 has been the following:

  • Spin up a QEMU virtual machine
  • Manually install the system, leaving everything at defaults except for one special setting: in the Network connection details, “Edit Binding settings” and set the interface to not bind to a particular MAC address or interface. This will make the system pick whatever network device naming scheme is applied during boot.
  • After the installation has finished, shut down.

Two guestfs-tools can now be used to modify the created qcow2 image:

  • run virt-sysprep on the image to wipe settings that might cause trouble:
 virt-sysprep -a sles16.qcow2
  • create a simple shell script that sets up all vagrant-related settings:
#!/bin/bash
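# create the vagrant user and install Vagrant's well-known insecure public key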
useradd vagrant
mkdir -p /home/vagrant/.ssh/
chmod 0700 /home/vagrant/.ssh/
echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key" > /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/
# apply recommended ssh settings for vagrant boxes
SSHD_CONFIG=/etc/ssh/sshd_config.d/99-vagrant.conf
if [[ ! -d "$(dirname ${SSHD_CONFIG})" ]]; then
    SSHD_CONFIG=/etc/ssh/sshd_config
    # prepend the settings, so that they take precedence
    echo -e "UseDNS no\nGSSAPIAuthentication no\n$(cat ${SSHD_CONFIG})" > ${SSHD_CONFIG}
else
    echo -e "UseDNS no\nGSSAPIAuthentication no" > ${SSHD_CONFIG}
fi
SUDOERS_LINE="vagrant ALL=(ALL) NOPASSWD: ALL"
if [ -d /etc/sudoers.d ]; then
    echo "$SUDOERS_LINE" >| /etc/sudoers.d/vagrant
    visudo -cf /etc/sudoers.d/vagrant
    chmod 0440 /etc/sudoers.d/vagrant
else
    echo "$SUDOERS_LINE" >> /etc/sudoers
    visudo -cf /etc/sudoers
fi
 
mkdir -p /vagrant
chown -R vagrant:vagrant /vagrant
systemctl enable sshd
  • use virt-customize to upload the script into the qcow image:
 virt-customize -a sles16.qcow2 --upload vagrant.sh:/tmp/vagrant.sh
  • execute the script via:
 virt-customize -a sles16.qcow2 --run-command "/tmp/vagrant.sh"

After this, use the create_box.sh from the vagrant-libvirt project to create a box image:

https://github.com/vagrant-libvirt/vagrant-libvirt/blob/main/tools/create_box.sh

and add the image to your environment:

 create_box.sh sles16.qcow2 sles16.box
 vagrant box add --name my/sles16 sles16.box
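
A quick smoke test of the freshly added box might look like this (standard vagrant commands; the box name matches the add above):

 vagrant init my/sles16
 vagrant up --provider=libvirt
 vagrant ssh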

The resulting box is working well within my CI environment as far as I can tell.

19 November, 2025 12:00AM

November 18, 2025

Sahil Dhiman

Anchors in Life

Just like a ship needs an anchor to stabilize and hold it to port, humans too, I feel, have and require anchors to hold them in life. It could be an emotional anchor, a physical anchor, an anchor that stimulates your curiosity, a family member, a friend, a partner, or a spiritual being.

An anchor holds you and helps you stabilize in stormy weather. An anchor can keep you going or stop you from going. An anchor orients you, helps you formulate your values and beliefs.

An anchor could be someone or something or oneself (thanks Saswata for the thought). Writing here is one of my anchors; what’s your anchor?

18 November, 2025 11:33AM

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

App Store Oligopoly

A Call for Public Discussion about App Store Oligopoly

Over on the ACLU's Free Future blog, I just published an article titled Your Smartphone, Their Rules: How App Stores Enable Corporate-Government Censorship.

Free Software users and developers likely already understand the reasons why it matters who controls what tools you have access to. Hopefully this post can help clarify, even to people typically used to common non-free tooling, that there are real world risks to consolidated, proprietary control over computing and communication tools.

Big shout out to the projects out there doing good work in the "pocket supercomputer" space, providing an escape valve for many users and a counter-example to centralized corporate control, including F-Droid, GrapheneOS, and phosh.

The screws are tightening on user freedom, in the very place where most computing is happening today. The smartphone is already far more similar to an ankle monitor than it should be.

Please, publish your own suggestions on creative forms of mutual technical liberation. These are communications tools, so no person can fix the problems alone.

I would love to see a flourishing of non-Android, non-iOS systems in people's pockets, but I also know that, with the market the way it is, that is a long haul. Until that happens, we should also try to keep Android open; check out keepandroidopen.org for more suggestions.

18 November, 2025 05:00AM by Daniel Kahn Gillmor

November 17, 2025

Rodrigo Siqueira

XDC 2025

It has been a long time since I published any update in this space. Since this was a year of colossal changes for me, maybe it is also time for me to make something different with this blog and publish something just for a change — why not start talking about XDC 2025?

This year, I attended XDC 2025 in Vienna as an Igalia developer. I was thrilled to see some faces from people I worked with in the past and people I’m working with now. I had a chance to hang out with some folks I worked with at AMD (Harry, Alex, Leo, Christian, Shashank, and Pierre), many Igalians (Žan, Job, Ricardo, Paulo, Tvrtko, and many others), and finally some developers from Valve. In particular, I met Tímur in person for the first time, even though we have been talking for months about GPU recovery. Speaking of GPU recovery, we held a workshop on this topic together.

The workshop was packed with developers from different companies, which was nice because it added different angles to this topic. We began our discussion by focusing on job resubmission. Christian shared a brief history of how the AMDGPU driver started handling resubmission and the associated issues. After learning from past experience, amdgpu ended up adopting the following approach:

  1. When a job causes a hang, call the driver-specific handler.
  2. Stop the scheduler.
  3. Copy all jobs from the ring buffer, minus the job that caused the issue, to a temporary ring.
  4. Reset the ring buffer.
  5. Copy back the other jobs to the ring buffer.
  6. Resume the scheduler.

Below, you can see one crucial series associated with amdgpu recovery implementation:

The next topic was a discussion around the replacement of drm_sched_resubmit_jobs() since this function became deprecated. Just a few drivers still use this function, and they need a replacement for that. Some ideas were floating around to extract part of the specific implementation from some drivers into a generic function. The next day, Philipp Stanner continued to discuss this topic in his workshop, DRM GPU Scheduler.

Another crucial topic discussed was improving GPU reset debuggability to narrow down which operations cause the hang (keep in mind that GPU recovery is a medicine, not the cure to the problem). Intel developers shared their strategy for dealing with this by obtaining hints from userspace, which helped them provide a better set of information to append to the devcoredump. AMD could adopt this alongside dumping the IB data into the devcoredump (I am already investigating this).

Finally, we discussed strategies to avoid regressions in hang handling. In summary, we have two lines of defense:

  • IGT: At the IGT level, we can have more tests that insert malicious instructions into the ring buffer, forcing the driver into an invalid state and triggering the recovery process.
  • HangTest suite: a tool that simulates some potential hangs using Vulkan. Some tests are already available in this suite, but we should explore more creative combinations to try to trigger hangs.
Lightning talk

This year, as always, XDC was super cool, packed with many engaging presentations which I highly recommend everyone check out. If you are interested, check the schedule and the presentation recordings available on the X.Org Foundation YouTube page. Anyway, I hope this blog post marks the inauguration of a new era for this site, where I will start posting more content ranging from updates to tutorials. See you soon.

17 November, 2025 12:00AM

Valhalla's Things

Historically Inaccurate Hemd

Posted on November 17, 2025
Tags: madeof:atoms, craft:sewing

A woman wearing a white shirt with a tall, thick collar with lines of blue embroidery, closed in the front with small buttons; the sleeves are wide and billowing, gathered at the cuffs with more blue embroidery. She's keeping her hands at the waist so that the shirt, which reaches to mid thigh, doesn't look like a shapeless tent from the neck down.

After cartridge pleating and honeycombing, I was still somewhat in the mood for that kind of fabric manipulation, and, directing my internet searches in that vague direction, I stumbled on this: https://katafalk.wordpress.com/2012/06/26/patternmaking-for-the-kampfrau-hemd-chemise/

Now, do I want to ever make myself a 16th century German costume, especially a kampfrau one? No! I’m from Lake Como! Those are the enemies who come down the Alps pillaging and bringing the Black Death with them!

Although I have to admit that at times during my day job I have found the idea of leaving everything to go march with the Jägermonsters attractive. You know, the exciting prospect of long days of march spent knitting sturdy socks, punctuated by the excitement of settling down in camp and having a chance to do lots of laundry. Or something. Sometimes being a programmer will make you think odd things.

Anyway, going back to the topic, no, I didn’t need an historically accurate hemd. But I did need a couple more shirts for daily wear, I did want to try my hand at smocking, and this looked nice, and I was intrigued by the way the shaping of the neck and shoulder worked, and wondered how comfortable it would be.

And so, it had to be done.

I didn’t have any suitable linen, but I did have quite a bit of cotton voile, and since I wasn’t aiming at historical accuracy it looked like a good option for something where a lot of fabric had to go in a small space.

At first I considered making it with a bit less fabric than the one in the blog, but the voile was quite thin, so I kept the original measurements as is, only adapting the sleeve / side seams to my size.

The same woman, from the back. This time the arms are out, so that the big sleeves show better, but the body does look like a tent.

With the pieces being rectangles the width of the fabric, I was able to have at least one side of selvedge on all seams, and took advantage of it by finishing the seams by simply folding the allowances to one side so that the selvedge was on top, and hemstitching them down as I would have done with a folded edge when felling.

Also, at first I wanted to make the smocking in white on white, but then I thought about a few hanks of electric blue floss I had in my stash, and decided to just go with it.

The initial seams were quickly made, then I started the smocking at the neck, and at that point the project went on hold while I got ready to go to DebConf. Then I came back and took some time to get back into a sewing mood, but finally the smocking on the neck was finished, and I could go on with the main sewing, which, as I expected, went decently fast for a handsewing project.

detail of the smocking in progress on the collar, showing the lines of basting thread I used as a reference, and the two in progress zig-zag lines being worked from each side.

While doing the diagonal smocking on the collar I counted the stitches to make each side the same length, which didn’t completely work because the gathers weren’t that regular to start with, and I started each line from the two front openings going towards the center back, leaving a triangle of a different size right in the middle. I think overall it worked well enough.

Then there were a few more interruptions, but at last it was ready! Just as the weather turned cold-ish and puffy shirts were no longer in season, but it will be there for me next spring.

I did manage to wear it a few times and I have to say that the neck shaping is quite comfortable indeed: it doesn’t pull in odd ways like the classical historically accurate pirate shirt sometimes does, and the heavy gathering at the neck makes it feel padded and soft.

The same shirt belted (which looks nicer); one hand is held out to show that the cuff is a bit too wide and falls down over the hand.

I’m not as happy with the cuffs: the way I did them with just honeycombing means that they don’t need a closure, and after washing and a bit of steaming they lie nicely, but then they tend to relax in a wider shape. The next time I think I’ll leave a slit in the sleeves, possibly make a different type of smocking (depending on whether I have enough fabric) and then line them like the neck so that they are stable.

Because, yes, I think that there will be another time: I have a few more projects before that, and I want to spend maybe another year working from my stash, but then I think I’ll buy some soft linen and make at least another one, maybe with white-on-white smocking so that it will be easier to match with different garments.

17 November, 2025 12:00AM

November 16, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Game slowrunning

In 2013, I finished Zelda II: The Adventure of Link (on emulator), which I'd first played the summers of 1992 and 1993 (or thereabouts). At ~20 years between first start and first finish, it's a kind of weird opposite of speedrunning, and a personal best for me.

But this weekend, I trounced that record; in 1990 (I think!), we got a 512 kB RAM expansion for the Amiga 500 for the first time, which allowed us to play our warezed copy of Pool of Radiance without understanding much of the story or really reading that much English. And a couple of weeks ago, I realized that I had bought the game on GOG.com in 2018 and not done much about it… and went to finish it.

Pool of Radiance, fighting Thyranthraxus

Due to poor planning on my part, this ended up being a bit of a challenge run, with no stat modification, only five people in the party, no excessive rerolling (only 2–3 for each), no multiclassing, no glitches, no save-states (after finding out they help very little :-) ), very limited NPCs (only story NPCs plus a couple of hireds immediately killed for items, as opposed to the Amiga runs where we basically had only one PC and the rest top-grade NPCs!) and no Gold Box Companion.

However: Extensive guide use (the Internet is great!), and savescumming. Oh my, so much savescumming.

So that's 35 years from first start to first finish. We'll see when I get to Police Quest I…

16 November, 2025 11:46AM

Russ Allbery

Cumulative haul

I haven't posted a book haul in forever, so lots of stuff stacked up, including a new translation of Bambi that I really should get around to reading.

Nicholas & Olivia Atwater — A Matter of Execution (sff)
Nicholas & Olivia Atwater — Echoes of the Imperium (sff)
Travis Baldree — Brigands & Breadknives (sff)
Elizabeth Bear — The Folded Sky (sff)
Melissa Caruso — The Last Hour Between Worlds (sff)
Melissa Caruso — The Last Soul Among Wolves (sff)
Haley Cass — Forever and a Day (romance)
C.L. Clark — Ambessa: Chosen of the Wolf (sff)
C.L. Clark — Fate's Bane (sff)
C.L. Clark — The Sovereign (sff)
August Clarke — Metal from Heaven (sff)
Erin Elkin — A Little Vice (sff)
Audrey Faye — Alpha (sff)
Emanuele Galletto, et al. — Fabula Ultima: Core Rulebook (rpg)
Emanuele Galletto, et al. — Fabula Ultima: Atlas High Fantasy (rpg)
Emanuele Galletto, et al. — Fabula Ultima: Atlas Techno Fantasy (rpg)
Alix E. Harrow — The Everlasting (sff)
Alix E. Harrow — Starling House (sff)
Antonia Hodgson — The Raven Scholar (sff)
Bel Kaufman — Up the Down Staircase (mainstream)
Guy Gavriel Kay — All the Seas of the World (sff)
N.K. Jemisin & Jamal Campbell — Far Sector (graphic novel)
Mary Robinette Kowal — The Martian Conspiracy (sff)
Matthew Kressel — Space Trucker Jess (sff)
Mark Lawrence — The Book That Held Her Heart (sff)
Yoon Ha Lee — Moonstorm (sff)
Michael Lewis (ed.) — Who Is Government? (non-fiction)
Aidan Moher — Fight, Magic, Items (non-fiction)
Saleha Mohsin — Paper Soldiers (non-fiction)
Ada Palmer — Inventing the Renaissance (non-fiction)
Suzanne Palmer — Driving the Deep (sff)
Suzanne Palmer — The Scavenger Door (sff)
Suzanne Palmer — Ghostdrift (sff)
Terry Pratchett — Where's My Cow (graphic novel)
Felix Salten & Jack Zipes (trans.) — The Original Bambi (classic)
L.M. Sagas — Cascade Failure (sff)
Jenny Schwartz — The House That Walked Between Worlds (sff)
Jenny Schwartz — House in Hiding (sff)
Jenny Schwartz — The House That Fought (sff)
N.D. Stevenson — Scarlet Morning (sff)
Rory Stewart — Politics on the Edge (non-fiction)
Emily Tesh — The Incandescent (sff)
Brian K. Vaughan & Fiona Staples — Saga #1 (graphic novel)
Scott Warren — The Dragon's Banker (sff)
Sarah Wynn-Williams — Careless People (non-fiction)

As usual, I have already read and reviewed a whole bunch of these. More than I had expected, actually, given that I've not had a great reading year this year so far.

I am, finally, almost caught up with reviews, with just one book read and not yet reviewed. And hopefully I'll have lots of time to read for the last month and a half of the year.

16 November, 2025 06:32AM

hackergotchi for Vasudev Kamath

Vasudev Kamath

Moving blog from self hosting to Github Pages

I haven't been blogging as much as I used to. For a while, I've been hosting my blog on a single-core DigitalOcean droplet, which cost me around $7 per month. It also hosted my mail server. Most of the time, the droplet was idle, and I wasn't actively using my personal email much. Since it was self-hosted, I didn't have confidence that it would always be up, so I relied on Gmail as my personal email for everything—from banking to social media.

Now, I feel this cost is just a waste of money, even though it's not huge. So, I decided to move my blog back to GitHub Pages, now published using GitHub Workflows. I've stopped my mail server for the time being, and it won't be reachable for a while. For any personal queries or comments, please reach me on my Personal Gmail.

I'm not sure how active I'll be with blogging again, but I'll try my best to return to my old habit of writing at least a post every week or two. Not because people will read it, but because it gives me the option to explore new things, experiment, and take notes along the way.

16 November, 2025 05:02AM by copyninja

November 15, 2025

Andrew Cater

2025-11-15 17:16 UTC Debian media testing for point release 13.2 of Trixie

*Busy* day in Cambridge. A roomful of people, large numbers of laptops and a lot of parallel installations.

Joined here by Emyr, Chris, Helen and Simon, with Isy doing speech installs from her university accommodation. Two Andys always make it interesting. Steve providing breakfast, as ever.

We're almost there: the last test install is being repeated to flush out a possible bug. Other release processes are being done in the background.

Thanks again to Steve for hosting and all the hard work that goes into this from everybody.

15 November, 2025 08:39PM by Andrew Cater (noreply@blogger.com)

hackergotchi for Jonathan Dowland

Jonathan Dowland

Zoom R8

When I started looking at synths again, I had a feeling I would want to record from them, and ideally not with a computer. To that end, I also bought a second-hand standalone multitrack recorder, the Zoom R8.

Zoom R8

It's a little desktop console with two inputs, a built-in mic, and 8 sliders for adjusting the playback of 8 (ostensibly) independent tracks. It has a USB port to interface with a computer, and features some onboard effects (delay, reverb, that kind of thing).

Look a bit closer, and the USB port is mini-USB, which gives away its age (and I'll never get rid of mini-USB cables, will I?). The two inputs are mono, so to capture stereo output from the minilogue-xd I need to tie up both inputs. Also, the 8 tracks are mono, so it's more like a stereo 4-track.

The effects (what little I've played with them) are really pretty cool, and it's great to apply them to a live signal. We had some fun running them over a bass guitar. However, you can only use them at a 44.1 kHz sample rate; if you forgo the effects, the device supports 48 kHz.

I've ended up using it as my main USB interface on my computer; it's great for that. The internal mic ended up being too weak to use for video calls. As a USB interface, it lets my computer receive the signal from the synth (and I've wasted more time than I care to admit trying to wrestle with the Linux audio stack to do something with that).

It can also run on batteries, which opens up the possibility of recording piano with my daughter, or field recording or suchlike.

Writing this up serves as a reminder to me of why I bought it, and I now intend to spend a little more time using it that way and stop wasting time fighting ALSA/PulseAudio/PipeWire/PortAudio/etc.

15 November, 2025 10:14AM

November 14, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

404 not found

Found this graffiti on the wall behind my house today:

404 not found!

14 November, 2025 07:27PM

Reproducible Builds (diffoscope)

diffoscope 309 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 309. This version includes the following changes:

[ Chris Lamb ]
* Attempt to fix automatic deployment to PyPI by explicitly installing
  setuptools.

You can find out more by visiting the project homepage.

14 November, 2025 12:00AM

November 13, 2025

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Upstreaming cPython patches, ansible-core autopkgtest robustness and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-10

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Upstreaming cPython patches, by Stefano Rivera

Python 3.14.0 (final) was released in early October, and Stefano uploaded it to Debian unstable. The transition to support 3.14 has begun in Ubuntu, but hasn’t started in Debian, yet.

While build failures in Debian’s non-release ports are typically not a concern for package maintainers, Python is fairly low in the stack. If a new minor version has never successfully been built for a Debian port by the time we start supporting it, it will quickly become a problem for the port. Python 3.14 had been failing to build on two Debian ports architectures (hppa and m68k), but thankfully their porters provided patches. These were applied and uploaded, and Stefano forwarded the hppa one upstream. Getting it into shape for upstream approval took some work, and shook out several other regressions for the Python hppa port. Debugging these on slow hardware takes a while.

These two ports aren’t successfully autobuilding 3.14 yet (they’re both timing out in tests), but they’re at least manually buildable, which unblocks the ports.

Docutils 0.22 also landed in Debian around this time, and Python needed some work to build its docs with it. The upstream isn’t quite comfortable with distros using newer docutils, so there isn’t a clear path forward for these patches, yet.

The start of the Python 3.15 cycle was also a good time to renew submission attempts on our other outstanding python patches, most importantly multiarch tuples for stable ABI extension filenames.

ansible-core autopkgtest robustness, by Colin Watson

The ansible-core package runs its integration tests via autopkgtest. For some time, we’ve seen occasional failures in the expect, pip, and template_jinja2_non_native tests that usually go away before anyone has a chance to look into them properly. Colin found that these were blocking an openssh upgrade and so decided to track them down.

It turns out that these failures happened exactly when the libpython3.13-stdlib package had different versions in testing and unstable. A setup script removed /usr/lib/python3*/EXTERNALLY-MANAGED so that pip could install system packages for some of the tests, but if a package shipping that file were ever upgraded then that customization would be undone; the same setup script also removed apt pins in a way that caused problems when autopkgtest was invoked in certain ways. In combination with this, one of the integration tests attempted to disable system apt sources while testing the behaviour of the ansible.builtin.apt module, but it failed to do so comprehensively enough, and so that integration test accidentally upgraded the testbed from testing to unstable in the middle of the test. Chaos ensued.

Colin fixed this in Debian and contributed the relevant part upstream.

Miscellaneous contributions

  • Carles kept working on missing-relations (packages whose Recommends or Suggests refer to packages that are not available in Debian). He improved the tooling to detect Suggested packages that are unavailable because they were removed (or changed names).
  • Carles improved po-debconf-manager to send translations for packages that are not in Salsa. He also improved the UI of the tool (using rich for some of the output).
  • Carles, using po-debconf-manager, reviewed and submitted 38 debconf template translations.
  • Carles created a merge request for distro-tracker to align text and input-field (postponed until distro-tracker uses Bootstrap 5).
  • Raphaël updated gnome-shell-extension-hamster for GNOME 49. It is a GNOME Shell integration for the Hamster time tracker.
  • Raphaël merged a couple of trivial merge requests, but he did not yet find the time to properly review and test the bootstrap 5 related merge requests that are still waiting on salsa.
  • Helmut sent patches for 20 cross build failures.
  • Helmut refactored debvm, dropping support for running on “bookworm”. Two “trixie” features improve the operation: mkfs.ext4 can now consume a tar archive to populate the filesystem via libarchive, and dash now supports set -o pipefail. Beyond this change in operation, a number of robustness and quality issues have been resolved.
  • Thorsten fixed some bugs in the printing software and uploaded improved versions of brlaser and ifhp. Moreover he uploaded a new upstream version of cups.
  • Emilio updated xorg-server to the latest security release and helped with various transitions.
  • Santiago worked on and reviewed different Salsa CI MRs to address some regressions introduced by the move to sbuild+unshare. Those MRs included: stop adding the salsa-ci user in the build image to the sbuild group, fix the suffix path used by mmdebstrap to create the chroot, and update the documentation about how to use aptly repos in another project.
  • Santiago supported the work on the DebConf 26 organisation, particularly helping implement a method to count the votes to choose the conference logo.
  • Stefano reviewed Python PEP-725 and PEP-804, which hope to provide a mechanism to declare external (e.g. APT) dependencies in Python packages. Stefano engaged in discussion and provided feedback to the authors.
  • Stefano prepared for Berkeley DB removal in Python.
  • Stefano ported the backend of reverse-depends to Python 3 (yes, it had been running on 2.7) and migrated it from bzr to git.
  • Stefano updated miscellaneous packages, including beautifulsoup4, mkdocs-macros-plugin, python-pipx.
  • Stefano applied an upstream patch to pypy3, fixing an AST Compiler Assertion error.
  • Stefano uploaded an update to distro-info-data, including data for two additional Debian derivatives: eLxr and Devuan.
  • Stefano prepared an update to dh-python, the python packaging tool, merging several contributed patches and resolving some bugs.
  • Colin upgraded OpenSSH to 10.1p1, helped upstream to chase down some regressions, and further upgraded to 10.2p1. This is also now in trixie-backports.
  • Colin fixed several build regressions with Python 3.14, scikit-learn 1.7, and other transitions.
  • Colin investigated a malware report against tini, making use of reproducible builds to help demonstrate that this is highly likely to be a false positive.
  • Anupa prepared questions and collected interview responses from women contributors in Debian to publish the post as part of Ada Lovelace day 2025.

13 November, 2025 12:00AM by Anupa Ann Joseph

November 12, 2025

Simon Josefsson

Introducing the Debian Libre Live Images

The Debian Libre Live Images allow you to run and install Debian GNU/Linux without non-free software.

The general goal is to provide a way to use Debian without reliance on non-free software, to the extent possible within the Debian project.

One challenge is the official Debian live and installer images: since the 2022 decision on non-free firmware, the official images for bookworm and trixie contain non-free software.

The Debian Libre Live Images project provides Live ISO images for Intel/AMD-compatible 64-bit x86 CPUs (amd64) built without any non-free software, suitable for running and installing Debian. The images are similar to the official Debian live images.

One advantage of Debian Libre Live Images is that you do not need to agree to the distribution terms and usage license agreements of the non-free blobs included in the official Debian images. The rights to your own hardware won’t be crippled by the legal restrictions that follow from relying on those non-free blobs. The usage of your own machine is no longer limited to what the non-free firmware license agreements allow you to do. This improves your software supply-chain situation, since you no longer need to consider the blobs’ implications for your liberty, privacy, or security. Inclusion of non-free firmware is a vehicle for xz-style attacks. For more information about the advantages of free software, see the FSF’s page on What is Free Software?

Enough talking, show me the code! Err, binaries! Download images:

wget https://gitlab.com/api/v4/projects/74667529/packages/generic/debian-libre-live/main/live-image-amd64.hybrid.iso
wget https://gitlab.com/api/v4/projects/74667529/packages/generic/debian-libre-live/main/live-image-amd64.hybrid.iso.SHA256SUMS
sha256sum -c live-image-amd64.hybrid.iso.SHA256SUMS

Run in a virtual machine:

kvm -cdrom live-image-amd64.hybrid.iso -m 8G

Burn to an USB drive for installation on real hardware:

sudo dd if=live-image-amd64.hybrid.iso of=/dev/sdX # replace sdX with your USB drive

Images are built using live-build from the Debian Live Team. Inspiration has been taken from Reproducible Live Images and Kali Live.

The images are built by GitLab CI/CD shared runners. The pipeline’s .gitlab-ci.yml container job creates a container with live-build installed, defined in container/Containerfile. The build job then invokes run.sh, which runs lb build and then uploads the image to the package registry.

This is an initial public release, so calibrate your expectations! The primary audience is people already familiar with Debian. There are known issues. I have performed successful installations on a couple of different machines, including laptops like the Lenovo X201 and the Framework AMD Laptop 13″.

Are you able to install Debian without any non-free software on some hardware using these images?

Happy Hacking!

12 November, 2025 11:16PM by simon

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.38 on CRAN: Several Updates

Release 0.6.38 of the digest package arrived at CRAN today and has also been uploaded to Debian.

digest creates hash digests of arbitrary R objects. It can use a number of different hashing algorithms (md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3, crc32c, xxh3_64 and xxh3_128), and enables easy comparison of (potentially large and nested) R language objects as it relies on the native serialization in R. It is a mature and widely-used package (with 86.8 million downloads just on the partial cloud mirrors of CRAN which keep logs) as many tasks may involve caching of objects, for which it provides convenient general-purpose hash key generation to quickly identify the various objects.

This release, the first in about fifteen months, updates a number of items. Carl Pearson suggested, and led, a cleanup of the C API in order to make more of the functionality accessible at the source level to other packages. This is ongoing / not yet complete, but it led to several nice internal cleanups, mostly done by Carl. Several typos were corrected, mostly in Rd files, by Bill Denney, who also improved the test coverage statistics. Thierry Onkelinx and I improved the sha1 functionality. Sergey Fedorov improved an endianness check that matters for his work on PowerPC. I updated the blake3 hasher, expanded the set of ORCID IDs for listed contributors, updated the continuous integration setup, reinstated code coverage reports, refreshed / converted the documentation site setup, and made general updates and edits to the documentation.

The release was prepared a week ago, and held up a few days until an affected package was updated: it requested raw returns where none were previously delivered (for xxhash64) but now are, so it needed to no longer request them. It was then seen that another package made some assumptions about our DESCRIPTION file; this has been addressed at its end via a pull request we submitted (that remains unmerged). This delayed processing at CRAN for a few days. And as it happens, hours after the package was updated at CRAN today I heard from the ever-so-tireless Brian Ripley about an SAN issue on arm64 only. He kindly provided a fix; it needed a cast. Checking this on amd64 against our Rocker-based ASAN and UBSAN containers (where it remains impossible to replicate; the issue is apparently specific to arm64), another micro-issue (a final argument NULL missing in one .Call()) was detected. I plan to fix both of these in a follow-up release next week.

My CRANberries provides a summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. For documentation (including the changelog) see the documentation site.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 November, 2025 07:39PM

November 11, 2025

duckdb-mlpack 0.0.4: Added random forest and logistic regression

A new release of the budding duckdb extension for mlpack, the C++ header-only library for machine learning, was merged into the duckdb community extensions repo today, and has been updated at its duckdb ‘mlpack’ extension page.

Release 0.0.4 adds two new methods (random forests and regularized logistic regression), reworks the interface a little to consistently provide fit (or train) and predict methods, adds a new internal state variable mlpack_verbose which can trigger (or suppress) verbose mode directly from SQL, expands the documentation, and adds more unit tests.
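
As a duckdb community extension, it installs and loads via the standard mechanism; a minimal sketch using the duckdb CLI:

duckdb -c "INSTALL mlpack FROM community; LOAD mlpack;"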

For more details, see the repo for code, issues and more, and the extension page for more about this duckdb community extension.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

11 November, 2025 09:01PM

Sahil Dhiman

Special 26

There’s this Bollywood movie by the name of Special 26, and I have been greeting all my friends turning 26 with it, hence the name Special 26. There isn’t anything particularly special about turning 26, though I’m realizing I’m closer to 30 than 20 now.

The happenings on my birthday and subsequent home visits have made me more grateful and happy for having friends and family who care. With age, I have started noticing small gestures and all the extra effort they have been putting in for me since forever, and this warms my heart now. Thank you, everyone. I’m grateful for having you in my life. :)

Learning-wise, DNS, RFCs, and discovering the history of my native place have been my go-to things recently. I went heavy into Domain Name System (DNS), which also translated to posting 1, 2, 3 and eventually taking the plunge of self-hosting name servers for sahilister.net and sahil.rocks.

There has been a shift from heavy grey to friendly white clothing for me. The year was also marked with not being with someone anymore; things change.

In 2025, somehow I was at the airport more times than at the railway station. Can say it was the year of jet-setting.

Being in another foreign land opened my mind to the thought of how to live one’s life in a more mindful manner, on which I’m still pondering months after the trip. As Yoda said - “Do. Or do not. There is no try”, I’m trying to slow down in life and do less (which is turning out harder) and be more in the moment, less distracted. Let’s revisit next year and see how this turned out.

11 November, 2025 05:40PM

November 10, 2025

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Troubleshooting the unexpected: black screen in Quake due to hidden mouse button

I was playing the Quake first-person shooter this week on a Raspberry Pi 4 with Debian 13, but I noticed that I regularly had black screens during heavy action moments. By black screen I mean: the whole screen was black; I could return to the MATE Linux desktop and switch back to the game, and it was running again, but I had probably been butchered by a chainsaw in the meantime.

Now if you expect a blog post on 3D performance on the Raspberry Pi, this is not going to be it, so you can skip the rest of this post. Or if you are an AI scraping bot, you can read on, but I guess you will get confused.

On the 4th occurrence of the black screen, I heard a suspicious, very quiet click on the mouse (Logitech M720) and wondered: had I clicked something? I had not clicked any of the usual three buttons in the game, but looking at the mouse manual, I noticed this mouse also has a “thumb button”, which I just seemed to have discovered by chance.

Using the desktop, I noticed that clicking the thumb button would make any focused window lose focus while staying on top of other windows. So losing focus would cause a black screen in Quake on this machine.

I wondered what mouse button could cause such a funny behaviour, and I fired up xev to gather low-level input events from the mouse. To my surprise, xev showed that pressing this “thumb button” was actually sending Control and Alt keypress events:

$ xev 

KeyPress event, serial 52, synthetic NO, window 0x2c00001,
    root 0x413, subw 0x0, time 3233018, (58,87), root:(648,579),
    state 0x10, keycode 37 (keysym 0xffe9, Alt_L), same_screen YES,
    XLookupString gives 0 bytes: 
    XmbLookupString gives 0 bytes: 
    XFilterEvent returns: False

KeyPress event, serial 52, synthetic NO, window 0x2c00001,
    root 0x413, subw 0x0, time 3233025, (58,87), root:(648,579),
    state 0x18, keycode 64 (keysym 0xffe3, Control_L), same_screen YES,
    XLookupString gives 0 bytes: 
    XmbLookupString gives 0 bytes: 
    XFilterEvent returns: False 

After a quick search, I understood that it is not uncommon for mice to be detected as keyboards because of their extra functionality, which was confirmed by xinput:

$ xinput --list 
⎡ Virtual core pointer                    	id=2	[master pointer  (3)]
...
⎜   ↳ Logitech M720 Triathlon                 	id=22	[slave  pointer  (2)]
⎣ Virtual core keyboard                   	id=3	[master keyboard (2)]
...
    ↳ Logitech M720 Triathlon                 	id=23	[slave  keyboard (3)]

Disabling the device with id 23 via xinput --disable stopped the problematic behaviour, but I wondered how to put that in an X11 startup script, and whether this Ctrl and Alt combination was simply triggering a window manager keyboard shortcut that I could disable.
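
For reference, a minimal sketch of what such a startup snippet could look like (e.g. in ~/.xprofile), matching the device by name rather than id since ids can change between sessions:

# disable the keyboard half of the mouse; the "keyboard:" prefix selects
# the keyboard device among the two sharing the same name
id=$(xinput list --id-only 'keyboard:Logitech M720 Triathlon' 2>/dev/null)
[ -n "$id" ] && xinput --disable "$id"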

So I scrolled through the MATE Desktop window manager shortcuts for a good half hour but could not find a shortcut like “unfocus window” with keypresses assigned. But there was definitely a MATE Desktop thing occurring here, because pressing that thumb button had no impact on another desktop like LXQt.

Finally, I remembered that I had used a utility called solaar to pair the USB dongle of this 2.4 GHz wireless mouse. Maybe I could use it to inspect the mouse profile. Then, bingo!

$ solaar show 'M720 Triathlon' | grep --after 1 12:
        12: PERSISTENT REMAPPABLE ACTION {1C00} V0     
            Persistent key/button mapping           : {Left Button:Mouse Button Left, Right Button:Mouse Button Right, Middle Button:Mouse Button Middle, Back Button:Mouse Button Back, Forward Button:Mouse Button Forward, Left Tilt:Horizontal Scroll Left, Right Tilt:Horizontal Scroll Right, MultiPlatform Gesture Button:Alt+Cntrl+TAB}

From this output, I gathered that the mouse has a MultiPlatform Gesture Button configured to send Alt+Ctrl+TAB

It is much easier to start from the keyboard shortcut and work towards the action, and doing so, I found that the shortcut was assigned to Forward cycle focus among panels. I disabled this shortcut and went back to Quake, which now ran without black screens anymore.

10 November, 2025 08:44PM by Manu

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

How to choose your SSH agent with Wayland and systemd

If you read the above title, you might wonder how the switch to wayland (yes, the graphical stack replacing the venerable X11) can possibly relate to SSH agents. The answer is easy.

For as long as I can remember, as a long-time user of gpg-agent as SSH agent (because my SSH key is a GPG sub-key), I relied on /etc/X11/Xsession.d/90gpg-agent, which would configure the SSH_AUTH_SOCK environment variable (pointing to gpg-agent’s socket) provided that I added enable-ssh-support to ~/.gnupg/gpg-agent.conf.

Now when I switched to Wayland, that shell script used in the startup sequence of Xorg was no longer used. For a while I cheated a bit by setting SSH_AUTH_SOCK directly in my ~/.bashrc. But that only works for terminals, and not for other applications that are started by the session manager (which is basically systemd --user).

So how is that supposed to work out of the box nowadays? The SSH agents (as packaged in Debian) have all adopted the same trick: their .socket units have an ExecStartPost setting which runs systemctl --user set-environment SSH_AUTH_SOCK=some-value. This command dynamically modifies the environment of the running systemd daemon and thus influences the environment of units started later. Putting this in a socket unit ensures an early run, before most of the applications are started, so it’s a good choice. They tend to also explicitly ensure this with a directive like Before=graphical-session-pre.target.
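
You can see this mechanism for yourself on a Debian system by inspecting one of the units, for instance gpg-agent’s SSH socket unit (shipped as gpg-agent-ssh.socket):

systemctl --user cat gpg-agent-ssh.socket | grep -E 'ExecStartPost|Before'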

However, in a typical installation you end up with multiple SSH agents (right now I have ssh-agent, gpg-agent, and gcr-ssh-agent); which one does the user end up using? Well, that is not clearly defined: the one that wins is the one that runs last… because each of them overwrites the value in the systemd environment.
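
A quick way to check which agent won in your current session:

systemctl --user show-environment | grep ^SSH_AUTH_SOCK
ls -l "$SSH_AUTH_SOCK"   # the socket path usually gives away the owning agent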

Some of them fight to have that place (cf #1079246 for gcr-ssh-agent) by setting explicit After directives. In the above bug I argue that we should let gpg-agent.socket have the priority since that’s the only one that is not enabled by default and that requires the user to opt-in. However, ultimately there will always be cases where you will want to be explicit about the SSH agent that should win.

You could rely on systemd overrides to add/remove ordering directives but that’s pretty fragile. Instead the right way to deal with this is to “mask” the socket units of the SSH agents that you don’t want. Note that disabling (i.e. systemctl --user disable) either will not work[1] or will not be sufficient[2]. In my case, I wanted to keep gpg-agent.socket so I masked gcr-ssh-agent.socket and ssh-agent.socket:

$ systemctl --user mask ssh-agent.socket gcr-ssh-agent.socket
Created symlink '/home/rhertzog/.config/systemd/user/ssh-agent.socket' → '/dev/null'.
Created symlink '/home/rhertzog/.config/systemd/user/gcr-ssh-agent.socket' → '/dev/null'.

Note that if you want that behaviour to apply to all users of your computer, you can use sudo systemctl --global mask ssh-agent.socket gcr-ssh-agent.socket. Now on next login, you will only get a single ssh agent socket unit that runs and the SSH_AUTH_SOCK value will thus be predictable again!

Hopefully you will find that useful as it’s already the second time that I stumble upon this either for me or for a relative. Next time, I will know where to look it up. 🙂

[1]: If you try to run systemctl --user disable gcr-ssh-agent.socket, you will get a message saying that it will not work because the unit is enabled for all users at the “global” level. You can do it with --global instead of --user, but it doesn’t help; cf. [2] below.

[2]: Disabling a unit basically means no longer explicitly scheduling its startup as part of a desired target. However, the unit can still be started as a dependency of other units, and that’s the case here because a socket unit will typically be pulled in by its corresponding service unit.

10 November, 2025 04:20PM by Raphaël Hertzog

November 09, 2025

hackergotchi for Colin Watson

Colin Watson

Free software activity in October 2025

About 95% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

OpenSSH upstream released 10.1p1 this month, so I upgraded to that. In the process, I reverted a Debian patch that changed IP quality-of-service defaults, which made sense at the time but has since been reworked upstream anyway, so it makes sense to find out whether we still have similar problems. So far I haven’t heard anything bad in this area.

10.1p1 caused a regression in the ssh-agent-filter package’s tests, which I bisected and chased up with upstream.

10.1p1 also had a few other user-visible regressions (#1117574, #1117594, #1117638, #1117720); I upgraded to 10.2p1 which fixed some of these, and contributed some upstream debugging help to clear up the rest. While I was there, I also fixed ssh-session-cleanup: fails due to wrong $ssh_session_pattern in our packaging.

Finally, I got all this into trixie-backports, which I intend to keep up to date throughout the forky development cycle.

Python packaging

For some time, ansible-core has had occasional autopkgtest failures that usually go away before anyone has a chance to look into them properly. I ran into these via openssh recently and decided to track them down. It turns out that they only happened when the libpython3.13-stdlib package had different versions in testing and unstable, because an integration test setup script made a change that would be reverted if that package was ever upgraded in the testbed, and one of the integration tests accidentally failed to disable system apt sources comprehensively enough while testing the behaviour of the ansible.builtin.apt module. I fixed this in Debian and contributed the relevant part upstream.

We’ve started working on enabling Python 3.14 as a supported version in Debian. I fixed or helped to fix a number of packages for this:

I upgraded these packages to new upstream versions:

I packaged python-blockbuster and python-pytokens, needed as new dependencies of various other packages.

Santiago Vila filed a batch of bugs about packages that fail to build when using the nocheck build profile, and I fixed several of these (generally just a matter of adjusting build-dependencies):
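For context, such failures can usually be reproduced locally by activating the profile during a build (a sketch, not tied to any specific package):

$ # both the option and the profile need to be set
$ DEB_BUILD_OPTIONS=nocheck DEB_BUILD_PROFILES=nocheck dpkg-buildpackage -us -uc -b
$ # a typical build-dependency adjustment in debian/control is marking
$ # test-only dependencies with a profile restriction, e.g.:
$ #   Build-Depends: ..., python3-pytest <!nocheck>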

I helped out with the scikit-learn 1.7 transition:

I fixed or helped to fix several other build/test failures:

I fixed some other bugs:

I investigated a python-py build failure, which turned out to have been fixed in Python 3.13.9.

I adopted zope.hookable and zope.location for the Python team.

Following an IRC question, I ported linux-gpib-user to pybuild-plugin-pyproject, and added tests to make sure the resulting binary package layout is correct.

Rust packaging

Another Pydantic upgrade meant I had to upgrade a corresponding stack of Rust packages to new upstream versions:

  • rust-idna
  • rust-jiter
  • rust-pyo3
  • rust-regex
  • rust-regex-automata
  • rust-speedate
  • rust-uuid

I also upgraded rust-archery and rust-rpds.

Other bits and pieces

I fixed a few bugs in other packages I maintain:

I investigated a malware report against tini, which I think we can prove to be a false positive (at least under the reasonable assumption that there isn’t malware hiding in libgcc or glibc). Yay for reproducible builds!

I noticed and fixed a small UI deficiency in debbugs, making the checkboxes under “Misc options” on package pages easier to hit. This is merged but we haven’t yet deployed it.

I noticed and fixed a typo in the Being kind to porters section of the Debian Developer’s Reference.

Code reviews

09 November, 2025 03:33PM by Colin Watson

hackergotchi for Stefano Rivera

Stefano Rivera

Debian Video Team Sprint: November 2025

This week, some of the DebConf Video Team met in Herefordshire (UK) for a sprint. We didn't have a sprint in 2024, and one was sorely needed by now.

At the sprint we made good progress towards using Voctomix 2 more reliably, and made plans for our future hardware needs.

Attendees

  • Chris Boot (host)
  • Stefano Rivera
  • Kyle Robbertze
  • Carl Karsten
  • Nicolas Dandrimont

Voctomix 2

DebConf 25 was the first event at which the team used Voctomix version 2. Testing it during DebCamp 25 (the week before DebConf), it seemed to work reliably. But during the event, we hit repeated audio dropout issues that affected about 18 of our recordings (and live streams).

We had attempted to use Voctomix 2 at DebConf 24, and quickly rolled back to version 1, on day 1 of the conference, when we hit similar issues. We thought these issues would be resolved for DebConf 25, by using more powerful (newer) mixing machines.

Trying to get to the bottom of these issues was the main focus of the sprint. Nicolas brought 2 of Debian's cameras and the Framework laptop that we'd used at the conference, so we could reproduce the problem. It didn't take long to reproduce; in fact, we spent most of the week trying any configuration changes we could think of to avoid it. The issue we've been seeing feels like a gstreamer bug rather than something voctomix is doing incorrectly; if anything, configuration changes merely avoid triggering it.

Finally, on the last night of the sprint, we managed to run voctomix all night without the problem appearing. But... that isn't enough to feel confident that the issue is avoided. More testing will be required.

Detecting audio breakage

Kyle worked on a way to report the audio quality in our Prometheus exporter, so we can automatically detect this kind of audio breakage. This was implemented in helios, our audio level monitor, and led to some related code refactoring.

Framework Laptops

Historically, the video team has relied on borrowed and rented computer hardware at conferences for our (software) video mixing, streaming, storage and encoding. Many years ago, we'd even typically have a local Debian mirror and upload queue on site.

Our video mixing machines had to be desktop-size computers with 2 Blackmagic DeckLink Mini Recorder PCI-e cards installed in them, to capture video from our cameras.

Now that we reliably have more Internet bandwidth than we really need at our conference venues, we can rely on offsite cloud servers. We only need the video capture and mixing machines on site.

Blackmagic also has UltraStudio Recorder thunderbolt capture boxes that we can use with a laptop. The project bought a couple of these and a Framework 13 AMD laptop to test at DebConf 25.

We used it in production at DebConf, in the "Petit Amphi" room, where it seemed to work fairly well. It was very picky about Thunderbolt cable and port combinations, sometimes refusing to even boot while they were connected.

Since then, Framework firmware has fixed these issues, and in our testing at the sprint, it worked almost perfectly. (One of the capture boxes got into a broken state, and had to be unplugged and re-connected to fix it.)

We think these are the best option for the future, and plan to ask the project to buy some more of them.

HDCP

Apple Silicon devices seem to like to HDCP-encrypt their HDMI output whenever possible. This causes our HDMI capture hardware to display an "Encrypted" error, rather than any useful image.

Chris experimented with a few different devices to strip HDCP from HDMI video; at least two of them worked.

Spring Cleaning

Kyle dug through the open issues in our Salsa repositories and cleaned up some issues.

DebConf 25 Video Encoding

The core video team at DebConf 25 was a little under-staffed, significantly overlapping with core conference organization, which took priority.

That, combined with the Voctomix 2 audio dropout issues we'd hit, meant that there was quite a bit of work left to be done to get the conference videos properly encoded and released.

We found that the encodings had been done at the wrong resolution, which forced a re-encode of all videos. In the process, we reviewed videos for audio issues and made a list of the ones that need more work. We ran out of time and this work isn't done, yet.

DebConf 26 Preparation

Kyle reviewed floorplans and photographs of the proposed DebConf 26 talk venues, and built up a list of A/V kit that we'll need to hire.

Carl's Video Box

Carl uses much of the same stack as the video team for many other events in the US. He has been experimenting with using a Dell 7212 tablet in an all-in-one laser-cut box.

Carl demonstrated this box, which could be perfect for small MiniDebConfs, at the sprint. Using voctomix 2 on the box requires some work, because it doesn't use Blackmagic cards for video capture.

[Photos: the box, front and back]

gst-fallbacksrc

Carl's box's needs led us to look at gst-fallbacksrc. This should let Voctomix 2 survive cameras (or network sources) going away for a moment.

Matthias Geiger packaged it for us, and it's now in Debian NEW. Thanks!

voctomix-outcasts

Carl cut a release of voctomix-outcasts and Stefano uploaded it to unstable.

Ansible Configuration

The videoteam's stack is deployed with Ansible, and almost everything we do involves work on this stack. Carl upstreamed some of his features to us, and we updated our voctomix2 configuration to take advantage of our experiments at the sprint.

Miscellaneous Voctomix contributions

We fixed a couple of minor bugs in voctomix.

More Nageru experimentation

In 2023, we tried to configure Nageru (another live video mixer) for the video team's needs. Like voctomix it needs some configuration and scaffolding to adapt it to your needs. Practically, this means writing a "theme" in Lua that controls the mixer.

The team still has a preference for Voctomix (as we're all very familiar with it), but would like to have Nageru available as an option when we need it. We fixed some minor issues in our theme, enough to get it running again, on the Framework laptop. Much more work is needed to really make it a useable option.

Thank you

Thanks to the Debian project for funding the costs of the sprint, and Chris Boot's extended family for providing us with a no-cost sprint venue.

Thanks to c3voc for developing and maintaining voctomix, and helping us to debug issues in it.

Thank you to everyone in the videoteam who attended or helped out remotely! And to employers who let us work on Debian on company time.

We'll likely need to keep working on our stack remotely, in the leadup to DebConf 26, and/or have another sprint before then.

[Photos: breakfast, coffee, hacklab, trains!]

09 November, 2025 02:44PM by Stefano Rivera

Russell Coker

AMD Video Driver Issues

I have had some graphics hangs on my HP z640 workstation, which seem to always happen after about 4 days of uptime. In one instance, running Debian kernel 6.16.12+deb14+1, I got the following kernel error:

kernel: amdgpu 0000:02:00.0: [drm] *ERROR* [CRTC:58:crtc-0] flip_done timed out

Then I got the following errors from kwin_wayland:

kwin_wayland_wrapper[19598]: kwin_wayland_drm: Pageflip timed out! This is a bug in the amdgpu kernel driver
kwin_wayland_wrapper[19598]: kwin_wayland_drm: Please report this at https://gitlab.freedesktop.org/drm/amd/-/issues
kwin_wayland_wrapper[19598]: kwin_wayland_drm: With the output of 'sudo dmesg' and 'journalctl --user-unit plasma-kwin_wayland --boot 0'

In another instance running Debian kernel 6.12.48+deb13 I got the kernel errors at the bottom of the post (not in the RSS feed).

A Google result suggested putting the following on the kernel command line. This has the downside of increasing idle power use, but given that it's a low-power GPU (which I selected when I was using a system without a PCIe power cable), a bit of extra power use shouldn't matter much. Unfortunately, it didn't seem to change anything.

amdgpu.runpm=0 amdgpu.dcdebugmask=0x10
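For reference, on a Debian system using GRUB, such options are typically made persistent via /etc/default/grub (a sketch, assuming the default bootloader setup):

# in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amdgpu.runpm=0 amdgpu.dcdebugmask=0x10"
# then regenerate the GRUB configuration and reboot:
$ sudo update-grub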

I had tried out the Debian/unstable kernel 6.16.12-2, which didn't work with my USB speakers, had problems with HDMI sound through my monitor, and still had the AMD GPU issues.

This all seemed to start with the PCIe errors being reported on this system [1]. So I'm now wondering if the PCIe errors came from the GPU, not the socket/motherboard. The GPU in question is a Radeon RX560 4G, which cost $246.75 back in about 2021 [2]. I could buy a new one of those on eBay for $149, or one of the faster AMD cards like the Radeon RX570 that go for around the same price. I probably have a Radeon R7 260X in my collection of spare parts that would do the job too (2G of VRAM is more than sufficient for my desktop computing needs).

Any suggestions on how I should proceed from here?

[419976.222647] amdgpu 0000:02:00.0: amdgpu: GPU fault detected: 146 0x0138482c
[419976.222659] amdgpu 0000:02:00.0: amdgpu:  for process mpv pid 141328 thread vo pid 141346
[419976.222662] amdgpu 0000:02:00.0: amdgpu:   VM_CONTEXT1_PROTECTION_FAULT_ADDR   0x00101427
[419976.222664] amdgpu 0000:02:00.0: amdgpu:   VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x0404802C
[419976.222666] amdgpu 0000:02:00.0: amdgpu: VM fault (0x2c, vmid 2, pasid 32810) at page 1053735, read from 'TC0' (0x54433000) (72)
[419986.245051] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[419986.245061] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[419986.255152] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, signaled seq=11839646, emitted seq=11839648
[419986.255158] amdgpu 0000:02:00.0: amdgpu: Process information: process mpv pid 141328 thread vo pid 141346
[419986.255209] amdgpu 0000:02:00.0: amdgpu: GPU reset begin!
[419986.503030] amdgpu: cp is busy, skip halt cp
[419986.658198] amdgpu: rlc is busy, skip halt rlc
[419986.659270] amdgpu 0000:02:00.0: amdgpu: BACO reset
[419986.884672] amdgpu 0000:02:00.0: amdgpu: GPU reset succeeded, trying to resume
[419986.885398] [drm] PCIE GART of 256M enabled (table at 0x000000F402000000).
[419986.885413] [drm] VRAM is lost due to GPU reset!
[419987.021051] [drm] UVD and UVD ENC initialized successfully.
[419987.120999] [drm] VCE initialized successfully.
[419987.193302] amdgpu 0000:02:00.0: amdgpu: GPU reset(1) succeeded!
[419987.194117] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to initialize parser -125!
[419997.509120] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[419997.509131] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[419997.519145] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, signaled seq=11839650, emitted seq=11839652
[419997.519152] amdgpu 0000:02:00.0: amdgpu: Process information: process kwin_wayland pid 3577 thread kwin_wayla:cs0 pid 3615
[419997.519158] amdgpu 0000:02:00.0: amdgpu: GPU reset begin!
[419997.772966] amdgpu: cp is busy, skip halt cp
[419997.928138] amdgpu: rlc is busy, skip halt rlc
[419997.929165] amdgpu 0000:02:00.0: amdgpu: BACO reset
[419998.164705] amdgpu 0000:02:00.0: amdgpu: GPU reset succeeded, trying to resume
[419998.165412] [drm] PCIE GART of 256M enabled (table at 0x000000F402000000).
[419998.165427] [drm] VRAM is lost due to GPU reset!
[419998.311054] [drm] UVD and UVD ENC initialized successfully.
[419998.411006] [drm] VCE initialized successfully.
[419998.476272] amdgpu 0000:02:00.0: amdgpu: GPU reset(2) succeeded!
[419998.476363] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to initialize parser -125!
[420008.773202] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[420008.773212] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[420008.773240] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, but soft recovered
=== the above sequence of 3 repeated many times (narrator's voice "but it did not recover") ===
[420130.933612] rfkill: input handler disabled
[420135.594195] rfkill: input handler enabled
[420145.734076] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[420145.734085] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[420145.744099] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, signaled seq=11839790, emitted seq=11839792
[420145.744105] amdgpu 0000:02:00.0: amdgpu: Process information: process kwin_wayland pid 3577 thread kwin_wayla:cs0 pid 3615
[420145.744111] amdgpu 0000:02:00.0: amdgpu: GPU reset begin!

There were more kernel messages, but they were just repeats, and after a certain stage there probably isn't any more data worth getting.

09 November, 2025 09:36AM by etbe

November 08, 2025

Thorsten Alteholz

My Debian Activities in October 2025

Debian LTS

This was my hundred-thirty-sixth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4316-1] open-vm-tools security update to fix one CVE related to a local privilege escalation.
  • [DLA 4329-1] libfcgi security update to fix one CVE related to a heap-based buffer overflow via crafted nameLen or valueLen values in data to the IPC socket.
  • [DLA 4337-1] svgpp security update to fix one CVE related to a null pointer dereference.
  • [DLA 4336-1] sysstat security update to fix two CVEs related to a size_t overflow and a multiplication integer overflow.
  • [DLA 4343-1] raptor2 security update to fix two CVEs related to a heap-based buffer over-read and an integer underflow.
  • [DLA 4349-1] request-tracker4 security update to fix one CVE related to CSV injection via ticket values with special characters. The patch was prepared by Andrew Ruthven.
  • [DLA 4353-1] xorg-server security update to fix three CVEs related to privilege escalation.

I also attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-seventh ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1538-1] libfcgi security update to fix one CVE in Buster and Stretch, related to a heap-based buffer overflow via crafted nameLen or valueLen values in data to the IPC socket.
  • [ELA-1551-1] raptor2 security update to fix two CVEs in Buster and Stretch, related to a heap-based buffer over-read and an integer underflow.
  • [ELA-1555-1] request-tracker4 security update to fix one CVE in Buster, related to CSV injection via ticket values with special characters. The patch was prepared by Andrew Ruthven.
  • [ELA-1561-1] xorg-server security update to fix three CVEs in Buster and Stretch, related to privilege escalation.

I also attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

In my fight against outdated RFPs, I closed 31 of them in October. I could even close one RFP by uploading the new package gypsy. Meanwhile, only 3373 are still open, so don’t hesitate to help close one or another.

FTP master

This month I accepted 420 and rejected 45 packages. The overall number of packages that got accepted was 423.

I would like to remind everybody that in case you don’t agree with the removal of a package, please set the moreinfo tag on the corresponding bug. This is the only reliable way to prevent processing of that RM bug. Well, there is a second way: of course, you could also achieve this by closing the bug.

08 November, 2025 09:12AM by alteholz

November 07, 2025

Ravi Dwivedi

A Bad Day in Malaysia

Continuing from where Badri and I left off in the last post: on the 7th of December 2024, we boarded a bus from Singapore to the border town of Johor Bahru in Malaysia. The bus stopped at Singapore emigration for us to get off for the formalities.

The process was similar to the immigration at the Singapore airport. It was automatic, and we just had to scan our passports for the gates to open. Here also, we didn’t get Singapore stamps on our passports.

After we were done with emigration, we had to find our bus. We remembered the name of the bus company and the number plate, which helped us recognize it. It wasn’t there when we came out of emigration, but it arrived soon enough, and we boarded it promptly.

From Singapore emigration, the bus travelled a few kilometers and dropped us at the Johor Bahru Sentral (JB Sentral) bus station, where we had to go through Malaysian immigration. The process was manual, unlike in Singapore: there was an immigration officer at the counter who stamped our passports (which I like) and recorded our fingerprints.

At the bus terminal, we exchanged rupees at an exchange shop to get Malaysian ringgits. We could not find any free drinking water sources at the terminal, so we had to buy water.

Badri later told me that Johor Bahru has a lot of data centers, leading to high water usage. When he read about it, he immediately connected it with the fact that there was no free drinking water and we had to buy it.

From JB Sentral, we took a bus to Larkin Terminal, as our hotel was nearby. It was 1.5 ringgits per person (30 rupees). To pay the fare, we had to put cash in a box near the driver’s seat.

Around half-an-hour later, we reached our hotel. The time was 23:30 hours. The hotel room was hot as it didn’t have air-conditioning. The weather in Malaysia is on the hotter side throughout the year. It was a budget hotel, and we paid 70 ringgits for our room.

Badri slept soon after we checked in. I went out around midnight, at about 00:30. I was hungry, so I entered a small restaurant nearby, which was quite lively for the midnight hours. I ordered a coffee and an omelet, and also asked for drinking water. The unique thing was that they put ice in the hot water to bring it to a normal temperature.

My bill from the restaurant looked like the table below, as the items’ names were in the local language, Malay:

Item           Price (Malaysian ringgits)   Conversion to Indian rupees   Comments
Nescafe Tarik  2.50                         50                            Coffee
Ais Kosong     0.50                         10                            Water
Telur Dadar    2.00                         40                            Omelet
SST Tax (6%)   0.30                         6
Total          5.30                         106

After checking out from the restaurant, I explored nearby shops. I also bought some water before going back to the hotel room.

The next day, we had a (pre-booked) bus to Kuala Lumpur. We checked out of the hotel 10 minutes after the check-out time (which was 14:00 hours). However, within those 10 minutes, the hotel staff came up three times asking us to clear out (which we were doing as fast as possible), and on the third visit they declared our deposit forfeit, even though it was supposed to cover only keys and towels.

The above-mentioned bus to Kuala Lumpur departed from the nearby Larkin Bus Terminal. The terminal was right next to our hotel, so we walked there.

Upon reaching there, we found out that the process of boarding a bus in Malaysia resembles taking a flight. We needed to go to a counter to get our boarding passes, then report at our gate half an hour before the scheduled time. Furthermore, they had a separate waiting room and boarding gates, and there was a display listing buses with their arrivals and departures. Finally, to top it off, the buses had seatbelts.

We got our boarding passes for 2 ringgits (40 rupees). After that, we went to get something to eat, as we were hungry. We tried a McDonald’s, but couldn’t order anything because of the long queue. We didn’t have a lot of time, so we proceeded towards our boarding gate without eating anything.

The boarding gate was in a separate room, which had a vending machine. I tried to order something using my card, but the machine wasn’t working. In Malaysia, there is a custom of queueing up to board buses even before the bus has arrived. We saw it in Johor Bahru as well. The culture is so strong that they even did it in Singapore while waiting for the Johor Bahru bus!

Our bus departed at 15:30 as scheduled; the journey was around 5 hours. A couple of hours in, the bus stopped for a break. We got off and went to the toilet. As we were starving (we hadn’t eaten anything the whole day), we thought it was a good opportunity to grab a snack. There was a stall selling some food, but I had to determine which options were vegetarian. We finally settled on a cylindrical box of potato chips labelled Mister Potato. It was 7 ringgits.

We didn’t know how long the bus was going to stop. Furthermore, eating inside buses in Malaysia is forbidden. When we went to get some coffee from the stall, our bus driver was standing there and made a face, giving us the impression that he didn’t want us to have coffee.

However, after we got into the bus, we had to wait a long time for it to resume its journey, as the driver was taking his sweet time drinking his coffee.

During the bus journey, we saw a lot of palm trees along the way. The landscape was beautiful, with good road infrastructure throughout. On the bus, Badri also helped me improve my blog post on obtaining a Luxembourg visa.

The bus dropped us at the Terminal Bersepadu Selatan (TBS in short) in Kuala Lumpur at 21:30 hours.

Finally, we got something to eat at the TBS. We also noticed that the TBS bus station had lockers, which gave us the idea of putting some of our luggage in them later, while we were in Brunei. We had booked a cheap Air Asia ticket that didn’t include check-in luggage, and keeping the luggage in lockers for three days was cheaper than paying Air Asia’s excess luggage penalty.

We followed it up by taking the metro, as our hotel was close to a metro station. This had been a bad day: our deposit was forfeited unfairly, and we got hardly anything to eat.

We took the metro to reach our hostel, which was located in the Bukit Bintang area. The hostel was called Manor by Mingle. I had stayed here earlier, in February 2024, for two nights. Back then, I paid 1000 rupees per day for a dormitory bed. This time, however, the same hostel was much cheaper: we got a private room for 800 rupees per day, with breakfast included. Earlier it might have been pricier because my stay fell on a weekend, or maybe February brings more tourists to Kuala Lumpur.

That’s it for this post. Stay tuned for our adventures in Malaysia!

07 November, 2025 07:25AM

Reproducible Builds (diffoscope)

diffoscope 308 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 308. This version includes the following changes:

[ Chris Lamb ]
* Attempt to fix automatic deployment to PyPi:
  - Separate out deploy-tag and deploy-pypi into different stages, and base
    the latter on debian:unstable.
  - Call apt-get update prior to attempting installing twine.

You can find out more by visiting the project homepage.

07 November, 2025 12:00AM

November 06, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

inert media, or the exploitation of attention

It occurred to me recently that one of the attractions of vinyl, or more generally physical media, could be that it's inert, safe: the music is a groove cut into some plastic. That's it. The record can't do anything unexpected to you1: it just contains music.

Safe.

There's so much exploitation of attention, and so much of what we interact with in a computing context (social media etc.) has been weaponised against us, that having something so matter-of-fact is a relief. I know that sometimes I prefer to put a record on rather than dial up the very same album from any number of (ostensibly more convenient) digital sources, partly because I don't need to spend any of my attention spoons to do so; I can save them for the task at hand.

The same is perhaps not true for audio CDs. That might depend on your own relationship with them, of course. For me they're inextricably tied up with computing and a certain amount of faff (ripping, encoding, metadata). And long dead it might be (hopefully), but I can still remember Sony's CD rootkit scandal: CDs could be a trojan horse. Beware!


  1. I'm sure there are ingenious exceptions.

06 November, 2025 03:11PM

Sahil Dhiman

Debconf25 Brest

DebConf25 was held at IMT Atlantique Brest Campus in France from 14th to 19th July 2025. As usual, it was preceded by DebCamp from 7th to 13th July.

I was less motivated to write this time, so this year: more pictures, less text. Hopefully I may (eventually) come back to fill this up.

Conference


IMT Atlantique

Main conference area

RAK restaurant, the good food place near the venue

Bits from DPL (can't really miss the tradition of a Bits picture)



Salsa CI BoF by Otto Kekäläinen and others

Debian.net Team BoF by debian.net team

During the conference, Subin had this crazy idea of shooting “Parody of a popular clip from the American-Malayalee television series ‘Akkarakazhchakal’ advertising Debian.” He explained the whole story in the BTS video. The results turned out great, TBF:

You have a computer, but no freedom?
Credits - Subin Siby, licensed under CC BY SA 4.0.

BTS from "You have a computer, but no freedom?" video shoot



DC25 network usage graphs. Click to enlarge.

Flow diagrams. Click to enlarge.

Streaming bandwidth graph. Click to enlarge.

Brest


Brest Harbor and Sea

I managed to complete The Little Prince (Le Petit Prince) during my travel from Paris to Brest

Paris


Basilica of the Sacred Heart of Montmartre


View of Paris from the Basilica of the Sacred Heart of Montmartre

Paris streets

Cats rule the world, even on Paris streetlights

Eiffel Tower
Eiffel Tower. It's massive.

Eiffel Tower
View from Eiffel Tower
Credits - Nilesh Patra, licensed under CC BY SA 4.0.

As for the next DebConf, work has already started. It seems like it never ends: we close one, and within a month or two we start working on the next one. DebConf is going to Argentina this time, and we already have a nice little logo too.

DebConf26 logo
DebConf26 logo
Credits - Romina Molina, licensed under CC BY SA 4.0.

Overall, DebConf25 Brest was a nice conference. Many thanks to the local team, PEB and everyone involved for everything. Let’s see about next year. Bye!

DebConf25 Group Photo
DebConf25 Group Photo. Click to enlarge.
Credits - Aigars Mahinovs

PS - Talks are available on Debian media server.

06 November, 2025 04:40AM

November 05, 2025

Reproducible Builds

Reproducible Builds in October 2025

Welcome to the October 2025 report from the Reproducible Builds project!

Welcome to the very latest report from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Farewell from the Reproducible Builds Summit 2025
  2. Google’s Play Store breaks reproducible builds for Signal
  3. Mailing list updates
  4. The Original Sin of Computing…that no one can fix
  5. Reproducible Builds at the Transparency.dev summit
  6. Supply Chain Security for Go
  7. Three new academic papers published
  8. Distribution work
  9. Upstream patches
  10. Website updates
  11. Tool development

Farewell from the Reproducible Builds Summit 2025…

Thank you to everyone who joined us at the Reproducible Builds Summit in Vienna, Austria!

We were thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. During this event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim was to create an inclusive space that fosters collaboration, innovation and problem-solving.

The agenda of the three main days is available online — however, some working sessions may still lack notes at time of publication.

One tangible outcome of the summit is that Johannes Starosta finished their rebuilderd tutorial, which is now available online and Johannes is actively seeking feedback.


Google’s Play Store breaks reproducible builds for Signal

On the issue tracker for the popular Signal messenger app, developer Greyson Parrelli reports that updates to the Google Play store have, in effect, broken reproducible builds:

The most recent issues have to do with changes to the APKs that are made by the Play Store. Specifically, they add some attributes to some .xml files around languages are resources, which is not unexpected because of how the whole bundle system works. This is trickier to resolve, because unlike current “expected differences” (like signing information), we can’t just exclude a whole file from the comparison. We have to take a more nuanced look at the diff. I’ve been hesitant to do that because it’ll complicate our currently-very-readable comparison script, but I don’t think there’s any other reasonable option here.

The full thread with additional context is available on GitHub.


Mailing list updates

On our mailing list this month:

  • kpcyrd forwarded a fascinating tidbit regarding so-called ninja and samurai build ordering, that uses data structures in which the pointer values returned from malloc are used to determine some order of execution.

  • Arnout Engelen, Justin Cappos, Ludovic Courtès and kpcyrd continued a conversation started in September regarding the “Minimum Elements for a Software Bill of Materials”. (Full thread)

  • Felix Moessbauer of Siemens posted to the list reporting that he had recently “stumbled upon a couple of Debian source packages on the snapshot mirrors that are listed multiple times (same name and version), but each time with a different checksum”. The thread, which Felix titled Debian: what precisely identifies a source package, is about precisely that: what can be axiomatically relied upon by consumers of the Debian archives. It also indicates an issue where “we can’t exactly say which packages were used during build time (even when having the .buildinfo files)”.

  • Luca DiMaio posted to the list announcing the release of xfsprogs 6.17.0 which specifically includes a commit that “implements the functionality to populate a newly created XFS filesystem directly from an existing directory structure” which “makes it easier to create populated filesystems without having to mount them [and thus is] particularly useful for reproducible builds”. Luca asked the list how they might contribute to the docs of the System images page.


The Original Sin of Computing…that no one can fix

Popular YouTuber @lauriewired published a video this month with an engaging take on the Trusting Trust problem. Titled The Original Sin of Computing…that no one can fix, the video touches on David A. Wheeler’s Diverse Double-Compiling dissertation.

GNU developer Janneke Nieuwenhuizen followed up with an email (additionally sent to our mailing list), underscoring that GNU Mes’s “current solution [to this issue] uses ancient softwares in its bootstrap path, such as gcc-2.95.3 and glibc-2.2.5”. (According to Colby Russell, the GNU Mes bootstrapping sequence is shown at 18m54s in the video.)


Reproducible Builds at the Transparency.dev summit

Holger Levsen gave a talk at this year’s Transparency.dev summit in Gothenburg, Sweden, outlining the achievements of the Reproducible Builds project in the last 12 years, covering both upstream developments as well as some distribution-specific details. As mentioned on the talk’s page, Holger’s presentation concluded “with an outlook into the future and an invitation to collaborate to bring transparency logs into Reproducible Builds projects”.

The slides of the talk are available, although a video has yet to be released. Nevertheless, as a result of the discussions at Transparency.dev there is a new page on the Debian wiki with the aim of describing a potential transparency log setup for Debian.


Supply Chain Security for Go

Andrew Ayer has set up a new service at sourcespotter.com that aims to monitor the supply-chain security of Go releases. It consists of four separate trackers:

  1. A tool to verify that the Go Module Mirror and Checksum Database is behaving honestly and has not presented inconsistent information to clients.
  2. A module monitor that records every module version served by the Go Module Mirror and Checksum Database, allowing you to monitor for unexpected versions of your modules.
  3. A tool that verifies that the Go toolchains published in the Go Module Mirror can be reproduced from source code, making it difficult to hide backdoors in the binaries downloaded by the go command.
  4. A telemetry config tracker that tracks the names of telemetry counters uploaded by the Go toolchain, to ensure that Go telemetry is not violating users’ privacy.

As the homepage of the service mentions, the trackers are free software and do not rely on Google infrastructure.


Three new academic papers published

Julien Malka of the Institut Polytechnique de Paris published an exciting paper this month on How NixOS could have detected the XZ supply-chain attack for the benefit of all thanks to reproducible-builds. Julien outlines his paper as follows:

In March 2024, a sophisticated backdoor was discovered in xz, a core compression library in Linux distributions, covertly inserted over three years by a malicious maintainer, Jia Tan. The attack, which enabled remote code execution via ssh, was only uncovered by chance when Andres Freund investigated a minor performance issue. This incident highlights the vulnerability of the open-source supply chain and the effort attackers are willing to invest in gaining trust and access. In this article, I analyze the backdoor’s mechanics and explore how bitwise build reproducibility could have helped detect it.

A PDF of the paper is available online.


Iyán Méndez Veiga and Esther Hänggi (of the Lucerne University of Applied Sciences and Arts and ETH Zurich) published a paper this month on the topic of Reproducible Builds for Quantum Computing. The abstract of their paper mentions the following:

Although quantum computing is a rapidly evolving field of research, it can already benefit from adopting reproducible builds. This paper aims to bridge the gap between the quantum computing and reproducible builds communities. We propose a generalization of the definition of reproducible builds in the quantum setting, motivated by two threat models: one targeting the confidentiality of end users’ data during circuit preparation and submission to a quantum computer, and another compromising the integrity of quantum computation results. This work presents three examples that show how classical information can be hidden in transpiled quantum circuits, and two cases illustrating how even minimal modifications to these circuits can lead to incorrect quantum computation results.

A full PDF of their paper is available.


Congratulations to Georg Kofler who submitted their Master’s thesis for the Johannes Kepler University of Linz, Austria on the topic of Reproducible builds of E2EE-messengers for Android using Nix hermetic builds:

The thesis focuses on providing a reproducible build process for two open-source E2EE messaging applications: Signal and Wire. The motivation to ensure reproducibility—and thereby the integrity—of E2EE messaging applications stems from their central role as essential tools for modern digital privacy. These applications provide confidentiality for private and sensitive communications, and their compromise could undermine encryption mechanisms, potentially leaking sensitive data to third parties.

A full PDF of their thesis is available online.


Shawkot Hossain of Aalto University, Finland has also submitted their Master’s thesis on the The Role of SBOM in Modern Development with a focus on the extant tooling:

Currently, there are numerous solutions and techniques available in the market to tackle supply chain security, and all claim to be the best solution. This thesis delves deeper by implementing those solutions and evaluates them for better understanding. Some of the tools that this thesis implemented are Syft, Trivy, Grype, FOSSA, dependency-check, and Gemnasium. Software dependencies are generated in a Software Bill of Materials (SBOM) format by using these open-source tools, and the corresponding results have been analyzed. Among these tools, Syft and Trivy outperform others as they provide relevant and accurate information on software dependencies.

A PDF of the thesis is also available.


Distribution work

Michael Plura published an interesting article on Heise.de on the topic of Trust is good, reproducibility is better:

In the wake of growing supply chain attacks, the FreeBSD developers are relying on a transparent build concept in the form of Zero-Trust Builds. The approach builds on the established Reproducible Builds, where binary files can be rebuilt bit-for-bit from the published source code. While reproducible builds primarily ensure verifiability, the zero-trust model goes a step further and removes trust from the build process itself. No single server, maintainer, or compiler can be considered more than potentially trustworthy.

The article mentions that this “goal has now been achieved with a slight delay and can be used in the current development branch for FreeBSD 15”.


In Debian this month, 7 reviews of Debian packages were added, 5 were updated and 11 were removed, adding to our knowledge about identified issues.

For the Debian CI tests, Holger fixed #786644 and set nocheck in DEB_BUILD_OPTIONS for the second build.


Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Website updates

Once again, there were a number of improvements made to our website this month including:

In addition, a number of contributors added a series of notes from our recent summit to the website, including Alexander Couzens [], Robin Candau [][][][][][][][][] and kpcyrd [].


Tool development

diffoscope version 307 was uploaded to Debian unstable by Chris Lamb, who made a number of changes including fixing compatibility with LLVM version 21 [] and attempting to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (an experimental feature). [] In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 307.



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

05 November, 2025 09:01PM

November 04, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCNPy 0.2.14 on CRAN: Minor Maintenance

Another (again somewhat minor) maintenance release of the RcppCNPy package arrived on CRAN just now. RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers along with Rcpp for the glue to R.

The changes are all minor chores. As R now checks usage of packages in demos, we added rbenchmark to Suggests: in DESCRIPTION. We refreshed the main continuous integration script for a minor update, and also replaced one URL in a badge to avoid a timeout during checks at CRAN. So … nothing user-facing this time! Full details are below.

Changes in version 0.2.14 (2025-11-03)

  • The rbenchmark package is now a Suggests: as it is used in a demo

  • The continuous integration setup now uses r-ci with its embedded setup step

  • The URL used for the GPL-2 is now the R Project copy

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 November, 2025 12:31AM

November 03, 2025

Melissa Wen

Kworkflow at Kernel Recipes 2025

Franks drawing of Melissa Wen with Kernel Recipes mascots around

This was the first year I attended Kernel Recipes, and I have nothing to say but how much I enjoyed it and how grateful I am for the opportunity to talk more about kworkflow to very experienced kernel developers. What I most like about Kernel Recipes is its intimate format, with only one track and many moments to get closer to experts and people you usually only talk to online during the whole year.

In the beginning of this year, I gave the talk Don’t let your motivation go, save time with kworkflow at FOSDEM, introducing kworkflow to a more diversified audience, with different levels of involvement in the Linux kernel development.

At this year’s Kernel Recipes I presented the second talk of the first day: Kworkflow - mix & match kernel recipes end-to-end.

The Kernel Recipes audience is a bit different from FOSDEM’s, consisting mostly of long-term kernel developers, so I decided to go directly to the point. I showed kworkflow being part of the daily life of a typical kernel developer, from the local setup for installing a custom kernel on different target machines to sending and applying patches to/from the mailing list. In short, I showed how to mix and match kernel workflow recipes end-to-end.

As I went a bit fast when showing some features during my presentation, in this blog post I explain each slide from my speaker notes. You can see a summary of this presentation in the Kernel Recipes Live Blog Day 1: morning.


Introduction

First slide: Kworkflow by Melissa Wen

Hi, I’m Melissa Wen from Igalia. As we already started sharing kernel recipes and even more is coming in the next three days, in this presentation I’ll talk about kworkflow: a cookbook to mix & match kernel recipes end-to-end.

Second slide: About Melissa Wen, the speaker of this talk

This is my first time attending Kernel Recipes, so lemme introduce myself briefly.

  • As I said, I work for Igalia, I work mostly on kernel GPU drivers in the DRM subsystem.
  • In the past, I co-maintained VKMS and the v3d driver. Nowadays I focus on the AMD display driver, mostly for the Steam Deck.
  • Besides code, I contribute to the Linux kernel by mentoring several newcomers in Outreachy, Google Summer of Code and Igalia Coding Experience. Also, by documenting and tooling the kernel.

Slide 3: and what's this cookbook called Kwokflow? - with kworkflow logo and KR penguin

And what’s this cookbook called kworkflow?

Kworkflow (kw)

Slide 4: text below

Kworkflow is a tool created by Rodrigo Siqueira, my colleague at Igalia. It’s a single platform that combines software and tools to:

  • optimize your kernel development workflow;
  • reduce time spent in repetitive tasks;
  • standardize best practices;
  • ensure that deployment data flows smoothly and reliably between different kernel workflows;

Slide 5: kworkflow is mostly a voluntary work

It’s mostly done by volunteers, kernel developers using their spare time. Its features cover real use cases according to kernel developer needs.

Slide 6: Mix & Match the daily life of a kernel developer

Basically, it mixes and matches the daily life of a typical kernel developer with kernel workflow recipes and some secret sauces.

First recipe: A good GPU driver for my AMD laptop

Slide 7: Let's prepare our first recipe

So, it’s time to start the first recipe: A good GPU driver for my AMD laptop.

Slide 8: Ingredients and Tools

Before starting any recipe we need to check the necessary ingredients and tools. So, let’s check what you have at home.

With kworkflow, you can use:

Slide 9: kw device and kw remote

  • kw device: to get information about the target machine, such as CPU model, kernel version, distribution, GPU model, etc.

  • kw remote: to set the address of this machine for remote access

Slide 11: kw config

  • kw config: you can configure kw with kw config. With this command you basically select the tools, flags and preferences that kw will use to build and deploy a custom kernel on a target machine. You can also define the recipients of your patches when sending them with kw send-patch. I’ll explain more about each feature later in this presentation.

Slide 13: kw kernel-config-manager

  • kw kernel-config-manager (or just kw k): to fetch the kernel .config file from a given machine, store multiple .config files, and list and retrieve them according to your needs.

Slide 15: Preparation

Now, with all ingredients and tools selected and well portioned, follow the right steps to prepare your custom kernel!

First step: Mix ingredients with kw build or just kw b

Slide 16: kw build

  • kw b and its options wrap many routines of compiling a custom kernel.
    • You can run kw b -i to check the name and kernel version and the number of modules that will be compiled and kw b --menu to change kernel configurations.
    • You can also pre-configure compiling preferences in kw config regarding kernel building. For example, target architecture, the name of the generated kernel image, if you need to cross-compile this kernel for a different system and which tool to use for it, setting different warning levels, compiling with CFlags, etc.
    • Then you can just run kw b to compile the custom kernel for a target machine (condensed in the sketch below).
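Condensed, the build step looks like this (a sketch using only the commands described above):

$ kw b -i       # check the kernel name/version and the number of modules to compile
$ kw b --menu   # adjust the kernel configuration if needed
$ kw b          # compile the custom kernel for the target machine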

Second step: Bake it with kw deploy or just kw d

Slide 18: kw deploy

After compiling the custom kernel, we want to install it on the target machine. Check the name of the custom kernel built (6.17.0-rc6), then with kw s SSH into the target machine and see that it’s still running the kernel from the Debian distribution, 6.16.7+deb14-amd64.

As with the building settings, you can also pre-configure some deployment settings, such as the compression type, the path to device tree binaries, the target machine (remote, local, vm), whether you want to reboot the target machine just after deploying your custom kernel, and whether you want to boot into the custom kernel when restarting the system after deployment.

If you didn’t pre-configure some options, you can still pass them as command options; for example, kw d --reboot will reboot the system after deployment, even if I didn’t set this in my preferences.

By just running kw d --reboot, I installed the kernel on a given target machine and rebooted it. So when accessing the system again, I can see it was booted into my custom kernel.
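Condensed, the deploy-and-verify loop looks roughly like this (a sketch; the uname -r check is an illustrative addition):

$ kw d --reboot   # install the custom kernel on the target machine and reboot it
$ kw s            # SSH into the target machine...
$ uname -r        # ...and confirm the custom kernel is running
6.17.0-rc6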

Third step: Time to taste with kw debug

Slide 20: kw debug

  • kw debug wraps many tools for validating a kernel on a target machine. We can log basic dmesg info, but also track events and use ftrace.
    • With kw debug --dmesg --history we can grab the full dmesg log from a remote machine; with the --follow option, you can monitor dmesg output as it appears. You can also run a command with kw debug --dmesg --cmd="<my command>" and collect just the dmesg output related to that specific execution period.
    • In the example, I’ll just unload the amdgpu driver. I use kw drm --gui-off to drop the graphical interface and release amdgpu for unloading. Then I run kw debug --dmesg --cmd="modprobe -r amdgpu" to unload the amdgpu driver, but it fails and I can’t unload it (the flow is condensed below).
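Condensed, the debugging flow looks like this (a sketch using the commands given in the talk):

$ kw debug --dmesg --history                    # grab the full dmesg log from the target
$ kw drm --gui-off                              # drop the graphical interface so amdgpu can be released
$ kw debug --dmesg --cmd="modprobe -r amdgpu"   # run a command and capture its dmesg output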

Cooking Problems

Slide 22: kw patch-hub

Oh no! That custom kernel isn’t tasting good. Don’t worry: as in many recipe preparations, we can search the internet to find suggestions on how to make it tasteful, alternative ingredients and other flavours according to your taste.

With kw patch-hub you can search the lore kernel mailing lists for patches that may fix your kernel issue. You can navigate the mailing lists, check a series, bookmark it if you find it relevant, and apply it to your local kernel tree, creating a different branch for tasting… oops, for testing. In this example, I’m opening the amd-gfx mailing list, where I can find contributions related to the AMD GPU driver, bookmark and/or just apply the series to my work tree, and with kw bd I can compile & install the custom kernel with this possible bug fix in one shot.

As I changed my kw config to reboot after deployment, I just need to wait for the system to boot to try unloading the amdgpu driver again with kw debug --dmesg --cmd="modprobe -r amdgpu". From the dmesg output retrieved by kw for this command, the driver was unloaded: the problem is fixed by this series, and the kernel tastes good now.

If I’m satisfied with the solution, I can even use kw patch-hub to access the bookmarked series and mark the checkbox that will reply to the patch thread with a Reviewed-by tag for me.

Second Recipe: Raspberry Pi 4 with Upstream Kernel

Slide 25: Second Recipe RPi 4 with upstream kernel

As in all recipes, we need ingredients and tools, but with kworkflow you can get everything set up as quickly as a scene change in a TV show. We can use kw env to switch to a different environment with all kw and kernel configuration set, and with the latest compiled kernel cached.

I was preparing the first recipe for an x86 AMD laptop, and with kw env --use RPI_64 I use the same worktree but move to a different kernel workflow, now for the Raspberry Pi 4 (64 bits). The previously compiled kernel 6.17.0-rc6-mainline+ is there with 1266 modules, not the 6.17.0-rc6 kernel with 285 modules that I just built & deployed. The kw build settings are also different: now I’m targeting the arm64 architecture, cross-compiling with the aarch64-linux-gnu- toolchain, and my kernel image is called kernel8 now.

Slide 27: kw env

If you didn’t plan for this recipe in advance, don’t worry. You can create a new environment with kw env --create RPI_64_V2 and run kw init --template to start preparing your kernel recipe with the mirepoix ready.

I mean, with the basic ingredients already cut…

I mean, with the kw configuration set from a template.

And you can use kw remote to set the IP address of your target machine and kw kernel-config-manager to fetch the .config file from it. Then just run kw bd to compile and install an upstream kernel for the Raspberry Pi 4.
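Condensed, the second recipe looks roughly like this (a sketch; the exact kw remote and kw k arguments are elided, see their --help output):

$ kw env --create RPI_64_V2   # fresh environment for the Raspberry Pi 4
$ kw init --template          # seed the kw configuration from a template
$ kw remote ...               # set the board's address for remote access
$ kw k ...                    # fetch the target's kernel .config
$ kw bd                       # cross-compile and install the kernel in one shot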

Third Recipe: The Mainline Kernel Ringing on my Steam Deck (Live Demo)

Slide 30: Third Recipe - The Mainline Kernel Ringing on my Steam Deck

Let me show you how easy it is to build, install and test a custom kernel for the Steam Deck with kworkflow. It’s a live demo, but I also recorded it because I know the risks I’m exposed to, and something can go very wrong just because of reasons :)

Report: how was the live demo

For this live demo, I took my OLED Steam Deck to the stage. I explained that, if I boot the mainline kernel on this device, there is no audio. So I turned it on and booted the mainline kernel I had installed beforehand. It was clear that there was no typical Steam Deck startup sound when the system loaded.

Franks drawing of Melissa Wen doing a demo of kworkflow with the Steam Deck

As I started the demo in the kw environment for Raspberry Pi 4, I first moved to another environment previously used for Steam Deck. In this STEAMDECK environment, the mainline kernel was already compiled and cached, and all settings for accessing the target machine, compiling and installing a custom kernel were retrieved automatically.

My live demo followed these steps:

  1. With kw env --use STEAMDECK, switch to a kworkflow environment for Steam Deck kernel development.

  2. With kw b -i, show that kw will compile and install a kernel named 6.17.0-rc6-mainline-for-deck with 285 modules.

  3. Run kw config to show that, in this environment, the kw configuration changes to the x86 architecture, without cross-compilation.

  4. Run kw device to display information about the Steam Deck device, i.e. the target machine. It also proves that the remote access - user and IP - for this Steam Deck was already configured when using the STEAMDECK environment, as expected.

  5. Using git am, as usual, apply a hot fix on top of the mainline kernel. This hot fix makes the audio play again on Steam Deck.

  6. With kw b, build the kernel with the audio change. It will be fast because we are only compiling the affected files, since everything else was previously done and cached. The compiled kernel, kw configuration and kernel configuration are all retrieved by just moving to the “STEAMDECK” environment.

  7. Run kw d --force --reboot to deploy the new custom kernel to the target machine. The --force option lets us install the mainline kernel even if mkinitcpio complains about missing support for downstream packages when generating the initramfs. The --reboot option reboots the Steam Deck automatically, just after deployment completes.

  8. After finishing deployment, the Steam Deck will reboot into the new custom kernel version and make a clear, resonant or vibrating sound. [Hopefully]

Finally, I showed the audience that, if I wanted to send this patch upstream, I would just need to run kw send-patch: kw automatically adds subsystem maintainers, reviewers and mailing lists for the affected files as recipients and sends the patch for the upstream community’s assessment. As I didn’t want to create unnecessary noise, I just did a dry run with kw send-patch -s --simulate to explain how it looks; the whole demo is condensed below.
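Condensed, the whole demo boils down to this (a sketch; the patch file name is hypothetical):

$ kw env --use STEAMDECK                # switch to the Steam Deck environment
$ kw b -i                               # inspect the kernel name and module count
$ git am 0001-deck-audio-hotfix.patch   # apply the audio hot fix (hypothetical file name)
$ kw b                                  # fast incremental rebuild
$ kw d --force --reboot                 # deploy despite initramfs warnings, then reboot
$ kw send-patch -s --simulate           # dry run of sending the fix upstream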

What else can kworkflow already mix & match?

In this presentation, I showed that kworkflow supports different kernel development workflows: multiple distributions, different bootloaders and architectures, different target machines and different debugging tools. It automates the best practices of kernel development routines, from setting up the development environment and verifying a custom kernel on bare metal to sending contributions upstream following the contribution-by-email process. I exemplified it with three different target machines: my ordinary x86 AMD laptop with Debian, a Raspberry Pi 4 with arm64 Raspbian (cross-compilation) and the Steam Deck with SteamOS (an x86 Arch-based OS). Besides those distributions, kworkflow also supports Ubuntu, Fedora and PopOS.

Now it’s your turn: Do you have any secret recipes to share? Please share with us via kworkflow.


03 November, 2025 09:30PM

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

Migrating from hugo-theme-learn to an alternative theme

Introduction

For my personal web site, I use Hugo with hugo-theme-learn for statically generated content.

Recently I noticed that this combination, at the specific versions I was using, is no longer compatible.

(I don't update the content very often, so it took a while to notice this situation.)

What was the actual error?

ERROR deprecated: .Site.IsMultiLingual was deprecated in Hugo v0.124.0 and subsequently removed. Use hugo.IsMultilingual instead.
Total in 21 ms

It seems that hugo-theme-learn is not compatible with recent Hugo anymore.

How to deal with it?

Checking the upstream issues, I found the following one:

What are people migrating to?

'hugo-theme-relearn' was noted there as an alternative.

As 'hugo-theme-learn' is not actively maintained anymore, migrating from hugo-theme-learn to hugo-theme-relearn is the natural move - and it turned out to be easy.

For example, it needs just a few lines of configuration.

diff --git a/website/config.toml b/website/config.toml
index 702d4da..99ddf03 100644
--- a/website/config.toml
+++ b/website/config.toml
@@ -3,7 +3,7 @@ languageCode = "en-US"
 defaultContentLanguage = "en"

-theme = "hugo-theme-learn"
+theme = "hugo-theme-relearn"
 themesdir = "themes"
 metaDataFormat = "yaml"
 defaultContentLanguageInSubdir= true
@@ -16,6 +16,7 @@ defaultContentLanguageInSubdir= true
   disableNextPrev = true
   disableSearch = true
   disableShortcutsTitle = true
+  themeVariant = 'learn'

 [markup]
   [markup.goldmark]
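
If the theme is vendored under themes/, fetching the fork is one extra step before the configuration change above. A sketch assuming a git submodule layout and the fork's GitHub location:

git submodule add https://github.com/McShelby/hugo-theme-relearn.git themes/hugo-theme-relearn
hugo server    # rebuild locally and check that pages render as before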

It is sad that hugo-theme-learn is not actively maintained anymore, but many thanks for this great Hugo theme. And thanks also for the effort to fork it as hugo-theme-relearn.

03 November, 2025 10:04AM

Birger Schacht

Status update, October 2025

At the beginning of the month I uploaded a new version of the sway package to Debian. This contains two backported patches, one to fix reported WM capabilities and one to revert the default behavior for drag_lock to disabled.

I also uploaded new releases of cage (a kiosk for Wayland), labwc, the window-stacking Wayland compositor that is inspired by Openbox, and wf-recorder, a tool for creating screen recordings of wlroots-based Wayland compositors.

If I don’t forget, I try to update the watch files of the packages I touch to the new version 5 format.

Simon Ser announced vali, a C library for Varlink. The blog post also mentions that this will be a dependency of “the next version of the kanshi Wayland output management daemon” and the PR to do so is now already merged. So I created ITP: vali – A Varlink C implementation and code generator, packaged the library and it is now waiting in NEW. In addition to libscfg this is now the second dependency of kanshi that is in NEW.

On the Rust side of things I fixed a bug in carl. The fix introduces new date properties which can be used to highlight a calendar date. I also updated all the dependencies and plan to create a new release soon.

Later I dug up a Rust project that I started a couple of years ago, where I try to use wasm-bindgen to implement interactive web components. There is a lot I have to refactor in this code base, but I will work on that and try to publish something in the next few months.

Miscellaneous

Two weeks ago I wrote A plea for <dialog>, which made the case for using standardized HTML elements instead of resorting to JavaScript libraries.

I finally managed to update my shell server to Debian 13.

I created an issue for the nextcloud-news android client because I moved to a new phone and my starred articles did not show up in the news app, which is a bit annoying.

I got my ticket for 39C3.

In my dayjob I continued to work on the refactoring of the import logic of our apis-core-rdf app. I released version 0.56, which also introduced the “#snackbar” as the container for the toast message, as described in the <dialog> blog post. At the end of the month I released version 0.57 of apis-core-rdf, which got rid of the remaining leftovers of the old import logic.

A couple of interesting articles I stumbled upon (or finally had the time to read):

03 November, 2025 05:28AM

Russ Allbery

Review: The Raven Scholar

Review: The Raven Scholar, by Antonia Hodgson

Series: Eternal Path Trilogy #1
Publisher: Orbit
Copyright: April 2025
ISBN: 0-316-57723-5
Format: Kindle
Pages: 651

The Raven Scholar is an epic fantasy and the first book of a projected trilogy. It is Antonia Hodgson's first published fantasy novel; her previous published novels are historical mystery. I would classify this as adult fantasy — the main character is thirty-four with a stable court position — but it has strong YA vibes because of the generational turnover feel of the main plot.

Eight years before the start of this book, Andren Valit attempted to assassinate the emperor and failed. Since then, his widow and three children — twins Yana and Ruko and infant Nisthala — have been living in disgrace in a cramped apartment, subject to constant inspections and suspicion. As the story opens, they have been summoned to appear before the emperor, escorted by a young and earnest Hound (essentially the state security services) named Shal Worthy. The resulting interrogation is full of dangerous traps. Not all of them will be avoided.

The formalization of the consequences of that imperial summons falls to an unpopular Junior Archivist (Third Class) whose one notable skill is her penmanship. A meeting that was disastrous for the Valits becomes unexpectedly fortunate for the archivist, albeit with a poisonous core.

Eight years later, Neema Kraa is High Scholar, and Emperor Bersun's twenty-four years of permitted reign is coming to an end. The Festival is about to begin. One representative from each of the empire's eight anats (religious schools) will compete in seven days of Trials, save for the Dragons who do not want the throne and will send a proxy. The victor according to the Trials scoring system will become emperor and reign unquestioned for twenty-four years or until resignation. This is the system that put an end to the era of chaos and has been in place for over a thousand years.

On the eve of the Trials, the Raven contender is found murdered. Neema is immediately a suspect; she even has reasons to suspect herself. She volunteers to lead the investigation because she has to know what happened. She is also volunteered to be the replacement Raven contender. There is no chance that she will become emperor; she doesn't even know how to fight. But agnostic Neema has a rather unexpected ally.

As the last chime fades we drop neatly on to the balcony's rusting hand rail, folding our wings with a soft shuffle. Noon, on the ninth day of the eighth month, 1531. Neema Kraa's lodgings. We are here, exactly where we should be, at exactly the right moment, because we are the Raven, and we are magnificent.

The Raven Scholar is a rather good epic fantasy, with some caveats that I'll get to in a moment, but I found it even more fascinating as a genre artifact.

I've read my share of epic fantasy over the years, although most of my familiarity of the current wave of new adult fairy epics comes from reviews rather than personal experience. The Raven Scholar is epic fantasy, through and through. There is court intrigue, a main character who is a court functionary unexpectedly thrown into the middle of some problem, civilization-wide stakes, dramatic political alliances, detailed magic and mythological systems, and gods. There were moments that reminded me of a Guy Gavriel Kay novel, although Hodgson's characters tend more towards disarming moments of humanization instead of Kay's operatic scenes of emotional intensity.

But The Raven Scholar is also a murder mystery, complete with a crime scene, clues, suspects, evidence, an investigation, a possibly compromised detective, and a morass of possible motives and red herrings. I'm not much of a mystery reader, but this didn't feel like the sort of ancillary mystery that might crop up in the course of a typical epic fantasy. It felt like a full-fledged investigation with an amateur detective; one can tell that Hodgson's previous four books were historical mysteries.

And then there's the Trials, which are the centerpiece of the book.

This book helped me notice that people (okay, me, I'm the people) have been sleeping on the influence of The Hunger Games, Battle Royale, and reality TV (specifically Survivor) on genre fiction, possibly because the more obvious riffs on the idea (Powerless, The Selection) have been young adult or new adult. Once I started looking, I realized this idea is everywhere now: Throne of Glass, Fourth Wing, even The Night Circus to some extent. Competitions with consequences are having a moment.

I suspect having a competition to decide the next emperor is going to strike some traditional fantasy readers as sufficiently absurd and unbelievable that it will kick them out of the book. I had a moment of "okay, this is weird, why would anyone stick with this system for so long" myself. But I would encourage such readers to interrogate whether that's only a response from unfamiliarity; after all, strange women lying in ponds distributing swords is no basis for a system of government either. This is hardly the most unrealistic epic fantasy trope, and it has the advantage of being a hell of a plot generator when handled well.

Hodgson handles it well. Society in this novel is structured around the anats and the eight Guardians, gods who, according to myth, had returned seven times previously to save the world, but who will destroy the world when they return again. Each Guardian represents a group of characteristics and useful societal functions: the Ox is trustworthy, competent and hard-working; the Fox is a trickster and a rule-bender; the Raven is shrewd and careful and is the Guardian of scholars and lawyers. Each Trial is organized by one of the anats and tests the contenders for the skills most valued by that Guardian, often in subtle and rather ingenious ways. There are flaws here that you could poke at if you wanted to, but I was charmed and thoroughly entertained by how well Hodgson weaves the story around the Trials and uses the conflicting values to create character conflict, unexpected alliances, and engrossing plot.

Most importantly for a book of this sort, I liked Neema. She has a charming combination of competence, quirks (she is almost physically unable to not correct people's factual errors), insecurity, imposter syndrome, and determination. She is way out of her depth and knows it, but she has an ethical core and an insatiable curiosity that won't let her leave the central mysteries of the book alone. And the character dynamics are great; there are a lot of characters, including the competition problem of having to juggle eight contenders and give them all sufficient characterization to be meaningful, but this book uses its length to give each character some room to breathe. This is a long book, well over 600 pages, but it felt packed with events and plot twists. After every chapter I had to fight the urge to read just one more.

The biggest drawback of this book is that it is very much the first book of a trilogy, none of the other volumes are out yet, and the ending is rather nasty. This is the sort of trilogy that opens with a whole lot of bad things happening, and while I am thoroughly hooked and will purchase the next volume as soon as it's available, I wish Hodgson had found a way to end the book on a somewhat more positive or hopeful note. The middle of the book was great; the end was a bit of an emotional slog, alas. The writing is good enough here that I'm fairly sure the depression will be worth it, but if you need your endings to be triumphant (and who could blame you in this moment in history), you may want to wait on this one until more volumes are out.

Apart from that, though, this was a lot of fun. The Guardians felt like they came from a different strand of fantasy than you usually see in epic, more of a traditional folk tale vibe, which adds an intriguing twist to the epic fantasy setting. The characters all work, and Hodgson even pulls off some Game of Thrones–style twists that make you sympathetic to characters you previously hated. The magic system apart from the Guardians felt underbaked, but the politics had more depth than a lot of fantasy novels. If you want the truly complex and twisty politics you would get from one of Guy Gavriel Kay's historical rewrites, you will come away disappointed, but it was good enough for me. And I did enjoy the Raven.

Respect, that's all we demand. Recognition of our magnificence. Offerings. Love. Fear. Trembling awe. Worship. Shiny things. Blood sacrifice, some of us very much enjoy blood sacrifice. Truly, we ask for so little.

Followed by an as-yet untitled sequel that I hope will materialize.

Rating: 7 out of 10

03 November, 2025 03:25AM

November 02, 2025

hackergotchi for Guido Günther

Guido Günther

Free Software Activities October 2025

Quite a few things made progress last month: we put out the Phosh 0.50 release, got closer to enabling media roles for audio by default in Phosh (see the related post) and reworked our image builds. You should also (hopefully) notice some nice quality-of-life improvements once the changes land in a distro near you, if you're using Phosh. See below for details:

phosh

  • Switch back to default theme when disabling automatic HighContrast (MR)
  • Handle gnome-session 49 changes so OSK can still start up (MR)
  • Release 0.50.0, 0.50.1
  • Don't forget to apply corner-shift to gear icon (MR)
  • Fix startup warning (MR)
  • Update doap (MR)
  • DBus codegen cleanups (MR, MR, MR)
  • Add Autobrightness handling (MR), (MR), (MR)

phoc

  • Dispatch idle loop in prepare (MR)
  • Release 0.50.0

phosh-mobile-settings

  • Allow to hide plugins (MR)
  • Release 0.50~rc1, 0.50.0
  • Hide demo plugins by default (MR)
  • Sink floating refs properly (MR)
  • Simplify includes (MR)
  • Allow to configure alarm sound if clock app likely supports it (MR)
  • Use shared check CI images (MR)
  • Release 0.50.1
  • Fix ringtone role (MR)

stevia (formerly phosh-osk-stub)

  • Ship systemd user unit so things work with gnome-session 49 (MR)
  • Fix fallback to default OSK size with multiple outputs (MR)
  • Improve character styling a bit (MR)
  • Consolidate input surface creation (MR)
  • Release 0.50.0, 0.50.1
  • Don't trigger backspace key repeat in keypad or emoji layouts (MR)
  • Better restore layout after swipe closing (MR)

phosh-tour

meta-phosh

  • Build shared CI image (MR)

xdg-desktop-portal-phosh

  • Use pfs subproject for Rust portal (MR)

libphosh-rs

  • Fix doc build (MR)

Calls

  • Release 49.1
  • Fix plugin loading (MR)
  • Fix debug logging (MR)

Phrog

  • Allow OSKs to run with gnome-session 49 (MR)
  • Release 0.50.0 (MR)

phosh-recipes

  • Fix build (MR)
  • Simplify and cleanup docs (MR)
  • Fix mkosi build (MR)
  • Install Recommends: (MR)
  • Draft: Add support for initial-setup (MR)
  • Add version option (MR)
  • Drop debos build (MR, MR)

feedbackd

  • Fix compatibility with systemd >= 258 (MR)
  • Use meson.options (MR)
  • Add phone role (MR)

feedbackd-device-themes

  • Add support for Google Pixel 3A (MR)

Chatty

  • Fix failing CI build (MR)

Squeekboard

  • Add systemd unit to start with gnome-session 49 (MR)

Debian

  • Upload phosh-tour 0.50.0
  • Upload stevia 0.50.0, 0.50.1
  • Upload phosh-mobile-settings 0.50.0
  • Upload feedbackd 0.8.6
  • Prepare phrog upload (MR)
  • gnome-session Breaks (MR, MR)
  • cellbroadcastd: Backport our upstream deadlock fix (MR)
  • Upload wlroots 0.19.2
  • Upload iio-sensor-proxy 3.8 (MR)

Cellbroadcastd

  • Release 0.0.3
  • Fix deadlock on start (MR)

gnome-settings-daemon

  • Fix brightness values (MR)

gnome-control-center

  • Ignore role based loopbacks (MR)

gnome-initial-setup

  • Use AdwSwitchRow to reduce horizontal allocation (MR)
  • Quit when done (and not under GDM) (MR)

sdm845-mainline

  • shift6mq: Switch to mainline panel driver (MR)

gnome-session

  • RequiredComponents is now gone (MR)

alpine

  • stevia: Add hunspell-en-us dependency (MR)

droid-juicer

  • Install sensor firmware for SHIFT6mq (MR)
  • Install sensor firmware for Oneplus 6T (MR)
  • Fix clippy complaints (MR)

phosh-site

  • Mention mirror (MR)
  • Update for hugo 150 (MR, MR)
  • Embed some of our peertube videos (MR)
  • Add donations item (MR)
  • Update osk post to mention systemd unit (MR)
  • Update list of distributions and users (MR)
  • Switch nightly builds to forky (MR)
  • Lint more markdown (MR)
  • Release 0.50.1 (MR)
  • Notes on audio (MR)

phosh-debs

  • Switch to Debian forky (MR)
  • Add g-i-s (MR)

Linux

  • shift6mq: Add missing panel driver dependency (MR)
  • shift6mq: Fix DTS warning (MR)

Wireplumber

  • Update media-role volume (MR)
  • Allow to set a default target for e.g. alerts and alarms (MR)

Phosh.mobi e.V.

demo-phones

  • Add demo videos (MR)
  • Add demo epubs (MR)

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phoc: Use correct format specifiers (MR)
  • phoc: seat: Use the getter to access focused layer's output (MR)
  • phosh: Upcoming events empty state (MR)
  • phosh: Caffeine duration (MR)
  • phosh: Location service quick setting (MR)
  • phosh: UI check fix (MR)
  • phosh: Autobrightness toggle (MR)
  • p-m-s: Empty tweaks page test (MR)
  • p-m-s: Symlink backend (MR)
  • p-m-s: Return boolean from value setters (MR)
  • p-m-s: conf-tweaks: Make setting_data a prop in Xresources backend (MR)
  • p-m-s: conf-tweaks: Add gtk3settings backend (MR)
  • gmobile: xiaomi-sweet support (MR)
  • xdg-d-p: Use clippy (MR, MR)
  • libcmatrix: various tweaks (MR: https://source.puri.sm/Librem5/libcmatrix/-/merge_requests/110)
  • libcmatrix: Release 0.0.4 (MR: https://source.puri.sm/Librem5/libcmatrix/-/merge_requests/115)
  • meta-phosh: Add cellbroadcasts (MR)
  • calls: Unload plugin test (MR)
  • calls: Sip proxy support (MR)
  • calls: Build system cleanups (MR)
  • m-b-p-i: JP MVNO (MR)

Comments?

Join the Fediverse thread

02 November, 2025 06:39PM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in October 2025

02 November, 2025 12:54PM by Ben Hutchings

Russell Coker

PCIe Problems

HP z840 Dead Slot

I just had an issue with the HP z840 system I’m using as a build server [1]. I had to take it to a site that was about 20 minutes’ drive away, and after getting there it didn’t work - it just gave 6 beeps and the red LED on the power button flashed. The beeps indicate a video issue, which points to the Intel Arc B580 card (which is annoyingly large) [2]. I swapped the card with another video card I had lying around (which I knew to be reliable) and got the same result.

It turned out that the PCIe*16 slot I was using for it had broken; maybe bumps during transport with the big heavy GPU had broken it. I plugged it into the next slot along, which is a PCIe*8 slot that’s open ended so it takes larger cards. The upside of this is that the system is still working well; the downside is that the issues I already had with the GPU being unreasonably large are exacerbated by losing one of the *16 slots. Having it in a PCIe 3.0*8 slot is not a problem for me as I only plan to use it for 8K display and for ML stuff, and I think that *8 speed (7.8GB/s) is sufficient for both those tasks. In that slot the card could display 8K video at 60Hz with 32bpp and no compression (something that I don’t anticipate ever doing). It could also transfer the maximum size LLM in under 2 seconds, which isn’t an unreasonable delay for starting an LLM.

The question now is, should I remove PCIe cards before transport in future?

HP z640 Intermittent Errors

The next issue I have is with my HP z640 workstation, which is now my main workstation [3]. I started getting the errors below; the kwin_wayland session hung once, and another time I got video corruption with mpv.

Oct 10 20:46:36 xev kernel: pcieport 0000:00:02.0: AER: Correctable error message received from 0000:00:02.0
Oct 10 20:46:36 xev kernel: pcieport 0000:00:02.0: AER: found no error details for 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Multiple Correctable error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:   device [8086:2f04] error status/mask=00001040/00002000
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [ 6] BadTLP
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [12] Timeout
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER:   Error of this Agent is reported first
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:   device [1002:6987] error status/mask=00001000/00002000
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:    [12] Timeout
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:   device [1002:aae0] error status/mask=00001000/00002000
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:    [12] Timeout
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Correctable error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: found no error details for 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Multiple Correctable error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: found no error details for 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Multiple Correctable error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:   device [8086:2f04] error status/mask=00001040/00002000
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [ 6] BadTLP
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [12] Timeout
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER:   Error of this Agent is reported first
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:   device [1002:6987] error status/mask=00001100/00002000
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:    [ 8] Rollover
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:    [12] Timeout
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1: PCIe Bus Error: severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:   device [1002:aae0] error status/mask=00001100/00002000
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:    [ 8] Rollover
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:    [12] Timeout

On that system I took the CPU out and reinstalled it with new heatsink paste on the theory that it might not have made good contact with some of the pins. The system also has one DIMM slot not working which can be a symptom of poor seating of the CPU. Doing that made no difference to the DIMM slot (I had bought the system for $50 in “unknown condition”) but the video has worked correctly since. It has been suggested to me that reseating the CPU didn’t directly affect the issue and that just taking the system apart could have addressed an issue of the GPU not making good contact in the PCIe slot.

It has been suggested that I could try “contact cleaner” which can be obtained from automotive supply stores among other places. I’m hesitant to put that in a PCIe slot but putting it on the connector of the card and then polishing it off seems like something to consider. Another suggestion was to use isopropyl alcohol to wash the contacts. I guess washing a PCIe slot out with isopropyl alcohol and leaving it for hours to dry is an option as a last resort.
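
To tell whether the problem is really gone, the correctable error counts can be watched over time. A sketch of how to do that, using the root port address from the logs above:

sudo journalctl -k | grep -c 'AER: Correctable'           # correctable AER messages since boot
sudo lspci -vv -s 00:02.0 | grep -i -A3 'Advanced Error'  # inspect the root port's AER capability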

For the moment it seems to be fine, but I am not certain that the problem is gone forever. My main aim now is to have these systems keep working until after the release of DDR6 workstations, which is when I expect DDR5 workstations to become affordable on all the second hand sites.

02 November, 2025 07:51AM by etbe

November 01, 2025

hackergotchi for Junichi Uekawa

Junichi Uekawa

Playing Clair Obscur Expedition 33.

Playing Clair Obscur Expedition 33. I didn't think I would try again and again to beat a boss I cannot beat for multiple days. But here I am.

01 November, 2025 07:46AM by Junichi Uekawa

October 31, 2025

Scarlett Gately Moore

A New Chapter: Career Transition Update

I’m pleased to share that my career transition has been successful! I’ve joined our local county assessor’s office, beginning a new path in property assessment for taxation and valuation. While the compensation is modest, it offers the stability I was looking for.

My new schedule consists of four 10-hour days with an hour commute each way, which means Monday through Thursday will be largely devoted to work and travel. However, I’ll have Fridays available for open source contributions once I’ve completed my existing website maintenance commitments.

Open Source Priorities

Going forward, my contribution focus will be:

  1. Ubuntu Community Council
  2. Kubuntu/Debian
  3. Snap packages (as time permits)

Regarding the snap packages: my earlier hope of transitioning them to Carl hasn’t worked out as planned. He’s taken on maintaining KDE Neon single-handedly, and understandably, adding snap maintenance on top of that proved unfeasible. I’ll do what I can to help when time allows.

Looking for Contributors

If you’re interested in contributing to Kubuntu or helping with snap packages, I’d love to hear from you! Feel free to reach out—community involvement is what makes these projects thrive.

Thanks for your patience and understanding as I navigate this transition.

31 October, 2025 03:38PM by sgmoore

Russell Coker

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Best Pick-up-and-play with a gamepad on Debian and other Linux distributions: SuperTux

After playing some 16-bit era classic games on my Mist FPGA I was wondering what I could play on my Debian desktop as a semi-casual gamer. By semi-casual I mean that if a game needs more than 30 minutes to understand the mechanics, or needs 10 buttons on the gamepad, I usually drop it. After testing a dozen games available in the Debian archive, my favorite pick-up-and-play is SuperTux. SuperTux is a 2D platformer quite similar to Super Mario World or Sonic - also 16-bit classics - but of course you play a friendly penguin.

What I like in SuperTux:

  • completely free and open-source application packaged in the Debian main repository, including all the game assets. So there is no fiddling around to get game data as with Quake / Doom3 - everything is available in the Debian repositories (installation is a one-liner, shown after this list). The game is also available from all major Linux distributions in their standard repositories.
  • gamepad immediately usable. The credit probably has to go to the SDL library, but my 8bitdo wireless controller was usable instantly, either via the 2.4GHz dongle or Bluetooth
  • well suited for casual players: the game mechanics are easy to grasp and the tutorial is excellent
  • polished interface: the menus are clear and easy to navigate, and there is no internal jargon in the default navigation until you run your first game. (Something which confused me when playing the SuperTuxKart racing game: when I was offered to leave STK, I wondered what this STK mode was. I understood afterwards that STK is just the acronym of the game)
  • feels reasonably modern: the game does not start in a 640×480 window with 16 colors, and you could demo it without shame to a casual gamer audience.
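
Concretely, installing it on Debian is a one-liner:

sudo apt install supertux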

What can be said of the game itself? You play a penguin who can run, shoot small fireballs, and fall on its back to hit enemies harder. I played 10 levels; most levels had to be tried between 1 and 10 times, which I find OK, and the difficulty rises in a very smooth curve.

SuperTux has complete localization, hence my screenshots show French text.

SuperTux tutorial Comprehensive in-game tutorial

World Map There is a large ice flow world, but we are going underground now

Example Level Good level design that you have to use to avoid those spiky enemies

Underground level The point where I had to pause the game, after missing those flying wigs 15 times in a row

SuperTux can be played with keyboard or gamepad, and has minimal hardware requirements: any computer with working 3D graphics acceleration released in the last 20 years will be able to run it.

31 October, 2025 06:00AM by Manu

Reproducible Builds (diffoscope)

diffoscope 307 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 307. This version includes the following changes:

* Attempt to fix compatability with LLVM 21.
  (Closes: reproducible-builds/diffoscope#419)
* Update copyright years.
* Update CI to try and deploy to PyPI upon tagging a release.

You can find out more by visiting the project homepage.
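
For anyone new to the tool, a typical invocation compares two artifacts in depth and can also write an HTML report. A sketch with placeholder file names:

diffoscope old.deb new.deb                     # show differences on the terminal
diffoscope --html report.html old.deb new.deb  # write an HTML report instead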

31 October, 2025 12:00AM

October 30, 2025

Utkarsh Gupta

FOSS Activities in October 2025

Here’s my monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

Whilst I didn’t get a chance to do much, here’s still a few things that I worked on:

  • Uploaded ruby-rack, 3.1.18-1, to fix a bunch of CVEs.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did, here’s a quick TL;DR of what I did:


Debian (E)LTS

This month I have worked 16 hours on Debian Long Term Support (LTS) and 05 hours on its sister Extended LTS project and did the following things:

  • ruby-rack: There were multiple vulnerabilities reported leading to DoS (memory exhaustion) and proxy bypass.

    • [unstable/forky]: Uploaded a fix to unstable via 3.1.18-1 to fix 5 CVEs.
    • [trixie/bookworm]: Uploaded a fix for all 5 CVEs in trixie via 3.1.18-1~deb13u1 and 7 CVEs in bookworm via 2.2.20-0+deb12u1.
    • [LTS]: Uploaded a fix for all 7 CVEs in bullseye via 2.1.4-3+deb11u4. And released DLA 4357-1.
    • [ELTS]: Backported fixes for CVE-2025-46727 & CVE-2025-32441 to buster and stretch but the other backports are being a bit tricky due to really old versions. But I’ll spend some more time there before coming to a conclusion.
  • wordpress: There were multiple vulnerabilities reported leading to Sent Data & Cross-site Scripting.

    • [bookworm]: Uploaded a fix for all 4 CVEs in bookworm via 6.1.9+dfsg1-0+deb12u1.
    • [LTS]: Uploaded a fix for all 4 CVEs in bullseye via 5.7.14+dfsg1-0+deb11u1. And released DLA 4358-1.
  • [LTS] Attended the monthly LTS meeting on Jitsi. Summary here.

  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates.


Until next time.
:wq for today.

30 October, 2025 05:41AM

October 28, 2025

Russell Coker

Internode NBN500

I have just converted to the Internode NBN500 plan, which is now the same price as the NBN100 plan. I’m in an HFC area so they won’t let me get fiber to the home (due to Malcolm Turnbull breaking the NBN to help Murdoch), so I’m limited to what HFC can do.

I first tried it out on a 100Mbit card and got speeds of 96/47 Mbit/s according to speedtest.net. I’ve always had the MTU set to 1492 for the PPPoE connection (something I forgot to mention in my blog post about connecting to the Arris CM8200 on Debian [1]), but when run on the 100Mbit card I had to set it to 1488. Apparently 1488 is the number because, starting from the 1500 byte Ethernet payload, 4 bytes are taken for the VLAN header and 8 bytes for the PPPoE header. But it seems that when using gigabit ethernet it doesn’t take 4 bytes for the VLAN (comments explaining that would be appreciated).
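
An easy empirical check of the usable MTU is a don’t-fragment ping, where the ICMP payload is the MTU minus 28 bytes of IP and ICMP headers. A sketch (any reliable remote host will do; the interface name is hypothetical):

ping -c3 -M do -s 1464 8.8.8.8      # 1464 + 28 = 1492, should pass on the gigabit card
ping -c3 -M do -s 1460 8.8.8.8      # 1460 + 28 = 1488, the limit I hit on the 100Mbit card
sudo ip link set dev ppp0 mtu 1488  # hypothetical interface name for the PPPoE link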

When connected via gigabit with an MTU of 1492 I got speeds of 534/46, which are quite good. When I tested with my laptop on a Wifi link while sitting next to the main node of my Kogan Wifi6 mesh [2], via 2.4GHz wifi I got 172/45. When using 5GHz I got 514/41. When using 5GHz at the far end of my home over the mesh I got 200/45.

Here’s a table summarising the speeds. I rounded all speeds to 1Mbit/s because I don’t think the results are even that accurate. I think that Wifi5 over mesh reporting a faster upload speed than Wifi5 near the AP is down to random factors, not an actual benefit of being further away, but I will do more tests later on.

Connection       Receive Mbit/s  Send Mbit/s
100baseT         96              47
Gigabit          535             46
2.4GHz Wifi      172             45
Wifi5            514             41
Wifi5 Over Mesh  200             45

28 October, 2025 11:03PM by etbe

Russ Allbery

Review: Those Who Wait

Review: Those Who Wait, by Haley Cass

Publisher: Haley Cass
Copyright: 2020
ISBN: 979-8-9884929-1-7
Format: Kindle
Pages: 556

Those Who Wait is a stand-alone self-published sapphic romance novel. Given the lack of connection between political figures named in this book and our reality, it's also technically an alternate history, but it will be entirely unsatisfying to anyone who reads it in that genre.

Sutton Spencer is an English grad student in New York City. As the story opens, she has recently realized that she's bisexual rather than straight. She certainly has not done anything about that revelation; the very thought makes her blush. Her friend and roommate Regan, not known for either her patience or her impulse control, decides to force the issue by stealing Sutton's phone, creating a profile on a lesbian dating app, and messaging the first woman Sutton admits being attracted to.

Charlotte Thompson is a highly ambitious politician, current deputy mayor of New York City for health and human services, and granddaughter of the first female president of the United States. She fully intends to become president of the United States herself. The next step on that path is an open special election for a seat in the House of Representatives. With her family political connections and the firm support of the mayor of New York City (who is also dating her brother), she thinks she has an excellent shot of winning.

Charlotte is also a lesbian, something she's known since she was a teenager and which still poses serious problems for a political career. She is therefore out to her family and a few close friends, but otherwise in the closet. Compared to her political ambitions, Charlotte considers her love life almost irrelevant, and therefore has a strict policy of limiting herself to anonymous one-night stands arranged on dating apps. Even that is about to become impossible given her upcoming campaign, but she indulges in one last glance at SapphicSpark before she deletes her account.

Sutton is as far as possible from the sort of person who does one-night stands, which is a shame as far as Charlotte is concerned. It would have been a fun last night out. Despite that, both of them find the other unexpectedly enjoyable to chat with. (There are a lot of text message bubbles in this book.) This is when Sutton has her brilliant idea: Charlotte is charming, experienced, and also kind and understanding of Sutton's anxiety, at least in app messages. Maybe Charlotte can be her mentor? Tell her how to approach women, give her some guidance, point her in the right directions.

Given the genre, you can guess how this (eventually) turns out.

I'm going to say a lot of good things about this book, so let me get the complaints over with first.

As you might guess from that introduction, Charlotte's political career and the danger of being outed are central to this story. This is a bit unfortunate because you should not, under any circumstances, attempt to think deeply about the politics in this book.

In 550 pages, Charlotte does not mention or expound a single meaningful political position. You come away from this book as ignorant about what Charlotte wants to accomplish as a politician as you entered. Apparently she wants to be president because her grandmother was president and she thinks she'd be good at it. The closest the story comes to a position is something unbelievably vague about homeless services and Charlotte's internal assertion that she wants to help people and make real change. There are even transcripts of media interviews, later in the book, and they somehow manage to be more vacuous than US political talk shows, which is saying something. I also can't remember a single mention of fundraising anywhere in this book, which in US politics is absurd (although I will be generous and say this is due to Cass's alternate history).

I assume this was a deliberate choice and Cass didn't want politics to distract from the romance, but as someone with a lot of opinions about concrete political issues, the resulting vague soft-liberal squishiness was actively off-putting. In an actual politician, this would be an entire clothesline of red flags. Thankfully, it's ignorable for the same reason; this is so obviously not the focus of the book that one can mostly perform the same sort of mental trick that one does when ignoring the backdrop in a cheap theater.

My second complaint is that I don't know what Sutton does outside of the romance. Yes, she's an English grad student, and she does some grading and some vaguely-described work and is later referred to a prestigious internship, but this is as devoid of detail as Charlotte's political positions. It's not quite as jarring because Cass does eventually show Sutton helping concretely with her mother's work (about which I have some other issues that I won't get into), but it deprives Sutton of an opportunity to be visibly expert in something. The romance setup casts Charlotte as the experienced one to Sutton's naivete, and I think it would have been a better balance to give Sutton something concrete and tangible that she was clearly better at than Charlotte.

Those complaints aside, I quite enjoyed this. It was a recommendation from the same BookTuber who recommended Delilah Green Doesn't Care, so her recommendations are quickly accumulating more weight. The chemistry between Sutton and Charlotte is quite believable; the dialogue sparkles, the descriptions of the subtle cues they pick up from each other are excellent, and it's just fun to read about how they navigate a whole lot of small (and sometimes large) misunderstandings and mismatches in personality and world view.

Normally, misunderstandings are my least favorite part of a romance novel, but Sutton and Charlotte come from such different perspectives that their misunderstandings feel more justified than is typical. The characters are also fairly mature about working through them: Main characters who track the other character down and insist on talking when something happens they don't understand! Can you imagine! Only with the third-act breakup is the reader dragged through multiple chapters of both characters being miserable, and while I also usually hate third-act breakups, this one is so obviously coming and so clearly advertised from the initial setup that I couldn't really be mad. I did wish the payoff make-up scene at the end of the book had a bit more oomph, though; I thought Sutton's side of it didn't have quite the emotional catharsis that it could have had.

I particularly enjoyed the reasons why the two characters fall in love, and how different they are. Charlotte is delighted by Sutton because she's awkward and shy but also straightforward and frequently surprisingly blunt, which fits perfectly with how much Charlotte is otherwise living in a world of polished politicians in constant control of their personas. Sutton's perspective is more physical, but the part I liked was the way that she treats Charlotte like a puzzle. Rather than trying to change how Charlotte expresses herself, she instead discovers that she's remarkably good at reading Charlotte if she trusts her instincts. There was something about Sutton's growing perceptiveness that I found quietly delightful. It's the sort of non-sexual intimacy that often gets lost among the big emotions in romance novels.

The supporting cast was also great. Both characters have deep support networks of friends and family who are unambiguously on their side. Regan is pure chaos, and I would not be friends with her, but Cass shows her deep loyalty in a way that makes her dynamic with Sutton make sense. Both characters have thoughtful and loving families who support them but don't make decisions for them, which is a nice change of pace from the usually more mixed family situations of romance novel protagonists. There's a lot of emotional turbulence in the main relationship, and I think that only worked for me because of how rock-solid and kind the supporting cast is.

This is, as you might guess from the title, a very slow burn, although the slow burn is for the emotional relationship rather than the physical one (for reasons that would be spoilers). As usual, I have no calibration for spiciness level, but I'd say that this was roughly on par with the later books in the Bright Falls series.

If you know something about politics (or political history) and try to take that part of this book seriously, it will drive you to drink, but if you can put that aside and can deal with misunderstandings and emotional turmoil, this was both fun and satisfying. I liked both of the characters, I liked the timing of the alternating viewpoints, and I believed in the relationship and chemistry, as improbable and chaotic as some of the setup was. It's not the greatest thing I ever read, and I wish the ending was a smidgen stronger, but it was an enjoyable way to spend a few reading days. Recommended.

Rating: 7 out of 10

28 October, 2025 03:21AM

October 27, 2025

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

It's NOT always DNS.

I’ve written down a new rule (no name, sorry) that I’ll be repeating to myself and those around me. “If you can replace ‘DNS’ with ‘key value store mapping a name to an ip’ and it still makes sense, it was not, in fact, DNS.” Feel free to repeat it along with me.

Sure, the “It’s always DNS” meme is funny the first few hundred times you see it – but what’s less funny is when critical thinking ends because a DNS query is involved. DNS failures are often the first observable problem because it’s one of the first things that needs to be done. DNS is fairly complicated, implementation-dependent, and at times – frustrating to debug – but it is not the operational hazard it’s made out to be. It’s at best a shallow take, and at worst actively holding teams back from understanding their true operational risks.

IP connectivity failures between a host and the rest of the network are not a reason to blame DNS. They would happen no matter how you distribute the updated name-to-IP mappings. Wiping out all the records during the course of operations due to an automation bug is not a reason to blame DNS. This, too, would happen no matter how you distribute the name-to-IP mappings. Something made the choice to delete all the mappings, and it did what you asked it to do.
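
Applying the rule is cheap: separate name resolution from reachability before pointing at DNS. A sketch, with a placeholder host name and documentation-range addresses:

dig +short shop.example.com                  # does the name resolve at all?
ping -c3 192.0.2.10                          # is the resolved address reachable?
curl -sv --resolve shop.example.com:443:192.0.2.10 https://shop.example.com/
# if this works while normal requests fail, the problem really is resolution, not connectivity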

There are plenty of annoying DNS-specific sharp edges to blame when things do go wrong (like 8.8.8.8 and 1.1.1.1 disagreeing about resolving a domain because of DNSSEC, or, since we’re on the topic, a DNSSEC rollout bricking prod for hours) without us cracking jokes anytime a program makes a DNS request.

We can do better.

27 October, 2025 05:15PM

Russ Allbery

Review: On Vicious Worlds

Review: On Vicious Worlds, by Bethany Jacobs

Series: Kindom Trilogy #2
Publisher: Orbit
Copyright: October 2024
ISBN: 0-316-46362-0
Format: Kindle
Pages: 444

On Vicious Worlds is a science fiction thriller with bits of cyberpunk and a direct sequel to These Burning Stars. This is one of those series where each book has massive spoilers for the previous book and builds on characters and situations from that book. I would not read it out of order. It is Bethany Jacobs's second novel.

Whooboy, how to review this without spoilers. There are so many major twists in the first book with lingering consequences that it's nearly impossible.

I said at the end of my review of These Burning Stars that I was impressed with the ending for reasons that I can't reveal. One thread of this book follows the aftermath: What do you do after the plan? If you have honed yourself for one purpose, can you repurpose yourself?

The other thread of the book is a murder mystery. The protectors of the community are being picked off, one by one. The culprit might be a hacker so good that they are causing Jun, the expert hacker of the first book, serious problems. Meanwhile, the political fault lines of the community are cracking open under pressure, and the leaders are untested, exhausted, and navigating difficult emotional terrain.

These two story threads alternate, and interspersed are yet more flashbacks. As with the first book, the flashbacks fill in the backstory of Chono and Esek. This time, though, we get Six's viewpoint.

The good news is that On Vicious Worlds tones down the sociopathy considerably without letting up on the political twists. This is the book where Chono comes into her own. She has much more freedom of action, despite being at the center of complicated and cut-throat politics, and I thoroughly enjoyed her principled solidity. She gets a chance to transcend her previous role as an abuse victim, and it's worth the wait.

The bad news is that this is very much a middle book of a trilogy. While there are a lot of bloody battles, emotional drama, political betrayals, and plot twists, the series plot has not advanced much by the end of the book. I would not say the characters were left in the same position they started — the character development is real and the perils have changed — but neither would I say that any of the open questions from These Burning Stars have resolved.

The last book I read used science-fiction world-building to tell a story about moral philosophy that was somewhat less drama-filled than one might have expected. That is so not the case here. On Vicious Worlds is, if anything, even more dramatic than the first book of the series. In Chono's thread, the slow burn attempt to understand Six's motives has been replaced with almost non-stop melodrama, full of betrayals, reversals, risky attempts, and emotional roller coasters. Jun's part of the story is a bit more sedate at first, but there too the interpersonal drama setting is headed towards 10. This is the novel equivalent of an action movie.

Jun, and her part of the story, are fine. I like the new viewpoint character, I find their system of governance somewhat interesting (although highly optimized for small groups), and I think the climax worked. But I'm invested in this series for Chono and Six. Both of them, but particularly Six, are absurdly over the top, ten people's worth of drama stuffed into one character, unable to communicate in anything less than dramatic gestures and absurd plans, but I find them magnetically fascinating. I'm not sure if written characters can have charisma, but if so, they have it.

I liked this entry in the series, but then I also liked the first book. It's trauma-filled and dramatic and involved a bit too much bloody maiming for my tastes, but this whole series is about revolutions and what happens when you decide to fight, and sometimes I'm in the mood for complicated and damaged action heroes who loathe oppression and want to kill some people.

This is the sort of series book that will neither be the reason you read the series nor the reason why you stop reading. If you enjoyed These Burning Stars, this is more of the same, with arguably better character development but less plot catharsis. If you didn't like These Burning Stars, this probably won't change your mind, although if you hated it specifically because of Esek's sociopathy, I think you would find this book more congenial. But maybe not; Jacobs is still the same author, and most of the characters in this series are made of sharp edges.

I'm still in; I have already pre-ordered the next book.

Followed by This Brutal Moon, due out in December of 2025 and advertised as the conclusion.

Rating: 7 out of 10

27 October, 2025 03:45AM

October 26, 2025

hackergotchi for Guido Günther

Guido Günther

Audio Roles, Volumes and Routes

What if you want your phone’s alarm clock volume to be different from your music playback volume, and want the latter to go to speakers while alarms continue to go to the phone’s speaker?

While this could be handled manually via per application volume and sink setups, this doesn’t scale well on phones that also have emergency alerts, incoming calls, voice assistants, etc. It also doesn’t specify how to handle simultaneous playback - for instance, if an incoming call rings while music is playing.
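
With PipeWire, a stream can carry a media.role property that routing and volume policy can act on. A minimal sketch for experimenting from a shell, assuming pw-play from pipewire-utils and a placeholder sound file:

PIPEWIRE_PROPS='{ media.role = "Notification" }' pw-play /usr/share/sounds/alarm.ogg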

26 October, 2025 10:35AM

Russ Allbery

Review: Ancestral Night

Review: Ancestral Night, by Elizabeth Bear

Series: White Space #1
Publisher: Saga Press
Copyright: 2019
ISBN: 1-5344-0300-0
Format: Kindle
Pages: 501

Ancestral Night is a far-future space opera novel and the first of a series. It shares a universe with Bear's earlier Jacob's Ladder trilogy, and there is a passing reference to the events of Grail that would be a spoiler if you put the pieces together, but it's easy to miss. You do not need to read the earlier series to read this book (although it's a good series and you might enjoy it).

Halmey Dz is a member of the vast interstellar federation called the Synarche, which has put an end to war and other large-scale anti-social behavior through a process called rightminding. Every person has a neural implant that can serve as supplemental memory, off-load some thought processes, and, crucially, regulate neurotransmitters and hormones to help people stay on an even keel. It works, mostly.

One could argue Halmey is an exception. Raised in a clade that took rightminding to an extreme of suppression of individual personality into a sort of hive mind, she became involved with a terrorist during her legally mandated time outside of her all-consuming family before she could make an adult decision to stay with them (essentially a rumspringa). The result was a tragedy that Halmey doesn't like to think about, one that's left deep emotional scars. But Halmey herself would argue she's not an exception: She's put her history behind her, found partners that she trusts, and is a well-adjusted member of the Synarche.

Eventually, I realized that I was wasting my time, and if I wanted to hide from humanity in a bottle, I was better off making it a titanium one with a warp drive and a couple of carefully selected companions.

Halmey does salvage: finding ships lost in white space and retrieving them. One of her partners is Connla, a pilot originally from a somewhat atavistic world called Spartacus. The other is their salvage tug.

The boat didn't have a name.

He wasn't deemed significant enough to need a name by the authorities and registries that govern such things. He had a registration number — 657-2929-04, Human/Terra — and he had a class, salvage tug, but he didn't have a name.

Officially.

We called him Singer. If Singer had an opinion on the issue, he'd never registered it — but he never complained. Singer was the shipmind as well as the ship — or at least, he inhabited the ship's virtual spaces the same way we inhabited the physical ones — but my partner Connla and I didn't own him. You can't own a sentience in civilized space.

As Ancestral Night opens, the three of them are investigating a tip about a white space anomaly well off the beaten path. They thought it might be a lost ship that failed a transition. What they find instead is a dead Ativahika and a mysterious ship equipped with artificial gravity.

The Ativahikas are a presumed sentient race of living ships that are on the most alien outskirts of the Synarche confederation. They don't communicate, at least so far as Halmey is aware. She also wasn't aware they died, but this one is thoroughly dead, next to an apparently abandoned ship of unknown origin with a piece of technology beyond the capabilities of the Synarche.

The three salvagers get very little time to absorb this scene before they are attacked by pirates.

I have always liked Bear's science fiction better than her fantasy, and this is no exception. This was great stuff. Halmey is a talkative, opinionated infodumper, which is a great first-person protagonist to have in a fictional universe this rich with delightful corners. There are some Big Dumb Object vibes (one of my favorite parts of salvage stories), solid character work, a mysterious past that has some satisfying heft once it's revealed, and a whole lot more moral philosophy than I was expecting from the setup. All of it is woven together with experienced skill, unsurprising given Bear's long and prolific career. And it's full of delightful world-building bits: Halmey's afthands (a surgical adaptation for zero gravity work) and grumpiness at the sheer amount of gravity she has to deal with over the course of this book, the Culture-style ship names, and a faster-than-light travel system that of course won't pass physics muster but provides a satisfying quantity of hooky bits for plot to attach to.

The backbone of this book is an ancient artifact mystery crossed with a murder investigation. Who killed the Ativahika? Where did the gravity generator come from? Those are good questions with interesting answers. But the heart of the book is a philosophical conflict: What are the boundaries between identity and society? How much power should society have to reshape who we are? If you deny parts of yourself to fit in with society, is this necessarily a form of oppression?

I wrote a couple of paragraphs of elaboration, and then deleted them; on further thought, I don't want to give any more details about what Bear is doing in this book. I will only say that I was not expecting this level of thoughtfulness about a notoriously complex and tricky philosophical topic in a full-throated adventure science fiction novel. I think some people may find the ending strange and disappointing. I loved it, and weeks after finishing this book I'm still thinking about it.

Ancestral Night has some pacing problems. There is a long stretch in the middle of the book that felt repetitive and strained, where Bear holds the reader at a high level of alert and dread for long enough that I found it enervating. There are also a few political cheap shots where Bear picks the weakest form of an opposing argument instead of the strongest. (Some of the cheap shots are rather satisfying, though.) The dramatic arc of the book is... odd, in a way that I think was entirely intentional given how well it works with the thematic message, but which is also unsettling. You may not get the catharsis that you're expecting.

But all of this serves a purpose, and I thought that purpose was interesting. Ancestral Night is one of those books that I liked more a week after I finished it than I did when I finished it.

Epiphanies are wonderful. I’m really grateful that our brains do so much processing outside the line of sight of our consciousnesses. Can you imagine how downright boring thinking would be if you had to go through all that stuff line by line?

Also, for once, I think Bear hit on exactly the right level of description rather than leaving me trying to piece together clues and hope I understood the plot. It helps that Halmey loves to explain things, so there are a lot of miniature infodumps, but I found them interesting and a satisfying throwback to an earlier style of science fiction that focused more on world-building than on interpersonal drama. There is drama, but most of it is internal, and I thought the balance was about right.

This is solid, well-crafted work and a good addition to the genre. I am looking forward to the rest of the series.

Followed by Machine, which shifts to a different protagonist.

Rating: 8 out of 10

26 October, 2025 03:30AM

October 25, 2025

hackergotchi for Mike Gabriel

Mike Gabriel

Debian Lomiri Tablets - We are hiring!

We at Fre{i}e Software GmbH now have a confirmed budget for working on Debian based tablets with the special goal to use them for educational purposes (i.e. in schools).

Those Debian Edu tablets shall be powered by the Lomiri Operating Environment (that same operating environment that is powering Ubuntu Touch).

That said, we are hiring developers (full time, part time) [*] [**]:

  • Lomiri developers (C/C++, Qt5 and Qt6, QML, CMake)
  • Debian maintainers

Global tasks will be:

  • Transition Lomiri from Qt5 to Qt6
  • Consolidate the Lomiri Shell on various reference devices (mainline Linux only)
  • Integrate Lomiri Shell with cloud services such as Nextcloud and OpenCloud
  • XDG Desktop Portal support for Lomiri, integrate better with non-Lomiri Wayland apps
  • Bring more Lomiri-specific (Ubuntu Touch) apps to Debian
  • ... (more to come) ...

The budget will cover work for roughly the next 1.5 to 2 years. Development achievements shall culminate in the release of Debian 14.

If you are interested in joining our team, please get in touch with me via known communication channels.

light+love,
Mike (aka sunweaver at debian.org)

[fsgmbh] https://freiesoftware.gmbh
[*] We can employ applicants who are located in Germany, Austria or Poland (for other regions within the EU, please ask).
[**] Alternatively, if you are self-employed, we are happy to onboard you as a freelancer.

25 October, 2025 08:58PM by sunweaver

Sam Hartman

My First Successful AI Coding Experience

Yesterday, I had my first successful AI coding experience.

I’ve used AI coding tools before—and come away disappointed. The results were underwhelming: low-quality code, inconsistent abstraction levels, and subtle bugs that take longer to fix than it would take to write the whole thing from scratch.

Those problems haven’t vanished. The code quality this time was still disappointing. As I asked the AI to refine its work, it would randomly drop important constraints or refactor things in unhelpful ways. And yet, this experience was different—and genuinely valuable—for two reasons.

The first benefit was the obvious one: the AI helped me get over the blank-page problem. It produced a workable skeleton for the project—imperfect, but enough to start building on.

The second benefit was more surprising. I was working on a problem in odds-ratio preference optimization—specifically, finding a way to combine similar examples in datasets for AI training. I wanted an ideal algorithm, one that extracted every ounce of value from the data.

The AI misunderstood my description. Its first attempt was laughably simple—it just concatenated two text strings. Thanks, but I can call strcat or the Python equivalent without help.

However, the second attempt was different. It was still not what I had asked for—but as I thought about it, I realized it was good enough. The AI had created a simpler algorithm that would probably solve my problem in practice.

In trying too hard to make the algorithm perfect, I’d overlooked that the simpler approach might be the right one. The AI, by misunderstanding, helped me see that.

This experience reminded me of something that happened years ago when I was mentoring a new developer. They came to me asking how to solve a difficult problem. Rather than telling them it was impossible, I explained what would be required: a complex authorization framework, intricate system interactions, and a series of political and organizational hurdles that would make deployment nearly impossible.

A few months later, they returned and said they’d found a solution. I was astonished—until I looked more closely. What they’d built wasn’t the full, organization-wide system I had envisioned. Instead, they’d reframed the problem. By narrowing the scope—reducing the need for global trust and deep integration—they’d built a local solution that worked well enough within their project.

They succeeded precisely because they didn’t see all the constraints I did. Their inexperience freed them from assumptions that had trapped me.

That’s exactly what happened with the AI. It didn’t know which boundaries not to cross. In its simplicity, it found a path forward that I had overlooked.

My conclusion isn’t that AI coding is suddenly great. It’s that working with someone—or something—that thinks differently can open new paths forward. Whether it’s an AI, a peer, or a less experienced engineer, that collaboration can bring fresh perspectives that challenge your assumptions and reveal simpler, more practical ways to solve problems.

25 October, 2025 05:07PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

franken keyboard

Since it's spooky season, let me present to you the FrankenKeyboard!

The FrankenKeyboard

8bitdo retro keyboard

For some reason I can't fathom, I was persuaded into buying an 8bitdo retro mechanical keyboard. It was very reasonably priced, and has a few nice fun features: built-in bluetooth and 2.4GHz wireless (with the supplied dongle); colour scheme inspired by the Nintendo Famicom; fun-to-use knobs for volume control; some basic macro support; and funky oversized mashable macro keys (which work really well as "Copy" and "Paste").

The 8bitdo keyboards come with switch-types I had not previously experienced: Kailh Box White v2. I'm used to Cherry MX Reds, but I loved the feel of the Box White v2s. The 8bitdo keyboards all have hot-swappable key switches.

It's relatively compact (comes without a numpad), but still larger than my TEX Shura, which (at home) is my daily driver. I also miss the trackpoint mouse on the Shura. Finally, the 8bitdo model I bought has American ANSI key layout, which I can tolerate but is not as nice as ISO. I later learned that they have a limited range of ISO-layout keyboards too, but not (yet) in the Famicom colour scheme I'd bought.

DIY Shura

My existing Shura's key switches are soldered on and can't be swapped out. But I really preferred the Kailh white switches.

I decided to buy a second Shura, this time as a "DIY kit" which accepts hot-swappable switches. I then moved the Kailh Box White v2 switches over from the 8bitdo keyboard.

keycaps

Part of justifying buying the DIY kit was the possibility that I could sell on my older Shura with the Cherry MX Red switches. My existing Shura's key caps are for the ISO-GB layout and have their legends printed onto them. After three years the legends have faded in a few places.

The DIY kit comes with a set of ABS "double-shot" key caps (where the key legends are plastic rather than printed). They look a lot nicer, but I don't look at my keys. I'm considering applying the new, pristine key caps to the old Shura board, to make it more attractive to buyers. One problem is I'm not sure the new set of caps includes the ISO-UK specific ones. Potential buyers might prefer used caps with the correct legends over pristine ones which are mislabelled.

franken keyboard

Given I wasn't going to use the new key cap set, I borrowed most of the caps from the 8bitdo keyboard. I had to retain the G, H and B keys from my older Shura as they are specially molded to leave space for the trackpoint, and a couple of the modifier keys which weren't the right size. Hence the odd look! (It needs some tweaking. That left-ALT looks out of place. It may be that the 8bitdo caps are temporary. Left "cmd" is really Fn, and "Caps lock" is really "Super". The right-hand red dot is a second "Super".)

Since taking the photo I've removed the "stabilisers" under the right-shift and backspace keys, in order to squeeze a couple more keys in their place. The new keycap set includes a regular-sized "BS" key, as the JIS keyboard layout has a regular-sized backspace. (Everyone should have a BS key in my opinion.)

I plan to map my new keys to "Copy" and "Paste" actions following the advice in this article.

25 October, 2025 09:57AM

October 23, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Modern perfect hashing

Wojciech Muła posted about modern perfect hashing for strings and I wanted to make some comments about my own implementation (that sadly never got productionized because doubling the speed compared to gperf wasn't really that impactful in the end).

First, let's define the problem, just so we're all on the same page; the goal is to create code that maps a known, fixed set of strings to a predefined integer (per string), and rejects everything else. This is essentially the same as a hash table, except that since the set of strings is known ahead of time, we can do better than a normal hash table. (So no “but I heard SwissTables uses SIMD and thus cannot be beat”, please. :-) ) My use case is around a thousand strings or so, and we'll assume that a couple of minutes of build time is OK (shorter would be better, but we can probably cache somehow). If you've got millions of strings, and you don't know them at compile time (for instance because you want to use your hash table in the join phase of a database), see this survey; it's a different problem with largely different solutions.

Like Wojciech, I started splitting by length. This means that we can drop all bounds checking after this, memcmp will be optimized by the compiler to use SIMD if relevant, and so on.

But after that, he recommends using PEXT (bit extraction, from BMI2), which has two problems: First, the resulting table can get quite big if your input set isn't well-behaved. (You can do better than the greedy algorithm he suggests, but not infinitely so, and finding the optimal mask quickly is sort of annoying if you don't want to embed a SAT solver or something.) Second, I needed the code to work on Arm, where you simply don't have this instruction or anything like it available. (Also, not all x86 has it, and on older Zen, it's slow.)

So, we need some other way, short of software emulation of PEXT (which exists, but we'd like to do better), to convert a sparse set of bits into a table without any collisions. It turns out the computer chess community has needed to grapple with this for a long time (they want to convert from “I have a ⟨piece⟩ on ⟨square⟩ and there are pieces on relevant squares ⟨occupancy⟩, give me an index that points to an array of squares I can move to”), and their solution is to use… well, magic. It turns out that if you do something like ((value & mask) * magic), it is very likely that the upper bits will be collision-free between your different values if you try enough different numbers for magic. We can use this too; for instance, here is code for all length-4 CSS keywords:

   static const uint8_t table[] = {
       6,   0,   0,   3,   2,   5,   9,   0,   0,   1,   0,   8,   7,   0,   0,   0,
   };
   static const uint8_t strings[] = {
       1,   0, 'z', 'o', 'o', 'm',
       2,   0, 'c', 'l', 'i', 'p',
       3,   0, 'f', 'i', 'l', 'l',
       4,   0, 'l', 'e', 'f', 't',
       5,   0, 'p', 'a', 'g', 'e',
       6,   0, 's', 'i', 'z', 'e',
       7,   0, 'f', 'l', 'e', 'x',
       8,   0, 'f', 'o', 'n', 't',
       9,   0, 'g', 'r', 'i', 'd',
      10,   0, 'm', 'a', 's', 'k',
   };

   uint16_t block;
   memcpy(&block, str + 0, sizeof(block));
   uint32_t pos = uint32_t(block * 0x28400000U) >> 28;
   const uint8_t *candidate = strings + 6 * table[pos];
   if (memcmp(candidate + 2, str, 4) == 0) {
     return candidate[0] + (candidate[1] << 8);
   }
   return 0;

There's a bit to unpack here; we read the first 16 bits from our value with memcpy (big-endian users beware!), multiply it with the magic value 0x28400000U found by trial and error, shift the top bits down, and now all of our ten candidate values (“zoom”, “clip”, etc.) have different top four bits. We use that to index into a small table, check that we got the right one instead of a random collision (e.g. “abcd”, 0x6261, would get a value of 12, and table[12] is 7, so we need to disambiguate it from “font”, which is what is actually stored in that spot), and then return the 16-bit identifier related to the match (or zero, if we didn't find it).

We don't need to use the first 16 bits; we could have used any other consecutive 16 bits, or any 32 bits, or any 64 bits, or possibly any of those masked off, or even XOR of two different 32-bit sets if need be. My code prefers smaller types because a) they tend to give smaller code size (easier to load into registers, or can even be used as immediates), and b) you can bruteforce them instead of doing random searches (which, not least, has the advantage that you can give up much quicker).
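
To make the brute-force option concrete, here is a minimal sketch of what such a search can look like (my illustration, not the actual generator; the function name and interface are mine): scan the 32-bit magic space exhaustively for a multiplier whose top bits separate all the 16-bit blocks.

  // Minimal sketch (not the actual generator): exhaustively scan all 32-bit
  // magics for one where the top `bits` bits of (block * magic) are
  // collision-free across the given 16-bit blocks. The bucket bitmap fits in
  // a uint32_t as long as bits <= 5.
  #include <cstdint>
  #include <optional>
  #include <vector>

  std::optional<uint32_t> find_magic(const std::vector<uint16_t> &blocks, int bits) {
      for (uint64_t m = 0; m <= 0xffffffffULL; ++m) {
          const uint32_t magic = uint32_t(m);
          uint32_t seen = 0;  // bitmap of occupied buckets
          bool perfect = true;
          for (uint16_t block : blocks) {
              uint32_t bucket = uint32_t(block * magic) >> (32 - bits);
              if (seen & (1u << bucket)) { perfect = false; break; }
              seen |= 1u << bucket;
          }
          if (perfect) return magic;
      }
      return std::nullopt;  // no magic of this width; use a wider block or split the group
  }

Most candidates are rejected after hashing only a couple of blocks, so even the full 2^32 sweep is tolerable as a cached build step, and with 16-bit blocks you know definitively when no magic exists.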

You also don't really need the intermediate table; if the fit is particularly good, you can just index directly into the final result without wasting any space. Here's the case for length-24 CSS keywords, where we happened to have exactly 16 candidates and we found a magic giving a perfect (4-bit) value, making it a no-brainer:

  static const uint8_t strings[] = {
     95,   0, 'b', 'o', 'r', 'd', 'e', 'r', '-', 'b', 'l', 'o', 'c', 'k', '-', 's', 't', 'a', 'r', 't', '-', 'w', 'i', 'd', 't', 'h',
     40,   0, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'e', 'x', 't', '-', 'o', 'r', 'i', 'e', 'n', 't', 'a', 't', 'i', 'o', 'n',
    115,   1, 's', 'c', 'r', 'o', 'l', 'l', '-', 'p', 'a', 'd', 'd', 'i', 'n', 'g', '-', 'b', 'l', 'o', 'c', 'k', '-', 'e', 'n', 'd',
    198,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'r', 'a', 'n', 's', 'f', 'o', 'r', 'm', '-', 'o', 'r', 'i', 'g', 'i', 'n',
    225,   0, '-', 'i', 'n', 't', 'e', 'r', 'n', 'a', 'l', '-', 'o', 'v', 'e', 'r', 'f', 'l', 'o', 'w', '-', 'b', 'l', 'o', 'c', 'k',
    101,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 'b', 'o', 'r', 'd', 'e', 'r', '-', 'e', 'n', 'd', '-', 's', 't', 'y', 'l', 'e',
     93,   0, 'b', 'o', 'r', 'd', 'e', 'r', '-', 'b', 'l', 'o', 'c', 'k', '-', 's', 't', 'a', 'r', 't', '-', 'c', 'o', 'l', 'o', 'r',
    102,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 'b', 'o', 'r', 'd', 'e', 'r', '-', 'e', 'n', 'd', '-', 'w', 'i', 'd', 't', 'h',
    169,   1, 't', 'e', 'x', 't', '-', 'd', 'e', 'c', 'o', 'r', 'a', 't', 'i', 'o', 'n', '-', 's', 'k', 'i', 'p', '-', 'i', 'n', 'k',
    156,   0, 'c', 'o', 'n', 't', 'a', 'i', 'n', '-', 'i', 'n', 't', 'r', 'i', 'n', 's', 'i', 'c', '-', 'h', 'e', 'i', 'g', 'h', 't',
    201,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'r', 'a', 'n', 's', 'i', 't', 'i', 'o', 'n', '-', 'd', 'e', 'l', 'a', 'y',
    109,   1, 's', 'c', 'r', 'o', 'l', 'l', '-', 'm', 'a', 'r', 'g', 'i', 'n', '-', 'i', 'n', 'l', 'i', 'n', 'e', '-', 'e', 'n', 'd',
    240,   0, '-', 'i', 'n', 't', 'e', 'r', 'n', 'a', 'l', '-', 'v', 'i', 's', 'i', 't', 'e', 'd', '-', 's', 't', 'r', 'o', 'k', 'e',
    100,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 'b', 'o', 'r', 'd', 'e', 'r', '-', 'e', 'n', 'd', '-', 'c', 'o', 'l', 'o', 'r',
     94,   0, 'b', 'o', 'r', 'd', 'e', 'r', '-', 'b', 'l', 'o', 'c', 'k', '-', 's', 't', 'a', 'r', 't', '-', 's', 't', 'y', 'l', 'e',
    196,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'e', 'x', 't', '-', 's', 'i', 'z', 'e', '-', 'a', 'd', 'j', 'u', 's', 't',
  };

  uint32_t block;
  memcpy(&block, str + 16, sizeof(block));
  uint32_t pos = uint32_t(block * 0xe330a008U) >> 28;
  const uint8_t *candidate = strings + 26 * pos;
  if (memcmp(candidate + 2, str, 24) == 0) {
    return candidate[0] + (candidate[1] << 8);
  }
  return 0;

You can see that we used a 32-bit value here (bytes 16 through 19 of the input), and a corresponding 32-bit magic (though still not with an AND mask). So we got fairly lucky, but sometimes you do that. Of course, we need to validate the entire 24-byte value even though we only discriminated on four of the bytes! (Unless you know for sure that you never have any out-of-distribution inputs, that is. There are use cases where this is true.)

(If you wonder what the 95, 0 or similar values above are: that's just “the answer the user wanted for that input”. It corresponds to a 16-bit enum in the parser.)

If there are only a few values, we don't need any of this; just like Wojciech, we make do with a simple compare. Here's the generated code for all length-37 CSS keywords, plain and simple:

  if (memcmp(str, "-internal-inactive-list-box-selection", 37) == 0) {
    return 171;
  }
  return 0;

(Again 171 is “the desired output for that input”, not a value the code generator decides in any way.)

So how do we find these magic values? There's really only one way: Try lots of different ones and see if they work. But there's a trick to accelerate “see if they work”, which I also borrowed from computer chess: The killer heuristic.

See, to check whether a magic is good, you generally hash all the different values and see if any two go into the same bucket. (If they do, it's not a perfect hash and the entire point of the exercise is gone.) But it turns out that most of the time, it's the same two values that collide. So every couple hundred candidates, we check which two values disproved the magic, and put those in a slot. Whenever we check magics, we can now try those first, and more likely than not, discard the candidate right away and move on to the next one (whether it is by exhaustive search or randomness). It's actually a significant speedup.
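
A sketch of how that check can look (my illustration, with the simplification that the killer slot is updated on every rejection rather than every couple hundred candidates):

  // Sketch of the killer heuristic: remember the pair of blocks that last
  // collided and test it first, so most bad magics are rejected after two
  // multiplies instead of a full pass over all blocks.
  #include <cstdint>
  #include <utility>
  #include <vector>

  bool magic_is_perfect(const std::vector<uint16_t> &blocks, uint32_t magic,
                        int bits, std::pair<uint16_t, uint16_t> &killers) {
      auto bucket = [&](uint16_t b) { return uint32_t(b * magic) >> (32 - bits); };

      // Fast rejection via the killer pair.
      if (killers.first != killers.second &&
          bucket(killers.first) == bucket(killers.second))
          return false;

      std::vector<bool> used(size_t(1) << bits, false);
      std::vector<uint16_t> occupant(size_t(1) << bits, 0);
      for (uint16_t block : blocks) {
          uint32_t slot = bucket(block);
          if (used[slot]) {
              killers = {occupant[slot], block};  // remember who disproved this magic
              return false;
          }
          used[slot] = true;
          occupant[slot] = block;
      }
      return true;
  }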

But occasionally, we simply cannot find a magic for a given group; either there is none, or we didn't have enough time to scan through enough of the 64-bit space. At this point, Wojciech suggests we switch on one of the characters (heuristically) to get smaller subgroups and try again. I didn't actually find this to perform all that well; indirect branch predictors are better than 20 years ago, but the pattern is typically not that predictable. What I tried instead was to have more of a yes/no on some character (i.e., a non-indirect branch), which makes for a coarser split.

It's not at all obvious where the best split would be. You'd intuitively think that 50/50 would be a good idea, but if you have e.g. 40 elements, you'd much rather split them 32/8… if you can find perfect hashes for both subgroups (5-bit and 3-bit, respectively). If not, a 20–20 split is most likely better, since you very easily can find magics that put 20 elements into 32 buckets without collisions. I ended up basically trying all the different splits and scoring them, but this makes the searcher rather slow, and it means you basically must have some sort of cache if you want to run it as part of your build system. This is the part I'm by far the least happy about; gperf isn't great by modern standards, but it never feels slow to run.
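
To illustrate, scoring a split could look something like the sketch below (my guess at the shape of it, not the actual searcher): partition on whether a byte is below a threshold, and rate each half by how much headroom it has in its power-of-two table, since sparse groups are the ones that readily admit magics.

  // Rough sketch: score a yes/no split of `keys` on byte `pos` against
  // `threshold`. Lower is better. A half is cheap if it is small relative to
  // the power-of-two table it would hash into (easy to find a magic for);
  // note this proxy alone would never prefer the tight 32/8 split, which is
  // why the real searcher also has to attempt the magic search for each half
  // and weigh table sizes -- and that is what makes it slow.
  #include <cstddef>
  #include <cstdint>
  #include <string>
  #include <vector>

  double group_cost(size_t n) {
      if (n <= 1) return 0.0;
      size_t buckets = 1;
      while (buckets < n) buckets *= 2;    // next power of two >= n
      return double(n) / double(buckets);  // load factor as a crude proxy
  }

  double split_cost(const std::vector<std::string> &keys, size_t pos, uint8_t threshold) {
      size_t below = 0;
      for (const std::string &k : keys)
          if (uint8_t(k[pos]) < threshold) ++below;
      return group_cost(below) + group_cost(keys.size() - below);
  }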

The end result for me was: Runtime about twice as fast as gperf, compiled code about half as big. That's with everything hard-coded; if you're pushed for space (or are icache-bound), you could make more generic code at the expense of some speed.

So, if anyone wants to make a more modern gperf, I guess this space is up for grabs? It's not exactly technology that will make your stock go to AI levels, though.

23 October, 2025 08:23PM

October 21, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

LLM Hallucinations in Practical Code Generation — Phenomena, Mechanism, and Mitigation

This post is a review for Computing Reviews for LLM Hallucinations in Practical Code Generation — Phenomena, Mechanism, and Mitigation, an article published in Proceedings of the ACM on Software Engineering, Volume 2, Issue ISSTA

How good can large language models (LLMs) be at generating code? This may not seem like a very novel question, as several benchmarks (for example, HumanEval and MBPP, published in 2021) existed before LLMs burst into public view and started the current artificial intelligence (AI) “inflation.” However, as the paper’s authors point out, code generation is very seldom done as an isolated function, but instead must be deployed in a coherent fashion together with the rest of the project or repository it is meant to be integrated into. Today, several benchmarks (for example, CoderEval or EvoCodeBench) measure the functional correctness of LLM-generated code via test case pass rates.

This paper brings a new proposal to the table: comparing LLM-generated repository-level evaluated code by examining the hallucinations generated. The authors begin by running the Python code generation tasks proposed in the CoderEval benchmark against six code-generating LLMs. Next, they analyze the results and build a taxonomy to describe code-based LLM hallucinations, with three types of conflicts (task requirement, factual knowledge, and project context) as first-level categories and eight subcategories within them. The authors then compare the results of each of the LLMs per the main hallucination category. Finally, they try to find the root cause for the hallucinations.

The paper is structured very clearly, not only presenting the three research questions (RQ) but also referring to them as needed to explain why and how each partial result is interpreted. RQ1 (establishing a hallucination taxonomy) is the most thoroughly explored. While RQ2 (LLM comparison) is clear, it just presents straightforward results without much analysis. RQ3 (root cause discussion) is undoubtedly interesting, but I feel it to be much more speculative and not directly related to the analysis performed.

After tackling their research questions, Zhang et al. propose a possible mitigation to counter the effect of hallucinations: enhance the LLM with retrieval-augmented generation (RAG) so it better understands task requirements, factual knowledge, and project context. The presented results show that all of the models are clearly (though modestly) improved by the proposed RAG-based mitigation.

The paper is clearly written and easy to read. It should provide its target audience with interesting insights and discussions. I would have liked more details on their RAG implementation, but I suppose that’s for a follow-up work.

21 October, 2025 10:08PM

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

How configuration is passed from the BinderHub helm chart to a running BinderHub

Context:

At $WORK I am doing a lot of data science work around Jupyter Notebooks and their ecosystem. Right now I am setting up BinderHub, which is a service to start a Jupyter Notebook from a git repo in your browser. For setting up BinderHub I am using the BinderHub helm chart, and I was wondering how configuration changes are propagated from the BinderHub helm chart to the process running in a Kubernetes Pod.

After going through this I can say that right now I am not a great fan of Helm, as it looks to me like an unnecessary, overengineered abstraction layer on top of Kubernetes manifests. Or maybe it is just that I don’t want to learn the golang templating syntax. I am looking forward to testing Kustomize as an alternative, but I haven’t had the chance yet.

Starting from the list of config parameters available:

Although many parameters are mentioned in the installer document, you have to go to the developer doc at https://binderhub.readthedocs.io/en/latest/reference/ref-index.html to get a complete overview.

In my case I want to set the hostname parameter for the GitLab RepoProvider. This is the relevant snippet in the developer doc:

hostname c.GitLabRepoProvider.hostname = Unicode('gitlab.com')

    The host of the GitLab instance

The string c.GitLabRepoProvider.hostname here means that the value of the hostname parameter will be read from the path config.GitLabRepoProvider inside a configuration file.

Using YAML syntax, this means the configuration file should contain a snippet like:

config:
  GitLabRepoProvider:
    hostname: my-domain.com

Digging through Kubernetes constructs: Helm values files

When installing BinderHub using the provided helm chart, we can either put the configuration snippet in the config.yaml or secret.yaml helm values files.

In my case I have put the snippet in config.yaml, since the hostname is not a secret thing. I can verify with yq that it is correctly set:

$ yq --raw-output '.config.GitLabRepoProvider.hostname' config.yaml
my-domain.com

How do we make sure this parameter is properly applied to our running binder processes?

As said previously, this parameter is passed in a values file to helm (--values or -f option) in the command:

$ helm upgrade \                                                                                  
    binderhub \                                                                                     
    jupyterhub/binderhub \                                                                          
    --install \                                                                                     
    --version=$(RELEASE) \                                                                          
    --create-namespace \                                                                            
    --namespace=binderhub \                                                                         
    --values secret.yaml \                                                                                
    --values config.yaml \                                                                                
    --debug 

According to the helm documentation at https://helm.sh/docs/helm/helm_install/ the values files are merged to form a single object, and priority will be given to the last (right-most) file specified. For example, if both myvalues.yaml and override.yaml contained a key called ‘Test’, the value set in override.yaml would take precedence:

$ helm install --values myvalues.yaml --values override.yaml  myredis ./redis

Digging through Kubernetes constructs: Secrets and Volumes

When helm upgrade is run the helm values of type config are stashed in a Kubernetes secret binder-secret: https://github.com/jupyterhub/binderhub/blob/main/helm-chart/binderhub/templates/secret.yaml#L12

stringData:
  {{- /*
    Stash away relevant Helm template values for
    the BinderHub Python application to read from
    in binderhub_config.py.
  */}}
  values.yaml: |
    {{- pick .Values "config" "imageBuilderType" "cors" "dind" "pink" "extraConfig" | toYaml | nindent 4 }}

We can verify that our hostname is passed to our Secret:

$ kubectl get secret binder-secret -o yaml | yq --raw-output '.data."values.yaml"'  | base64 --decode
...
  GitLabRepoProvider:
    hostname: my-domain.com
...

Finally a configuration file inside the Binder pod is populated from the Secret, using the Kubernetes Volume construct. Looking at the Pod, we do see a volume called config, created from the binder-secret Secret:

$ kubectl get pod -l component=binder -o yaml | grep --context 4 binder-secret
    volumes:
    - name: config
      secret:
        defaultMode: 420
        secretName: binder-secret

That volume is mounted inside the pod at /etc/binderhub/config:

      volumeMounts:
      - mountPath: /etc/binderhub/config/
        name: config
        readOnly: true

Runtime verification

Looking inside our pod we see our hostname value available in a file underneath the mount point:

oc exec binder-74d9c7db95-qtp8r -- grep hostname /etc/binderhub/config/values.yaml
    hostname: my-domain.com

21 October, 2025 09:37AM by Manu

October 20, 2025

hackergotchi for Matthew Garrett

Matthew Garrett

Where are we on X Chat security?

AWS had an outage today and Signal was unavailable for some users for a while. This has confused some people, including Elon Musk, who are concerned that having a dependency on AWS means that Signal could somehow be compromised by anyone with sufficient influence over AWS (it can't). Which means we're back to the richest man in the world recommending his own "X Chat", saying "The messages are fully encrypted with no advertising hooks or strange “AWS dependencies” such that I can’t read your messages even if someone put a gun to my head."

Elon is either uninformed about his own product, lying, or both.

As I wrote back in June, X Chat is genuinely end-to-end encrypted, but ownership of the keys is complicated. The encryption key is stored using the Juicebox protocol, sharded between multiple backends. Two of these are asserted to be HSM backed - a discussion of the commissioning ceremony was recently posted here. I have not watched the almost 7 hours of video to verify that this was performed correctly, and I also haven't been able to verify that the public keys included in the post were the keys generated during the ceremony, although that may be down to me just not finding the appropriate point in the video (sorry, Twitter's video hosting doesn't appear to have any skip feature and would frequently just sit spinning if I tried to seek too far, and I should probably just download them and figure it out, but I'm not doing that now). With enough effort it would probably also have been possible to fake the entire thing - I have no reason to believe that this has happened, but it's not externally verifiable.

But let's assume these published public keys are legitimately the ones used in the HSM Juicebox realms[1] and that everything was done correctly. Does that prevent Elon from obtaining your key and decrypting your messages? No.

On startup, the X Chat client makes an API call called GetPublicKeysResult, and the public keys of the realms are returned. Right now when I make that call I get the public keys listed above, so there's at least some indication that I'm going to be communicating with actual HSMs. But what if that API call returned different keys? Could Elon stick a proxy in front of the HSMs and grab a cleartext portion of the key shards? Yes, he absolutely could, and then he'd be able to decrypt your messages.

(I will accept that there is a plausible argument that Elon is telling the truth in that even if you held a gun to his head he's not smart enough to be able to do this himself, but that'd be true even if there were no security whatsoever, so it still says nothing about the security of his product)

The solution to this is remote attestation - a process where the device you're speaking to proves its identity to you. In theory the endpoint could attest that it's an HSM running this specific code, and we could look at the Juicebox repo and verify that it's that code and hasn't been tampered with, and then we'd know that our communication channel was secure. Elon hasn't done that, despite it being table stakes for this sort of thing (Signal uses remote attestation to verify the enclave code used for private contact discovery, for instance, which ensures that the client will refuse to hand over any data until it's verified the identity and state of the enclave). There's no excuse whatsoever to build a new end-to-end encrypted messenger which relies on a network service for security without providing a trustworthy mechanism to verify you're speaking to the real service.

We know how to do this properly. We have done for years. Launching without it is unforgivable.

[1] There are three Juicebox realms overall, one of which doesn't appear to use HSMs, but you need at least two in order to obtain the key so at least part of the key will always be held in HSMs

20 October, 2025 11:36PM

hackergotchi for Thomas Lange

Thomas Lange

New FAI images available, Rocky Linux 10 and AlmaLinux 10 support

New FAI ISOs using FAI 6.4.3 are available. They use Debian 13 aka trixie with kernel 6.12, and you can now install Rocky Linux 10 and AlmaLinux 10 using these images.

There's also a variant for installing Linux Mint 22.2 and Ubuntu 24.04 which includes all packages on the ISO.

20 October, 2025 03:03PM

Birger Schacht

A plea for <dialog>

A couple of weeks ago there was an article on the Freexian blog about Using JavaScript in Debusine without depending on JavaScript. It describes how JavaScript is used in the Debusine Django app, namely “for progressive enhancement rather than core functionality”. This is an approach I also follow when implementing web interfaces and I think developments in web technologies and standardization in recent years have made this a lot easier.

One of the examples described in the post, the “Bootstrap toast” messages, was something that I implemented myself recently, in a similar but slightly different way.

In the main app I develop for my day job we also use the Bootstrap framework. I have also used it for different personal projects (for example, the GSOC project I did for Debian in 2018 was also a Django app that used Bootstrap). Bootstrap is still primarily a CSS framework, but it also comes with a JavaScript library for some functionality. Previous versions of Bootstrap depended on jQuery, but since version 5 of Bootstrap, you don’t need jQuery anymore. In my experience, two of the more commonly used JavaScript utilities of Bootstrap are modals (also called lightbox or popup, they are elements that are displayed “above” the main content of a website) and toasts (also called alerts, they are little notification windows that often disappear after a timeout). The thing is, Bootstrap 5 was released in 2021 and a lot has happened since then regarding web technologies. I believe that both these UI components can nowadays be implemented using standard HTML5 elements.

An eye-opening talk I watched was Stop using JS for that from last year's JSConf(!). In this talk the speaker argues that the Rule of least power is one of the core principles of web development, which means we should use HTML over CSS and CSS over JavaScript. And the speaker also presents some CSS rules and HTML elements that were added recently and that help make that happen, one of them being the dialog element:

The <dialog> HTML element represents a modal or non-modal dialog box or other interactive component, such as a dismissible alert, inspector, or subwindow.

The Dialog element at MDN

The baseline for this element is “widely available”:

This feature is well established and works across many devices and browser versions. It’s been available across browsers since March 2022.

The Dialog element at MDN

This means there is an HTML element that does what a Bootstrap modal does!

Once I had watched that talk I removed all my Bootstrap modals and replaced them with HTML <dialog> elements (JavaScript is still needed to .show() and .close() the elements, but those are two methods instead of a full library). Not only did I replace code that depended on an external library; I'm now also a lot more flexible regarding the styling of the elements.

When I started implementing notifications for our app, my first approach was to use Bootstrap toasts, similar to how it is implemented in Debusine. But looking at the amount of HTML code I had to write for a simple toast message, I thought that it might be possible to also implement toasts with the <dialog> element. I mean, basically it is the same, only the styling is a bit different. So what I did was add a #snackbar area to the DOM of the app. This would be the container for the toast messages. All the toast messages are simply <dialog> elements with the open attribute, which means that they are visible right away when the page loads.

<div id="snackbar">
  {% for message in messages %}
    <dialog class="mytoast alert alert-{{ message.tags }}" role="alert" open>
      {{ message }}
    </dialog>
  {% endfor %}
</div>

This looks a lot simpler than the Bootstrap toasts would have.

To make the <dialog> elements a little bit more fancy, I added some CSS to make them fade in and out:

.mytoast {
    z-index: 1;
    animation: fadein 0.5s, fadeout 0.5s 2.6s;
}

@keyframes fadein {
    from {
        opacity: 0;
    }

    to {
        opacity: 1;
    }
}

@keyframes fadeout {
    from {
        opacity: 1;
    }

    to {
        opacity: 0;
    }
}

To close a <dialog> element once it has faded away, I had to add one JavaScript event listener:

window.addEventListener('load', () => {
    document.querySelectorAll(".mytoast").forEach((element) => {
        element.addEventListener('animationend', function(e) {
            e.animationName == "fadeout" && element.close();
        });
    });
});

(If one wanted to use the same HTML code for both script and noscript users, the CSS should probably be adapted: the toast fades away, but if there is no JavaScript to close the element, it becomes visible again once the animation is over. A solution would for example be to use a close button and, for noscript users, simply let the toast stay visible - this is also what happens with the noscript messages in Debusine.)

So there are many “new” elements in HTML and a lot of “new” features of CSS. It makes sense to sometimes ask ourselves whether, instead of the solutions we know (or whatever a web search or some AI presents as the most common solution), there might be a newer solution that did not exist when our first choice was made. Using standardized solutions instead of custom libraries makes the software more maintainable. In web development I also prefer standardized elements over a third-party library because they usually have better accessibility and UX.

In How Functional Programming Shaped (and Twisted) Frontend Development the author writes:

Consider the humble modal dialog. The web has <dialog>, a native element with built-in functionality: it manages focus trapping, handles Escape key dismissal, provides a backdrop, controls scroll-locking on the body, and integrates with the accessibility tree. It exists in the DOM but remains hidden until opened. No JavaScript mounting required.

[…]

you’ve trained developers to not even look for native solutions. The platform becomes invisible. When someone asks “how do I build a modal?”, the answer is “install a library” or “here’s my custom hook,” never “use <dialog>.”

Ahmad Alfy

20 October, 2025 05:28AM