September 23, 2025

Ravi Dwivedi

Singapore Trip

In December 2024, I went on a trip through four countries - Singapore, Malaysia, Brunei, and Vietnam - with my friend Badri. This post covers our experiences in Singapore.

I took an IndiGo flight from Delhi to Singapore, with a layover in Chennai. At the Chennai airport, I was joined by Badri. We had an early morning flight from Chennai that would land in Singapore in the afternoon. Within 48 hours of our scheduled arrival in Singapore, we submitted an arrival card online. At immigration, we simply needed to scan our passports at the gates, which opened automatically to let us through, and then give our address to an official nearby. The process was quick and smooth, but it unfortunately meant that we didn’t get our passports stamped by Singapore.

Before leaving the airport, I wanted to visit the nature-themed park with a fountain I had seen in pictures online. It is called Jewel Changi, and it took quite some walking to get there. Inside, the fountain is visible from all the levels. We roamed around for a couple of hours, then proceeded to the airport metro station to get to our hotel.

Jewel Changi

A shot of Jewel Changi.

There were four ATMs on the way to the metro station, but none of them provided us with any cash. This was the first country (outside India, of course!) where my card didn’t work at ATMs.

To use the metro, you tap an EZ-Link card or a bank card at the AFC gates to get in; you cannot buy tickets using cash. Before boarding the metro, I used my credit card to get Badri an EZ-Link card from a vending machine. It was 10 Singapore dollars (₹630) - 5 for the card and 5 for the balance. I had planned to use my Visa credit card to pay for my own fare, and I was relieved to see that it worked as I passed through the AFC gates.

We had booked our stay at a hostel named Campbell’s Inn, which was the cheapest we could find in Singapore. It was ₹1500 per night for dorm beds. The hostel was located in Little India. While Little India has an eponymous metro station, the one closest to our hostel was Rochor.

On the way to the hostel, we found out that our booking had been canceled.

We had booked through the Hostelworld website, opting to pay the deposit in advance and the balance in person upon arrival. However, Hostelworld still tried to charge Badri’s card again before our arrival. When the unauthorized charge failed, they sent an automated message saying “we tried to charge” and asking us to contact them soon to avoid cancellation, which we couldn’t do as we were on the plane.

Despite this, we went to the hostel to check the status of our booking.

The trip from the airport to Rochor required a couple of transfers. It was 2 Singapore dollars (approx. ₹130) and took approximately an hour.

Upon reaching the hostel, we were informed that our booking had indeed been canceled, and we were not given any reason for the cancellation. Furthermore, no beds were available at the hostel for us to book on the spot.

We decided to roam around and look for accommodation at other hostels in the area. Soon, we found a hostel by the name of Snooze Inn, which had two beds available. It was 36 Singapore dollars per person (around ₹2300) for a dormitory bed. Snooze Inn advertised supporting RuPay cards and UPI. Some other places in that area did the same. We paid using my card. We checked in and slept for a couple of hours after taking a shower.

By the time we woke up, it was dark. We met Praveen’s friend Sabeel to get my FLX1 phone. We also went to Mustafa Center nearby to exchange Indian rupees for Singapore dollars. Mustafa Center also had a shopping center with shops selling electronic items and souvenirs, among other things. When we were dropping off Sabeel at a bus stop, we discovered that bus stops in Singapore have a digital board showing the bus routes serving the stop and the number of minutes until each bus arrives.

In addition to an organized bus system, Singapore had good pedestrian infrastructure. There were traffic lights and zebra crossings for pedestrians to cross the roads. Unlike in Indian cities, rules were being followed. Cars would stop for pedestrians at unmanaged zebra crossings; pedestrians would in turn wait for their crossing signal to turn green before attempting to walk across. Therefore, walking in Singapore was easy.

Traffic rules were taken so seriously in Singapore that I (as a pedestrian) was afraid of unintentionally breaking them, which could get me in trouble, as rule-breaking is punished with heavy fines in the country. For example, crossing the road without using a marked crossing while within 50 meters of one - also known as jaywalking - is an offence in Singapore.

Moreover, the streets were litter-free, and cleanliness seemed like an obsession.

After exploring Mustafa Center, we went to a nearby 7-Eleven to top up Badri’s EZ-Link card. He paid 20 Singapore dollars for the recharge, which credited the card with 19.40 Singapore dollars (0.60 dollars being the recharge fee).

When I was planning this trip, I discovered that the World Chess Championship match was being held in Singapore. I seized the opportunity and bought a ticket in advance. The next day - the 5th of December - I went to watch the 9th game between Gukesh Dommaraju of India and Ding Liren of China. The venue was a hotel on Sentosa Island, and the ticket was 70 Singapore dollars, which was around ₹4000 at the time.

We checked out from our hostel in the morning, as we were planning to stay with Badri’s aunt that night. We had breakfast at a place in Little India. Then we took a couple of buses, followed by a walk to Sentosa Island. Paying the bus fare was similar to the metro - I tapped my credit card on the bus, while Badri tapped his EZ-Link card. We also had to tap again when getting off.

If you are tapping your credit card to use public transport in Singapore, keep in mind that the total amount of all the trips taken on a day is deducted at the end. This makes it hard to determine the cost of individual trips. For example, I could take a bus and get off after tapping my card, but I would have no way to determine how much this journey cost.

When you tap in, the maximum fare is deducted. When you tap out, the balance is refunded (if the journey is shorter than the maximum-fare one). So, there is an incentive for passengers to tap out before getting off. Going by my card statement, it looks like all of that happens virtually, and only one consolidated charge appears at the end. Maybe this combining only happens for international cards.

We got off the bus a kilometer away from Sentosa Island and walked the rest of the way. We went on the Sentosa Boardwalk, which is itself a tourist attraction. I was using Organic Maps to navigate to the hotel Resorts World Sentosa, but Organic Maps’ route led us through an amusement park. I tried asking the locals (people working in shops) for directions, but it was a Chinese-speaking region, and they didn’t understand English. Fortunately, we managed to find a local who helped us with the directions.

Sentosa Boardwalk

A shot of Sentosa Boardwalk.

Following the directions, we somehow ended up having to walk on a road which did not have pedestrian paths. Singapore is a country with strict laws, so we did not want to walk on that road. Avoiding that road led us to the Michael Hotel. There was a person standing at the entrance, and I asked him for directions to Resorts World Sentosa. The person told me that the bus (which was standing at the entrance) would drop me there! The bus was a free service for getting to Resorts World Sentosa. Here I parted ways with Badri, who went to his aunt’s place.

I got to Resorts World Sentosa and showed my ticket to get in. There were two zones inside - the first was a room with a glass wall separating the audience from the players. This was the room for watching the game in person, and it resembled a zoo or an aquarium. :) It was also a silent room, meaning talking or making noise was prohibited. Audience members were only allowed to have mobile phones for the first 30 minutes of the game - since I arrived late, I could not bring my phone inside that room.

The other zone was outside this room. It had a big TV on which the game was being broadcast along with commentary by David Howell and Jovanka Houska - the official FIDE commentators for the event. If you don’t already know, FIDE is the authoritative international chess body.

I spent most of the time outside that silent room, which gave me an opportunity to socialize. A lot of people were from Singapore, and there were many Indians there as well. I had a good time with Vasudevan, a journalist from Tamil Nadu who was covering the match. He asked Gukesh questions during the post-match conference, in Tamil, to lift Gukesh’s spirits, as Gukesh is a Tamil speaker.

Tea and coffee were free for the audience. I also bought a T-shirt from their stall as a souvenir.

After the game, I took a shuttle bus from Resorts World Sentosa to a metro station, then travelled to Pasir Ris by metro, where Badri was staying with his aunt. I thought of getting something to eat, but could not find any cafés or restaurants while I was walking from the Pasir Ris metro station to my destination, and was positively starving when I got there.

Badri’s aunt’s place was an apartment in a gated community. At the gate was a security guard who asked me for the address of the apartment. Inside, there were many buildings. To enter a building, you need to dial the number of the apartment you are visiting and speak to the residents. I had seen that in the TV show Seinfeld, where Jerry’s friends would dial Jerry to get into his building.

I was afraid they might not have anything for me to eat, since I had told them I was planning to get something on the way. This was fortunately not the case, and I was relieved not to have to sleep on an empty stomach.

He said that even if you forget your laptop in a public space, you can go back the next day and find it in the same spot. I also learned that owning a car is discouraged in Singapore - the government imposes a high registration fee, while also making public transport easy to use and affordable. And I found out that 7-Eleven is not that popular among residents in Singapore, unlike in Malaysia or Thailand.

The next day was our third and final day in Singapore. We had a bus in the evening to Johor Bahru in Malaysia. We got up early, had breakfast, and left Badri’s aunt’s home. Our first stop for the day was a store named Cat Socrates, as Badri wanted to buy some stationery. The plan was to take the metro, followed by a bus, so we went to the Pasir Ris metro station. Next to it was a mall, where Badri found an ATM that accepted our cards, and we finally got some Singapore dollars.

It was noon when we reached the stationery shop mentioned above. We had to walk a kilometer from where the bus dropped us, and it was a hot, sunny day, so the walk was not comfortable. The route took us through residential areas, and we got to see some non-touristy parts of Singapore.

After we were done with the stationery shop, we went to a hawker center for lunch. Hawker centers are unique to Singapore: they have a lot of stalls selling local food at cheap prices, similar to a food court. However, unlike the food courts in malls, hawker centers are open-air and can get quite hot.

Hawker center

This is the hawker center we went to.

To eat, you just buy something from one of the stalls and find a table. After you are done, you put your tray at one of the tray-return points. I had kaya toast with chai, since there weren’t many vegetarian options, and I also bought a persimmon from a nearby fruit vendor. Badri, on the other hand, sampled some local non-vegetarian dishes.

A sign saying, 'No table littering, by law.'

Table littering at the hawker center was prohibited by law.

Next, we took the metro to Raffles Place, as we wanted to visit the Merlion, the icon of Singapore. It is a statue with the head of a lion and the body of a fish. While I was passing through the AFC gates, my card was declined, so I had to buy an EZ-Link card, which I had been avoiding because the card itself costs 5 Singapore dollars.

From the Raffles Place metro station, we walked to the Merlion. The spot also gave a nice view of Marina Bay Sands. It was filled with tourists taking pictures, and we did the same.

Merlion from behind

Merlion from behind, giving a good view of Marina Bay Sands.

After this, we went to the bus stop to catch our bus to the border city of Johor Bahru, Malaysia. The bus was more than an hour late, and we worried that we had missed it. I asked an Indian woman at the stop who was also planning to take the same bus, and she told us that it was simply running late. Finally, our bus arrived, and we set off for Johor Bahru.

Before I finish, let me give you an idea of my expenditure. Singapore is an expensive country, and I realized that expenses can add up pretty quickly. Overall, my stay in Singapore for 3 days and 2 nights cost approx. 5500 rupees - and that was with one night at Badri’s aunt’s place (so we didn’t have to pay for accommodation that night) and a couple of free meals. This amount doesn’t include the ticket for the chess game, but it does include the cost of getting there. If you are in Singapore, you will likely pay a visit to Sentosa Island anyway.

Stay tuned for our experiences in Malaysia!

Credits: Thanks to Dione, Sahil, Badri and Contrapunctus for reviewing the draft.

23 September, 2025 11:35AM

September 22, 2025

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 12/n

Context

Update to latest rockchip-devel

For some reason I decided to try re-applying the PCI series. Good news: it finally applies cleanly.

$ git fetch collabora && git switch -c tmp collabora  # [1]
$ b4 am 20250715-pci-port-reset-v6-0-6f9cce94e7bb@oss.qualcomm.com
$ git switch reform-patches  # [2]
$ git rebase -i tmp
  1. https://gitlab.collabora.com/hardware-enablement/rockchip-3588/linux.git#rockchip-devel
  2. https://salsa.debian.org/bremner/collabora-rockchip-3588#reform-patches

Rebuild the kernel

$ cp /boot/config-6.17.0-rc7+ .config
$ make olddefconfig
$ yes '' | make localmodconfig
$ make KBUILD_IMAGE=arch/arm64/boot/Image bindeb-pkg -j$(nproc)

try the hibernation test, again

Running the following test script

set -x
# Run the hibernation sequence in test mode: stop at the "platform" level,
# wait five seconds, then resume instead of actually powering off.
echo platform >  /sys/power/pm_test
# Select "reboot" as the action to take once the hibernation image is written.
echo reboot > /sys/power/disk
sleep 2
# Unload the mt76x2u USB wifi driver before hibernating.
rmmod mt76x2u
sleep 2
# Trigger hibernation.
echo disk >  /sys/power/state
sleep 2
# Reload the wifi driver afterwards.
modprobe mt76x2u

Initially there is some output like this

[  151.752683] rockchip-dw-pcie a40c00000.pcie: Failed to receive PME_TO_Ack
[  151.754035] PM: hibernation: hibernation debug: Waiting for 5 second(s).
[  157.821584] rockchip-dw-pcie a40c00000.pcie: Phy link never came up
[  157.822139] rockchip-dw-pcie a40c00000.pcie: fail to resume
[  157.822636] rockchip-dw-pcie a40c00000.pcie: PM: dpm_run_callback(): genpd_restore_noirq returns -110
[  157.823442] rockchip-dw-pcie a40c00000.pcie: PM: failed to restore noirq: error -110

A small amount of detective work suggests that a40c00000.pcie corresponds to the first PCI bridge on the rk3588 SoC.

$ ls -l /sys/bus/pci/devices
total 0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0003:30:00.0 -> ../../../devices/platform/a40c00000.pcie/pci0003:30/0003:30:00.0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0004:40:00.0 -> ../../../devices/platform/a41000000.pcie/pci0004:40/0004:40:00.0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0004:41:00.0 -> ../../../devices/platform/a41000000.pcie/pci0004:40/0004:40:00.0/0004:41:00.0

Then after a pause,

[ 1032.039237] watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 6
[ 1032.039778] Modules linked in: xt_CHECKSUM xt_tcpudp nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat x_tables bridge stp llc nf_tables aes_neon_bs aes_neon_blk ccm dwmac_rk binfmt_misc mt76x2_common mt76x02_usb mt76_usb mt76x02_lib mt76 rk805_pwrkey snd_soc_tlv320aic31xx snd_soc_simple_card mac80211 rockchip_saradc reform2_lpc(OE) industrialio_triggered_buffer libarc4 kfifo_buf cfg80211 industrialio rockchip_thermal rockchip_rng cdc_acm rfkill snd_soc_rockchip_i2s_tdm hantro_vpu rockchip_rga panthor v4l2_vp9 v4l2_jpeg snd_soc_audio_graph_card videobuf2_dma_sg v4l2_h264 drm_gpuvm snd_soc_simple_card_utils drm_exec evdev joydev dm_mod nvme_fabrics efi_pstore configfs nfnetlink autofs4 ext4 crc16 mbcache jbd2 btrfs blake2b_generic xor xor_neon raid6_pq mali_dp snd_soc_meson_axg_toddr snd_soc_meson_axg_fifo snd_soc_meson_codec_glue panfrost drm_shmem_helper gpu_sched ao_cec_g12a meson_vdec(C) videobuf2_dma_contig videobuf2_memops v4l2_mem2mem videobuf2_v4l2 videodev
[ 1032.039834]  videobuf2_common mc dw_hdmi_i2s_audio meson_drm meson_canvas meson_dw_mipi_dsi meson_dw_hdmi mxsfb mux_mmio panel_edp imx_dcss ti_sn65dsi86 nwl_dsi mux_core pwm_imx27 hid_generic usbhid hid onboard_usb_dev nvme nvme_core nvme_keyring nvme_auth snd_soc_hdmi_codec snd_soc_core xhci_plat_hcd xhci_hcd snd_pcm_dmaengine snd_pcm snd_timer snd soundcore rtc_pcf8523 fan53555 micrel phy_package stmmac_platform stmmac pcs_xpcs phylink mdio_devres rk808_regulator of_mdio sdhci_of_dwcmshc fixed_phy sdhci_pltfm fwnode_mdio libphy sdhci phy_rockchip_usbdp dw_mmc_rockchip dw_mmc_pltfm typec phy_rockchip_naneng_combphy pwm_rockchip dw_wdt phy_rockchip_samsung_hdptx dwc3 cqhci dw_mmc mdio_bus rockchip_dfi ehci_platform rockchipdrm ulpi ehci_hcd dw_hdmi_qp ohci_platform udc_core ohci_hcd analogix_dp dw_mipi_dsi i2c_rk3x cpufreq_dt usbcore phy_rockchip_inno_usb2 dw_mipi_dsi2 drm_dp_aux_bus usb_common [last unloaded: mt76x2u]
[ 1032.039886] Sending NMI from CPU 5 to CPUs 6:

previous episode

22 September, 2025 05:20PM

hackergotchi for Evgeni Golov

Evgeni Golov

Booting Vagrant boxes with UEFI on Fedora: Permission denied

If you're still using Vagrant (I am) and try to boot a box that uses UEFI (like boxen/debian-13), a simple vagrant init boxen/debian-13 and vagrant up will entertain you with a nice traceback:

% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Removing domain...
==> default: Deleting the machine folder
/usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Libvirt::Domain#create': Call to virDomainCreate failed: internal error: process exited while connecting to monitor: 2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Fog::Libvirt::Compute::Shared#vm_action'
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/models/compute/server.rb:81:in 'Fog::Libvirt::Compute::Server#start'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/start_domain.rb:546:in 'VagrantPlugins::ProviderLibvirt::Action::StartDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_boot_order.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::SetBootOrder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/share_folders.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::ShareFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_settings.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folders.rb:87:in 'Vagrant::Action::Builtin::SyncedFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/delayed.rb:19:in 'Vagrant::Action::Builtin::Delayed#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folder_cleanup.rb:28:in 'Vagrant::Action::Builtin::SyncedFolderCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/plugins/synced_folders/nfs/action_cleanup.rb:25:in 'VagrantPlugins::SyncedFolderNFS::ActionCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_valid_ids.rb:14:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSValidIds#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_network_interfaces.rb:197:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworkInterfaces#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_networks.rb:40:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworks#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain.rb:452:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/resolve_disk_settings.rb:143:in 'VagrantPlugins::ProviderLibvirt::Action::ResolveDiskSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain_volume.rb:97:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomainVolume#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_box_image.rb:127:in 'VagrantPlugins::ProviderLibvirt::Action::HandleBoxImage#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/handle_box.rb:56:in 'Vagrant::Action::Builtin::HandleBox#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_storage_pool.rb:63:in 'VagrantPlugins::ProviderLibvirt::Action::HandleStoragePool#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_name_of_domain.rb:34:in 'VagrantPlugins::ProviderLibvirt::Action::SetNameOfDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/provision.rb:80:in 'Vagrant::Action::Builtin::Provision#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/cleanup_on_failure.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::CleanupOnFailure#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/box_check_outdated.rb:93:in 'Vagrant::Action::Builtin::BoxCheckOutdated#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/config_validate.rb:25:in 'Vagrant::Action::Builtin::ConfigValidate#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:248:in 'Vagrant::Machine#action_raw'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:217:in 'block in Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/environment.rb:631:in 'Vagrant::Environment#lock'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Method#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/batch_action.rb:86:in 'block (2 levels) in Vagrant::BatchAction#run'

The important part here is

Call to virDomainCreate failed: internal error: process exited while connecting to monitor:
2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}:
Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)

Of course we checked that the file permissions on this file are correct (I'll save you the ls output), so what's next? Yes, of course, SELinux!

# ausearch -m AVC
time->Mon Sep 22 12:07:55 2025
type=AVC msg=audit(1758535675.080:1613): avc:  denied  { read } for  pid=257204 comm="qemu-system-x86" name="OVMF_CODE.fd" dev="dm-2" ino=1883946 scontext=unconfined_u:unconfined_r:svirt_t:s0:c352,c717 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0

A process running in the svirt_t domain tries to read a file labeled user_home_t and is denied by the kernel. So far, SELinux is both working as designed and preventing us from doing our work, nice.

For "normal" (non-UEFI) boxes, Vagrant uploads the image to libvirt, which stores it in ~/.local/share/libvirt/images/ and boots fine from there. For UEFI boxen, one also needs loader and nvram files, which Vagrant keeps in ~/.vagrant.d/boxes/<box_name> and that's what explodes in our face here.

As ~/.local/share/libvirt/images/ works well and is labeled svirt_home_t, let's see which other folders use that label:

# semanage fcontext -l |grep svirt_home_t
/home/[^/]+/\.cache/libvirt/qemu(/.*)?             all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.config/libvirt/qemu(/.*)?            all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.libvirt/qemu(/.*)?                   all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/gnome-boxes/images(/.*)? all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/boot(/.*)?       all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/images(/.*)?     all files          unconfined_u:object_r:svirt_home_t:s0

Okay, that all makes sense, and it's just missing the Vagrant-specific folders!

# semanage fcontext -a -t svirt_home_t '/home/[^/]+/\.vagrant.d/boxes(/.*)?'

Now relabel the Vagrant boxes:

% restorecon -rv ~/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/metadata_url from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_0.img from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/metadata.json from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/Vagrantfile from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_VARS.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_update_check from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0

And it works!

% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Domain launching with graphics connection settings...
==> default:  -- Graphics Port:      5900
==> default:  -- Graphics IP:        127.0.0.1
==> default:  -- Graphics Password:  Not defined
==> default:  -- Graphics Websocket: 5700
==> default: Waiting for domain to get an IP address...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.124.157:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!

22 September, 2025 10:37AM by evgeni

Russell Coker

More About the Colmi P80

The FOSS Android program for communicating with smart watches is Gadget Bridge, which now has support for the Colmi P80 [1].

I first blogged about the Colmi P80 just over a month ago [2]. Now I have a couple of relatives using it happily on Android with the proprietary app. I couldn’t use it myself because I require more control over which apps have their notifications go to the watch than the Colmi app offers. Also I’m trying to move away from non-free software.

Yesterday the F-Droid repository informed me that there was a new version of Gadget Bridge, and the changelog indicated support for the Colmi P80, so I connected the P80 and disconnected the PineTime.

The first problem I noticed is that the vibration on the P80, even on its maximum setting, is much weaker than on the PineTime, so weak that I often didn’t notice it. Maybe if I wore it for a few weeks I would teach myself to notice it, but the watch should be able to work with me on this. If it could be set to give multiple bursts of vibration, that would work.

The next problem is that the P80 by default does not turn the screen on when there’s a notification, and there seems to be no way to configure it to do so. I configured it to turn on when I raise my arm, which mostly works, but that still relies on me noticing the vibration. Vibration combined with the screen lighting up would be harder to miss than vibration on its own.

I don’t recall ever seeing a smart watch review that stated whether the screen turns on when there’s a notification or whether the vibration is easy to notice.

One problem with both the PineTime (running InfiniTime) and the P80 is that when the screen is turned on (through a gesture, pushing the button, or, in the case of the PineTime, a notification), it is immediately active for swiping through the settings. I would like some other action to be required before settings can be changed, so that if the screen turns on while I’m asleep, my watch won’t brush against something and change its settings (which has happened).

It’s neat how Gadget Bridge supports talking to multiple smart watches at the same time. One useful feature for that would be to have different notification settings for each watch. I can imagine someone changing between a watch for jogging and a watch for work and wanting different settings.

Colmi P80 Problems

  • No authentication for Bluetooth connections.

  • Runs non-free software, so no chance to fix things.

  • Battery life worse than the PineTime (but not really bad).

  • Weak vibration.

  • Screen doesn’t turn on when a notification arrives.

Conclusion

I’m using the PineTime as my daily driver again. While the P80 works well enough for some people (even with the Colmi proprietary app), it doesn’t do what I want. It is, however, a good test device for FOSS work on the phone side: it has a decent feature set and is cheap.

Apart from the lack of authentication and the use of non-free software, the problems are mostly a matter of taste. Some people might think it’s great the way it works.

22 September, 2025 08:39AM by etbe

Vincent Bernat

Akvorado release 2.0

Akvorado 2.0 was released today! Akvorado collects network flows with IPFIX and sFlow. It enriches flows and stores them in a ClickHouse database. Users can browse the data through a web console. This release introduces an important architectural change and other smaller improvements. Let’s dive in! 🤿

$ git diff --shortstat v1.11.5
 493 files changed, 25015 insertions(+), 21135 deletions(-)

New “outlet” service

The major change in Akvorado 2.0 is splitting the inlet service into two parts: the inlet and the outlet. Previously, the inlet handled all flow processing: receiving, decoding, and enrichment. Flows were then sent to Kafka for storage in ClickHouse:

Akvorado flow processing before the change: flows are received and processed by the inlet, sent to Kafka and stored in ClickHouse
Akvorado flow processing before the introduction of the outlet service

Network flows reach the inlet service using UDP, an unreliable protocol. The inlet must process them fast enough to avoid losing packets. To handle a high number of flows, the inlet spawns several sets of workers to receive flows, fetch metadata, and assemble enriched flows for Kafka. Many configuration options existed for scaling, which increased complexity for users. The code needed to avoid blocking at any cost, making the processing pipeline complex and sometimes unreliable, particularly the BMP receiver.1 Adding new features became difficult without making the problem worse.2

In Akvorado 2.0, the inlet receives flows and pushes them to Kafka without decoding them. The new outlet service handles the remaining tasks:

Akvorado flow processing after the change: flows are received by the inlet, sent to Kafka, processed by the outlet and inserted in ClickHouse
Akvorado flow processing after the introduction of the outlet service

This change goes beyond a simple split:3 the outlet now reads flows from Kafka and pushes them to ClickHouse, two tasks that Akvorado did not handle before. Flows are heavily batched to increase efficiency and reduce the load on ClickHouse using ch-go, a low-level Go client for ClickHouse. When batches are too small, asynchronous inserts are used (e20645). The number of outlet workers scales dynamically (e5a625) based on the target batch size and latency (50,000 flows and 5 seconds by default).
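
To make this flush-on-size-or-timeout behavior concrete, here is a minimal Go sketch of such a batcher. It is not Akvorado’s actual code: the 50,000-flow and 5-second defaults come from the paragraph above, while the flow type and the flush callback are invented for the example.

// Package main shows a size-or-timeout batcher; not Akvorado's actual code.
package main

import (
	"fmt"
	"time"
)

// flow stands in for a decoded flow record; the real schema is much richer.
type flow struct{ src, dst string }

// batcher accumulates flows and calls flush either when the batch reaches
// maxSize or when maxWait has elapsed since the first flow of the batch.
func batcher(in <-chan flow, flush func([]flow), maxSize int, maxWait time.Duration) {
	var batch []flow
	timer := time.NewTimer(maxWait)
	timer.Stop()
	for {
		select {
		case f, ok := <-in:
			if !ok {
				if len(batch) > 0 {
					flush(batch)
				}
				return
			}
			if len(batch) == 0 {
				timer.Reset(maxWait)
			}
			batch = append(batch, f)
			if len(batch) >= maxSize {
				timer.Stop()
				flush(batch)
				batch = nil
			}
		case <-timer.C:
			if len(batch) > 0 {
				flush(batch)
				batch = nil
			}
		}
	}
}

func main() {
	in := make(chan flow)
	go func() {
		for range 12 {
			in <- flow{src: "192.0.2.1", dst: "198.51.100.2"}
		}
		close(in)
	}()
	// Tiny limits so the example flushes quickly; Akvorado defaults to
	// 50,000 flows and 5 seconds.
	batcher(in, func(b []flow) { fmt.Println("flushing", len(b), "flows") }, 5, time.Second)
}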

This new architecture also allows us to simplify and optimize the code. The outlet fetches metadata synchronously (e20645). The BMP component becomes simpler by removing cooperative multitasking (3b9486). Reusing the same RawFlow object to decode protobuf-encoded flows from Kafka reduces pressure on the garbage collector (8b580f).

The effect on Akvorado’s overall performance was somewhat uncertain, but a user reported 35% lower CPU usage after migrating from the previous version, plus resolution of the long-standing BMP component issue. 🥳

Other changes

This new version includes many miscellaneous changes, such as completion for source and destination ports (f92d2e) and automatic restart of the orchestrator service when its configuration changes (0f72ff), avoiding a common pitfall for newcomers.

Let’s focus on some key areas for this release: observability, documentation, CI, Docker, Go, and JavaScript.

Observability

Akvorado exposes metrics to provide visibility into the processing pipeline and help troubleshoot issues. These are available through Prometheus HTTP metrics endpoints, such as /api/v0/inlet/metrics. With the introduction of the outlet, many metrics moved. Some were also renamed (4c0b15) to match Prometheus best practices. Kafka consumer lag was added as a new metric (e3a778).

If you do not have your own observability stack, the Docker Compose setup shipped with Akvorado provides one. You can enable it by activating the profiles introduced for this purpose (529a8f).

The prometheus profile ships Prometheus to store metrics and Alloy to collect them (2b3c46, f81299, and 8eb7cd). Redis and Kafka metrics are collected through the exporter bundled with Alloy (560113). Other metrics are exposed using Prometheus metrics endpoints and are automatically fetched by Alloy with the help of some Docker labels, similar to what is done to configure Traefik. cAdvisor was also added (83d855) to provide some container-related metrics.

The loki profile ships Loki to store logs (45c684). While Alloy can collect and ship logs to Loki, its parsing abilities are limited: I could not find a way to preserve all metadata associated with structured logs produced by many applications, including Akvorado. Vector replaces Alloy (95e201) and features a domain-specific language, VRL, to transform logs. Annoyingly, Vector currently cannot retrieve Docker logs from before it was started.

Finally, the grafana profile ships Grafana, but the shipped dashboards are currently broken; working dashboards are planned for a future version.

Documentation

The Docker Compose setup provided by Akvorado makes it easy to get the web interface up and running quickly. However, Akvorado requires a few mandatory steps to be functional. It ships with comprehensive documentation, including a chapter about troubleshooting problems. I hoped this documentation would reduce the support burden. It is difficult to know if it works. Happy users rarely report their success, while some users open discussions asking for help without reading much of the documentation.

In this release, the documentation was significantly improved.

$ git diff --shortstat v1.11.5 -- console/data/docs
 10 files changed, 1873 insertions(+), 1203 deletions(-)

The documentation was updated (fc1028) to match Akvorado’s new architecture. The troubleshooting section was rewritten (17a272). Instructions on how to improve ClickHouse performance when upgrading from versions earlier than 1.10.0 were added (5f1e9a). An LLM proofread the entire content (06e3f3). Developer-focused documentation was also improved (548bbb, e41bae, and 871fc5).

From a usability perspective, table of contents sections are now collapsible (c142e5). Admonitions help draw user attention to important points (8ac894).

Admonition in Akvorado documentation to ask a user not to open an issue or start a discussion before reading the documentation
Example of use of admonitions in Akvorado's documentation

Continuous integration

This release includes efforts to speed up continuous integration on GitHub. Coverage and race tests run in parallel (6af216 and fa9e48). The Docker image builds during the tests but gets tagged only after they succeed (8b0dce).

GitHub workflow for CI with many jobs, some of them running in parallel, some not
GitHub workflow to test and build Akvorado

End-to-end tests (883e19) ensure the shipped Docker Compose setup works as expected. Hurl runs tests on various HTTP endpoints, particularly to verify metrics (42679b and 169fa9). For example:

## Test inlet has received NetFlow flows
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_flow_input_udp_packets_total{job="akvorado-inlet",listener=":2055"})
HTTP 200
[Captures]
inlet_receivedflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_receivedflows" > 10

## Test inlet has sent them to Kafka
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_kafka_sent_messages_total{job="akvorado-inlet"})
HTTP 200
[Captures]
inlet_sentflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_sentflows" >= {{ inlet_receivedflows }}

Docker

Akvorado ships with a comprehensive Docker Compose setup to help users get started quickly. It ensures a consistent deployment, eliminating many configuration-related issues. It also serves as a living documentation of the complete architecture.

This release brings some small enhancements around Docker:

Previously, many Docker images were pulled from the Bitnami Containers library. However, VMWare acquired Bitnami in 2019 and Broadcom acquired VMWare in 2023. As a result, Bitnami images were deprecated with less than a month’s notice. This was not really a surprise.4 Previous versions of Akvorado had already started moving away from them. In this release, the Apache project’s Kafka image replaces the Bitnami one (1eb382). Thanks to the switch to KRaft mode, Zookeeper is no longer needed (0a2ea1, 8a49ca, and f65d20).

Akvorado’s Docker images were previously compiled with Nix. However, building AArch64 images on x86-64 is slow because it relies on QEMU userland emulation. The updated Dockerfile uses multi-stage and multi-platform builds: one stage builds the JavaScript part on the host platform, one stage cross-compiles the Go part on the host platform, and the final stage assembles the image on top of a slim distroless image (268e95 and d526ca).

# This is a simplified version
FROM --platform=$BUILDPLATFORM node:20-alpine AS build-js
RUN apk add --no-cache make
WORKDIR /build
COPY console/frontend console/frontend
COPY Makefile .
RUN make console/data/frontend

FROM --platform=$BUILDPLATFORM golang:alpine AS build-go
RUN apk add --no-cache make curl zip
WORKDIR /build
COPY . .
COPY --from=build-js /build/console/data/frontend console/data/frontend
RUN go mod download
RUN make all-indep
ARG TARGETOS TARGETARCH TARGETVARIANT VERSION
RUN make

FROM gcr.io/distroless/static:latest
COPY --from=build-go /build/bin/akvorado /usr/local/bin/akvorado
ENTRYPOINT [ "/usr/local/bin/akvorado" ]

When building for multiple platforms with --platform linux/amd64,linux/arm64,linux/arm/v7, the build steps before the ARG TARGETOS TARGETARCH line (the point where the target platform enters the build) execute only once for all platforms. This significantly speeds up the build. 🚅

Akvorado now ships Docker images for these platforms: linux/amd64, linux/amd64/v3, linux/arm64, and linux/arm/v7. When requesting ghcr.io/akvorado/akvorado, Docker selects the best image for the current CPU. On x86-64, there are two choices. If your CPU is recent enough, Docker downloads linux/amd64/v3. This version contains additional optimizations and should run faster than the linux/amd64 version. It would be interesting to ship an image for linux/arm64/v8.2, but Docker does not support the same mechanism for AArch64 yet (792808).

Go

This release includes many changes related to Go but not visible to the users.

Toolchain

In the past, Akvorado supported the two latest Go versions, preventing immediate use of the latest enhancements. The goal was to allow users of stable distributions to use Go versions shipped with their distribution to compile Akvorado. However, this became frustrating when interesting features, like go tool, were released. Akvorado 2.0 requires Go 1.25 (77306d) but can be compiled with older toolchains by automatically downloading a newer one (94fb1c).5 Users can still override GOTOOLCHAIN to revert this decision. The recommended toolchain updates weekly through CI to ensure we get the latest minor release (5b11ec). This change also simplifies updates to newer versions: only go.mod needs updating.

Thanks to this change, Akvorado now uses wg.Go() (77306d), and I have started converting some unit tests to the new testing/synctest package (bd787e, 7016d8, and 159085).

Testing

When testing equality, I use a helper function Diff() to display the differences when it fails:

got := input.Keys()
expected := []int{1, 2, 3}
if diff := helpers.Diff(got, expected); diff != "" {
    t.Fatalf("Keys() (-got, +want):\n%s", diff)
}

This function uses kylelemons/godebug. This package is no longer maintained and has some shortcomings: for example, by default, it does not compare struct private fields, which may cause unexpectedly successful tests. I replaced it with google/go-cmp, which is stricter and has better output (e2f1df).
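
As a standalone illustration of that stricter behavior (this is not Akvorado’s helpers.Diff() wrapper), google/go-cmp refuses to silently compare unexported struct fields and requires an explicit opt-in:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

// counter only has unexported fields, the case where a comparison could
// silently pass if those fields were ignored.
type counter struct {
	name  string
	value int
}

func main() {
	got := counter{name: "flows", value: 10}
	want := counter{name: "flows", value: 12}

	// cmp.Diff(got, want) would panic here: go-cmp refuses to guess what
	// to do with unexported fields. Opting in makes the difference visible.
	diff := cmp.Diff(got, want, cmp.AllowUnexported(counter{}))
	fmt.Printf("(-got +want):\n%s", diff)
}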

Another package for Kafka

Another change is the switch from Sarama to franz-go to interact with Kafka (756e4a and 2d26c5). The main motivation for this change is to get a better concurrency model. Sarama heavily relies on channels and it is difficult to understand the lifecycle of an object handed to this package. franz-go uses a more modern approach with callbacks6 that is both more performant and easier to understand. It also ships with a package to spawn fake Kafka broker clusters, which is more convenient than the mocking functions provided by Sarama.

Improved routing table for BMP

To store its routing table, the BMP component used kentik/patricia, an implementation of a patricia tree focused on reducing garbage collection pressure. gaissmai/bart is a more recent alternative using an adaptation of Donald Knuth’s ART algorithm that promises better performance and delivers it: 90% faster lookups and 27% faster insertions (92ee2e and fdb65c).

Unlike kentik/patricia, gaissmai/bart does not help efficiently store values attached to each prefix. I adapted the same approach as kentik/patricia to store route lists for each prefix: store a 32-bit index for each prefix, and use it to build a 64-bit index for looking up routes in a map. This leverages Go’s efficient map structure.
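
The following sketch illustrates that indexing scheme. It is a simplified illustration rather than Akvorado’s actual data structures: the route type, the per-prefix sequence numbers, and the method names are all invented for the example.

package main

import "fmt"

// route stands in for the attributes attached to a prefix; the real
// structure is richer (next hop, AS path, communities, ...).
type route struct{ nextHop string }

// routeStore keeps routes in a single map keyed by a 64-bit value built
// from a 32-bit prefix index and a 32-bit per-prefix sequence number.
// Only the 32-bit prefix index needs to live in the routing table itself.
type routeStore struct {
	routes map[uint64]route
	nextID map[uint32]uint32 // next sequence number per prefix index
}

func newRouteStore() *routeStore {
	return &routeStore{routes: map[uint64]route{}, nextID: map[uint32]uint32{}}
}

func key(prefixIdx, seq uint32) uint64 { return uint64(prefixIdx)<<32 | uint64(seq) }

// add attaches a route to a prefix and returns the sequence number used.
func (s *routeStore) add(prefixIdx uint32, r route) uint32 {
	seq := s.nextID[prefixIdx]
	s.nextID[prefixIdx]++
	s.routes[key(prefixIdx, seq)] = r
	return seq
}

// lookup returns the route stored for a given prefix index and sequence number.
func (s *routeStore) lookup(prefixIdx, seq uint32) (route, bool) {
	r, ok := s.routes[key(prefixIdx, seq)]
	return r, ok
}

func main() {
	s := newRouteStore()
	seq := s.add(42, route{nextHop: "192.0.2.1"})
	if r, ok := s.lookup(42, seq); ok {
		fmt.Println("route via", r.nextHop)
	}
}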

gaissmai/bart also supports a lockless routing table version, but this is not simple because we would need to extend this to the map storing the routes and to the interning mechanism. I also attempted to use Go’s new unique package to replace the intern package included in Akvorado, but performance was worse.7
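
For readers who have not used it, the standard library’s unique package (Go 1.23+) provides the interning primitive mentioned above. This tiny example is only illustrative and is unrelated to Akvorado’s own intern package:

package main

import (
	"fmt"
	"unique"
)

func main() {
	// unique.Make canonicalizes equal values: both handles refer to the
	// same interned copy, so comparing handles is a cheap pointer compare.
	h1 := unique.Make("64500:100")
	h2 := unique.Make("64500:100")
	fmt.Println(h1 == h2)   // true: same canonical copy
	fmt.Println(h1.Value()) // 64500:100
}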

Miscellaneous

Previous versions of Akvorado were using a custom Protobuf encoder for performance and flexibility. With the introduction of the outlet service, Akvorado only needs a simple static schema, so this code was removed. However, performance can still be enhanced with planetscale/vtprotobuf (e49a74 and 8b580f). Moreover, the dependency on protoc, a C++ program, was somewhat annoying. Therefore, Akvorado now uses buf, written in Go, to convert the Protobuf schema into Go code (f4c879).

Another small optimization reduced the size of the Akvorado binary by 10 MB: the static assets embedded in Akvorado, including the ASN database and the SVG images for the documentation, are now compressed in a ZIP file. A small layer of code makes this change transparent (b1d638 and e69b91).
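
Here is a minimal sketch of how such a transparent layer can look, assuming the assets are bundled in a single embedded ZIP archive: archive/zip’s reader implements fs.FS, so the rest of the code can keep using standard file-system calls. The file names data.zip and asns.csv are placeholders, not the ones Akvorado actually uses.

package main

import (
	"archive/zip"
	"bytes"
	_ "embed"
	"fmt"
	"io/fs"
)

// The ZIP archive is embedded in the binary; data.zip is a hypothetical
// name, not the file actually used by Akvorado.
//
//go:embed data.zip
var assetsZip []byte

// openAssets exposes the embedded archive as a standard fs.FS, so callers
// do not need to know that the assets are compressed.
func openAssets() (fs.FS, error) {
	return zip.NewReader(bytes.NewReader(assetsZip), int64(len(assetsZip)))
}

func main() {
	assets, err := openAssets()
	if err != nil {
		panic(err)
	}
	// asns.csv is also a hypothetical file name.
	content, err := fs.ReadFile(assets, "asns.csv")
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes\n", len(content))
}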

JavaScript

Recently, two large supply-chain attacks hit the JavaScript ecosystem: one affecting the popular packages chalk and debug and another impacting the popular package @ctrl/tinycolor. These attacks also exist in other ecosystems, but JavaScript is a prime target due to heavy use of small third-party dependencies. The previous version of Akvorado relied on 653 dependencies.

npm-run-all was removed (3424e8, 132 dependencies). patch-package was removed (625805 and e85ff0, 69 dependencies) by moving missing TypeScript definitions to env.d.ts. eslint was replaced with oxlint, a linter written in Rust (97fd8c, 125 dependencies, including the plugins).

I switched from npm to Pnpm, an alternative package manager (fce383). Pnpm does not run install scripts by default8 and refuses to install packages that were published too recently. It is also significantly faster.9 Node.js does not ship Pnpm, but it ships Corepack, which allows us to use Pnpm without installing it. Pnpm can also list the licenses used by each dependency, removing the need for license-compliance (a35ca8, 42 dependencies).

For additional speed improvements, beyond switching to Pnpm and Oxlint, Vite was replaced with its faster Rolldown version (463827).

After these changes, Akvorado “only” pulls 225 dependencies. 😱

Next steps

I would like to land three features in the next version of Akvorado:

  • Add Grafana dashboards to complete the observability stack. See issue #1906 for details.

  • Integrate OVH’s Grafana plugin by providing a stable API for such integrations. Akvorado’s web console would still be useful for browsing results, but if you want to build and share dashboards, you should switch to Grafana. See issue #1895.

  • Move some work currently done in ClickHouse (custom dictionaries, GeoIP and IP enrichment) back into the outlet service. This should give more flexibility for adding features like the one requested in issue #1030.


I started working on splitting the inlet into two parts more than one year ago. I found more motivation in recent months, partly thanks to Claude Code, which I used as a rubber duck. Almost none of the produced code was kept:10 it is like an intern who does not learn. 🦆


  1. Many attempts were made to make the BMP component both performant and not blocking. See for example PR #254, PR #255, and PR #278. Despite these efforts, this component remained problematic for most users. See issue #1461 as an example. ↩︎

  2. Some features have been pushed to ClickHouse to avoid the processing cost in the inlet. See for example PR #1059. ↩︎

  3. This is the biggest commit:

    $ git show --shortstat ac68c5970e2c | tail -1
    231 files changed, 6474 insertions(+), 3877 deletions(-)
    

    ↩︎

  4. Broadcom is known for its user-hostile moves. Look at what happened with VMware. ↩︎

  5. As a Debian developer, I dislike these mechanisms that circumvent the distribution package manager. The final straw came when Go 1.25 spent one month in the Debian NEW queue, an arbitrary mechanism I don’t like at all. ↩︎

  6. In the early years of Go, channels were heavily promoted. Sarama was designed during this period. A few years later, a more nuanced approach emerged. See notably “Go channels are bad and you should feel bad.” ↩︎

  7. This should be investigated further, but my theory is that the intern package uses 32-bit integers, while unique uses 64-bit pointers. See commit 74e5ac. ↩︎

  8. This is also possible with npm. See commit dab2f7. ↩︎

  9. An even faster alternative is Bun, but it is less available. ↩︎

  10. The exceptions are part of the code for the admonition blocks, the code for collapsing the table of content, and part of the documentation. ↩︎

22 September, 2025 08:12AM by Vincent Bernat

September 21, 2025

Gunnar Wolf

We, Programmers: A Chronicle of Coders from Ada to AI

This post is an unpublished review for We, Programmers: A Chronicle of Coders from Ada to AI

When this book was presented as available for review, I jumped at it: who does not love reading a nice bit of computing history, as told by a well-known author (affectionately known as “Uncle Bob”), one who has been immersed in computing since forever… What is not to like there?

Reading on, the book does not disappoint. Much to the contrary, it digs into details absent from most computer history books, details that, being an Operating Systems and Computer Architecture geek, I absolutely enjoyed. But let me first address the book’s organization.

The book is split into four parts. Part 1, “Setting the stage”, is a short introduction, answering the question “Who are we?” (addressing “we” as the programmers, of course) and describing the fascination most of us have felt when realizing the computer was there to obey us, to do our bidding, and that we could absolutely control it.

Part 2, “The Giants”, talks about the Giants our computing world owes so much to, and on whose shoulders we stand. It digs, with a level of detail I had never seen before, into their personal lives and technical contributions (as well as the hoops they had to jump through to get their work done). Nine chapters cover “Giants” ranging chronologically from Charles Babbage and Ada Lovelace to Ken Thompson, Dennis Ritchie and Brian Kernighan (of course, several giants who made their contributions together are grouped in the same chapter). This is the part with the most often-overlooked historical technical details: What was the word size in the first computers, before even the concept of a “byte” had been brought into regular use? What was the register structure of early CPUs, and why did it lead to requiring self-modifying code to execute loops?

Then, just as Unix and C get invented, Part 3 skips to computer history as seen through the eyes of “Uncle Bob”. I must admit the change of rhythm initially startled me, but it went over quite well. The focus is no longer on the Giants of the field, but on one particular person who… casts a very long shadow. The narrative follows the author’s career, from being a boy given access to electronics through his father’s line of work, until he becomes a computing industry leader in the early 2000s with Extreme Programming and one of the first producers of training material in video format, something that today might be recognized as an “influencer”. This first-person narrative reaches the year 2023.

But the book is not just a historical overview of the computing world, of course. “Uncle Bob” has a final section with his thoughts on the future of computing. This being a book for programmers, it is fitting to start by talking about the changes in programming languages we should expect to see and where such changes are likely to take place. Second, the unavoidable topic of Artificial Intelligence is presented: what is it, and what does it spell for computing and, in particular, for programming? Third, what does the future of hardware development look like? Fourth, mostly to my surprise, what is the likely evolution of the World Wide Web? And finally, what is the future of programming — and of programmers?

At 480 pages, the book is a volume to be taken seriously. But the space is very well used. The material is easy to read, often funny, but always informative. If you enjoy computer history and understanding the little details of the implementations, it might very well be the book you want.

21 September, 2025 08:07PM

Dirk Eddelbuettel

rcppmlpackexamples 0.0.1 on CRAN: New Package

mlpack is a fabulous project providing countless machine learning algorithms in clean and performant C++ code as a header-only library. This gives both high performance and the ability to run the code in resource-constrained environments such as embedded systems. Bindings to a number of other languages are available, and an examples repo provides usage examples.

The project also has a mature R package on CRAN which offers the various algorithms directly in R. Sometimes, however, one might want to use the header-only C++ code in another R package. How to do that was not well documented. A user alerted me by email to this fact a few weeks ago, and this led to both an updated mlpack release at CRAN and this package.

In short, we show via three complete examples how to access the mlpack code in C++ code in this package, offering a re-usable stanza to start a different package from. The only other (header-only) dependencies are provided by CRAN packages RcppArmadillo and RcppEnsmallen wrapping, respectively, the linear algebra and optimization libraries used by mlpack.

Courtesy of my CRANberries, there is also a ‘new package’ note (no diffstat report yet). More detailed information is on the rcppmlpackexamples page, or the github repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

21 September, 2025 05:07PM

Joey Hess

cheap DIY solar fence design

A year ago I installed a 4 kilowatt solar fence. I'm revisiting it this Sun Day, to share the design, now that I have proved it out.

The solar fence and some other ground and pole mount solar panels, seen through leaves.

Solar fencing manufacturers have some good simple designs, but it's hard to buy for a small installation; they are mostly selling to utility-scale solar. And those systems are installed by driving metal beams into the ground, which requires heavy machinery.

Since I have experience with Ironridge rails for roof mount solar, I decided to adapt that system for a vertical mount, which is something it was not designed for. I combined the Ironridge hardware with regular parts from the hardware store.

The cost of mounting solar panels nowadays is often higher than the cost of the panels. I hoped to match the cost, and I nearly did: the solar panels cost $100 each, and the fence cost $110 per solar panel. This fence was significantly cheaper than the conventional ground mount arrays that I considered as alternatives, and made better use of a difficult hillside location.

I used 7 foot long Ironridge XR-10 rails, which fit 2 solar panels per rail. (Longer rails would need a center post anyway, and the 7 foot long rails have cheaper shipping, since they do not need to be shipped freight.)

For the fence posts, I used regular 4x4" treated posts. 12 foot long, set in 3 foot deep post holes, with 3x 50 lb bags of concrete per hole and 6 inches of gravel on the bottom.

detail of how the rails are mounted to the posts, and the panels to the rails

To connect the Ironridge rails to the fence posts, I used the Ironridge LFT-03-M1 slotted L-foot bracket. Screwed into the post with a 5/8” x 3 inch hot-dipped galvanized lag screw. Since a treated post can react badly with an aluminum bracket, there needs to be some flashing between the post and bracket. I used Shurtape PW-100 tape for that. I see no sign of corrosion after 1 year.

The rest of the Ironridge system is a T-bolt that connects the rail to the L-foot (part BHW-SQ-02-A1), and Ironridge solar panel fasteners (UFO-CL-01-A1 and UFO-STP-40MM-M1). Also XR-10 end caps and wire clips.

Since the Ironridge hardware is not designed to hold a solar panel at a 90 degree angle, I was concerned that the panels might slide downward over time. To help prevent that, I added some additional support brackets under the bottom of the panels. So far, that does not seem to have been a problem though.

I installed Aptos 370 watt solar panels on the fence. They are bifacial, and while the posts block the back partially, there is still bifacial gain on cloudy days. I left enough space under the solar panels to be able to run a push mower under them.

Me standing in front of the solar fence at end of construction

I put pairs of posts next to one another, so each 7 foot segment of fence had its own 2 posts. This is the least elegant part of this design, but fitting 2 brackets next to one another on a single post isn't feasible. I bolted the pairs of posts together with some spacers. A side benefit of doing it this way is that treated lumber can warp as it dries, and this prevented much twisting of the posts.

Using separate posts for each segment also means that the fence can traverse a hill easily. And it does not need to be perfectly straight. In fact, my fence has a 30 degree bend in the middle. This means it has both south facing and south-west facing panels, so can catch the light for longer during the day.

After building the fence, I noticed there was a slight bit of sway at the top, since 9 feet of wooden post is not entirely rigid. My worry was that a gusty wind could rattle the solar panels. While I did not actually observe that happening, I added some diagonal back bracing for peace of mind.

view of rear upper corner of solar fence, showing back bracing connection

Inspecting the fence today, I find no problems after the first year. I hope it will last 30 years, with the lifespan of the treated lumber being the likely determining factor.

As part of my larger (and still ongoing) ground mount solar install, the solar fence has consistently provided great power. The vertical orientation works well at latitude 36. It also turned out that the back of the fence was useful to hang conduit and wiring and solar equipment, and so it turned into the electrical backbone of my whole solar field. But that's another story..

solar fence parts list

quantity   cost per unit   description
10         $27.89          7 foot Ironridge XR-10 rail
12         $20.18          12 foot treated 4x4
30         $4.86           Ironridge UFO-CL-01-A1
20         $0.87           Ironridge UFO-STP-40MM-M1
1          $12.62          Ironridge XR-10 end caps (20 pack)
20         $2.63           Ironridge LFT-03-M1
20         $1.69           Ironridge BHW-SQ-02-A1
22         $2.65           5/8” x 3 inch hot-dipped galvanized lag screw
10         $0.50           6” gravel per post
30         $6.91           50 lb bags of quickcrete
1          $15.00          Shurtape PW-100 Corrosion Protection Pipe Wrap Tape
N/A        $30             other bolts and hardware (approximate)

$1100 total

(Does not include cost of panels, wiring, or electrical hardware.)

21 September, 2025 04:15PM

Jonathan Dowland

Lavalamps (things that spark joy)

photograph of a Mathmos Telstar rocket lava lamp with red wax and purple water

Life can sometimes be tricky, and it's useful to know that there are some simple things to take pleasure from. Amongst them for me are lava lamps.

At some point in the late 90s, my brother and I somehow had 6 lavalamps between us. I'm not sure what happened to them (and the gallery of photos I had of them has long disappeared from my site.)

More recently, I stumbled across a Mathmos "Telstar" rocket-shaped lava lamp in a charity shop: silver metal; purple water; red wax.

It now adorns my study desk.

21 September, 2025 12:21PM

Bits from Debian

Bits From Argentina - August 2025

DebConf26 is already in the air in Argentina. Organizing DebConf26 gives us the opportunity to talk about Debian in our country again. This is not the first time that Debian has come here: Argentina previously hosted DebConf 8 in Mar del Plata.

In August, Nattie Mayer-Hutchings and Stefano Rivera from the DebConf Committee visited the venue where the next DebConf will take place. They came to Argentina in order to see what it is like to travel from Buenos Aires to Santa Fe (the venue of the next DebConf). In addition, they were able to observe the layout and size of the classrooms and halls, as well as the infrastructure available at the venue, which will be useful for the Video Team.

But before going to Santa Fe, on August 27th, we organized a meetup in Buenos Aires at GCoop, where we hosted some talks:

GCoop Talks

On August 28th, we had the opportunity to get to know the Venue. We walked around the city and, obviously, sampled some of the beers from Santa Fe.

On August 29th we met with representatives of the University and local government who were all very supportive. We are very grateful to them for opening their doors to DebConf.

UNL Meeting

In the afternoon we met some of the local free software community at an event we held in ATE Santa Fe. The event included several talks:

  • ¿Qué es Debian? (What is Debian?) - Pablo (sultanovich) / Emmanuel Arias
  • Ciberrestauradores: Gestores de basura electrónica (Cyber-restorers: electronic waste managers) - Programa RAEES Acutis
  • Debian and DebConf (Stefano Rivera/Nattie Mayer-Hutchings)

ATE Talks

Thanks to Debian Argentina, and all the people who will make DebConf26 possible.

Thanks to Nattie Mayer-Hutchings and Stefano Rivera for reviewing an earlier version of this article.

21 September, 2025 11:05AM by Emmanuel Arias

September 20, 2025

Thomas Goirand

Real-Time OpenStack Packaging Status with Event-Driven Automation

tl;dr: https://osbpo.debian.net/deb-status is now updated in real time and is much better than it used to be, helping the OpenStack packaging team be way more efficient.

How it used to be

For years, the Debian OpenStack team has relied on automated tools to track the status of OpenStack packages across Debian releases. Our goal has always been simple: transparency, efficiency, accuracy.

We used to use a tool called os-version-checker, written by Michal Arbet, which generated a static status page at https://osbpo.debian.net/deb-status. It was functional and served us well — but it had limitations:

  • It ran on a cron job, not on demand
  • It processed all OpenStack releases at once, making it slow
  • The rsync from Jenkins hosts to osbpo.debian.net was also cron-driven
  • No immediate feedback after a package build

This meant that when a developer pushed a new package to salsa (the Debian GitLab instance) in the team’s repository, the following would happen:

  • Jenkins would build the backport
  • Store it in a local repository
  • Wait up to 30 minutes (or more) for the cron job to run rsync + status update
  • Only then would the status page reflect the new version

For maintainers actively working on a new release, this delay was frustrating. You’d fix a bug, push, build — and still see your package marked as “missing” or “out of date” for minutes. You had no real-time feedback. This was also an annoyance for testing: when fixing a bug, I often had to trigger the rsync manually so that I did not have to wait for it before running my tests. Now, osbpo is always up-to-date a few seconds after a package is built.

The New Way: Event-Driven, Real-Time Updates

We’ve rebuilt the system from the ground up to be fast, responsive, and event-driven. Now, the workflow is:

  • Developer git push → triggers Jenkins
  • Jenkins builds the package → publishes to local repo
  • Jenkins immediately triggers a webhook on osbpo.debian.net

The webhook on osbpo does:

  • rsyncs the new package to the central Debian repo
  • Pulls the latest OpenStack releases from git and uses the YAML data (instead of parsing the release HTML pages)
  • Regenerates the status page, comparing what upstream released and what’s in Debian

No more cron. No more waiting…

How it works

The central osbpo.debian.net server runs:

  • webhook — to receive secure, HMAC-verified triggers that it processes asynchronously
  • Apache — to serve the status pages and the Debian OpenStack repositories
  • Custom scripts — to rsync packages, validate, and generate reports

Jenkins instances are configured to curl the webhook on successful build. The status page is generated by openstack-debian-release-manager, a new tool I’ve packaged and deployed. The dashboard uses AJAX to load content dynamically (like when browsing from one release to another), with sorting, metadata, and real-time auto-refresh every 10 seconds.
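
The webhook daemon takes care of that verification itself; purely to illustrate what an HMAC-verified trigger amounts to, here is a hedged Go sketch (the header name, secret, and endpoint are made-up examples, not the actual osbpo.debian.net configuration):

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"io"
	"log"
	"net/http"
)

// Made-up shared secret; in practice it would be agreed with Jenkins.
var secret = []byte("shared-secret")

func handler(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// Recompute the signature over the body and compare it in constant time.
	mac := hmac.New(sha256.New, secret)
	mac.Write(body)
	want := hex.EncodeToString(mac.Sum(nil))
	got := r.Header.Get("X-Signature") // header name is an assumption
	if !hmac.Equal([]byte(got), []byte(want)) {
		http.Error(w, "bad signature", http.StatusForbidden)
		return
	}
	// Accepted: kick off the rsync and status regeneration asynchronously.
	go regenerate()
	w.WriteHeader(http.StatusAccepted)
}

func regenerate() {
	// Placeholder for: rsync packages, pull release data, rebuild pages.
}

func main() {
	http.HandleFunc("/hooks/osbpo", handler)
	log.Fatal(http.ListenAndServe(":9000", nil))
}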

openstack-debian-release-manager is easy to deploy and configure, and will do most (if not all) of the needed configuration. Uploading it to Debian is probably not needed, and a bit overkill, so I believe I’ll just keep it in Salsa for the moment, unless there’s a way to make it more generic so it can help someone else (another team?) in Debian.

Room for improvement

There are still things I want to add, namely:

  • Add status for Debian stable (i.e. without the osbpo.debian.net add-on repository), which we used to have with os-version-checker.
  • Add a per-release config file option to mask not-yet-packaged projects at a per-OpenStack-release granularity.

Special thanks to Michal Arbet for the original os-version-checker that served me for years, helping me to never forget a missing OpenStack package release.

20 September, 2025 07:59PM by Goirand Thomas

September 19, 2025

Dirk Eddelbuettel

RcppArmadillo 15.0.2-2 on CRAN: Transition to Armadillo 15

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1261 other packages on CRAN, downloaded 41.4 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 647 times according to Google Scholar.

This version updates the 15.0.2-1 release from last week. Following fairly extensive email discussions with CRAN, we are now accelerating the transition to Armadillo 15. When C++14 or newer is used (which after all is the default since R 4.1.0, released May 2021; see WRE Section 1.2.4), or when opted into, the newer Armadillo is selected. If, on the other hand, either C++11 is still forced, or the legacy version is explicitly selected (which currently one package at CRAN does), then Armadillo 14.6.3 is selected.

Most packages will not see a difference and will automatically switch to the newer Armadillo. However, some packages will see one or two types of warning. First, if C++11 is still actively selected via, for example, CXX_STD, then CRAN will nudge a change to a newer compilation standard (as they have been doing for some time already). Preferably the change should be to simply remove the constraint and let R pick the standard based on its version and compiler availability. These days that gives us C++17 in most cases; see WRE Section 1.2.4 for details. (Some packages may need C++14 or C++17 or C++20 explicitly and can also do so.)

Second, some packages may see a deprecation warning. Up until Armadillo 14.6.3, the package suppressed these and you can still get that effect by opting into that version by setting -DARMA_USE_LEGACY. (However this route will be sunset ‘eventually’ too.) But one really should update the code to the non-deprecated version. In a large number of cases this simply means switching from using arma::is_finite() (typically called on a scalar double) to calling std::isfinite(). But there are some other cases, and we will help as needed. If you maintain a package showing deprecation warnings, and are lost here and cannot work out the conversion to current coding styles, please open an issue at the RcppArmadillo repository (i.e. here) or in your own repository and tag me. I will also reach out to the maintainers of a smaller set of packages with more than one reverse dependency.

A few small changes have been made to internal packaging and documentation, along with a small synchronization with upstream for two commits since the 15.0.2 release, as well as a link to the ldlasb2 repository and its demonstration regarding some ill-stated benchmarks done elsewhere.

The detailed changes since the last CRAN release follow.

Changes in RcppArmadillo version 15.0.2-2 (2025-09-18)

  • Minor update to skeleton Makevars, Makevars.win

  • Update README.md to mention ldlasb2 repository

  • Minor documentation update (#487)

  • Synchronized with Armadillo upstream (#488)

  • Refine Armadillo version selection in coordination with CRAN maintainers to support transition towards Armadillo 15.0.*

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

19 September, 2025 12:47PM

September 18, 2025

Gunnar Wolf

Still use Twitter/X? Consider dropping it...

Many people who were once enthusiastic Twitter users have dropped it as a direct or indirect effect of its ownership change and the policy changes that followed. Given that Twitter/X is becoming ever more irrelevant, it is less interesting and enticing for more and more people… But also, its current core users (mostly, hate-apologists of the right-wing mindset that finds conspiracy theories everywhere) are becoming more commonplace, and by sheer probability (if not by algorithmic bias), it becomes ever more likely that a given piece of content will be linked to what its authors would classify as crap.

So, there has been in effect an X exodus. This has been reported by media outlets as important as Reuters or The Guardian, by research institutes such as Berkeley, and even by media that, no matter how hard you push, cannot be identified as the radical left Mr. Trump is so happy to blame for everything, such as Forbes.

Today I read a short note in a magazine I very much enjoy, Communications of the ACM, where SIGDOC (the ACM’s Special Interest Group on Design of Communication) is officially closing their X account. The reasoning is crystal clear. Their mission is to create and study User Experience (UX) implementations and report on them, «focused on making communication clearer and more human centered». That is no longer, for many reasons, a goal that can be furthered by means of an X account.

(BTW… How many people are actually angry that Mr. Musk took the old X11 logo and made it his? I am sure it is now protected under too many layers of legalese, even though I have been aware of it for at least 30 years…)

18 September, 2025 07:57PM

John Goerzen

Running an Accurate 80×25 DOS-Style Console on Modern Linux Is Possible After All

Here, in classic Goerzen deep dive fashion, is more information than you knew you wanted about a topic you’ve probably never thought of. I found it pretty interesting, because it took me down a rabbit hole of subsystems I’ve never worked with much and a mishmash of 1980s and 2020s tech.

I had previously tried and failed to get an actual 80x25 Linux console, but I’ve since figured it out!

This post is about the Linux text console – not X or Wayland. We’re going to get the console right without using those systems. These instructions are for Debian trixie, but should be broadly applicable elsewhere also. The end result can look like this:

Photo of a color VGA monitor displaying a BBS login screen

(That’s a Wifi Retromodem that I got at VCFMW last year in the Hayes modem case)

What’s a pixel?

How would you define a “pixel” these days? Probably something like “a uniquely-addressable square dot in a two-dimensional grid”.

In the world of VGA and CRTs, that was just a logical abstraction. We got an API centered around that because it was convenient. But, down the VGA cable and on the device, that’s not what a pixel was.

A pixel, back then, was a time interval. On a multisync monitor, which were common except in the very early days of VGA, the timings could be adjusted which produced logical pixels of different sizes. Those screens often had a maximum resolution but not necessarily a “native resolution” in the sense that an LCD panel does. Different timings produced different-sized pixels with equal clarity (or, on cheaper monitors, equal fuzziness).

A side effect of this was that pixels need not be square. And, in fact, in the standard DOS VGA 80x25 text mode, they weren’t.

You might be seeing why DVI, DisplayPort, and HDMI replaced VGA for LCD monitors: with a VGA cable, you did a pixel-to-analog-timings conversion, then the display did a timings-to-pixels conversion, and this process could be a bit lossy. (Hence why you sometimes needed to fill the screen with an image and push the “center” button on those older LCD screens)

(Note to the pedantically-inclined: yes I am aware that I have simplified several things here; for instance, a color LCD pixel is made up of approximately 3 sub-dots of varying colors, and that things like color eInk displays have two pixel grids with different sizes of pixels layered atop each other, and printers are another confusing thing altogether, and and and…. MOST PEOPLE THINK OF A PIXEL AS A DOT THESE DAYS, OK?)

What was DOS text mode?

We think of this as the “standard” display: 80 columns wide and 25 rows tall. 80x25. By the time Linux came along, the standard Linux console was VGA text mode – something like the 4th incarnation of text modes on PCs (after CGA, MDA, and EGA). VGA also supported certain other sizes of characters giving certain other text dimensions, but if I cover all of those, this will explode into a ridiculously more massive page than it already is.

So to display text on an 80x25 DOS VGA system, ultimately characters and attributes were written into the text buffer in memory. The VGA system then rendered it to the display as a 720x400 image (at 70Hz) with non-square pixels such that the result was approximately a 4:3 aspect ratio.

The font used for this rendering was a bitmapped one using 8x16 cells. You might do some math here and point out that 8 * 80 is only 640, and you’d be correct. The fonts were 8x16 but the rendered cells were 9x16. The extra pixel was normally used for spacing between characters. However, in line graphics mode, characters 0xC0 through 0xDF repeated the 8th column in the position of the 9th, allowing the continuous line-drawing characters we’re used to from TUIs.

Problems rendering DOS fonts on modern systems

By now, you’re probably seeing some of the issues we have rendering DOS screens on more modern systems. These aren’t new at all; I remember some of these from back in the days when I ran OS/2, and I think also saw them on various terminals and consoles in OS/2 and Windows.

Some issues you’d encounter would be:

  • Incorrect aspect ratio caused by using the original font and rendering it using 1:1 square pixels (resulting in a squashed appearance)
  • Incorrect aspect ratio for ANOTHER reason, caused by failing to render column 9, resulting in text that is overall too narrow
  • Characters appearing to be touching each other when they shouldn’t (failing to render column 9; looking at you, dosbox)
  • Gaps between line drawing characters that should be continuous, caused by rendering column 9 as empty space in all cases

Character set issues

DOS was around long before Unicode was. In the DOS world, there were codepages that selected the glyphs for roughly the high half of the 256 possible characters. CP437 was the standard for the USA; others existed for other locations that needed different characters. On Unix, the USA pre-Unicode standard was Latin-1. Same concept, but with different character mappings.

Nowadays, just about everything is based on UTF-8. So, we need some way to map our CP437 glyphs into Unicode space. If we are displaying DOS-based content, we’ll also need a way to map CP437 characters to Unicode for display later, and we need these maps to match so that everything comes out right. Whew.

So, let’s get on with setting this up!

Selecting the proper video mode

As explained in my previous post, proper hardware support for DOS text mode is limited to x86 machines that do not use UEFI. Non-x86 machines, or x86 machines with UEFI, simply do not contain the necessary support for it. As these are now standard, most of the time, the text console you see on Linux is actually the kernel driving the video hardware in graphics mode, and doing the text rendering in software.

That’s all well and good, but it makes it quite difficult to actually get an 80x25 console.

First, we need to be running at 720x400. This is where I ran into difficulty last time. I realized that my laptop’s LCD didn’t advertise any video modes other than its own native resolution. However, almost all external monitors will, and 720x400@70 is a standard VGA mode from way back, so it should be well-supported.

You need to find the Linux device name for your device. You can look at the possible devices with ls -l /sys/class/drm. If you also have a GUI, xrandr may help too. But in any case, each directory under /sys/class/drm has a file named modes, and if you cat them all, you will eventually come across one with a bunch of modes defined. Drop the leading “card0” or whatever from the directory name, and that’s your device. (Verify that 720x400 is in modes while you’re at it.)

Now, you’re going to edit /etc/default/grub and add something like this to GRUB_CMDLINE_LINUX_DEFAULT:

video=DP-1:720x400@70

Of course, replace DP-1 with whatever your device is.

Now you can run update-grub and reboot. You should have a 720x400 display.

At first, I thought I had succeeded by using Linux’s built-in VGA font with that mode. But it looked too tall. After noticing that repeated 0s were touching, I got suspicious about the missing 9th column in the cells. stty -a showed that my screen was 90x25, which is exactly what it would show if I was using 8x16 instead of 9x16 cells. Sooo…. I need to prepare a 9x16 font.

Preparing a font

Here’s where it gets complicated.

I’ll give you the simple version and the hard mode.

The simple mode is this: Download https://www.complete.org/downloads/CP437-VGA.psf.gz and stick it in /usr/local/etc, then skip to the “Activating the font” section below.

The font assembled here is based on the Ultimate Oldschool PC Font Pack v2.2, which is (c) 2016-2020 VileR and licensed under Creative Commons Attribution-ShareAlike 4.0 International License. My psf file is derived from this using the instructions below.

Building it yourself

First, install some necessary software: apt-get install fontforge bdf2psf

Start by going to the Oldschool PC Font Pack Download page. Download oldschool_pc_font_pack_v2.2_FULL.zip and unpack it.

The file we’re interested in is otb - Bm (linux bitmap)/Bm437_IBM_VGA_9x16.otb. Open it in fontforge by running fontforge Bm437_IBM_VGA_9x16.otb. When it asks if you will load the bitmap fonts, hit select all, then yes. Go to File -> generate fonts. Save in a BDF, no need for outlines, and use “guess” for resolution.

Now you have a file such as Bm437_IBM_VGA_9x16-16.bdf. Excellent.

Now we need to generate a Unicode map file. We will make sure this matches the system’s by enumerating every character from 0x00 to 0xFF, converting it from CP437 to Unicode, and writing the appropriate map.

Here’s a Python script to do that:

for i in range(0, 256):
    # Interpret i as a single CP437 byte...
    cp437b = b'%c' % i
    # ...decode it to the corresponding Unicode character and take its code point.
    uni = ord(cp437b.decode('cp437'))
    # bdf2psf wants one "U+xxxx" entry per glyph position.
    print(f"U+{uni:04x}")

Save that file as genmap.py and run python3 genmap.py > cp437-uni.

Now, we’re ready to build the psf file:

bdf2psf --fb Bm437_IBM_VGA_9x16-16.bdf \
  /dev/null cp437-uni 256 CP437-VGA.psf

By convention, we normally store these files gzipped, so gzip CP437-VGA.psf.

You can test it on the console with setfont CP437-VGA.psf.gz.

Now copy this file into /usr/local/etc.

Activating the font

Now, edit /etc/default/console-setup. It should look like this:

# CONFIGURATION FILE FOR SETUPCON

# Consult the console-setup(5) manual page.

ACTIVE_CONSOLES="/dev/tty[1-6]"

CHARMAP="UTF-8"

CODESET="Lat15"
FONTFACE="VGA"
FONTSIZE="8x16"
FONT=/usr/local/etc/CP437-VGA.psf.gz

VIDEOMODE=

# The following is an example how to use a braille font
# FONT='lat9w-08.psf.gz brl-8x8.psf'

At this point, you should be able to reboot. You should have a proper 80x25 display! Log in and run stty -a to verify it is indeed 80x25.

Using and testing CP437

Part of the point of CP437 is to be able to access BBSs, ANSI art, and similar.

Now, remember, the Linux console is still in UTF-8 mode, so we have to translate CP437 to UTF-8, then let our font map translate it back to CP437. A weird trip, but it works.

Let’s test it using the Textfiles ANSI art collection. In the artworks section, I randomly grabbed a file near the top: borgman.ans. Download that, and display with:

clear; iconv -f CP437 -t UTF-8 < borgman.ans

You should see something similar to – but actually more accurate than – the textfiles PNG rendering of it, which you’ll note has an incorrect aspect ratio and some rendering issues. I spot-checked with a few others and they seemed to look good. belinda.ans in particular tries quite a few characters and should give you a good sense if it is working.

Use with interactive programs

That’s all well and good, but you’re probably going to want to actually use this with some interactive program that expects CP437. Maybe Minicom, Kermit, or even just telnet?

For this, you’ll want to apt-get install luit. luit maps CP437 (or any other encoding) to UTF-8 for display, and then of course the Linux console maps UTF-8 back to the CP437 font.

Here’s a way you can repeat the earlier experiment using luit to run the cat program:

clear; luit -encoding CP437 cat borgman.ans

You can run any command under luit. You can even run luit -encoding CP437 bash if you like. If you do this, it is probably a good idea to follow my instructions on generating locales in my post on serial terminals, and then within luit, set LANG=en_US.IBM437. But note especially that you can run programs like minicom and others for accessing BBSs under luit.

Final words

This gave you a nice DOS-type console. Although it doesn’t have glyphs for many codepoints, it does run in UTF-8 mode and therefore is compatible with modern software.

You can achieve greater compatibility with more UTF-8 codepoints with the DOS font, at the expense of accuracy of character rendering (especially for the double-line drawing characters) by using /usr/share/bdf2psf/standard.equivalents instead of /dev/null in the bdf2psf command.

Or you could go for another challenge, such as using the DEC vt-series fonts for coverage of ISO-8859-1. But just using fonts extracted from DEC ROM won’t work properly, because DEC terminals had even more strangeness going on than DOS fonts.

18 September, 2025 12:58PM by John Goerzen

September 17, 2025

Installing and Using Debian With My Decades-Old Genuine DEC vt510 Serial Terminal

Six years ago, I was inspired to buy a DEC serial terminal. Since then, my collection has grown to include several DEC models, an IBM 3151, a Wyse WY-55, a Televideo 990, and a few others.

When you are running a terminal program on Linux or MacOS, what you are really running is a terminal emulator. In almost all cases, the terminal emulator is emulating one of the DEC terminals in the vt100 through vt520 line, which themselves use a command set based on an ANSI standard.

In short, you spend all day using a program designed to pretend to be the exact kind of physical machine I’m using for this experiment!

I have long used my terminals connected to a Raspberry Pi 4, but due to the difficulty of entering a root filesystem encryption password using a serial console on a Raspberry Pi, I am switching to an x86 Mini PC (with a N100 CPU).

While I have used a terminal with the Pi, I’ve never before used it as a serial console all the way from early boot, and I have never installed Debian using the terminal to run the installer. A serial terminal gives you a login prompt. A serial console gives you access to kernel messages, the initrd environment, and sometimes even the bootloader.

This might be fun, I thought.

I selected one of my vt510 terminals for this. It is one of my newer ones, having been built in 1993. But it has a key feature: I can remap Ctrl to be at the caps lock position, something I do on every other system I use anyhow. I could have easily selected an older one from the 1980s.

A DEC vt510 terminal showing the Debian installer

Kernel configuration

To enable a serial console for Linux, you need to pass a parameter on the kernel command line. See the kernel documentation for more. I very frequently see instructions that are incomplete; they particularly omit flow control, which is most definitely needed for these real serial terminals.

I run my terminal at 57600 bps, so the parameter I need is console=ttyS0,57600n8r. The “r” means to use hardware flow control (ttyS0 corresponds to the first serial port on the system; use ttyS1 or something else as appropriate for your situation). While booting the Debian installer, according to Debian’s instructions, it may be useful to also add TERM=vt102 (the installer doesn’t support the vt510 terminal type directly). The TERM parameter should not be specified on a running system after installation.

Booting the Debian installer

When you start the Debian installer, to get it into serial mode, you have a couple of options:

  1. You can use a traditional display and keyboard just long enough to input the kernel parameters described above
  2. You can edit the bootloader configuration on the installer’s filesystem prior to booting from it

Option 1 is pretty easy. Option 2 is hard mode, but not that bad.

On x86, the Debian installer boots in at least two different ways: it uses GRUB if you’re booting under UEFI (which is most systems these days), or ISOLINUX if you are booting from the BIOS.

If using GRUB, the file to edit on the installer image is boot/grub/grub.cfg.

Near the top, add these lines:

serial --unit=0 --speed=57600 --word=8 --parity=no --stop=1
terminal_input console serial
terminal_output console serial

Unit 0 corresponds to ttyS0 as above.

GRUB’s serial command does not support flow control. If your terminal gets corrupted during the GRUB stage, you may need to configure it to a slower speed.

Then, find the “linux” line under the “Install” menuentry. Edit it to insert console=ttyS0,57600n8r TERM=vt102 right after the vga=788.

Save, unmount, and boot. You should see the GRUB screen displayed on your serial terminal. Select the Install option and the installer begins.

If you are using BIOS boot, I’m sure you can do something similar with the files in the isolinux directory, but haven’t researched it.

Now, you can install Debian like usual!

Configuring the System

I was pleasantly surprised to find that Debian’s installer took care of many, but not all, of the things I want to do in order to make the system work nicely with a serial terminal. You can perform these steps from a chroot under the installer environment before a reboot, or later in the running system.

First, while Debian does set up a getty (the program that displays the login prompt) on the serial console by default, it doesn’t enable hardware flow control. So let’s do that.

Configuring the System: agetty with systemd

Run systemctl edit serial-getty@ttyS0.service. This opens an editor that lets you customize the systemd configuration for a given service without having to edit the file directly. All you really need to do is modify the agetty command, so we just override it. At the top, in the designated area, write:

[Service]
ExecStart=
ExecStart=-/sbin/agetty --wait-cr -8 -h -L=always %I 57600 vt510

The empty ExecStart= line is necessary to tell systemd to remove the existing ExecStart command (otherwise, it will logically contain two ExecStart lines, which is an error).

These arguments say:

  • --wait-cr means to wait for the user to press Return at the terminal before attempting to display the login prompt
  • -8 tells it to assume 8-bit mode on the serial line
  • -h enables hardware flow control
  • -L=always enables local line mode, disabling monitoring of modem control lines
  • %I substitutes the name of the port from systemd
  • 57600 gives the desired speed, and vt510 gives the desired setting for the TERM environment variable

The systemd documentation refers to this page about serial consoles, which gives more background. However, I think it is better to use the systemctl edit method described here, rather than just copying the config file, since this lets things like new configurations with new Debian versions take effect.

Configuring the System: Kernel and GRUB

Your next stop is the /etc/default/grub file. Debian’s installer automatically makes some changes here. There are three lines you want to change. First, near the top, edit GRUB_CMDLINE_LINUX_DEFAULT and add console=tty0 console=ttyS0,57600n8r. By specifying console twice, you allow output to go both to the standard display and to the serial console. By specifying the serial console last, you make it be the preferred one for things like entering the root filesystem password.

Next, towards the bottom, make sure these two lines look like this:

GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=57600 --word=8 --parity=no --stop=1"

Finally, near the top, you may want to raise the GRUB_TIMEOUT to somewhere around 10 to 20 seconds since things may be a bit slower than you’re used to.

Save the file and run update-grub.

Now, GRUB will display on both your standard display and the serial console. You can edit the boot command from either. If you have a VGA or HDMI monitor attached, for instance, and need to not use the serial console, you can just edit the Linux command line in GRUB and remove the reference to ttyS0 for one boot. Easy!

That’s it. You now have a system that is fully operational from a serial terminal.

My original article from 2019 has some additional hints, including on how to convert from UTF-8 for these terminals.

Update 2025-09-17: It is also useful to set up proper locales. To do this, first edit /etc/locale.gen. Make sure to add, or uncomment:

en_US ISO-8859-1
en_US.IBM437 IBM437
en_US.UTF-8 UTF-8 

Then run locale-gen. Normally, your LANG will be set to en_us.UTF-8, which will select the appropriate encoding. Plain en_US will select ISO-8859-1, which you need for the vt510. Then, add something like this to your ~/.bashrc:

if [ `tty` = "/dev/ttyS0" -o "$TERM" = "vt510" ]; then
        stty -iutf8
        # might add ixon ixoff
        export LANG=en_US
        export MANOPT="-E ascii"
        stty rows 25
fi

if [ "$TERM" = "screen" -o "$TERM" = "vt100" ]; then
    export LANG=en_US.utf8
fi

Finally, in my ~/.screenrc, I have this. It lets screen convert between UTF-8 and ISO-8859-1:

defencoding UTF-8
startup_message off
vbell off
termcapinfo * XC=B%,‐-,
maptimeout 5
bindkey -k ku stuff ^[OA
bindkey -k kd stuff ^[OB
bindkey -k kr stuff ^[OC
bindkey -k kl stuff ^[OD

17 September, 2025 12:49PM by John Goerzen

September 16, 2025

Raju Devidas

Building Debian 13 Trixie Vagrant Image

I sometimes use Vagrant to deploy my VMs, and recently when I tried to deploy one for Trixie, I could not see one available. So I checked the official Debian images on Vagrant cloud at https://portal.cloud.hashicorp.com/vagrant/discover/debian and could not find an image for trixie on Vagrant cloud.

Also looked at other cloud image sources like Docker Hub, and I could see an image there for Trixie. So I looked into how I can generate a Vagrant image locally for Debian to use.

Searched on Salsa and stumbled upon https://salsa.debian.org/cloud-team/debian-vagrant-images

Cloned the repo from salsa

$ git clone https://salsa.debian.org/cloud-team/debian-vagrant-images.git

Install the build dependencies

$ make install-build-deps

This will install some dependency packages and will ask for the sudo password if it needs to install something that is not already installed.

Let's call make help

$ make help
To run this makefile, run:
   make <DIST>-<CLOUD>-<ARCH>
  WHERE <DIST> is bullseye, buster, stretch, sid or testing
    And <CLOUD> is azure, ec2, gce, generic, genericcloud, nocloud, vagrant, vagrantcontrib
    And <ARCH> is amd64, arm64, ppc64el
Set DESTDIR= to write images to given directory.

$ make trixie-vagrant-amd64
umask 022; \
./bin/debian-cloud-images build \
  trixie vagrant amd64 \
  --build-id vagrant-cloud-images-master \
  --build-type official
usage: debian-cloud-images build
debian-cloud-images build: error: argument RELEASE: invalid value: trixie
make: *** [Makefile:22: trixie-vagrant-amd64] Error 2

As you can see, trixie is not even in the available options, and the build fails as well. Before trying to update the codebase myself, I looked at the pending MRs on Salsa and found Michael Ablassmeier's pending merge request at https://salsa.debian.org/cloud-team/debian-vagrant-images/-/merge_requests/18

So let me test that commit and see if I can build trixie locally from Michael's MR.

$ git clone https://salsa.debian.org/debian/debian-vagrant-images.git
Cloning into 'debian-vagrant-images'...
remote: Enumerating objects: 5310, done.
remote: Counting objects: 100% (256/256), done.
remote: Compressing objects: 100% (96/96), done.
remote: Total 5310 (delta 141), reused 241 (delta 135), pack-reused 5054 (from 1)
Receiving objects: 100% (5310/5310), 629.81 KiB | 548.00 KiB/s, done.
Resolving deltas: 100% (2875/2875), done.

$ cd debian-vagrant-images/

$ git checkout 8975eb0 #the commit id of MR 

Now let's see if we can build trixie.

$ make help
To run this makefile, run:
   make <DIST>-<CLOUD>-<ARCH>
  WHERE <DIST> is bullseye, buster, stretch, sid or testing
    And <CLOUD> is azure, ec2, gce, generic, genericcloud, nocloud, vagrant, vagrantcontrib
    And <ARCH> is amd64, arm64, ppc64el
Set DESTDIR= to write images to given directory.



$ make trixie-vagrant-amd64
umask 022; \
./bin/debian-cloud-images build \
  trixie vagrant amd64 \
  --build-id vagrant-cloud-images-master \
  --build-type official
2025-09-17 00:36:25,919 INFO Adding class DEBIAN
2025-09-17 00:36:25,919 INFO Adding class CLOUD
2025-09-17 00:36:25,919 INFO Adding class TRIXIE
2025-09-17 00:36:25,920 INFO Adding class VAGRANT
2025-09-17 00:36:25,920 INFO Adding class AMD64
2025-09-17 00:36:25,920 INFO Adding class LINUX_IMAGE_BASE
2025-09-17 00:36:25,920 INFO Adding class GRUB_PC
2025-09-17 00:36:25,920 INFO Adding class LAST
2025-09-17 00:36:25,921 INFO Running FAI: sudo env PYTHONPATH=/home/rajudev/dev/salsa/michael/debian-vagrant-images/src/debian_cloud_images/build/../.. CLOUD_BUILD_DATA=/home/rajudev/dev/salsa/michael/debian-vagrant-images/src/debian_cloud_images/data CLOUD_BUILD_INFO={"type": "official", "release": "trixie", "release_id": "13", "release_baseid": "13", "vendor": "vagrant", "arch": "amd64", "build_id": "vagrant-cloud-images-master", "version": "20250917-1"} CLOUD_BUILD_NAME=debian-trixie-vagrant-amd64-official-20250917-1 CLOUD_BUILD_OUTPUT_DIR=/home/rajudev/dev/salsa/michael/debian-vagrant-images CLOUD_RELEASE_ID=vagrant CLOUD_RELEASE_VERSION=20250917-1 fai-diskimage --verbose --hostname debian --class DEBIAN,CLOUD,TRIXIE,VAGRANT,AMD64,LINUX_IMAGE_BASE,GRUB_PC,LAST --size 100G --cspace /home/rajudev/dev/salsa/michael/debian-vagrant-images/src/debian_cloud_images/build/fai_config debian-trixie-vagrant-amd64-official-20250917-1.raw

..... continued

Although we can now build the images, we just don't see an option for it in the help text, not even for bookworm. The text in the Makefile is simply outdated, but I can build a trixie Vagrant box now. Thanks to Michael for the fix.

16 September, 2025 09:14PM by Raju Vindane

John Goerzen

I just want an 80×25 console, but that’s no longer possible

Update 2025-09-18: I figured out how to do this, at least for many non-laptop screens. This post still contains a lot of good background detail, however.

Somehow along the way, a feature that I’ve had across DOS, OS/2, FreeBSD, and Linux — and has been present on PCs for more than 40 years — is gone.

That feature, of course, is the 80×25 text console.

Linux has, for a while now, rendered its text console using graphics modes. You can read all about it here. This has been necessary because only PCs really had the 80×25 text mode (Raspberry Pis, for instance, never did), and even they don’t have it when booted with UEFI.

I’ve lately been annoyed that:

  • The console is a different size on every screen — both in terms of size of letters and the dimensions of it
  • If a given machine has more than one display, one or both of them will have parts of the console chopped off
  • My system seems to run with three different resolutions or fonts at different points of the boot process. One during the initrd, and two different ones during the remaining boot.

And, I wanted to run some software on the console that was designed with 80×25 in mind. And I’d like to be able to plug in an old VGA monitor and have it just work if I want to do that.

That shouldn’t be so hard, right? Well, the old vga= option that you are used to doesn’t work when you booted from UEFI or on non-x86 platforms. Most of the tricks you see online for changing resolutions, etc., are no longer relevant. And things like setting a resolution with GRUB are useless for systems that don’t use GRUB (including ARM).

VGA text mode uses 8×16 glyphs in 9×16 cells, where the pixels are non-square, giving a native resolution of 720×400 (which historically ran at 70Hz), which should have stretched pixels to make a 4:3 image.

While it is possible to select a console font, and 8×16 fonts are present and supported in Linux, it appears to be impossible to have a standard way to set 720×400 so that they present in a reasonable size, at the correct aspect ratio, with 80×25.

Tricks like nomodeset no longer work on UEFI or ARM systems. It’s possible that kmscon or something like it may help, but I’m not even certain of that (video=eDP1:720x400 produced an error saying that 720×400 wasn’t a supported mode, so I’m unsure kmscon would be any better.) Not that it matters; all the kmscon options to select a font or zoom are broken, and it doesn’t offer mode selection anyhow.

I think I’m going to have to track down an old machine.

Sigh.

16 September, 2025 01:53AM by John Goerzen

September 15, 2025

Sven Hoexter

HaProxy: Configuring SNI for a TLS Proxy

If you use HaProxy to e.g. terminate TLS on the frontend and connect via TLS to a backend, you have to take care of sending the SNI (server name indication) extension in the TLS handshake sort of manually.

Even if you use host names to address the backend server, e.g.

server foobar foobar.example:2342 ssl verify required ca-file /etc/haproxy/ca/foo.crt

HaProxy will try to establish the connection without SNI. You manually have to enforce SNI here, e.g.

server foobar foobar.example:2342 ssl verify required ca-file /etc/haproxy/ca/foo.crt sni str(foobar.example)

The surprising thing here was that it requires an expression, so you cannot just write sni foobar.example; you have to wrap it in an expression. The simplest one is making sure it's a string.

Update: It might be noteworthy that you have to configure SNI for the health check separately, and in that case it is a string, not an expression. E.g.

server foobar foobar.example:2342 check check-ssl check-sni foobar.example ssl verify required ca-file /etc/haproxy/ca/foo.crt sni str(foobar.example)

The ca-file is shared between the ssl context and the check-ssl.

15 September, 2025 12:44PM

Google Cloud: When the Load Balancer Frontend Hands you an F

If someone hands you an IP:Port of a Google Cloud load balancer, and tells you to connect there with TLS, but all you receive in return is an F (and a few other bytes of non-printable characters) on running openssl s_client -connect ..., you might be missing SNI (server name indication). Sadly the other side was not transparent enough to explain in detail which exact type of Google Cloud load balancer they used, but the conversation got more detailed, and ended in a working TLS connection, once the missing -servername foobar.host.name was added. I could not find any sort of official documentation on the responses of the GFE (the frontend part) when TLS parameters do not match the expectations. Also you won't have anything in the logs, because logging at Google Cloud is a backend function, and as long as your requests do not reach the backend, there are no logs. That makes it rather unpleasant to debug such cases, when one end says “I do not see anything in the logs”, and the other one says “you reject my connection and just reply F”.

15 September, 2025 12:32PM

September 14, 2025

Ian Jackson

tag2upload in the first month of forky

tl;dr: tag2upload (beta) is going well so far, and is already handling around one in 13 uploads to Debian.

Introduction and some stats

We announced tag2upload’s open beta in mid-July. That was in the middle of the freeze for trixie, so usage was fairly light until the forky floodgates opened.

Since then the service has successfully performed 637 uploads, of which 420 were in the last 32 days. That’s an average of about 13 per day. For comparison, during the first half of September up to today there have been 2475 uploads to unstable. That’s about 176/day.

So, tag2upload is already handling around 7.5% of uploads. This is very gratifying for a service which is advertised as still being in beta!

Sean and I are very pleased both with the uptake, and with the way the system has been performing.

Recent UI/UX improvements

During this open beta period we have been hard at work. We have made many improvements to the user experience.

Current git-debpush in forky, or trixie-backports, is much better at detecting various problems ahead of time.

When uploads do fail on the service the emailed error reports are now more informative. For example, anomalies involving orig tarballs, which by definition can’t be detected locally (since one point of tag2upload is not to have tarballs locally) now generally result in failure reports containing a diffstat, and instructions for a local repro.

Why we are still in beta

There are a few outstanding work items that we currently want to complete before we declare the end of the beta.

Retrying on Salsa-side failures

The biggest of these is that the service should be able to retry when Salsa fails. Sadly, Salsa isn’t wholly reliable, and right now if it breaks when the service is trying to handle your tag, your upload can fail.

We think most of these failures could be avoided. Implementing retries is a fairly substantial task, but doesn’t pose any fundamental difficulties. We’re working on this right now.

Other notable ongoing work

We want to support pristine-tar, so that pristine-tar users can do a new upstream release. Andrea Pappacoda is working on that with us. See #1106071. (Note that we would generally recommend against use of pristine-tar within Debian. But we want to support it.)

We have been having conversations with Debusine folks about what integration between tag2upload and Debusine would look like. We’re making some progress there, but a lot is still up in the air.

We are considering how best to provide tag2upload pre-checks as part of Salsa CI. There are several problems detected by the tag2upload service that could be detected by Salsa CI too, but which can’t be detected by git-debpush.

Common problems

We’ve been monitoring the service and until very recently we have investigated every service-side failure, to understand the root causes. This has given us insight into the kinds of things our users want, and the kinds of packaging and git practices that are common. We’ve been able to improve the system’s handling of various anomalies and have also improved the documentation.

Right now our failure rate is still rather high, at around 7%. Partly this is because people are trying out the system on packages that haven’t ever seen git tooling with such a level of rigour.

There are two classes of problem that are responsible for the vast majority of the failures that we’re still seeing:

Reuse of version numbers, and attempts to re-tag

tag2upload, like git (and like dgit), hates it when you reuse a version number, or try to pretend that a (perhaps busted) release never happened.

git tags aren’t namespaced, and tend to spread about promiscuously. So replacing a signed git tag, with a different tag of the same name, is a bad idea. More generally, reusing the same version number for a different (signed!) package is poor practice. Likewise, it’s usually a bad idea to remove changelog entries for versions which were actually released, just because they were later deemed improper.

We understand that many Debian contributors have gotten used to this kind of thing. Indeed, tools like dcut encourage it. It does allow you to make things neat-looking, even if you’ve made mistakes - but really it does so by covering up those mistakes!

The bottom line is that tag2upload can’t support such history-rewriting. If you discover a mistake after you’ve signed the tag, please just burn the version number and add a new changelog stanza.

One bonus of tag2upload’s approach is that it will discover if you are accidentally overwriting an NMU, and report that as an error.

Discrepancies between git and orig tarballs

tag2upload promises that the source package that it generates corresponds precisely to the git tree you tag and sign.

Orig tarballs make this complicated. They aren’t present on your laptop when you git-debpush. When you’re not uploading a new upstream version, the tag2upload service reuses existing orig tarballs from the archive. If your git and the archive’s orig don’t agree, the tag2upload service will report an error, rather than upload a package with contents that differ from your git tag.

With the most common Debian workflows, everything is fine:

If you base everything on upstream git, and make your orig tarballs with git archive (or git deborig), your orig tarballs are the same as the git, by construction. We recommend usually ignoring upstream tarballs: most upstreams work in git, and their tarballs can contain weirdness that we don’t want. (At worst, the tarball can contain an attack that isn’t visible in git, as with xz!)

Alternatively, if you use gbp import-orig, the differences (including an attack like Jia Tan’s) are imported into git for you. Then, once again, your git and the orig tarball will correspond.
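As an aside, generating the orig tarball straight from the upstream git tag, as in the first workflow above, can be sketched like this (package name, version and tag name are purely illustrative):

# from the Debian packaging branch, with the upstream tag available locally:
git deborig     # derives the version from debian/changelog and builds ../<source>_<version>.orig.tar.* from the matching upstream tag

# or by hand with git archive:
git archive --prefix=foo-1.2.3/ -o ../foo_1.2.3.orig.tar.gz upstream/1.2.3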

But there are other workflows where this correspondence may not hold. Those workflows are hazardous, because the thing you’re probably working with locally for your routine development is the git view. Then, when you upload, your work is transplanted onto the orig tarball, which might be quite different - so what you upload isn’t what you’ve been working on!

This situation is detected by tag2upload, precisely because tag2upload checks that it’s keeping its promise: the source package is identical to the git view. (dgit push makes the same promise.)

Get involved

Of course the easiest way to get involved is to start using tag2upload.
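In the simplest case an upload then looks something like the following sketch (it assumes a repository already set up for tag2upload; see the git-debpush(1) manpage for the options your branch layout may need):

dch -r                                # finalise the changelog entry
git commit -a -m "Finalise 1.2-3"     # commit message and version are illustrative
git debpush                           # create and sign the tag, and push it for the service to process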

We would love to have more contributors. There are some easy tasks to get started with, in bugs we’ve tagged “newcomer” — mostly UX improvements such as detecting certain problems earlier, in git-debpush.

More substantially, we are looking for help with sbuild: we’d like it to be able to work directly from git, rather than needing to build source packages: #868527.



comment count unavailable comments

14 September, 2025 03:36PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSimdJson 0.1.14 on CRAN: New Upstream Major

A brand new release 0.1.14 of the RcppSimdJson package is now on CRAN.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.

This version includes the new major upstream release 4.0.0 with major new features including a ‘builder’ for creating JSON from C++-side objects. This is something a little orthogonal to the standard R usage of the package to parse and load JSON data but could still be of interest to some.

The short NEWS entry for this release follows.

Changes in version 0.1.14 (2025-09-13)

  • simdjson was upgraded to version 4.0.0 (Dirk in #96)

  • Continuous integration now relies on a token for codecov.io

Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

14 September, 2025 02:52PM

hackergotchi for Otto Kekäläinen

Otto Kekäläinen

Zero-configuration TLS and password management best practices in MariaDB 11.8

Featured image of post Zero-configuration TLS and password management best practices in MariaDB 11.8

Locking down database access is probably the single most important thing a system administrator or software developer can do to prevent their application from leaking its data. As MariaDB 11.8 is the first long-term supported version with a few key new security features, let’s recap the most important things every DBA should know about MariaDB in 2025.

Back in the old days, MySQL administrators had a habit of running the clumsy mysql_secure_installation script, but it has long been obsolete. A modern MariaDB database server is already secure by default and locked down out of the box, and no such extra scripts are needed. On the contrary, the database administrator is expected to open up access to MariaDB according to the specific needs of each server. Therefore, it is important that the DBA understands and correctly configures three things:

  1. Creating separate application-specific users with granular permissions, allowing only necessary access and no more.
  2. Distributing and storing passwords and credentials securely.
  3. Ensuring all remote connections are properly encrypted.

For holistic security, one should also consider proper auditing, logging, backups, regular security updates and more, but in this post we will focus only on the above aspects related to securing database access.

How encrypting database connections with TLS differs from web server HTTP(S)

Even though MariaDB (and other databases) use the same SSL/TLS protocol for encrypting remote connections as web servers and HTTPS, the way it is implemented is significantly different, and the different security assumptions are important for a database administrator to grasp.

Firstly, most HTTP requests to a web server are unauthenticated, meaning the web server serves public web pages and does not require users to log in. Traditionally, when a user logged in over a plain HTTP connection, the username and password were transmitted in plaintext in an HTTP POST request. Modern TLS, which was previously called SSL, does not change how HTTP works but simply encapsulates it. When using HTTPS, a web browser and a web server start an encrypted TLS connection as the very first thing, and only once it is established do they send HTTP requests and responses inside it. There are no passwords or other shared secrets needed to form the TLS connection. Instead, the web server relies on a trusted third party, a Certificate Authority (CA), to vouch that the TLS certificate offered by the web server can be trusted by the web browser.

For a database server like MariaDB, the situation is quite different. All users need to authenticate and log in to the server before being allowed to run any SQL or get any data out of the server. The database server and client programs have built-in authentication methods, and passwords are not, and have never been, sent in plaintext. Over the years, MySQL and its successor MariaDB have had multiple password authentication methods: the original SHA-1-based hashing, the later double-SHA-1-based mysql_native_password, followed by sha256_password and caching_sha256_password in MySQL and ed25519 in MariaDB. The MariaDB.org blog post by Sergei Golubchik recaps the history of these well.

Even though most modern MariaDB installations should be using TLS to encrypt all remote connections in 2025, having the authentication method be as secure as possible still matters, because authentication is done before the TLS connection is fully established.

To further harden the authentication against man-in-the-middle attacks, a new password authentication method, PARSEC, was introduced in MariaDB 11.8. It builds upon the previous ed25519 public-key-based verification (similar to how modern SSH works) and combines it with PBKDF2 key derivation using SHA-512/SHA-256 hash functions and a high iteration count.

At first it may seem like a disadvantage to not wrap all connections in a TLS tunnel like HTTPS does, but having the authentication done in a MitM-resistant way regardless of the connection encryption status actually enables a clever extra capability that is now available in MariaDB: as the database server and client already share a secret that the server uses to authenticate the user, the client can also use it to validate the server’s TLS certificate, and no third parties like CAs or root certificates are needed. MariaDB 11.8 was the first LTS version to ship with this capability for zero-configuration TLS.

Note that the zero-configuration TLS also works on older password authentication methods and does not require users to have PARSEC enabled. As PARSEC is not yet the default authentication method in MariaDB, it is recommended to enable it in installations that use zero-configuration TLS encryption to maximize the security of the TLS certificate validation.

Why the ‘root’ user in MariaDB has no password and how it makes the database more secure

Relying on passwords for security is problematic, as there is always a risk that they could leak, and a malicious user could access the system using the leaked password. It is unfortunately far too common for database passwords to be stored in plaintext in configuration files that are accidentally committed into version control and published on GitHub and similar platforms. Every application or administrative password that exists should be tracked to ensure only people who need it know it, and rotated at regular intervals to ensure that former employees and the like can’t use old passwords. This password management is complex and error-prone.

Replacing passwords with other authentication methods is advisable whenever possible. On a database server, whoever installed the database by running e.g. apt install mariadb-server, and configured it with e.g. nano /etc/mysql/mariadb.cnf, already has full root access to the operating system, so asking them for a password to access the MariaDB database shell is moot: they could circumvent any checks by directly accessing the files on the system anyway. Therefore, since version 10.4, MariaDB no longer requires the root user to enter a password when connecting locally, and instead checks via socket authentication whether the user is the operating-system root user or equivalent (e.g. running sudo). This is an elegant way to get rid of a password that was unnecessary to begin with. As there is no root password anymore, the risk of an external user accessing the database as root with a leaked password is fully eliminated.
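A quick way to see socket authentication in action is the following; no password is prompted for, because the check is done against the operating-system user:

shell
# connects as the database root user via the unix socket, no password needed
sudo mariadb -e "SELECT CURRENT_USER();"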

Note that socket authentication only works for local connections on the same server. If you want to access a MariaDB server remotely as the root user, you would need to configure a password for it first. This is not generally recommended, as explained in the next section.

Create separate database users for normal use and keep ‘root’ for administrative use only

Out of the box a MariaDB installation is already secure by default, and only the local root user can connect to it. This account is intended for administrative use only, and for regular daily use you should create separate database users with access limited to the databases they need and the permissions required.

The most typical commands needed to create a new database for an app and a user the app can use to connect to the database would be the following:

sql
CREATE DATABASE app_db;
CREATE USER 'app_user'@'%' IDENTIFIED BY 'your_secure_password';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'%';
FLUSH PRIVILEGES;

Alternatively, if you want to use the parsec authentication method, run this to create the user:

sql
CREATE OR REPLACE USER 'app_user'@'%'
 IDENTIFIED VIA parsec
 USING PASSWORD('your_secure_password');

Note that the plugin auth_parsec is not enabled by default. If you see the error message ERROR 1524 (HY000): Plugin 'parsec' is not loaded, fix this by running INSTALL SONAME 'auth_parsec';.

In the CREATE USER statements, the @'%' means that the user is allowed to connect from any host. This needs to be defined, as MariaDB always checks permissions based on both the username and the remote IP address or hostname of the user, combined with the authentication method. Note that it is possible to have multiple user@remote combinations, and they can have different authentication methods. A user could, for example, be allowed to log in locally using the socket authentication and over the network using a password.
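As a sketch of such a combination, a hypothetical report_user (name and password are placeholders) could use socket authentication locally and a password from remote hosts:

shell
# local logins for report_user authenticate via the unix socket
sudo mariadb -e "CREATE USER 'report_user'@'localhost' IDENTIFIED VIA unix_socket;"
# remote logins from any host must present a password
sudo mariadb -e "CREATE USER 'report_user'@'%' IDENTIFIED BY 'another_secure_password';"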

If you are running a custom application and you know exactly what permissions are sufficient for the database users, replace the ALL PRIVILEGES with a subset of privileges listed in the MariaDB documentation.

For new permissions to take effect, restart the database or run FLUSH PRIVILEGES.

Allow MariaDB to accept remote connections and enforce TLS

Using the above 'app_user'@'%' is not enough on its own to allow remote connections to MariaDB. The MariaDB server also needs to be configured to listen on a network interface to accept remote connections. As MariaDB is secure by default, it only accepts connections from localhost until the administrator updates its configuration. On a typical Debian/Ubuntu system, the recommended way is to drop a new custom config in e.g. /etc/mysql/mariadb.conf.d/99-server-customizations.cnf, with the contents:

[mariadbd]
# Listen for connections from anywhere
bind-address = 0.0.0.0
# Only allow TLS encrypted connections
require-secure-transport = on

For settings to take effect, restart the server with systemctl restart mariadb. After this, the server will accept connections on any network interface. If the system is using a firewall, the port 3306 would additionally need to be allow-listed.
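For example, on a host that happens to use ufw as its firewall (adjust to whatever firewall the system actually runs), allow-listing the port would be:

shell
# open the default MariaDB port for incoming TCP connections
sudo ufw allow 3306/tcp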

To confirm that the settings took effect, run e.g. mariadb -e "SHOW VARIABLES LIKE 'bind_address';", which should now show 0.0.0.0.

When allowing remote connections, it is important to also always define require-secure-transport = on to enforce that only TLS-encrypted connections are allowed. If the server is running MariaDB 11.8 and the clients are also MariaDB 11.8 or newer, no additional configuration is needed thanks to MariaDB automatically providing TLS certificates and appropriate certificate validation in recent versions.

On older long-term-supported versions of the MariaDB server one would have had to manually create the certificates, configure the ssl_key, ssl_cert and ssl_ca values on the server, and distribute the certificate to the clients as well, which was cumbersome; it’s good that this is no longer required. In MariaDB 11.8 the only additional related config that might still be worth setting is tls_version = TLSv1.3 to ensure only the latest TLS protocol version is used.

Finally, test connections to ensure they work and to confirm that TLS is used by running e.g.:

shell
mariadb --user=app_user --password=your_secure_password \
 --host=192.168.1.66 -e '\s'

The result should show something along the lines of:

--------------
mariadb from 11.8.3-MariaDB, client 15.2 for debian-linux-gnu (x86_64)
...
Current user: app_user@192.168.1.66
SSL: Cipher in use is TLS_AES_256_GCM_SHA384, cert is OK
...

If running a Debian/Ubuntu system, see the bundled README with zcat /usr/share/doc/mariadb-server/README.Debian.gz to read more configuration tips.

Should TLS encryption be used also on internal networks?

If a database server and app are running on the same private network, the chances that the connection gets eavesdropped on or man-in-the-middle attacked by a malicious user are low. However, the risk is not zero, and if an attack happens, it can be difficult to detect it or to prove afterwards that it didn’t happen. The benefit of using end-to-end encryption is that both the database server and the client can validate the certificates and keys used, log it, and later have the logs audited to prove that connections were indeed encrypted and show how they were encrypted.

If all the computers on an internal network already have centralized user account management and centralized log collection that includes all database sessions, reusing existing SSH connections, SOCKS proxies, dedicated HTTPS tunnels, point-to-point VPNs, or similar solutions might also be a practical option. Note that the zero-configuration TLS only works with password validation methods. This means that systems configured to use PAM or Kerberos/GSSAPI can’t use it, but again those systems are typically part of a centrally configured network anyway and are likely to have certificate authorities and key distribution or network encryption facilities already set up.

In a typical software app stack, however, the simplest solution is often the best, and I recommend DBAs use the end-to-end TLS encryption in MariaDB 11.8 in most cases.

Hopefully with these tips you can enjoy having your MariaDB deployments both simpler and more secure than before!

14 September, 2025 12:00AM

September 12, 2025

hackergotchi for Christoph Berg

Christoph Berg

The Cost of TDE and Checksums in PGEE

It's been a while since the last performance check of Transparent Data Encryption (TDE) in Cybertec's PGEE distribution - that was in 2016. Of course, the question is still interesting, so I did some benchmarks.

Since the difference is really small between running without any extras, with data checksums turned on, and with both encryption and checksums turned on, we need to pick a configuration that will stress-test these features the most. So in the spirit of making PostgreSQL deliberately run slow, I went with only 1MB of shared_buffers with a pgbench workload of scale factor 50. The 770MB of database size will easily fit into RAM. However, having such a small buffer cache setting will cause a lot of cache misses with pages re-read from the OS disk cache, checksums checked, and the page decrypted again. To further increase the effect, I ran pgbench --skip-some-updates so the smaller, in-cache-anyway pgbench tables are not touched. Overall, this yields a pretty consistent buffer cache hit rate of only 82.8%.
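For reference, the benchmark runs described above boil down to something like this; the database name is arbitrary, and the only pgbench options taken from the text are the scale factor, client count, duration and --skip-some-updates:

createdb pgee_bench
pgbench -i -s 50 pgee_bench                        # initialize at scale factor 50 (~770MB)
pgbench -c 3 -T 60 --skip-some-updates pgee_bench  # 3 clients, 1 minute, skip the small-table updates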

Here are the PGEE 17.6 tps (transactions per second) numbers averaged over a few 1-minute 3-client pgbench runs for different combinations of data checksums on/off, TDE off, and the various supported key bit lengths:

              no checksums               data checksums
no TDE        2455.6 tps   100.00 %      2449.7 tps    99.76 %
128 bits      2440.9 tps    99.40 %      2443.3 tps    99.50 %
192 bits      2439.6 tps    99.35 %      2446.1 tps    99.61 %
256 bits      2450.3 tps    99.78 %      2443.1 tps    99.49 %

There is a lot of noise in the individual runtimes before averaging, so the numbers must be viewed with some care (192-bit TDE is certainly not faster with checksums than without), but if we dare to interpret these tiny differences, we can conclude the following:

  • The cost of enabling data checksums on this bad-cache-ratio workload is about 0.25 %.
  • The cost of enabling both TDE encryption and data checksums on this workload is about 0.5%.

Any workload with a better shared_buffers cache hit rate would see a lower penalty of enabling checksums and TDE than that.

 

The post The Cost of TDE and Checksums in PGEE appeared first on CYBERTEC PostgreSQL | Services & Support.

12 September, 2025 06:00AM by Christoph Berg

hackergotchi for Freexian Collaborators

Freexian Collaborators

Using JavaScript in Debusine without depending on JavaScript (by Enrico Zini)

Debusine is a tool designed for Debian developers and Operating System developers in general. This post describes our approach to the use of JavaScript, and some practical designs we came up with to integrate it with Django with minimal effort.

Debusine web UI and JavaScript

Debusine currently has 3 user interfaces: a client on the command line, a RESTful API, and a Django-based Web UI.

Debusine’s web UI is a tool to interact with the system, and we want to spend most of our efforts in creating a system that works and works well, rather than chasing the latest and hippest of the frontend frameworks for the web.

Also, Debian as a community has an aversion to having parts of the JavaScript ecosystem in the critical path of its core infrastructure, and in our professional experience this aversion is not at all unreasonable.

This leads to some interesting requirements for the web UI that (rather surprisingly, one would think) one doesn’t usually find advertised in many projects:

  • Straightforward to create and maintain.
  • Well integrated with Django.
  • Easy to package in Debian, with as little vendoring as possible, which helps mitigate supply chain attacks
  • Usable without JavaScript whenever possible, for progressive enhancement rather than core functionality.

The idea is to avoid growing the technical complexity and requirements of the web UI, both server-side and client-side, for functionality that is not needed for this kind of project, with tools that do not fit well in our ecosystem.

Also, to limit the complexity of the JavaScript portions that we do develop, we choose to limit our JavaScript browser support to the main browser versions packaged in Debian Stable, plus recent oldstable.

This makes JavaScript easier to write and maintain, and it also makes it less needed, as modern HTML plus modern CSS interfaces can go a long way with fewer scripting interventions.

We recently encoded JavaScript practices and tradeoffs in a JavaScript Practices chapter of Debusine’s documentation.

How we use JavaScript

From the start we built the UI using Bootstrap, which helps in having responsive layouts that can also work on mobile devices.

When we started having large select fields, we introduced Select2 to make interaction more efficient; it degrades gracefully to working HTML.

Both Bootstrap and Select2 are packaged in Debian.

Form validation is done server-side by Django, and we do not reimplement it client-side in JavaScript, as we prefer the extra round trip through a form submission to the risk of mismatches between the two validations.

In those cases where a UI task is not at all possible without JavaScript, we can make its support mandatory as long as the same goal can be otherwise achieved using the debusine client command.

Django messages as Bootstrap toasts

Django has a Messages framework that allows different parts of a view to push messages to the user, and it is useful to signal things like a successful form submission, or warnings on unexpected conditions.

Django messages integrate well with Bootstrap toasts, which use a recognisable notification language, are nicely dismissible and do not invade the rest of the page layout.

Since toasts require JavaScript to work, we added graceful degradation to Bootstrap alerts.

Doing so was surprisingly simple: we handle the toasts as usual, and also render the plain alerts inside a <noscript> tag.

This is precisely the intended usage of the <noscript> tag, and it works perfectly: toasts are displayed by JavaScript when it’s available, or rendered as alerts when not.

The resulting Django template is something like this:

<div aria-live="polite" aria-atomic="true" class="position-relative">
    <div class="toast-container position-absolute top-0 end-0 p-3">
    {% for message in messages %}
        <div class="toast" role="alert" aria-live="assertive" aria-atomic="true">
            <div class="toast-header">
                <strong class="me-auto">{{ message.level_tag|capfirst }}</strong>
                <button type="button"
                        class="btn-close"
                        data-bs-dismiss="toast"
                        aria-label="Close"></button>
            </div>
            <div class="toast-body">{{ message }}</div>
        </div>
    {% endfor %}
    </div>
</div>

<!-- … -->

{% if messages %}
<noscript>
    {% for message in messages %}
        <div class="alert alert-primary" role="alert">
            {{ message }}
        </div>
    {% endfor %}
</noscript>
{% endif %}

We have a webpage to test the result.

JavaScript incremental improvement of formsets

Debusine is built around workspaces, which are, among other things, containers for resources.

Workspaces can inherit from other workspaces, which act as fallback lookups for resources. This makes it possible, for example, to maintain an experimental package to be built on Debian Unstable without the need to copy the whole Debian Unstable workspace. A workspace can inherit from multiple others, which are looked up in order.

When adding UI to configure workspace inheritance, we faced the issue that plain HTML forms do not have a convenient way to perform data entry of an ordered list.

We initially built the data entry around Django formsets, which support ordering using an extra integer input field to enter the ordering position. This works, and it’s good as a fallback, but we wanted something more appropriate, like dragging and dropping items to reorder them, as the main method of interaction.

Our final approach renders the plain formset inside a <noscript> tag, and the JavaScript widget inside a display: none element, which is later shown by JavaScript code.

As the workspace inheritance is edited, JavaScript serializes its state into <input type='hidden'> fields that match the structure used by the formset, so that when the form is submitted, the view performs validation and updates the server state as usual without any extra maintenance burden.

Serializing state as hidden form fields looks a bit vintage, but it is an effective way of preserving the established data entry protocol between the server and the browser, allowing us to do incremental improvement of the UI while minimizing the maintenance effort.

More to come

Debusine is now gaining significant adoption and is still under active development, with new features like personal archives coming soon.

This will likely mean more user stories for the UI, so this is a design space that we are going to explore again and again in the future.

Meanwhile, you can try out Debusine on debusine.debian.net, and follow its development on salsa.debian.org!

12 September, 2025 12:00AM by Enrico Zini

Michael Ablassmeier

qmpbackup and proxmox 9

The latest Proxmox release introduces a new Qemu machine version that seems to behave differently in how it addresses the virtual disk configuration.

Also, the regular “query-block” qmp command doesn’t list the created bitmaps as usual.

If the virtual machine version is set to “9.2+pve”, everything seems to work out of the box.

I’ve released version 0.50 with some small changes so it’s compatible with the newer machine versions.

12 September, 2025 12:00AM

September 11, 2025

Jonathan Wiltshire

Debian stable updates explained: security, updates, and point releases

Please consider supporting my work in Debian and elsewhere through Liberapay.

Debian stable updates work through three main channels: point releases, security repositories, and the updates repository. Understanding these ensures your system stays secure and current.

A note about suite names

Every Debian release, or suite, has a codename — the most recent major release was trixie, or Debian 13. The codename uniquely identifies that suite.

We also use changeable aliases to add meaning to the suite’s lifecycle. For example, trixie currently has the alias stable, but when forky becomes stable instead, trixie will become known as oldstable.

This post uses either codenames or aliases depending on context. In source lists, codenames are generally preferred since that avoids surprise major upgrades right after a release is made.

The stable suites (point releases)

stable and oldstable (currently trixie and bookworm) are only updated during a “point release”. This is a minor update to a major release: for example, 13.1 is the first minor update to trixie. It’s not possible to install older minor versions of a suite except via the snapshots mechanism (not covered here), but past versions can be viewed via snapshot.debian.org, which preserves historical Debian archives.

There are also the testing and unstable aliases for the development suites. However, these are not relevant for users who want to run officially released versions.

Almost every stable installation of Debian will be opted into a stable or oldstable base suite. An example APT source might look like:

Type: deb
URIs: http://deb.debian.org/debian
Suites: trixie
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp

Or, in legacy sources.list style:

deb https://deb.debian.org/debian trixie main

The security suites (DSAs explained)

For urgent security-related updates, the Security Team maintains a counterpart suite for each stable suite. These are called stable-security and oldstable-security when maintained by Debian’s security team, and oldstable-security, oldoldstable-security, etc when maintained by the LTS team.

Example APT source:

Type: deb
URIs: https://deb.debian.org/debian-security
Suites: trixie-security
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp

Or, in legacy sources.list style:

deb https://deb.debian.org/debian-security trixie-security main

The Debian installer enables the security suites by default. Debian Security Announcements (DSAs) are published to debian-security-announce@lists.debian.org.

The updates suites (SUAs and maintenance)

For urgent non-security updates, the final recommended suites are stable-updates and oldstable-updates. This is where updates staged for a point release, but needed sooner, are published. Examples include virus database updates, timezone changes, urgent bug fixes for specific problems and corrections to errors in the release process itself.

Example APT source:

Type: deb
URIs: https://deb.debian.org/debian
Suites: trixie-updates
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp

Or, in legacy sources.list style:

deb https://deb.debian.org/debian trixie-updates main

Debian enables the updates suites by default. Stable Update Announcements (SUAs) are published to debian-stable-announce@lists.debian.org. This is also where announcements of forthcoming point releases are published.
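To double-check which of these suites a given system actually has enabled, something along these lines works (shown for the trixie codename; adjust as needed):

# list the APT sources the system knows about and pick out the trixie suites
apt policy | grep -E 'trixie(-security|-updates)?/'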

Summary

These are the recommended suites for all production Debian systems:

  • stable (example codename: trixie): base suite containing all the available software for a release. Point releases every 2–4 months include lower-severity security fixes that do not require immediate release. Announcements: Debian Release Announcements on debian-announce.
  • stable-security (example codename: trixie-security): urgent security updates. Announcements: Debian Security Announcements on debian-security-announce.
  • stable-updates (example codename: trixie-updates): urgent non-security updates, data updates and release maintenance. Announcements: Stable Update Announcements on debian-stable-announce.

After a release moves from oldstable to unsupported status, Long Term Support (LTS) takes over for several more years. LTS provides urgent security updates for selected architectures. For details, see wiki.debian.org/LTS.

If you’d like to stay informed, the official Debian announcement lists and release.debian.org share the latest schedules and updates.


Photo by Brian Wangenheim on Unsplash

11 September, 2025 08:29PM by Jonathan

John Goerzen

Performant Full-Disk Encryption on a Raspberry Pi, but Foiled by Twisty UARTs

In my post yesterday, ARM is great, ARM is terrible (and so is RISC-V), I described my desire to find ARM hardware with AES instructions to support full-disk encryption, and the poor state of the OS ecosystem around the newer ARM boards.

I was anticipating buying either a newer ARM SBC or an x86 mini PC of some sort.

More-efficient AES alternatives

Always one to think, “what if I didn’t have to actually buy something”, I decided to research whether it was possible to use encryption algorithms that are more performant on the Raspberry Pi 4 I already have.

The answer was yes. From cryptsetup benchmark:

root@mccoy:~# cryptsetup benchmark --cipher=xchacha12,aes-adiantum-plain64 
# Tests are approximate using memory only (no storage IO).
#            Algorithm |       Key |      Encryption |      Decryption
xchacha12,aes-adiantum        256b       159.7 MiB/s       160.0 MiB/s
xchacha20,aes-adiantum        256b       116.7 MiB/s       169.1 MiB/s
    aes-xts                   256b        52.5 MiB/s        52.6 MiB/s

With best-case reads from my SD card at 45MB/s (with dd if=/dev/mmcblk0 of=/dev/null bs=1048576 status=progress), either of the ChaCha-based algorithms will be fast enough. “Great,” I thought. “Now I can just solve this problem without spending a dollar.”
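For reference, formatting a LUKS2 volume with that cipher would be something like the following; the device path is just a placeholder:

cryptsetup luksFormat --type luks2 \
    --cipher xchacha12,aes-adiantum-plain64 --key-size 256 /dev/mmcblk0p2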

But not so fast.

Serial terminals vs. serial consoles

My primary use case for this device is to drive my actual old DEC vt510 terminal. I have long been able to do that by running a getty for my FTDI-based USB-to-serial converter on /dev/ttyUSB0. This gets me a login prompt, and I can do whatever I need from there.

This does not get me a serial console, however. The serial console would show kernel messages and could be used to interact with the pre-multiuser stages of the system — that is, everything before the login prompt. You can use it to access an emergency shell for repair, etc.

Although I have long booted that kernel with console=tty0 console=ttyUSB0,57600, the serial console has never worked, but I’d never bothered investigating because the text terminal was sufficient.

You might be seeing where this is going: to have root on an encrypted LUKS volume, you have to enter the decryption password in the pre-multiuser environment (which happens to be on the initramfs).

So I started looking. First, I extracted the initrd with cpio and noticed that the ftdi_sio and usbserial modules weren’t present. Added them to /etc/initramfs-tools/modules and rebooted; no better.
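For reference, that change amounts to listing the two modules and regenerating the initramfs, roughly:

echo -e "ftdi_sio\nusbserial" >> /etc/initramfs-tools/modules   # one module name per line
update-initramfs -u                                             # rebuild the initramfs so the modules are included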

So I found the kernel’s serial console guide, which explicitly notes “To use a serial port as console you need to compile the support into your kernel”. Well, I have no desire to custom-build a kernel on a Raspberry Pi with MicroSD storage every time a new kernel comes out.

I thought — well, I don’t strictly need the kernel to know about the console on /dev/ttyUSB0 for this; I just need the password prompt — which comes from userspace — to know about it.

So I looked at the initramfs code, and wouldn’t you know it, it uses /dev/console. Looking at /proc/consoles on that system, indeed it doesn’t show ttyUSB0. So even though it is possible to load the USB serial driver in the initramfs, there is no way to make the initramfs use it, because it only uses whatever the kernel recognizes as a console, and the kernel won’t recognize this. So there is no way to use a USB-to-serial adapter to enter a password for an encrypted root filesystem.

Drat.

The on-board UARTs?

I can hear you now: “The Pi already has on-board serial support! Why not use that?”

Ah yes, the reason I don’t want to use that is that it is difficult to use, particularly if you want RTS/CTS hardware flow control (or DTR/DSR on these old terminals, but that’s another story, and I built a custom cable to map it to RTS/CTS anyhow).

Since you asked, I’ll take you down this unpleasant path.

The GPIO typically has only 2 pins for serial communication: 8 and 10, for TX and RX, respectively.

But dive in and you get into a confusing maze of UARTs. The “mini UART”, the one we are mostly familiar with on the Pi, does not support hardware flow control. The PL011 does. So the natural question is: how do we switch to the PL011, and what pins does it use? Great questions, and the answer is undocumented, at least for the Pi 4.

According to that page, for the Pi 4, the primary UART is UART1, UART1 is the mini UART, “the secondary UART is not normally present on the GPIO connector” and might be used by Bluetooth anyway, and there is no documented pin for RTS/CTS anyhow (let alone some of the other lines modems use). There are supposed to be /dev/ttyAMA* devices, but I don’t have those. There’s an enable_uart kernel parameter, which does things like stop the mini UART from changing baud rates every time the VPU changes clock frequency (I am not making this up!), but doesn’t seem to control the PL011 UART selection. This page has a program to do it, and map some GPIO pins to RTS/CTS, in theory.

Even if you get all that working, you still have the problem that the Pi UARTs (all of them, of every type) are 3.3V and RS-232 is 5V, so unless you get a converter, you will fry your Pi the moment you connect it to something useful. So you’re probably looking at some soldering and such just to build a cable that will work with an iffy stack.

So, I could probably make it work given enough time, but I don’t have that time to spare working with weird Pi serial problems, so I have always used USB converters when I need serial from a Pi.

Conclusion

I bought a fanless x86 micro PC with a N100 chip and all the ports I might want: a couple of DB-9 serial ports, some Ethernet ports, HDMI and VGA ports, and built-in wifi. Done.

11 September, 2025 01:41PM by John Goerzen

hackergotchi for Christoph Berg

Christoph Berg

A Trip To Vienna With Surprises

My trip to pgday.at started Wednesday at the airport in Düsseldorf. I was there on time, and the plane took off with an estimated flight time of about 90 minutes. About half an hour into the flight, the captain announced that we would be landing in 30 minutes - in Düsseldorf, because of some unspecified technical problems. Three hours after the original departure time, the plane made another attempt, and we made it to Vienna.

On the plane I had already met Dirk Krautschick, who had the great honor of bringing Slonik (in the form of a big extra bag) to the conference, and we took a taxi to the hotel. In the taxi, the next surprise happened: Hans-Jürgen Schönig unfortunately couldn't make it to the conference, and his talks had to be replaced. I had submitted a talk to the conference, but it was not accepted, nor queued on the reserve list. But two speakers on the reserve list had cancelled, and another was already giving a talk in parallel to the slot that had to be filled, so Pavlo messaged me asking if I could give the talk - well, of course I could. I didn't have any specific plans for the evening yet, but suddenly I was a speaker, so I joined the folks going to the speakers dinner at the Wiener Grill Haus two corners from the hotel. It was a very nice evening, chatting with a lot of folks from the PostgreSQL community that I had not seen for a while.

Thursday was the conference day. The hotel was a short walk from the venue, the Apothekertrakt in Vienna's Schloss Schönbrunn. The courtyard was already filled with visitors registering for the conference. Since I originally didn't have a talk scheduled, I had signed up to volunteer for a shift as room host. We got our badge and swag bag, and I changed into the "crew" T-shirt.

The opening and sponsor keynotes took place in the main room, the Orangerie. There were over 100 people in the room, but apparently still not enough to really fill it, so the acoustics, with some echo, made it a bit difficult to understand everything. I hope that part can be improved for next time (which is planned to happen!).

I was host for the Maximilian room, where the sponsor sessions were scheduled in the morning. The first talk was by our Peter Hofer, also replacing the absent Hans. He had only joined the company at the beginning of the same week, and was already tasked to give Hans' talk on PostgreSQL as Open Source. Of course he did well.

Next was Tanmay Sinha from Readyset. They are building a system that caches expensive SQL queries and selectively invalidates the cache whenever any data used by these queries changes. Whenever actually fixing the application isn't feasible, that system looks like an interesting alternative to manually maintaining materialized views, or perhaps using pg_ivm.

After lunch, I went to Federico Campoli's Mastering Index Performance, but really spent the time polishing the slides for my talk. I had given the original version at pgconf.de in Berlin in May, and the slides were still in German, so I had to do some translating. Luckily, most slides are just git commit messages, so the effort was manageable.

The next slot was mine, talking about Modern VACUUM. I started with a recap of MVCC, vacuum and freezing in PostgreSQL, and then showed how over the past years, the system was updated to be more focused (the PostgreSQL 8.4 visibility map tells vacuum which pages to visit), faster (12 made autovacuum run 10 times faster by default), less scary (14 has an emergency mode where freezing switches to maximum speed if it runs out of time; 16 makes freezing create much less WAL) and more performant (17 makes vacuum use much less memory). In summary, there is still room for the DBA to tune some knobs (for example, the default autovacuum_max_workers=3 isn't much), but the vacuum default settings are pretty much okay these days for average workloads. Specific workloads still have a whopping 31 postgresql.conf settings at their disposal just for vacuum.

Right after my talk, there was another vacuum talk: When Autovacuum Met FinOps by Mayuresh Bagayatkar. He added practical advice on tuning the performance in cloud environments. Luckily, our contents did not overlap.

After the coffee break, I was again room host, now for Floor Drees and Contributing to Postgres beyond code. She presented the various ways in which PostgreSQL is more than just the code in the Git repository: translators, web site, system administration, conference organizers, speakers, bloggers, advocates. As a member of the PostgreSQL Contributors Committee, I could only approve, and we should cooperate more closely in the future to make people's contributions to PostgreSQL more visible and give them the recognition they deserve.

That was already the end of the main talks and everyone rushed to the Orangerie for the lightning talks. My highlight was the Sheldrick Wildlife Trust. Tickets for the conference had included the option to donate for the elephants in Kenya, and the talk presented the trust's work in the elephant orphanage there.

After the conference had officially closed, there was a bonus track: the Celebrity DB Deathmatch, aptly presented by Boriss Mejias. PostgreSQL, MongoDB, CloudDB and Oracle were competing for the grace of a developer. MongoDB couldn't stand the JSON workload, CloudDB was dismissed for handing out new invoices all the time, and Oracle had even brought a lawyer to the stage, but then lost control over a literally 10 meter long contract with too much fine print. In the end, PostgreSQL (played by Floor) won the love of the developer (played by our Svitlana Lytvynenko).

The day closed with a gathering at the Brandauer Schlossbräu - just at the other end of the castle grounds, but still a 15min walk away. We enjoyed good beer and Kaiserschmarrn. I went back to the hotel a bit before midnight, but some extended the evening quite a bit longer.

On Friday, my flight back was only in the afternoon, so I spent some time in the morning in the Technikmuseum just next to the hotel, enjoying some old steam engines and a live demonstration of Tesla coils. This time, the flight actually went to the destination, and I was back in Düsseldorf in the late afternoon.

In summary, pgday.at was a very nice event in a classy location. Thanks to the organizers for putting in all the work - and next year, Hans will hopefully be present in person!

The post A Trip To Vienna With Surprises appeared first on CYBERTEC PostgreSQL | Services & Support.

11 September, 2025 05:43AM by Christoph Berg

hackergotchi for Gunnar Wolf

Gunnar Wolf

Saying hi to my good Reproducible Builds friends while reading a magazine article

Just wanted to share… I enjoy reading George V. Neville-Neil’s Kode Vicious column, which regularly appears in some of ACM’s publications I follow, such as ACM Queue or Communications.

Today I was very pleasantly surprised: while reading the column titled «Can’t we have nice things», Kode Vicious answers a question on why computing has nothing comparable to the beauty of ancient physics laboratories turned into museums (e.g. Faraday’s laboratory) by giving a great hat tip to a project that stemmed from Debian, and where many of my good Debian friends spend a lot of their energies: Reproducible Builds. KV says:

Once the proper measurement points are known, we want to constrain the system such that what it does is simple enough to understand and easy to repeat. It is quite telling that the push for software that enables reproducible builds only really took off after an embarrassing widespread security issue ended up affecting the entire Internet. That there had already been 50 years of software development before anyone thought that introducing a few constraints might be a good idea is, well, let’s just say it generates many emotions, none of them happy, fuzzy ones.

Yes, KV is a seasoned free software author. But I found it heartwarming that the Reproducible Builds project is mentioned without needing to introduce it (assuming familiarity across the computing industry and academia), recognized as game-changing, as we understood it would be over ten years ago when it was first announced, and enabling of beauty in computing.

Congratulations to all of you who have made this possible!

RB+ACM

11 September, 2025 12:02AM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Monthly report about Debian Long Term Support, August 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In August, 21 contributors have been paid to work on Debian LTS, their reports are available:

  • Abhijith PA did 10.0h (out of 0.0h assigned and 14.0h from previous period), thus carrying over 4.0h to the next month.
  • Andrej Shadura did 12.0h (out of 9.0h assigned and 3.0h from previous period).
  • Bastien Roucariès did 20.0h (out of 19.75h assigned and 0.25h from previous period).
  • Ben Hutchings did 22.75h (out of 16.5h assigned and 6.25h from previous period).
  • Carlos Henrique Lima Melara did 10.0h (out of 10.0h assigned).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 23.25h (out of 23.25h assigned).
  • Emilio Pozuelo Monfort did 23.25h (out of 23.25h assigned).
  • Guilhem Moulin did 15.0h (out of 15.0h assigned).
  • Jochen Sprickerhof did 11.0h (out of 6.0h assigned and 16.75h from previous period), thus carrying over 11.75h to the next month.
  • Lee Garrett did 16.25h (out of 0.0h assigned and 16.25h from previous period).
  • Lucas Kanashiro did 20.0h (out of 1.25h assigned and 18.75h from previous period).
  • Markus Koschany did 5.0h (out of 13.0h assigned and 9.75h from previous period), thus carrying over 17.75h to the next month.
  • Paride Legovini did 8.0h (out of 0.0h assigned and 8.0h from previous period).
  • Roberto C. Sánchez did 7.5h (out of 11.75h assigned and 11.0h from previous period), thus carrying over 15.25h to the next month.
  • Santiago Ruano Rincón did 13.5h (out of 7.25h assigned and 7.75h from previous period), thus carrying over 1.5h to the next month.
  • Stefano Rivera did 0.5h (out of 0.0h assigned and 3.0h from previous period), thus carrying over 2.5h to the next month.
  • Sylvain Beucler did 10.0h (out of 23.25h assigned), thus carrying over 13.25h to the next month.
  • Thorsten Alteholz did 22.75h (out of 22.75h assigned).
  • Tobias Frost did 4.0h (out of 0.0h assigned and 12.0h from previous period), thus carrying over 8.0h to the next month.
  • Utkarsh Gupta did 16.0h (out of 22.75h assigned), thus carrying over 6.75h to the next month.

Evolution of the situation

In August, we released 27 DLAs.

The month of August marked the release of Debian 13 (codename “trixie”). This is worth noting because it brought with it the return of the customary fast development pace of Debian unstable, which included several contributions from LTS Team members. More on that below.

Of the many security updates which were published (and a few non-security updates as well), some notable ones are highlighted here.

  • Notable security updates:
    • gnutls28 prepared by Adrian Bunk, fixes several potential denial of service vulnerabilities
    • apache2, prepared by Bastien Roucariès, fixes several vulnerabilities including a potential denial of service and SSL/TLS-related access control
    • mbedtls (original update, regression update) prepared by Andrej Shadura, fixes several potential denial of service and information disclosure vulnerabilities
    • openjdk-17, prepared by Emilio Pozuelo Monfort, fixes several vulnerabilities which could result in denial of service, information disclosure or weakened TLS connections
  • Notable non-security updates:
    • distro-info-data, prepared by Stefano Rivera, adds information concerning future Debian and Ubuntu releases
    • ca-certificates-java, prepared by Bastien Roucariès, fixes some bugs which could disrupt future updates

The LTS Team continues to welcome the collaboration of maintainers from across the Debian community. The contributions of maintainers from outside the LTS Team include: postgresql-13 (Christoph Berg), sope (Jordi Mallach), thunderbird (Carsten Schoenert), and iperf3 (Roberto Lumbreras).

Finally, LTS Team members also contributed updates of the following packages:

  • redis (to stable), prepared by Chris Lamb
  • firebird3.0 (to oldstable and stable), prepared by Adrian Bunk
  • node-tmp (to oldstable, stable, and unstable), prepared by Adrian Bunk
  • openjpeg2 (to oldstable, stable, and unstable), prepared by Adrian Bunk
  • apache2 (to oldstable), prepared by Bastien Roucariès
  • unbound (to oldstable), prepared by Guilhem Moulin
  • luajit (to oldstable), prepared by Guilhem Moulin
  • golang-github-gin-contrib-cors (to oldstable and stable), prepared by Thorsten Alteholz
  • libcoap3 (to stable), prepared by Thorsten Alteholz
  • libcommons-lang-java and libcommons-lang3-java (both to unstable), prepared by Daniel Leidert
  • python-flask-cors (to oldstable), prepared by Daniel Leidert

The LTS Team would especially like to thank our many longtime friends and sponsors for their support and collaboration.

Thanks to our sponsors

Sponsors that joined recently are in bold.

11 September, 2025 12:00AM by Roberto C. Sánchez

Debian Contributions: Preparing for setup.py install deprecation, Salsa CI, Debian 13 "trixie" release and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-08

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Preparing for setup.py install deprecation, by Colin Watson

setuptools upstream will be removing the setup.py install command on 31 October. While this may not trickle down immediately into Debian, it does mean that in the near future nearly all Python packages will have to use pybuild-plugin-pyproject (though they don’t necessarily have to use pyproject.toml; this is just a question of how the packaging runs the build system). Some of the Python team talked about this a bit at DebConf, and Colin volunteered to write up some notes on cases where this isn’t straightforward. This page will likely grow as the team works on this problem.

Salsa CI, by Santiago Ruano Rincón

Santiago fixed some pending issues in the MR that moves the pipeline to sbuild+unshare, and after several months he was able to mark the MR as ready. Part of the recent fixes include handling external repositories, honoring the RELEASE autodetection from d/changelog (thanks to Ahmed Siam for spotting the main reason for the issue), and fixing a regression in the apt resolver for *-backports releases. Santiago is currently waiting for a final review and approval from other members of the Salsa CI team in order to merge it. Thanks to all the folks who have helped test the changes or provided feedback so far. If you want to test the current MR, you need to include the following pipeline definition in your project’s CI config file:

---
include:
  - https://salsa.debian.org/santiago/pipeline/raw/sbuild-unshare-02-salsa-ci/salsa-ci.yml
  - https://salsa.debian.org/santiago/pipeline/raw/sbuild-unshare-02-salsa-ci/pipeline-jobs.yml

As a reminder, this MR will make the Salsa CI pipeline build the packages in a way more similar to how they are built by the official Debian builders. This will also save some resources, since the default pipeline will have one stage fewer (the provisioning stage), and will make it possible for more projects to be built on salsa.debian.org (including large projects and those from the OCaml ecosystem), etc. See the different issues being fixed in the MR description.

Debian 13 “trixie” release, by Emilio Pozuelo Monfort

On August 9th, Debian 13 “trixie” was released, building on two years worth of updates and bug fixes from hundreds of developers. Emilio helped coordinate the release, communicating with several teams involved in the process.

DebConf 26 Site Visit, by Stefano Rivera

Stefano visited Santa Fe, Argentina, the site for DebConf 26 next year. The aim of the visit was to help build a local team and see the conference venue first-hand. Stefano and Nattie represented the DebConf Committee. The local team organized Debian meetups in Buenos Aires and Santa Fe, where Stefano presented a talk on Debian and DebConf. Venues were scouted and the team met with the university management and local authorities.

Miscellaneous contributions

  • Raphaël updated tracker.debian.org after the “trixie” release to add the new “forky” release in the set of monitored distributions. He also reviewed and deployed the work of Scott Talbert showing open merge requests from salsa in the “action needed” panel.
  • Raphaël reviewed some DEP-3 changes to modernize the embedded examples in light of the broad git adoption.
  • Raphaël configured new workflows on debusine.debian.net to upload to “trixie” and trixie-security, and officially announced the service on debian-devel-announce, inviting Debian developers to try the service for their next upload to unstable.
  • Carles created a merge request for django-compressor upstream to fix an error when concurrent node processing happened. This will allow removing a workaround added in openstack-dashboard and avoid the same bug in other projects that use django-compressor.
  • Carles prepared a system to detect packages that Recommend packages which don’t exist in unstable. He processed (either reported or ignored, due to mis-detected or temporary problems) 16% of the reports, and will continue next month.
  • Carles got familiar with and gave feedback on the freedict-wikdict package, and planned contributions with the maintainer to improve the package.
  • Helmut responded to queries related to /usr-move.
  • Helmut adapted crossqa.d.n to the release of “trixie”.
  • Helmut diagnosed sufficient failures in rebootstrap to make it work with gcc-15.
  • Helmut fixed the CI pipeline of debvm.
  • Helmut sent patches for 19 cross build problems.
  • Faidon discovered that the Multi-Arch hinter would emit confusing hints about :any annotations. Helmut identified the root cause to be the handling of virtual packages and fixed it.
  • Enrico dusted off python-debiancontributors and prototyped a receiving end for salsa webpings, to start follow-up work on the contributors.debian.org discussions at DebConf25.
  • Colin upgraded about 70 Python packages to new upstream versions, which is around 10% of the backlog; this included a complicated Pydantic upgrade in collaboration with the Rust team.
  • Colin fixed a bug in debbugs that caused incoming emails to bugs.debian.org with certain header contents to go missing.
  • Thorsten uploaded sane-airscan, which was already in experimental, to unstable.
  • Thorsten created a script to automate the upload of new upstream versions of foomatic-db. The database contains information about printers and regularly gets an update. Now it is possible to keep the package more up to date in Debian.
  • Stefano prepared updates to almost all of his packages that had new versions waiting to upload to unstable. (beautifulsoup4, hatch-vcs, mkdocs-macros-plugin, pypy3, python-authlib, python-cffi, python-mitogen, python-pip, python-pipx, python-progress, python-truststore, python-virtualenv, re2, snowball, soupsieve).
  • Stefano uploaded two new python3.13 point releases to unstable.
  • Stefano updated distro-info-data in stable releases, to document the “trixie” release and expected EoL dates.
  • Stefano did some debian.social sysadmin work (keeping up quotas with growing databases and filesystems).
  • Stefano supported the Debian treasurers in processing some of the DebConf 25 reimbursements.
  • Lucas uploaded ruby3.4 to experimental. It was already approved by FTP masters.
  • Lucas uploaded ruby-defaults to experimental to add support for ruby3.4. It will allow us to start triggering test rebuilds and catch any FTBFS with ruby3.4.
  • Lucas did some administrative work for Google Summer of Code (GSoC) and replied to some queries from mentors and students.
  • Anupa helped to organize release parties for Debian 13 and Debian Day events.
  • Anupa did the live coverage for the Debian 13 release and prepared the Bits post for the release announcement and 32nd Debian Day as part of the Debian Publicity team.
  • Anupa attended a Debian Day event organized by FOSS club SSET as a speaker.

11 September, 2025 12:00AM by Anupa Ann Joseph

September 10, 2025

John Goerzen

ARM is great, ARM is terrible (and so is RISC-V)

I’ve long been interested in new and different platforms. I ran Debian on an Alpha back in the late 1990s and was part of the Alpha port team; then I helped bootstrap Debian on amd64. I’ve got somewhere around 8 Raspberry Pi devices in active use right now, and the free NNCPNET Internet email service I manage runs on an ARM instance at a cloud provider.

ARM-based devices are cheap in a lot of ways: they use little power and there are many single-board computers based on them that are inexpensive. My 8-year-old’s computer is a Raspberry Pi 400, in fact.

So I like ARM.

I’ve been looking for ARM devices that have accelerated AES (Raspberry Pi 4 doesn’t) so I can use full-disk encryption with them. There are a number of options, since ARM devices are starting to go more mid-range. Radxa’s ROCK 5 series of SBCs goes up to 32GB RAM. The Orange Pi 5 Max and Ultra have up to 16GB RAM, as does the Raspberry Pi 5. Pine64’s Quartz64 has up to 8GB of RAM. I believe all of these have the ARM cryptographic extensions. They’re all small and most are economical.
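If you already have a board in hand, a quick way to check is to look at the CPU feature flags; a small sketch (on 64-bit ARM the relevant flags are aes and pmull):

# prints "aes" if the cores advertise hardware AES, otherwise a short note
grep -m1 '^Features' /proc/cpuinfo | grep -ow aes || echo "no hardware AES advertised"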

But I also dislike ARM. There is a terrible lack of standardization in the ARM community. They say their devices run Linux, but the default there is that every vendor has their own custom Debian fork, and quite likely kernel fork as well. Most don’t maintain them very well.

Imagine if you were buying x86 hardware. You might have to manage AcerOS, Dellbian, HPian, etc. Most of them have no security support (particularly for the kernel). Some are based on Debian 11 (released in 2021), some Debian 12 (released in 2023), and none on Debian 13 (released a month ago).

That is exactly the situation we have on ARM. While Raspberry Pi 4 and below can run Debian trixie directly, Raspberry Pi has not bothered to upstream support for the Pi 5 yet, and Raspberry Pi OS is only based on Debian bookworm (released in 2023) and very explicitly does not support a key Debian feature: you can’t upgrade from one Raspberry Pi OS release to the next, so it’s a complete reinstall every 2 years instead of just an upgrade. OrangePiOS only supports Debian bookworm — but notably, their kernel is mostly stuck at 5.10 for every image they have (bookworm shipped with 6.1 and bookworm-backports supports 6.12).

Radxa has a page on running Debian on one specific board, but they seem not to support Debian directly; rather, they support their fork, Radxa OS. There’s a different installer for every board; for instance, this one for the Rock 4D. Looking at it, I can see that it uses files from here and here, with a custom kernel, gstreamer, and u-boot, and they put zfs in main for some reason.

From Pine64, the Quartz64 seems to be based on an ancient 4.6 or 4.19 kernel. Perhaps, though, one might be able to use Debian’s Pine A64+ instructions on it. Trixie doesn’t have a u-boot image for the Quartz64 but it does have device tree files for it.

RISC-V seems to be even worse; not only do we have this same issue there, but support in trixie is more limited and so is performance among the supported boards.

The alternative is x86-based mini PCs. There are a bunch based on the N100, N150, or Celeron. Many of them support AES-NI and the prices are roughly in line with the higher-end ARM units. There are some interesting items out there; for instance, the Radxa X4 SBC features both an N100 and a RP2040. Fanless mini PCs are available from a number of vendors. Companies like ZimaBoard have interesting options like the ZimaBlade also.

The difference in power is becoming less significant; it seems the newer ARM boards need 20W or 30W power supplies, and that may put them in the range of the mini PCs. As for cost, the newer ARM boards need a heat sink and fan, so by the time you add SBC, fan, storage, etc. you’re starting to get into the price range of the mini PCs.

It is great to see all the options of small SBCs with ARM and RISC-V processors, but at some point you’ve got to throw up your hands and go “this ecosystem has a lot of problems” and consider just going back to x86. I’m not sure if I’m quite there yet, but I’m getting close.

Update 2025-09-11: I found a performant encryption option for the Pi 4, but was stymied by serial console problems; see the update post.

10 September, 2025 01:16PM by John Goerzen

September 09, 2025


Dirk Eddelbuettel

RcppSMC 0.2.9 on CRAN: Maintenance

Release 0.2.9 of our RcppSMC package arrived at CRAN today. RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts. The package now also features the Google Summer of Code work by Leah South in 2017, and by Ilya Zarubin in 2021.

This release is again entirely internal. It updates the code for the just-released RcppArmadillo 15.0.2-1, in particular opting into Armadillo 15.0.2. It also makes one small tweak to the continuous integration setup, switching to the r-ci action.

The release is summarized below.

Changes in RcppSMC version 0.2.9 (2025-09-09)

  • Adjust to RcppArmadillo 15.0.* by setting ARMA_USE_CURRENT and updating two expressions from deprecated code

  • Rely on r-ci GitHub Action which includes the bootstrap step

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More information is on the RcppSMC page and the repo. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

09 September, 2025 08:42PM

Sven Hoexter

debian/watch version 5 is a beauty

Kudos to yadd@ (and whoever else was involved in making that happen) for the new watch file v5 format. The templates for the big git hosters in particular make it much nicer. I prepared two of my packages to switch on the next upload: exfatprogs, tracking GitHub releases, and pflogsumm, scraping a web page. The result is much easier to read and less error prone.

09 September, 2025 03:02PM

September 08, 2025


Dirk Eddelbuettel

RcppArmadillo 15.0.2-1 on CRAN: New Upstream, Some Changes


Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1279 other packages on CRAN, downloaded 41.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 644 times according to Google Scholar.

Fairly shortly after the release of Armadillo 14.6.3 (which became our preceding release), Conrad released version 15.0.0 of Armadillo. It contains several breaking changes which made it unsuitable for CRAN as is. First, the minimum C++ standard was lifted to C++14. There are many good reasons for doing so, but doing it unannounced with no transition offered cannot work at CRAN. For (mostly historical) reasons (as C++11 was once a step up from even more archaic code) many packages still enforce C++11. These packages would break under Armadillo 15.0.0. With some reverse-dependency checks we quickly found that this would affect over 200 packages. Second, Armadillo frequently deprecates code and issues a warning offering a hint towards an improved / current alternative. That too is good practice, and recommended. But at CRAN a deprecation warning leads to a NOTE, and unfixed NOTEs can lead to archival, which is not great. For that reason, we had added a global ‘muffler’ suppressing the deprecation warnings. With C++14 this scheme was removed upstream (one can now rely on newer compiler attributes, which we cannot undo, whereas the old scheme just used macros). Again, not something we can do at CRAN.

So I hatched the plan of offering both the final release under the previous state of the world, i.e. 14.6.3, along with 15.0.0. Users could then opt into 15.0.0 but have a fallback of 14.6.3, allowing all packages to coexist. The first idea was to turn the fallback on only if C++11 compilation was noticed. This still left a lot of deprecation warnings, meaning we had to default more strongly to 14.6.3 so that only users who actively selected 15.0.* would get it. This seemed to work. We were still faced with two hard failures. It turns out one was local to a since-archived package; the other revealed a bad interaction between Armadillo and gcc for complex variables and led to bug fix 15.0.1, which I wrapped and uploaded a week ago.

Release 15.0.1 was quickly seen as only temporary because I had overlooked two issues. Moving ‘legacy’ and ‘current’ into directories of those names meant that no top-level include header armadillo remained. This broke a few use cases where packages use a direct #include <armadillo> (which we do not recommend, as it may miss a few other settings we apply for the R case). The other is that an old test for ‘limited’ LAPACK capabilities made noise in some builds (for example on Windows). As (current) R versions generally have sufficient LAPACK and BLAS, this too has been removed.

We have tested and re-tested these changes extensively. The usual repository with the logs shows eight reverse-dependency runs, each of which can take up to a day. This careful approach, together with pacing uploads at CRAN, usually works. We got a NOTE for ‘seven uploads in six months’ for this one, but a CRAN maintainer quickly advanced it, and the fully automated tests showed no regressions and required no further human intervention, which is nice given the set of changes and the over 1200 reverse dependencies.

The changes since the last announced CRAN release 14.6.3 follow.

Changes in RcppArmadillo version 15.0.2-1 (2025-09-08)

  • Upgraded to Armadillo release 15.0.2-1 (Medium Roast)

    • Optionally use OpenMP parallelisation for fp16 matrix multiplication

    • Faster vectorisation of cube tubes

  • Provide a top-level include file armadillo as fallback (#480)

  • Retire the no-longer-needed check for insufficient LAPACK as R now should supply a sufficient libRlapack (when used) (#483)

  • Two potential narrowing warnings are avoided via cast

Changes in RcppArmadillo version 15.0.1-1 (2025-09-01)

  • Upgraded to Armadillo release 15.0.1-1 (Medium Roast)

    • Workaround for GCC compiler bug involving misoptimisation of complex number multiplication
  • This version contains both the 'legacy' and 'current' versions of Armadillo (see also below). Package authors should set a '#define' to select the 'current' version, or select the 'legacy' version (also chosen as default) if they must. See GitHub issue #475 for more details.

  • Updated DESCRIPTION and README.md

Changes in RcppArmadillo version 15.0.0-1 (2025-08-21) (GitHub Only)

  • Upgraded to Armadillo release 15.0.0-1 (Medium Roast)

    • C++14 is now the minimum required C++ standard

    • Added preliminary support for matrices with half-precision fp16 element type

    • Added second form of cond() to allow detection of failures

    • Added repcube()

    • Added .freeze() and .unfreeze() member functions to wall_clock

    • Extended conv() and conv2() to accept the "valid" shape argument

  • Also includes Armadillo 14.6.3 as fallback for C++11 compilations

  • This new 'dual' setup had been rigorously tested with five interim pre-releases of which several received full reverse-dependency checks

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

08 September, 2025 07:03PM

Thorsten Alteholz

My Debian Activities in August 2025

Debian LTS

This was my hundred-thirty-fourth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4272-1] aide security update fixing two CVEs where a local attacker can take advantage of these flaws to hide the addition or removal of a file from the report, tamper with the log output, or cause aide to crash during report printing or database listing.
  • [DLA 4284-1] udisks2 security update to fix one CVE related to a possible local privilege escalation.
  • [DLA 4285-1] golang-github-gin-contrib-cors security update to fix one CVE related to circumvention of restrictions.
  • [#1112054] trixie-pu of golang-github-gin-contrib-cors, prepared and uploaded
  • [#1112335] trixie-pu of libcoap3, prepared and uploaded
  • [#1112053] bookworm-pu of golang-github-gin-contrib-cors, prepared and uploaded

I also continued my work on suricata and could backport all patches. Now I have to do some tests with the package. I also started to work on an openafs regression and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-fifth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1499-1] aide security update to fix one embargoed CVE in Stretch, related to a crash. The other CVE mentioned above was not affecting Stretch.
  • [ELA-1508-1] udisks2 security update to fix one embargoed CVE in Stretch and Buster.

I could also mark the CVEs of libcoap as not-affected, and I attended the monthly LTS/ELTS meeting. As with LTS, suricata has now been requested for Stretch as well, so I did not yet finish my work on it.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

In my fight against outdated RFPs, I closed 31 of them in August.

FTP master

Yeah, Trixie has been released; the tired bones need to be awakened again :-). This month I accepted 203 and rejected 18 packages. The overall number of packages that got accepted was 243.

08 September, 2025 03:35PM by alteholz

Michael Ablassmeier

Vagrant images for trixie

It’s no news that the vagrant license changed a while ago, which resulted in less motivation to maintain it in Debian (understandably).

Unfortunately this means there are currently no official vagrant images for Debian trixie, for reasons.

Of course there are various boxes floating around on HashiCorp’s Vagrant Cloud, but either they do not fit my needs (too big) or I don’t consider them trustworthy enough…

Building the images using the existing toolset is quite straightforward. The required scripts are maintained in the Debian Vagrant images repository.

With a few additional changes applied and following the instructions of the README, you can build the images yourself.

For me, the built images work as expected.

08 September, 2025 12:00AM

September 07, 2025

Reproducible Builds (diffoscope)

diffoscope 306 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 306. This version includes the following changes:

[ Zbigniew Jędrzejewski-Szmek ]
* Fix compatibility with RPM 6.
* Use regular 'open' calls instead of the deprecated 'codecs.open'.
* Accept additional 'v' when calling 'fdtdump --version'.

You can find out more by visiting the project homepage.

07 September, 2025 12:00AM

September 06, 2025

Reproducible Builds

Reproducible Builds in August 2025

Welcome to the August 2025 report from the Reproducible Builds project! These monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Reproducible Builds Summit 2025
  2. Reproducible Builds and live-bootstrap at WHY2025
  3. DALEQ Explainable Equivalence for Java Bytecode
  4. Reproducibility regression identifies issue with AppArmor security policies
  5. Rust toolchain fixes
  6. Distribution work
  7. diffoscope
  8. Website updates
  9. Reproducibility testing framework
  10. Upstream patches

Reproducible Builds Summit 2025

Please join us at the upcoming Reproducible Builds Summit, set to take place from October 28th to 30th 2025 in Vienna, Austria!

We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort.

During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

If you’re interested in joining us this year, please make sure to read the event page, which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!


Reproducible Builds and live-bootstrap at WHY2025

WHY2025 (What Hackers Yearn) is a nonprofit outdoors hacker camp that takes place in Geestmerambacht in the Netherlands (approximately 40km north of Amsterdam). The event is “organised for and by volunteers from the worldwide hacker community, and knowledge sharing, technological advancement, experimentation, connecting with your hacker peers, forging friendships and hacking are at the core of this event”.

At this year’s event, Frans Faase gave a talk on live-bootstrap, an attempt to “provide a reproducible, automatic, complete end-to-end bootstrap from a minimal number of binary seeds to a supported fully functioning operating system”.

Frans’ talk is available to watch on video and his slides are available as well.


DALEQ Explainable Equivalence for Java Bytecode

Jens Dietrich of the Victoria University of Wellington, New Zealand and Behnaz Hassanshahi of Oracle Labs, Australia published an article this month entitled DALEQ — Explainable Equivalence for Java Bytecode which explores the options and difficulties when Java binaries are not identical despite being from the same sources, and what avenues are available for proving equivalence despite the lack of bitwise correlation:

[Java] binaries are often not bitwise identical; however, in most cases, the differences can be attributed to variations in the build environment, and the binaries can still be considered equivalent. Establishing such equivalence, however, is a labor-intensive and error-prone process.

Jens and Behnaz therefore propose a tool called DALEQ, which:

disassembles Java byte code into a relational database, and can normalise this database by applying Datalog rules. Those databases can then be used to infer equivalence between two classes. Notably, equivalence statements are accompanied with Datalog proofs recording the normalisation process. We demonstrate the impact of DALEQ in an industrial context through a large-scale evaluation involving 2,714 pairs of jars, comprising 265,690 class pairs. In this evaluation, DALEQ is compared to two existing bytecode transformation tools. Our findings reveal a significant reduction in the manual effort required to assess non-bitwise equivalent artifacts, which would otherwise demand intensive human inspection. Furthermore, the results show that DALEQ outperforms existing tools by identifying more artifacts rebuilt from the same code as equivalent, even when no behavioral differences are present.

Jens also posted this news to our mailing list.


Reproducibility regression identifies issue with AppArmor security policies

Tails developer intrigeri has tracked and followed a reproducibility regression in the generation of AppArmor policy caches, and has identified an issue with the 4.1.0 version of AppArmor.

Although initially tracked on the Tails issue tracker, intrigeri filed an issue on the upstream bug tracker. AppArmor developer John Johansen replied, confirming that they can reproduce the issue and went to work on a draft patch. Through this, John revealed that it was caused by an actual underlying security bug in AppArmor — that is to say, it resulted in permissions not (always) matching what the policy intends and, crucially, not merely a cache reproducibility issue.

Work on the fix is ongoing at time of writing.


Rust toolchain fixes

Rust Clippy is a linting tool for the Rust programming language. It provides a collection of lints (rules) designed to identify common mistakes, stylistic issues, potential performance problems and unidiomatic code patterns in Rust projects. This month, Sosthène Guédon filed a new issue on GitHub requesting a new check that “would lint against non deterministic operations in proc-macros, such as iterating over a HashMap”.


Distribution work

In Debian this month:

Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 303, 304 and 305 to Debian:

  • Improvements:

    • Use sed(1) backreferences when generating debian/tests/control to avoid duplicating ourselves. []
    • Move from a mono-utils dependency to versioned mono-devel | mono-utils dependency, taking care to maintain the [!riscv64] architecture restriction. []
    • Use sed over awk to avoid mangling dependency lines containing = (equals) symbols such as version restrictions. []
  • Bug fixes:

    • Fix a test after the upload of systemd-ukify version 258~rc3. []
    • Ensure that Java class files are named .class on the filesystem before passing them to javap(1). []
    • Do not run jsondiff on files over 100KiB as the algorithm runs in O(n^2) time. []
    • Don’t check for PyPDF version 3 specifically; check for >= 3. []
  • Misc:

    • Update copyright years. [][]

In addition, Martin Joerg fixed an issue with the HTML presenter to avoid a crash when the page limit is None [] and Zbigniew Jędrzejewski-Szmek fixed compatibility with RPM 6 []. Lastly, John Sirois fixed a missing requests dependency in the trydiffoscope tool. []


Website updates

Once again, there were a number of improvements made to our website this month including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, however, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Run 4 workers on the o4 node again in order to speed up testing. [][][][]
    • Also test trixie-proposed-updates and trixie-updates etc. [][]
    • Gather separate statistics for each tested release. []
    • Support sources from all Debian suites. []
    • Run new code from the prototype database rework branch for the amd64-pull184 pseudo-architecture. [][]
    • Add a number of helpful links. [][][][][][][][][]
    • Temporarily call debrebuild without the --cache argument to experiment with a new version of devscripts. [][][]
    • Update public TODO. []
  • Installation tests:

    • Add comments to explain structure. []
    • Mark more old jobs as old or “dead”. [][][]
    • Turn the maintenance job into a no-op. []
  • Jenkins node maintenance:

    • Increase penalties if the osuosl5 or ionos7 nodes are down. []
    • Stop trying to fix network automatically. []
    • Correctly mark ppc64el architecture nodes when down. []
    • Upgrade the remaining arm64 nodes to Debian trixie in anticipation of the release. [][]
    • Allow higher SSD temperatures on the riscv64 architecture. []
  • Debian-related:

    • Drop the armhf architecture; many thanks to Vagrant for physically hosting the nodes for ten years. [][]
    • Add Debian forky, and archive bullseye. [][][][][][][]
    • Document the filesystem space savings from dropping the armhf architecture. []
    • Exclude i386 and armhf from JSON results. []
    • Update TODOs for when Debian trixie and forky have been released. [][]
  • tests.reproducible-builds.org-related:

  • Misc:

    • Detect errors with openQA erroring out. []
    • Drop the long-disabled openwrt_rebuilder jobs. []
    • Use qa-jenkins-dev@alioth-lists.debian.net as the contact for jenkins.debian.net. []
    • Redirect reproducible-builds.org/vienna25 to reproducible-builds.org/vienna2025. []

    • Disable all OpenWrt reproducible CI jobs, in coordination with the OpenWrt community. [][]
    • Make reproduce.debian.net accessible via IPv6. []
    • Ignore that the megacli RAID controller requires packages from Debian bookworm. []

In addition,

  • James Addison migrated away from the deprecated toplevel deb822 Python module in favour of debian.deb822 in the bin/reproducible_scheduler.py script [] and removed a note on reproduce.debian.net after the release of Debian trixie [].

  • Jochen Sprickerhof made a huge number of improvements to the reproduce.debian.net statistics calculation [][][][][][] as well as to the reproduce.debian.net service more generally [][][][][][][][].

  • Mattia Rizzolo performed a lot of work migrating scripts to SQLAlchemy version 2.0 [][][][][][] in addition to making some changes to the way openSUSE reproducibility tests are handled internally. []

  • Lastly, Roland Clobus updated the Debian Live packages after the release of Debian trixie. [][]


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

06 September, 2025 09:58PM

Antonio Terceiro

autopkgtest support in Debian: a more optimistic view

Yesterday I posted about the history, in numbers, of the support for autopkgtest in the Debian archive. I had analyzed the presence of a Testsuite: field in source packages, from wheezy to trixie, and noticed a slowdown in the growth rate of autopkgtest support, in proportional terms. In each new release, the percentage of packages declaring a test suite grew less than in the previous release, for the last 4 releases.

A night of sleep and a rainy morning later, I come back with a more optimistic view, and present to you the following data, expanded from the raw data:

| Release year | Release  | Yes   | No    | Total | Δ Yes | Δ No  | Δ Total |
|--------------|----------|-------|-------|-------|-------|-------|---------|
| 2013         | wheezy   | 5     | 17170 | 17175 | --    | --    | --      |
| 2015         | jessie   | 1112  | 19484 | 20596 | 1107  | 2314  | 3421    |
| 2017         | stretch  | 5110  | 19735 | 24845 | 3998  | 251   | 4249    |
| 2019         | buster   | 9966  | 18535 | 28501 | 4856  | -1200 | 3656    |
| 2021         | bullseye | 13949 | 16994 | 30943 | 3983  | -1541 | 2442    |
| 2023         | bookworm | 17868 | 16473 | 34341 | 3919  | -521  | 3398    |
| 2025         | trixie   | 21527 | 16143 | 37670 | 3659  | -330  | 3329    |

A few observations:

  • Since stretch, we have been consistently adding autopkgtest support to close to 4,000 packages on each release, on average.
  • Since buster, the number of packages without autopkgtest support has decreased by a few hundred with each release.
  • On average, each release has about 3,400 more packages than the previous one, while also bringing about 4,000 extra packages with autopkgtest support. I have the following hypotheses for this:
    1. a large part of new packages are added already with autopkgtests;
    2. a smaller but reasonably large number of existing packages get autopkgtests added on each release.

All in all, I think this data shows that Debian maintainers recognize the usefulness of automated testing and are engaged in improving our QA process.

06 September, 2025 09:19AM

September 05, 2025

Past halfway there: history of autopkgtest support in Debian

The release of Debian 13 ("Trixie") last month marked another milestone in the effort to provide automated test support for Debian packages in their installed form. We have reached the mark of 57% of the source packages in the archive declaring support for autopkgtest.

| Release  | Packages with tests | Total number of packages | % of packages with tests |
|----------|---------------------|--------------------------|--------------------------|
| wheezy   | 5                   | 17175                    | 0%                       |
| jessie   | 1112                | 20596                    | 5%                       |
| stretch  | 5110                | 24845                    | 20%                      |
| buster   | 9966                | 28501                    | 34%                      |
| bullseye | 13949               | 30943                    | 45%                      |
| bookworm | 17868               | 34341                    | 52%                      |
| trixie   | 21527               | 37670                    | 57%                      |

The code that generated this table is provided at the bottom.

The growth rate has been consistently decreasing at each release after stretch. That probably means that the low hanging fruit -- adding support en masse for large numbers of similar packages, such as team-maintained packages for a given programming language -- has been picked, and from now on the work gets slightly harder. Perhaps there is a significant long tail of packages that will never get autopkgtest support.

Looking for common prefixes among the packages missing a Testsuite: field gives us the largest groups of packages missing autopkgtest support:

$ grep-dctrl -v -F Testsuite --regex -s Package -n . trixie | cut -d - -f 1 | uniq -c | sort -n| tail -20
     50 apertium
     50 kodi
     51 lomiri
     53 maven
     55 libjs
     57 globus
     66 cl
     67 pd
     72 lua
     79 php
     88 puppet
     91 r
    111 gnome
    124 ruby
    140 ocaml
    152 rust
    178 golang
    341 fonts
    557 python
   1072 haskell

There seems to be a fair amount of Haskell and Python. If someone could figure out a way of testing installed fonts in a meaningful way, this would be a good niche where we can cover 300+ packages.

There is another analysis that could be made, which I didn't do: what percentage of the new packages introduced in a given release declare autopkgtest support, compared with the total number of new packages in that release? My data only counts the totals, so we start with the technical debt of almost all of the 17,000 packages with no tests in wheezy, which was the stable release at the time I started Debian CI. How many of those got tests since then?
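For the record, a rough sketch of how that could be computed with the same tooling (reusing the bookworm and trixie Sources files downloaded by the script at the end of this post; I have not actually run this):

# source packages present in trixie but not in bookworm
grep-dctrl -s Package -n -F Package --regex . bookworm | sort -u > bookworm-pkgs
grep-dctrl -s Package -n -F Package --regex . trixie | sort -u > trixie-pkgs
comm -13 bookworm-pkgs trixie-pkgs > new-in-trixie
wc -l < new-in-trixie

# how many of those declare a Testsuite field (slow, but simple)
new_with_tests=0
while read -r pkg; do
  if grep-dctrl -F Package -X "$pkg" trixie | grep -q '^Testsuite:'; then
    new_with_tests=$((new_with_tests + 1))
  fi
done < new-in-trixie
echo "${new_with_tests}"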

Note that not supporting autopkgtest does not mean that a package is not tested at all: it can run build-time tests, which are also useful. Not supporting autopkgtest, though, means that its binaries in the archive can't be automatically tested in their installed form; but then there is an entire horde of volunteers running testing and unstable on a daily basis who test Debian and report bugs.

This is the script that produced the table in the beginning of this post:

#!/bin/sh

set -eu

extract() {
  local release
  local url
  release="$1"
  url="$2"

  if [ ! -f "${release}" ]; then
    rm -f "${release}.gz"
    curl --silent -o ${release}.gz "${url}"
    gunzip "${release}.gz"
  fi

  local with_tests
  local total
  with_tests="$(grep-dctrl -c -F Testsuite --regex . $release)"
  total="$(grep-dctrl -c -F Package --regex . $release)"

  echo "| ${release} | ${with_tests} | ${total} | $((100*with_tests/total))% |"
}

echo "| **Release** | **Packages with tests** | **Total number of packages** | **% of packages with tests** |"
echo "|-------------|-------------------------|------------------------------|------------------------------|"
for release in wheezy jessie stretch buster; do
  extract "${release}" "http://archive.debian.org/debian/dists/${release}/main/source/Sources.gz"
done
for release in bullseye bookworm trixie; do
  extract "${release}" "http://ftp.br.debian.org/debian/dists/${release}/main/source/Sources.gz"
done

05 September, 2025 08:22PM

September 04, 2025

Noah Meyerhans

False Positives

There are times when an email based workflow gets really difficult. One of those times is when discussing projects related to spam and malware detection.

 noahm@debian.org
host stravinsky.debian.org [2001:41b8:202:deb::311:108]
SMTP error from remote mail server after end of data:
550-malware detected: Sanesecurity.Phishing.Fake.30934.1.UNOFFICIAL:
550 message rejected
submit@bugs.debian.org
host stravinsky.debian.org [2001:41b8:202:deb::311:108]
SMTP error from remote mail server after end of data:
550-malware detected: Sanesecurity.Phishing.Fake.30934.1.UNOFFICIAL:
550 message rejected

This was, in fact, a false positive. And now, because reportbug doesn’t record outgoing messages locally, I need to retype the whole thing.

(NB. this is not a complaint about the policies deployed on the Debian mail servers; they’d be negligent if they didn’t implement such policies on today’s internet.)

04 September, 2025 02:53PM by Noah Meyerhans (frodo+blog@morgul.net)

September 03, 2025


Joachim Breitner

F91 in Lean

Back in March, with version 4.17.0, Lean introduced partial_fixpoint, a new way to define recursive functions. I had drafted a blog post for the official Lean FRO blog back then, but forgot about it, and with the Lean FRO blog discontinued, I’ll just publish it here, better late than never.

With the partial_fixpoint mechanism we can model possibly partial functions (so those returning an Option) without an explicit termination proof, and still prove facts about them. See the corresponding section in the reference manual for more details.

On the Lean Zulip, I was asked if we can use this feature to define the McCarthy 91 function and prove it to be total. This function is a well-known tricky case for termination proofs.

First let us have a brief look at why this function is tricky to define in a system like Lean. A naive definition like

def f91 (n : Nat) : Nat :=
  if n > 100
  then n - 10
  else f91 (f91 (n + 11))

does not work; Lean is not able to prove termination of this function by itself.

Even using well-founded recursion with an explicit measure (e.g. termination_by 101 - n) is doomed, because we would have to prove facts about the function’s behaviour (namely that f91 n = f91 101 = 91 for 90 ≤ n ≤ 100) and at the same time use that fact in the termination proof that we have to provide while defining the function. (The Wikipedia page spells out the proof.)

We can make well-founded recursion work if we change the signature and use a subtype on the result to prove the necessary properties while we are defining the function. Lean by Example shows how to do it, but for larger examples this approach can be hard or tedious.

With partial_fixpoint, we can define the function as a partial function without worrying about termination. This requires a change to the function’s signature, returning an Option Nat:

def f91 (n : Nat) : Option Nat :=
  if n > 100
    then pure (n - 10)
    else f91 (n + 11) >>= f91
partial_fixpoint

From the point of view of the logic, Option.none is then used for those inputs for which the function does not terminate.

This function definition is accepted and the function runs fine as compiled code:

#eval f91 42

prints some 91.

The crucial question is now: can we prove anything about f91? In particular, can we prove that this function is actually total?

Since we now have the f91 function defined, we can start proving auxiliary theorems, using whatever induction schemes we need. In particular we can prove that f91 is total and always returns 91 for n ≤ 100:

theorem f91_spec_high (n : Nat) (h : 100 < n) : f91 n = some (n - 10) := by
  unfold f91; simp [*]

theorem f91_spec_low (n : Nat) (h₂ : n ≤ 100) : f91 n = some 91 := by
  unfold f91
  rw [if_neg (by omega)]
  by_cases n < 90
  · rw [f91_spec_low (n + 11) (by omega)]
    simp only [Option.bind_eq_bind, Option.some_bind]
    rw [f91_spec_low 91 (by omega)]
  · rw [f91_spec_high (n + 11) (by omega)]
    simp only [Nat.reduceSubDiff, Option.some_bind]
    by_cases h : n = 100
    · simp [f91, *]
    · exact f91_spec_low (n + 1) (by omega)

theorem f91_spec (n : Nat) : f91 n = some (if n ≤ 100 then 91 else n - 10) := by
  by_cases h100 : n ≤ 100
  · simp [f91_spec_low, *]
  · simp [f91_spec_high, Nat.lt_of_not_le ‹_›, *]

-- Generic totality theorem
theorem f91_total (n : Nat) : (f91 n).isSome := by simp [f91_spec]

(Note that theorem f91_spec_low is itself recursive in a somewhat non-trivial way, but Lean can figure that out all by itself. Use termination_by? if you are curious.)

This is already a solid start! But what if we want a function of type f91! (n : Nat) : Nat, without the Option? We can then derive that from the partial variant, as we have just proved it to be actually total:

def f91! (n : Nat) : Nat  := (f91 n).get (f91_total n)

theorem f91!_spec (n : Nat) : f91! n = if n ≤ 100 then 91 else n - 10 := by
  simp [f91!, f91_spec]

Using partial_fixpoint one can decouple the definition of a function from a termination proof, or even model functions that are not terminating on all inputs. This can be very useful in particular when using Lean for program verification, such as with the aeneas package, where such partial definitions are used to model Rust programs.

03 September, 2025 08:18PM by Joachim Breitner (mail@joachim-breitner.de)

Enrico Zini

CAdES signatures on Debian

CAdES is a digital signature standard that is used, and sometimes mandated, by the Italian Public Administration.

To be able to do my job, I own a Carta Nazionale dei Servizi (CNS) with which I can generate legally binding signatures. Now comes the problem of finding a software to do it.

Infocamere Firma4NG

InfoCamere are distributing a software called Firma4NG, with a Linux option, which, I'm pleased to say, seems to work just fine.

Autofirma

AutoFirma is a Java software for digital signatures distributed by the Spanish government, which has a Linux version.

It is licensed as GPL-2+ | EUPL-1.1, and the source seems to be here.

While my Spanish is decent, I lack the jargon for this specific field, and I didn't manage to make it work with my CNS.

Autogram

Andrej Shadura pointed me to Autogram, a Slovakian software for digital signatures, licensed under the EUPL-1.2.

The interface is still only in Slovak, so I tried it but didn't get very far in making it work.

OpenSSL

In trixie, openssl is almost, but not quite, able to do it. Here's how far I've got.

Install opensc

apt install opensc

Test if you can access the smart card with:

pkcs11-tool --list-objects [-l]

You can find other pkcs11-tool examples here
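For example, the certificate that belongs to the signing key can be pulled off the card and inspected with something like this (the object id 01 matches the key I use for signing below; yours may differ):

pkcs11-tool --read-object --type cert --id 01 -o cert.der

openssl x509 -inform DER -in cert.der -noout -subject -issuer -dates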

Set up a pkcs11 provider for openssl

apt install pkcs11-provider

Edit /etc/ssl/openssl.cnf:

  • In [provider_sect] add pkcs11 = pkcs11_sect
  • In [default_sect], uncomment activate = 1
  • Add this new section:
[pkcs11_sect]
module = /usr/lib/x86_64-linux-gnu/ossl-modules/pkcs11.so
pkcs11-module-path = /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so
default_algorithms = ALL
activate = 1

Test with openssl list -providers

You can check if openssl can see keys on the card:

openssl pkey -in 'pkcs11:id=%01' -pubin -pubout -text

See PKCS11 URI documentation here.

Install the PKCS11 engine for openssl

apt install libengine-pkcs11-openssl

It looks like providers have replaced engines, so this should not be needed, but I couldn't find a way to convince openssl to work without it.

Sign a document

openssl cms -nodetach -binary -cades -outform DER -in filename -out filename.p7m -sign -signer 'pkcs11:id=%01' -keyform engine -engine pkcs11

It verifies correctly using the Austrian verification system.
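openssl can also do a quick local check of the CMS envelope; note that -noverify skips the certificate chain validation that the official validators perform, so this only confirms that the signature matches the content:

openssl cms -verify -noverify -inform DER -in filename.p7m -out filename.out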

All the Italian verification systems I tried, however, complain that, although the signature is valid, the certificate is emitted by an unqualified CA and the certificate revocation information cannot be found.

PAdES

When signing PDF files, the PAdES standard is sometimes accepted.

LibreOffice is able to generate PAdES signatures using the "File / Digital signatures…" menu, and provided the smart card is in the reader it is able to use it. Both LibreOffice and Okular can verify that the signature is indeed there.

However, when trying to validate the signature using Italian validators, I get the same complaints about unqualified CAs and missing revocation information.

Wall of shame

Dike GoSign

Infocert (now Tinexta) used to distribute a software called "Dike GoSign" that worked on Ubuntu, which I used on a completely isolated VM, and it was awful but it worked.

I had to regenerate the VM for it, and discovered that the version they distribute now will refuse to work unless one signs in online with a Tinexta account. This from the same company that asks you to install their own root certificates to use their digital signature system.

Gross.

Dropped.

Aruba Sign

Aruba used to distribute a software called Aruba Sign, which also worked on Ubuntu.

Ubuntu support has been discontinued, and they now only offer support for Windows or Mac.

Yuck. Dropped.

03 September, 2025 03:38PM


Colin Watson

Free software activity in August 2025

About 95% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

Python team

forky is open! As a result I’m starting to think about the upcoming Python 3.14. At some point we’ll doubtless do a full test rebuild, but in advance of that I concluded that one of the most useful things I could do would be to work on our very long list of packages with new upstream versions. Of course there’s no real chance of this ever becoming empty since upstream maintainers aren’t going to stop work for that long, but there are a lot of packages there where we’re quite a long way out of date, and many of those include fixes that we’ll need for 3.14, either directly or by fixing interactions with new versions of other packages that in turn will need to be fixed. We can backport changes when we need to, but more often than not the most efficient way to do things is just to keep up to date.

So, I upgraded these packages to new upstream versions (deep breath):

  • aioftp
  • aiosignal (building on work by IanLucca)
  • audioop-lts
  • celery
  • djangorestframework
  • djoser
  • fpylll
  • frozenlist
  • git-repo-updater
  • ipykernel
  • klepto
  • kombu
  • multipart
  • netmiko (sponsoring work by Eduardo Silva; contributed supporting fix upstream)
  • pathos
  • ppft
  • pydantic
  • pydantic-core
  • pydantic-settings
  • pylsqpack
  • pymssql
  • pytest-mock
  • pytest-pretty
  • pytest-repeat
  • pytest-rerunfailures
  • python-a2wsgi
  • python-apptools (sponsoring work by Kathlyn Lara Murussi)
  • python-asgiref
  • python-asyncssh
  • python-bitarray
  • python-bitstring
  • python-bytecode
  • python-channels-redis
  • python-charset-normalizer
  • python-daphne
  • python-django-analytical
  • python-django-guid
  • python-django-health-check
  • python-django-pgbulk
  • python-django-pgtrigger
  • python-django-postgres-extra
  • python-django-storages
  • python-holidays
  • python-httpx-sse
  • python-icalendar
  • python-lazy-model
  • python-line-profiler
  • python-lz4
  • python-marshmallow-dataclass
  • python-mastodon
  • python-model-bakery
  • python-oauthlib
  • python-parse-type
  • python-pathvalidate
  • python-pgspecial
  • python-processview
  • python-pytest-subtests
  • python-roman
  • python-semantic-release
  • python-testfixtures
  • python-time-machine
  • python-tokenize-rt
  • python-typeguard
  • python-typing-extensions
  • python-urllib3
  • pyupgrade
  • requests (fixing CVE-2024-47081)
  • responses
  • zope.deferredimport
  • zope.schema
  • zope.testrunner

That’s only about 10% of the backlog, but of course others are working on this too. If we can keep this up for a while then it should help.

I packaged pytest-run-parallel, pytest-unmagic (still in NEW), and python-forbiddenfruit (still in NEW), all needed as new dependencies of various other packages.

setuptools upstream will be removing the setup.py install command on 31 October. While this may not trickle down immediately into Debian, it does mean that in the near future nearly all Python packages will have to use pybuild-plugin-pyproject (note that this does not mean that they necessarily have to use pyproject.toml; this is just a question of how the packaging runs the build system). We talked about this a bit at DebConf, and I said that I’d noticed a number of packages where this isn’t straightforward and promised to write up some notes. I wrote the Python/PybuildPluginPyproject wiki page for this; I expect to add more bits and pieces to it as I find them.
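For the easy cases the conversion is small; roughly the following, with the package name and versions purely illustrative (the corner cases are exactly what the wiki page collects):

# debian/control: add pybuild-plugin-pyproject to Build-Depends
# debian/rules:   typically unchanged, e.g. "dh $@ --buildsystem=pybuild"
# then rebuild and compare against the previous upload:
dpkg-buildpackage -us -uc -b
debdiff ../example_1.2-1_amd64.changes ../example_1.2-2_amd64.changes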

On that note, I converted several packages to pybuild-plugin-pyproject:

  • billiard
  • lazr.config
  • python-timeline
  • zope.sqlalchemy
  • zope.testing

I fixed several build/test failures:

I fixed some other bugs:

I reviewed Debian defaults: nftables as banaction and systemd as backend, but it looked as though nothing actually needed to be changed so we closed this with no action.

Rust team

Upgrading Pydantic was complicated, and required a rust-pyo3 transition (which Jelmer Vernooij started and Peter Michael Green has mostly been driving, thankfully), packaging rust-malloc-size-of (including an upstream portability fix), and upgrading several packages to new upstream versions:

  • rust-serde
  • rust-serde-derive
  • rust-serde-json
  • rust-smallvec
  • rust-speedate
  • rust-time
  • rust-time-core
  • rust-time-macros

bugs.debian.org

I fixed bugs.debian.org: misspelled checkbox id “uselessmesages”, as well as a bug that caused incoming emails with certain header contents to go missing.

OpenSSH

I fixed openssh-server: refuses further connections after having handled PerSourceMaxStartups connections with a cherry-pick from upstream.

Other bits and pieces

I upgraded libfido2 to a new upstream version.

I fixed mimalloc: FTBFS on armhf: cc1: error: ‘-mfloat-abi=hard’: selected architecture lacks an FPU, which was blocking changes to pendulum in the Python team. I also spent some time helping to investigate libmimalloc3: Illegal instruction Running mtxrun --generate, though that bug is still open.

I fixed various autopkgtest bugs in gssproxy, prompted by #1007 in Debusine.

Since my old team is decommissioning Bazaar/Breezy code hosting in Launchpad (the end of an era, which I have distinctly mixed feelings about), I converted Storm to git.

03 September, 2025 10:56AM by Colin Watson

Paul Wise

FLOSS Activities August 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

  • Obsolete conffile in zmap

All work was done on a volunteer basis.

03 September, 2025 03:59AM


Ben Hutchings

FOSS activity in August 2025

03 September, 2025 01:14AM by Ben Hutchings

Valhalla's Things

English Paper Piecing, Done Wrong

Posted on September 3, 2025
Tags: madeof:bits

A square mat made of orange, green and grey knit fabric hexagons sewn together.

For quite some time, I have been thinking about trying a bit of patchwork, and English Paper Piecing looked like a technique suited to my tastes, with the handsewing involved and the fact of having a paper pattern of sorts and everything.

The problem is, most of the scraps of fabric I get from my sewing aren’t really suitable for quilting, with a lot of them being either too black and too thick or too white and too thin.

The other side of the same mat, made of orange and green squares.

On the other hand, my partner wears polo shirts at work, and while I try to mend the holes that form, after a while the edges get worn, and they just are no longer suitable for the office, even with some creative mending, and they get downgraded to home wear. But then more office shirts need to be bought, and the home ones accumulate, and there is only so much room for polo shirts in the house, and the worst ones end up in my creative reuse pile.

Some parts are worn out and they will end up as cabbage stuffing for things, but some are still in decent enough conditions and could be used as fabric.

But surely, for English Paper Piecing you’d need woven fabric, not knit, even if it’s the dense piqué used in polo shirts, right? Especially if it’s your first attempt at the technique, right?

The hexagon side of the mat, with my hexagonal pattern weights decorated with Standard Compliant stickers: they fit exactly on the mat pattern.

Well, probably it wouldn’t work with complex shapes, but what about some 5-ish cm tall Standard Compliant bestagon? So I printed out some hexagons on thick paper, printed some bigger hexagons with sewing allowance as a cutting aid, found two shirts in the least me colours I could find (and one in grey because it was the best match for the other two) and decided to sacrifice them for the experiment.

And as long as the paper was still in the pieces, the work went nicely, so I persevered while trying to postpone the Moment of Truth.

The squares side of the mat, with a few random Piecepack pieces: the tiles take almost exactly 2 × 2 squares, and the coins fit inside each square with room to pick them up.

After a while I measured things out and saw that I could squeeze a 6.5 × 7 hexagon pattern into something resembling a square that was a multiple of the 2.5 cm square on the back of my Piecepack tiles, and decided to go for another Standard for the back (because of course I wasn’t going to buy new fabric for lining the work).

I kept the paper in the pieces until both sides were ready, and used it to sew them right sides together, leaving the usual opening in the middle of one side.

Then I pressed, removed the paper, turned everything inside out, pressed again and. It worked!

The hexagon side of the mat, with a set of polyhedral dice.

The hexagons look like hexagons, the squares look like squares, the whole thing feels soft and drapey, but structurally sound. And it’s a bit lumpy, but not enough to cause issues when using it as a soft surface to put over a noisy wooden table to throw dice on.

I considered adding some lightweight batting in the middle, but there was really no need for it, and wondered about how to quilt the piece in a way that worked with the patterns on the two sides, but for something this small it wasn’t really required.

However, I decided to add a buttonhole stitch border on all edges, to close the opening I had left and to reinforce especially the small triangles on the hexagons side, as those had a smaller sewing allowance and could use it.

The squares sides of the mat, with some blue and purple stones  in the starting position for a hnefatafl game.

And of course, the 11 × 11 squares side wasn’t completely an accident, but part of A Plan.

For this project there isn’t really a pattern, but I did publish the files I used to print the paper pieces even if they were pretty trivial.

And there are more polo shirts in that pile, and while they won’t be suitable for anything complex, maybe I could try some rhombs, or even kites and darts?

03 September, 2025 12:00AM

September 02, 2025

Debian Outreach Team

Spaarsh Gsoc Report

GSoC 2025 Report: Enhancing Debian packages with ROCm GPU acceleration

Posted on September 1, 2025 by Spaarsh Thakkar

GitLab Salsa: @Spaarsh

GitHub: Spaarsh

Introduction

I am Spaarsh Thakkar, a final-year Computer Science Engineering undergrad from India. My interests lie in research and systems. My recent work has been in and around Graphics Processing Units and I also hold a keen interest in Computer Networks. At the time of writing, I have been an open-source contributor for almost a year.

Proposal Description (as shown on GSoC Project Profile1)

Due to Debian’s open-source nature, no Debian package in main can have a proprietary GPU package listed as a dependency. While AI and HPC workloads increasingly rely on GPU acceleration, many Debian packages still focus solely on CUDA, which is proprietary.

With the advent of ROCm, an open-source GPU computing platform, we can now integrate full-fledged AMD GPU support into Debian packages. This will improve the experience of developers working in AI/ML and HPC while positioning Debian as a strong OS choice for GPU-driven workloads. The proposal aims to help solve the aforementioned problem by packaging several ROCm packages for Debian and adding ROCm support to some existing Debian packages.

The deliverables are as follows:

  1. New Debian packages with GPU support
  2. Enhanced GPU support within existing Debian packages
  3. More autopkgtests running on the Debian ROCm CI

Key Objectives

Enable ROCm in:

  1. dbcsr
  2. gloo
  3. cp2k

Publish the following packages to the Debian apt archive:

  1. hipblas-common
  2. hipBLASlt

Work Report

1. Publishing hipblas-common to apt

This objective was successfully completed, resulting in hipblas-common being published in the apt repository2.

The process involved the following steps:

  1. Filing an Intent-To-Package (ITP)3
  2. Pulling the upstream source code repository from GitHub
  3. Adding the debian/ packaging files
  4. Testing the package locally
  5. Creating the corresponding project under rocm-team4
  6. Applying the necessary changes
  7. Building the package
  8. Testing it using sbuild
  9. Signing the package files
  10. Uploading the package to the mentors.debian.net archive (now in the official archive)5
  11. Addressing review feedback and making changes
  12. Requesting sponsorship6
  13. Securing sponsorship, which led to the package being accepted into the experimental branch of apt

Since the beginning of GSoC, the package has also been promoted to the unstable branch2.


2. DBCSR ROCm and Multi-Arch Support

During my GSoC project, I worked on extending the DBCSR (Distributed Block Compressed Sparse Row)7 package to improve its ROCm/HIP support and to handle multi-architecture GPU kernels in a way that is practical for both upstream maintainers and Debian package developers.

The code changes can be found at my dbcsr fork here8.

ROCm/HIP Enablement

  • Enabled ROCm backend support in DBCSR, allowing GPU acceleration beyond CUDA via HIP-based builds.
  • Investigated and resolved build issues specific to HIP kernels within DBCSR.

Multi-Architecture GPU Kernel Handling

(The following content was presented in greater detail at DebConf’25 as well. The presentation video can be found here9 and the presentation slides can be found here10.)

  • DBCSR contains GPU kernels that are heavily optimized for specific architectures. By default, these are built for a single target architecture, which poses challenges for packaging where binaries need to support multiple possible GPU targets.
  • Explored different strategies for solving the multi-arch GPU kernel distribution problem, including:

    • Option 1: Fat binaries (embedding multiple GPU architectures into a single binary, with runtime dispatch). This is ideal for end-users but requires deeper changes upstream and is not straightforward with HIP/ROCm.
    • Option 2: Arch-specific libraries (e.g., libdbcsr.gfxXXX.a), where the alternatives system or explicit user selection would determine which one is used. This solves the problem but pushes complexity downstream into packaging and user configuration.
    • Option 3: Prefixed functions inside a single file, where kernels are compiled separately per architecture, functions are renamed with an arch prefix, and runtime logic in DBCSR decides which kernel to invoke. This shifts complexity upstream but could give a clean downstream experience (see the sketch just after this list).
  • I critically analyzed these options in the context of Debian packaging and upstream maintainability. Arch-specific .a files introduce exponential dependency complexity. The prefixed-function approach seemed like a plausible way forward, though it requires upstream buy-in.
  • After consulting with my mentor, these concerns were raised in the dbcsr repository as a discussion here11.
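
To make the prefixed-function idea (Option 3) a bit more concrete, here is a minimal, hypothetical C++ sketch. The function and architecture names are invented for illustration and are not DBCSR’s actual symbols; in a real build the architecture string would come from the GPU runtime (for example via hipGetDeviceProperties) rather than being hard-coded:

  #include <cstdio>
  #include <stdexcept>
  #include <string>

  // In the real scheme, the same kernel source would be compiled once per GPU
  // architecture, with the architecture baked into the symbol name. These are
  // trivial stubs so the sketch compiles stand-alone.
  void multiply_kernel_gfx90a()  { std::puts("running the gfx90a build"); }
  void multiply_kernel_gfx1030() { std::puts("running the gfx1030 build"); }

  // Runtime dispatch: one library, one entry point, and the kernel chosen by
  // the architecture string reported by the GPU runtime.
  void multiply_kernel(const std::string& arch) {
      if (arch.rfind("gfx90a", 0) == 0)       multiply_kernel_gfx90a();
      else if (arch.rfind("gfx1030", 0) == 0) multiply_kernel_gfx1030();
      else throw std::runtime_error("no kernel built for GPU arch " + arch);
  }

  int main() {
      multiply_kernel("gfx90a");  // in practice this would be detected at runtime
  }

The appeal from a packaging point of view is that the per-architecture complexity stays inside a single library, so the Debian package could ship one binary instead of one library per GPU architecture.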

Summary

My work involved:

  • Enabling HIP/ROCm support in DBCSR.

  • Prototyping strategies for handling GPU multi-arch builds.

  • Evaluating the trade-offs between upstream maintainability and downstream packaging complexity.


3. gloo, hipification and source code issues

One of the other packages that were targeted was gloo12. It is a collective communications library and has the implementations of different Machine Learning communication algorithms.

The code changes can be found at my gloo fork here13 (some changes have not been committed at the time of writing).

HIP/ROCm Enablement

  1. Fixing old ROCm CMake functions: The upstream Gloo codebase still used old ROCm CMake functions that began with the hip_ prefix (for example, hip_add_executable). These functions have since been deprecated/removed. I updated the build system to use the modern ROCm CMake equivalents so that the package can build properly in a current ROCm environment.

  2. Debian packaging changes: I modified debian/control to add a new package, libgloo-rocm, in addition to the existing packages. This allows proper separation and handling of ROCm-enabled builds in Debian.

  3. First successful library build: After these changes, I was able to successfully build the library. However, I ran into issues when trying to produce the shared library: there were undefined symbol errors at link time.

Source Code Issue

On investigating the undefined symbol errors, I identified that these came from a lack of explicit template instantiation for some Gloo classes. Since C++ templates only get compiled when explicitly used or instantiated, this resulted in missing symbols in the shared library.
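
As a minimal illustration of the kind of fix required (the class name below is a hypothetical stand-in, not an actual Gloo type): a template whose definition is only compiled inside the library emits no code unless it is explicitly instantiated for the concrete types callers need, so a shared library built without such lines ends up with exactly the missing symbols described above.

  // libexample.cc -- compiled into the shared library (e.g. g++ -c -fPIC libexample.cc)
  #include <cstddef>

  template <typename T>
  struct AllreduceRing {  // hypothetical stand-in for a Gloo algorithm class
      void run(T* data, std::size_t n) {
          for (std::size_t i = 0; i < n; ++i) data[i] += data[i];  // placeholder work
      }
  };

  // Without these explicit instantiations, no AllreduceRing<float> or
  // AllreduceRing<double> code is emitted into the library, and anything
  // linking against it fails with "undefined symbol" errors at link time.
  template struct AllreduceRing<float>;
  template struct AllreduceRing<double>;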

To solve this, I explored the source code and noticed that the HIP backend code was not natively written — it was generated from the CUDA backend using a custom hipification script maintained by the repo (a small before/after example follows the list below).

  • I experimented with modifying the HIPification process itself, trying out hipify-perl14 instead of the repository’s custom Python script.
  • I also tried tweaking the source code in places where template instantiations were missing, so that the ROCm build would correctly export the needed symbols.
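
For readers unfamiliar with hipification, the translation itself is mostly a mechanical renaming of CUDA API calls to their HIP equivalents. A simplified sketch of the kind of output a tool like hipify-perl produces (not taken from Gloo’s sources; error handling omitted):

  // Before (CUDA):
  //   cudaMalloc(&dev, bytes);
  //   cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
  //   cudaFree(dev);
  //
  // After hipification (HIP):
  #include <hip/hip_runtime.h>
  #include <cstddef>

  void upload_and_discard(const float* host, std::size_t bytes) {
      void* dev = nullptr;
      hipMalloc(&dev, bytes);                              // was cudaMalloc
      hipMemcpy(dev, host, bytes, hipMemcpyHostToDevice);  // was cudaMemcpy
      hipFree(dev);                                        // was cudaFree
  }

The hard part, as described above, is not this renaming but the cases the script does not handle, such as the missing explicit template instantiations.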

Summary

The issue is still unresolved. The core problem lies in how the source code is structured: the HIP backend is almost entirely auto-generated from CUDA code, and the process does not handle template instantiations correctly. Because of this, the Debian package for Gloo with ROCm support is not yet ready for release, and further source-level fixes are required to make the ROCm build reliable.


4. cp2k

CP2K15 is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems.

HIP/ROCm Enablement

cp2k depends on dbcsr; hence, HIP/ROCm enablement in this package requires the dbcsr16 package to be ready first.

Even though dbcsr isn’t ready yet, it was worthwhile to plan how cp2k will be built with HIP/ROCm once dbcsr is in place. In doing so, I realized that the per-architecture libraries provided by the dbcsr package would make the build process for cp2k complicated.

No changes have been made to this package yet; more concrete steps will be taken once the dbcsr work is completed.

Summary

The multi-arch build process for cp2k may be complicated by the one-static-library-per-architecture approach used in its dependency, dbcsr.


Auxiliary Work & Activities

While working on the aforementioned GSoC goals, a few other things also got done.

  1. libamdhip64-dev bug file17

    While trying to enable HIP/ROCm in dbcsr, CMakeDetermineHIPCompiler.cmake was unable to find the HIP runtime CMake package. After going through similar issues other developers had faced earlier, I decided to file a bug report against the libamdhip64-dev package.

    After discussions with Cory (my mentor) and trying the changes he suggested under the bug, the issue was resolved.

    It turns out I was using the wrong compiler! gcc was supposed to be used, and I was using hipcc. The bug was closed since it was not due to an issue with the package.

    Cory suggested that I add this info to the ROCm wiki page. It is yet to be done, and hopefully I will get it done soon.

  2. DebConf25 Talk

    After facing the multi-arch build dilemma with dbcsr (and also getting to know about the issues faced by other fellow package developers), I came to realise that this was more than a packaging, build or programming issue. GPU-packaging was facing a policy issue.

    Hence, I decided to cover this problem in greater detail at my DebConf25 Virtual Presentation under the Outreach Session.

    Shoutout to Cory for his support and Lucas Kanashiro for encouraging me to present my work!

  3. Bi-Weekly AMD ROCm Meetings

    Shortly after the coding period started, Cory began the initiative of Bi-Weekly AMD ROCm Meetings18. Being a part of the meetings (I participated in all but one!), seeing the work other folks are doing, and being able to discuss my own problems was a delight.

  4. (Upcoming) IndiaFOSS 2025 Talk

    After understanding the nuances and beauty of the Debian packaging ecosystem over these months, I decided to spread the word about Debian packaging and packaging software in general. My talk19 on the topic got accepted at the upcoming IndiaFOSS 202520 conference!

    I hope this brings more people towards the packaging ecosystem and to the Debian developer ecosystem.

Conclusion

My GSoC time was fantastic! I plan to complete the work that I have started during my GSoC and beyond. Working with Cory21 and Utkarsh22 (a fellow GSoC’25 contributor under Cory) has been a very positive experience.

HIP/ROCm GPU-packaging is in a nascent stage. It is an exciting time to be in this space right now. The problems are new and never encountered before (CPU packaging isn’t architecture specific!). The problems we shall face in the coming time, and our solutions to them, will set a precedent for the future.

References

1 : https://summerofcode.withgoogle.com/programs/2025/projects/9s4jUjV0

2 : https://tracker.debian.org/pkg/hipblas-common

3 : https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1105114

4 : https://salsa.debian.org/rocm-team

5 : https://packages.debian.org/source/sid/hipblas-common

6 : https://lists.debian.org/debian-ai/2025/05/msg00088.html

7 : https://www.cp2k.org/dbcsr

8 : https://salsa.debian.org/Spaarsh/dbcsr/

9 : https://drive.google.com/file/d/14WQuTMcI-L0lbi3zkUc9pT6RGwwVY0j1/view?usp=sharing

10 : https://docs.google.com/presentation/d/1p-nkHPgg5C5jKGy7ySZ8rts5G2vNFQpQJQ8UySOWgVE

11 : https://github.com/cp2k/dbcsr/discussions/933

12 : https://github.com/pytorch/gloo

13 : https://salsa.debian.org/Spaarsh/gloo

14 : https://tracker.debian.org/pkg/hipify

15 : https://www.cp2k.org/

16 : https://tracker.debian.org/pkg/dbcsr

17 : https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1108159

18 : https://lists.debian.org/debian-ai/2025/05/msg00113.html

19 : https://fossunited.org/c/indiafoss/2025/cfp/dpq0b26ece

20 : https://fossunited.org/indiafoss/2025

21 : https://salsa.debian.org/cgmb

22 : https://salsa.debian.org/utk4r-sh

02 September, 2025 01:32PM by Outreach team

hackergotchi for Jonathan Dowland

Jonathan Dowland

Luminal and Lateral

For my birthday I was gifted copies of Eno's last two albums, Luminal and Lateral, both of which are collaborations with Beatie Wolfe.

Luminal and Lateral records in the sunshine

Let's start with the art. I love this semi-minimalist, bold style, and how the LP itself (in their coloured, bio-vinyl variants) feels like it's part of the artwork. I like the way the artist credits mirror each other: Wolfe, Eno for Luminal; Eno, Wolfe for Lateral.

My first "bio vinyl" LP was the Cure's last one, last year. Ahead of it arriving I planned to blog about it, but when it arrived it turned out I had nothing interesting to say. In terms of how it feels, or sounds, it's basically the same as the traditional vinyl formulation.

The attraction of bio-vinyl to well-known environmentalists like Eno (and I guess, the Cure) is the reduced environmental impact due to changing out the petroleum and other ingredients with recycled used cooking oil. You can read more about bio-vinyl if you wish. I try not to be too cynical about things like this; my immediate response is to assume some kind of green-washing PR campaign (I'm currently reading Consumed by Saabira Chaudhuri, an excellent book that is, sadly, only fuelling my cynicism) but I know Eno in particular takes this stuff seriously and has likely done more than a surface-level evaluation. So perhaps every little helps.

On to the music. The first few cuts I heard from the albums earlier in the year didn't inspire me much. Possibly I heard something from Luminal, the vocal album; and I'm generally more drawn to Eno's ambient work. (Lateral is ambient instrumental.) I was not otherwise familiar with Beatie Wolfe. On returning to the albums months later, I found them more compelling. Luminal reminds me a little of Apollo: Atmospheres and Soundtracks. Lateral worked well as space music for PhD-correction sessions.

The pair recently announced a third album, Liminal, to arrive in October, totally throwing off the symmetry of the first two. Two of its tracks are available to stream now in the usual places.

02 September, 2025 11:23AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

September.

September. The kids' summer vacation is over and things are starting to come back to the usual rhythm.

02 September, 2025 12:30AM by Junichi Uekawa

Charles

Making KGB less noisy

This past month I set up KGB to send notifications to #debian-lts when new merge requests are created in the LTS website’s repo, and I learned a couple of cool things. I’ve been trying to document things more so I don’t have to research the same topic months later, hence the blog seemed like a good idea, especially since many Debianites have KGB set up on their favorite IRC channel and this post will go to planet.debian.org.

Selecting What Goes to IRC

Salsa (Debian’s GitLab instance) can generate a lot of events for things that happen on a repository, and a lot of them can be pushed to KGB via webhooks. Generally I prefer a minimal set enabled, otherwise it’s too much clutter on the IRC side, but it’s important to go through each option to see what makes sense or not. From my experience, the following ones are the most useful to have on:

  • Push events
  • Tag push events
  • Comments
  • Issue events
  • Merge request events
  • Pipeline events

Reducing the Noise

For Debian packaging, one may find it useful to add a pattern filter so only the packaging branch updates go to IRC. If you are using DEP-14, that’s pretty easy: “debian/*” will do the job.

Notably, “Job events” are left out. Basically it’s just too much info: you get one alert when a job is scheduled, another when it starts, and another when it completes. Each pipeline has at least a few jobs; multiply by three and you can understand my point.

Besides that, pipelines also generate the same number of events as jobs, so they might be a problem too. Well, KGB comes to the rescue: it allows you to filter pipeline events, because you really only care about the pipeline when it fails ;-) To do just that, add pipeline_only_status=failed.

Another interesting option is limiting the commits shown when the push event has too many of them. One can do that with squash_threshold=3. Remember I want less clutter?! Three commits is my limit here.

Final Result

The final URL for me looks like this (newlines added for clarity):

http://kgb.debian.net:9418/webhook/?channel=debian-<your_preferred_channel>&
                                    network=oftc&
                                    private=1&
                                    use_color=1&
                                    use_irc_notices=1&
                                    squash_threshold=3&
                                    pipeline_only_status=failed

You can see there are more options than the ones I described earlier, well, now it’s your time to go through KGB’s documentation and learn a thing or two ;-)

02 September, 2025 12:18AM

September 01, 2025

hackergotchi for Guido Günther

Guido Günther

Free Software Activities August 2025

Another short status update of what happened on my side last month. Released Phosh 0.49.0 and added some more QoL improvements to the Phosh Mobile stack (e.g. around Cell broadcasts). Also pulled my SHIFT6mq out of the drawer (where it had been sitting far too long) and got it to show a picture after a small driver fix. Thanks to the work the sdm845-mainlining folks are doing, that was all that was needed. If I can get touch to work better, that would be another nice device for demoing Phosh.

See below for details on the above and more:

phosh

  • Allow to auto-start pomodoro timer (MR)
  • Improve mpris player thumbnails (MR)
  • Cellbroadcast fixes (MR)
  • Release (MR)
  • searchd related build system fixes (MR)
  • gchar vs char cleanup (MR)
  • upcoming-events: Add filter icons (MR)
  • Fix missing header dependency (MR)
  • Release 0.49~rc1, 0.49.0
  • Fix some incorrect callback signatures (MR)

phoc

  • Workspace indicators (MR)
  • Don't overwrite picked output (MR)
  • Release 0.49~rc1, 0.49.0
  • Raise nofile rlimit (MR)
  • Fix getting started page title (MR)
  • Update cursor when layer surface moves away from under the cursor (MR)
  • Support cursor-shape-v1 protocol (MR)
  • pointer: Use libinput's LIBINPUT_CONFIG_DRAG_LOCK_ENABLED_STICKY: (MR)

phosh-mobile-settings

  • Cellbroadcast fixes (MR)
  • build: Link statically against libcellbroadcast subproject (MR)
  • Release 0.49~rc1, 0.49.0

stevia (formerly phosh-osk-stub)

  • Release 0.49~rc1, 0.49.0
  • Fix emoji matching on big endian (MR)
  • Fix emoji matching again after switching to GTK's embedded emoji data (MR)
  • Fix scaling when adding new layouts (MR)
  • Improve character popover and other fixes (MR)

xdg-desktop-portal-phosh

pfs

  • Let pressing <enter> save the file (MR)

feedbackd

  • Release 0.8.4
  • Fix important override. (MR)

feedbackd-device-themes

  • Release 0.8.5
  • Lower status LED brightness on sargo (MR)

libcmatrix

  • Track room version (MR)

Chatty

  • Warning fixes (MR)
  • matrix: Show room version (MR)

Debian

Cellbroadcastd

  • Fix daemon systemd target (MR)
  • Ignore case when matching country when looking up channels (MR)
  • Meson dependency fix (MR)

ModemManager

  • Fix two country codes (MR)

gnome-clocks

  • Fix sporadic wakeup due to not disposed timers (MR), also resulting in a vala issue

git-buildpackage

  • Move test data to salsa and fetch it from there: deb, rpm, MR
  • clone: Be less strict on vcs-git URLs (MR)

mobian-recipies

  • Fail early without an ssh key (MR)

Linux

  • Shift6mq: Fix clock frequency of panel driver (MR)
  • Shift6mq: Set chassis type (MR)
  • Shift6mq: Tried to improve the touch driver to increase the sensitivity / sample rate; no success yet.

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh/upcoming-events: Allow to filter out empty days in (MR)
  • phosh: keypad and search bar CSS improvements (MR)
  • p-m-s: Tweaks definition parsing code (MR)
  • p-m-s: osk-shortcuts: UI tweaks (MR)
  • p-m-s: Add gchar check (MR)
  • p-m-s: Make it a search provider (MR)
  • phoc: toplevel-addons (MR)
  • debian: MM stable update (MR)
  • stevia: Use default font (MR)
  • upcoming-events: Use filtered list model (MR)
  • pms: Tweaks rename (MR)
  • pms: Clang build fix (MR)
  • feedbackd: Udev rule for AW86927 (FP5) (MR)
  • xdpm: Allow pure Rust build for use in xdg-d-p-phosh (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 September, 2025 06:05AM

Birger Schacht

Status update, August 2025

Due to the freeze I did not do that many uploads in the last few months, so there were various new releases I packaged once Trixie was released. Regarding the release of Debian 13, Trixie, I wrote a small summary of the changes in my packages.

I uploaded an unreleased version of cage to experimental, to prepare for the transition to wlroots-0.19. Both sway and labwc already had packages in experimental that depended on the new wlroots version. When the transition happened, I uploaded the cage version to unstable, as well as labwc 0.9.1 and sway 1.11.

I updated

  • foot to 1.23.1
  • waybar to 0.14.0
  • swaylock to 1.8.3
  • git-quick-stats to 2.7.0
  • swayimg to 4.5
  • usbguard to 1.1.4
  • fcft to 3.3.2
  • fnott to 1.8.0
  • wdisplays to 1.1.3
  • wev to 1.1.0
  • wlopm to 1.0.0
  • wmenu to 0.2.0
  • libsfdo to 0.1.4

Most of the packages I uploaded using git-debpush; some of them could not be uploaded this way due to upstream using git submodules (this is #1107219). I also created #1112040 (git-debpush: should also say which tag it created) and #1111504 (git-debpush: pristine-tar check warns about pristine-tar data that's not present), which is already fixed.

I uploaded wayback 0.2 to NEW, where it is waiting for review (ITP).

In my day job I extended the place lookup form of apis-core-rdf to allow searching for places and selecting them on a map, using Leaflet and the Nominatim API. Another issue I worked on was about highlighting those inputs of our generic list filter that are used to filter the results. I released a couple of bugfix releases for v0.50, then v0.51 and two bugfix releases, and then v0.52 and another couple of bugfix releases. v0.53 will land in a couple of days. I also released v0.6.2 of apis-highlighter-ng, which is sort of a plugin for apis-core-rdf that allows highlighting parts of a text and linking them to an arbitrary Django object (in our case, relations).

01 September, 2025 05:28AM

Russ Allbery

Review: Regenesis

Review: Regenesis, by C.J. Cherryh

Series: Cyteen #2
Publisher: DAW
Copyright: January 2009
ISBN: 0-7564-0592-0
Format: Mass market
Pages: 682

The main text below is an edited version of my original review of Regenesis written on 2012-12-21. Additional comments from my re-read are after the original review.

Regenesis is a direct sequel to Cyteen, picking up very shortly after the end of that book and featuring all of the same characters. It would be absolutely pointless to read this book without first reading Cyteen; all of the emotional resonance and world-building that make Regenesis work are done there, and you will almost certainly know whether you want to read it after reading the first book. Besides, Cyteen is one of the best SF novels ever written and not the novel to skip.

Because this is such a direct sequel, it's impossible to provide a good description of Regenesis without spoiling at least characters and general plot developments from Cyteen. So stop reading here if you've not yet read the previous book.

I've had this book for a while, and re-read Cyteen in anticipation of reading it, but I've been nervous about it. One of the best parts of Cyteen is that Cherryh didn't belabor the ending, and I wasn't sure what part of the plot could be reasonably extended. Making me more nervous was the back-cover text that framed the novel as an investigation of who actually killed the first Ari, a question that was fairly firmly in the past by the end of Cyteen and that neither I nor the characters had much interest in answering. Cyteen was also a magical blend of sympathetic characters, taut tension, complex plotting, and wonderful catharsis, the sort of lightning in a bottle that can rarely be caught twice.

I need not have worried. If someone had told me that Regenesis was another 700 pages of my favorite section of Cyteen, I would have been dubious. But that's exactly what it is. And the characters only care about Ari's murderer because it comes up, fairly late in the novel, as a clue in another problem.

Ari and Justin are back in the safe laboratory environment of Reseune, safe now that politics are not trying to kill or control them. Yanni has taken over administration. There is a general truce, and even some deeper agreement. Everyone can take a breath and relax, albeit with the presence of Justin's father Jordan as an ongoing irritant. But broader Union politics are not stable: there is an election in progress for the Defense councilor that may break the tenuous majority in favor of Reseune and the Science Directorate, and Yanni is working out a compromise to gain more support by turning a terraforming project loose on a remote world. As the election and the politics heat up, interpersonal relationships abruptly deteriorate, tensions with Jordan sharply worsen, and there may be moles in Reseune's iron-clad security. Navigating the crisis while keeping her chosen family safe will once again tax all of Ari's abilities.

The third section of Cyteen, where Ari finally has the tools to take fate into her own hands and starts playing everyone off against each other, is one of my favorite sections of any book. If it was yours as well, Regenesis is another 700 pages of exactly that. As an extension and revisiting, it does lose a bit of immediacy and surprise from the original. Regenesis is also less concerned with the larger questions of azi society, the nature of thought and personality, loyalty and authority, and the best model for the development of human civilization. It's more of a political thriller. But it's a political thriller that recaptures much of the drama and tension of Cyteen and is full of exceptionally smart and paranoid people thinking through all angles of a problem, working fast on their feet, and successfully navigating tricky and treacherous political landscapes.

And, like Cyteen but unlike others of Cherryh's novels I've read, it's a novel about empowerment, about seizing control of one's surroundings and effectively using all of the capability and leverage at one's fingertips. That gives it a catharsis that's almost as good as Cyteen.

It's also, like its predecessor, a surprisingly authoritarian novel. I think it's in that, more than anything else in these books, that one sees the impact of the azi. Regenesis makes it clear that the story is set, not in a typical society, but inside a sort of corporation, with an essentially hierarchical governance structure. There are other SF novels set within corporations (Solitaire comes to mind), but normally they follow peons or at best mid-level personnel or field agents, or otherwise take the viewpoint of the employees or the exploited. When they follow the corporate leaders, the focus usually isn't down inside the organization, but out into the world, with the corporation as silent resources on which the protagonist can draw.

Regenesis is instead about the leadership. It's about decisions about the future of humanity that characters feel they can make undemocratically (in part because they or their predecessors have effectively engineered the opinions of the democratic population), but it's also about how one manages and secures a top-down organization. Reseune is, as in the previous novel, a paranoid's suspicions come true; everyone is out to get everyone else, or at least might be, and the level of omnipresent security and threat forces a close parsing of alliances and motivations that elevates loyalty to the greatest virtue.

In Cyteen, we had long enough with Ari to see the basic shape of her personality and her slight divergences from her predecessor, but her actions are mostly driven by necessity. Regenesis gives us more of a picture of what she's like when her actions aren't forced, and here I think Cherryh manages a masterpiece of subtle characterization. Ari has diverged substantially from her predecessor without always realizing, and those divergences are firmly grounded in the differences she found or created between her life and the first Ari's. She has friends, confidants, and a community, which combined with past trauma has made her fiercely, powerfully protective. It's that protective instinct that weaves the plot together. So many of the events of Cyteen and Regenesis are driven by people's varying reactions to trauma.

If you, like me, loved the last third of Cyteen, read this, because Regenesis is more of exactly that. Cherryh finds new politics, new challenges, and a new and original plot within the same world and with the same characters, but it has the same feel of maneuvering, analysis, and decisive action. You will, as with Cyteen, have to be comfortable with pages of internal monologue from people thinking through all sides of a problem. If you didn't like that in the previous book, avoid this one; if you loved it, here's the sequel you didn't know you were waiting for.

Original rating: 9 out of 10


Some additional thoughts after re-reading Regenesis in 2025:

Cyteen mostly held up to a re-reading and I had fond memories of Regenesis and hoped that it would as well. Unfortunately, it did not. I think I can see the shape of what I enjoyed the first time I read it, but I apparently was in precisely the right mood for this specific type of political power fantasy.

I did at least say that you have to be comfortable with pages of internal monologue, but on re-reading, there was considerably more of that than I remembered and it was quite repetitive. Ari spends most of the book chasing her tail, going over and around and beside the same theories that she'd already considered and worrying over the nuances of every position. The last time around, I clearly enjoyed that; this time, I found it exhausting and not very well-written. The political maneuvering is not that deep; Ari just shows every minutia of her analysis.

Regenesis also has more about the big questions of how to design a society and the role of the azi than I had remembered, but I'm not sure those discussions reach any satisfying conclusions. The book puts a great deal of effort into trying to convince the reader that Ari is capable of designing sociological structures that will shape Union society for generations to come through, mostly, manipulation of azi programming (deep sets is the term used in the book). I didn't find this entirely convincing the first time around, and I was even less convinced in this re-read. Human societies are a wicked problem, and I don't find Cherryh's computer projections any more convincing than Asimov's psychohistory.

Related, I am surprised, in retrospect, that the authoritarian underpinnings of this book didn't bother me more on my first read. They were blatantly obvious on the second read. This felt like something Cherryh put into these books intentionally, and I think it's left intentionally ambiguous whether the reader is supposed to agree with Ari's goals and decisions, but I was much less in the mood on this re-read to read about Ari making blatantly authoritarian decisions about the future of society simply because she's smart and thinks she, unlike others, is acting ethically. I say this even though I like Ari and mostly enjoyed spending time in her head. But there is a deep fantasy of being able to reprogram society at play here that looks a lot nastier from the perspective of 2025 than apparently it did to me in 2012.

Florian and Catlin are still my favorite characters in the series, though. I find it oddly satisfying to read about truly competent bodyguards, although like all of the azi they sit in an (I think intentionally) disturbing space of ambiguity between androids and human slaves.

The somewhat too frank sexuality from Cyteen is still present in Regenesis, but I found it a bit less off-putting, mostly because everyone is older. The authoritarian bent is stronger, since Regenesis is the story of Ari consolidating power rather than the underdog power struggle of Cyteen, and I had less tolerance for it on this re-read.

The main problem with this book on re-read was that I bogged down about halfway through and found excuses to do other things rather than finish it. On the first read, I was apparently in precisely the right mood to read about Ari building a fortified home for all of her friends; this time, it felt like endless logistics and musings on interior decorating that didn't advance the plot. Similarly, Justin and Grant's slow absorption into Ari's orbit felt like a satisfying slow burn friendship in my previous reading and this time felt touchy and repetitive.

I was one of the few avid defenders of Regenesis the first time I read it, and sadly I've joined the general reaction on a re-read: This is not a very good book. It's too long, chases its own tail a bit too much, introduces a lot more authoritarianism and doesn't question it as directly as I wanted, and gets even deeper into Cherryh's invented pseudo-psychology than Cyteen. I have a high tolerance for the endless discussions of azi deep sets and human flux thinking, and even I got bored this time through.

On re-read, this book was nowhere near as good as I thought it was originally, and I would only recommend it to people who loved Cyteen and who really wanted a continuation of Ari's story, even if it is flabby and not as well-written. I have normally been keeping the rating of my first read of books, but I went back and lowered this one by two points to ensure it didn't show as high on my list of recommendations.

Re-read rating: 6 out of 10

01 September, 2025 04:41AM

Iustin Pop

Small PSA: git.k1024.org turndown

Just a small thing: I’m going to turn down the very simple gitweb interface at https://git.k1024.org/. Way back, I thought I should have a backup for GitHub, but the decentralised Git model makes this not really needed, and gitweb is actually pretty heavy, even if it is really bare-bones.

Practically, as small as that site was, it was fine before the LLM era. Since then, I keep getting lots of traffic, as if these repositories which already exist on GitHub hold critical training information… Thus, I finally got the impetus to turn it down, for no actual loss. Keeping it would make sense only if I were to change it into a proper forge, but that’s a different beast, in which I have no interest (as a public service). So, down it goes.

I’ll probably replace all of it with a single static page, text-only even 😄

Next in terms of simplification will probably be removing series from this blog, since there’s not enough clear separation between tags and series. Or at least, I’m not consistent enough to write a very clean set of articles that can be ordered and numbered as a unit.

01 September, 2025 12:08AM

August 31, 2025

hackergotchi for Otto Kekäläinen

Otto Kekäläinen

Managing procrastination and distractions

I’ve noticed that procrastination and the inability to be consistently productive at work have become quite common in recent years. This is clearly visible in younger people who have grown up with an endless stream of entertainment literally at their fingertips, on their mobile phones. It is, however, a trap one can escape from with a little bit of help.

Procrastination is natural — they say humans are lazy by nature, after all. Probably all of us have had moments when we choose to postpone a task we know we should be working on, and instead spend our time doing secondary tasks (valorisation). A classic example is cleaning your apartment when you should be preparing for an exam. Some may procrastinate by not doing any work at all, and just watching YouTube videos or the like. For some people, typically those who are in their 20s and early in their career, procrastination can be a big challenge, and finding the discipline to stick to planned work may need intentional extra effort, and perhaps even external help.

During my 20+ year career in software development I’ve been blessed to work with engineers of various backgrounds, each with their unique set of strengths. I have also helped many grow in various areas and overcome challenges, such as lack of intrinsic motivation and trouble managing procrastination; some are able to get these in check with some simple advice.

Distance yourself from the digital distractions

The key to avoiding distractions and procrastination is to make it inconvenient enough that you rarely do it. If continuing to do work is easier than switching to procrastination, work is more likely to continue.

Tips to minimize digital distractions, listed in order of importance:

  1. Put your phone away. Just like when you go to a movie and turn off your phone for two hours, you can put the phone away completely when starting to work. Put the phone in a different room to ensure there is enough physical distance between you and the distraction, so it is impossible for you to just take a “quick peek”.
  2. Turn off notifications from apps. Don’t let the apps call you like sirens luring Odysseus. You don’t need to have all the notifications. You will see what the apps have when you eventually open them at a time you choose to use them.
  3. Remove or disable social media apps, games and the like from your phone and your computer. You can install them back when you have vacation. You can probably live without them for some time. If you can’t remove them, explore your phone’s screen time restriction features to limit your own access to apps that most often waste your time. These features are sometimes listed in the phone settings under “digital health”.
  4. Have a separate work computer and work phone. Having dedicated ones just for work that are void of all unnecessary temptations helps keep distance from the devices that could derail your focus.
  5. Listen to music. If you feel your brain needs a dose of dopamine to get you going, listening to music helps satisfy your brain’s cravings while still being able to simultaneously keep working.

Doing a full digital detox is probably not practical, or not sustainable for an extended time. One needs apps to stay in touch with friends and family, and staying current in software development probably requires spending some time reading news online and such. However, the tips above can help contain the distractions and minimize the spontaneous attention the distractions get.

Some of the distractions may ironically come from the work itself, for example Slack notifications or new email notifications. I recommend turning them off for a couple of hours every day to have some distraction-free time. It should be enough to check work mail a couple of times a day. Checking it every hour probably does not add much overall value for the company unless your work is in sales or support, where the main task itself is responding to emails.

Distraction free work environment

Following the same principle of distancing yourself from distractions, try to use a dedicated physical space for working. If you don’t have a spare room to dedicate to work, use a neighborhood café, sign up for a local co-working space, or start commuting to the company office to find a space where you can focus on work.

Break down tasks into smaller steps

Sometimes people postpone tasks because they feel intimidated by the size or complexity of a task. In particular, in software engineering, problems may be vague and appear large until one reaches the breakthrough that brings the vision of how to tackle them. Breaking down problems into smaller, more manageable pieces has many advantages in software engineering. Not only can it help with task-avoidance, but it can also make the problem easier to analyze, suggest solutions and test them, and build a solid foundation to expand upon to ultimately reach a full solution to the entire larger problem.

Working on big problems as a chain of smaller tasks may also offer more opportunities to celebrate success on completing each subtask and help getting in a suitable cadence of solving a single thing, taking a break and then tackling the next issue.

Breaking down a task into concrete steps may also help with getting more realistic time estimations. Sometimes procrastination isn’t real — someone could just be overly ambitious and feel bad about themselves for not doing an unrealistic amount of work.

Intrinsic motivation

Of course, you should follow your passion when possible. Strive to pick a career that you enjoy, and thus maximize the intrinsic motivation you experience. However, even a dream job is still a job. Nobody is ever paid to do whatever they want. Any work will include at least some tasks that feel like a chore or otherwise like something you would not do unless paid to.

Some would say that the definition of work itself is having to do things one would otherwise not do. You can only fully do whatever you want while on vacation or when you choose to not have a job at all. But if you have a job, you simply need to find the intrinsic motivation to do it.

Simply put, some tasks are just unpleasant or boring. Our natural inclination is to avoid them in favor of more enjoyable activities. For these situations we just have to find the discipline to force ourselves to do the tasks and figuratively speaking whip ourselves into being motivated to complete the tasks.

Extrinsic motivation

As the name implies, this is something people external to you need to provide, such as your employer or manager. If you have challenges in managing yourself and delivering results on a regular basis, somebody else needs to set goals and deadlines and keep you accountable for them. At the end of the day this means that eventually you will stop receiving salary or other payments unless you do your job.

Forcing people to do something isn’t nice, but eventually it needs to be done. It would not be fair for an employer to pay those who did their work the same salary as those who procrastinated and fell short on their tasks.

If you work solo, you can also simulate extrinsic motivation by publicly announcing milestones and deadlines to build up pressure for yourself to meet them and avoid public humiliation. It is a well-studied and scientifically proven phenomenon that most university students procrastinate at the start of assignments, and truly start working on them only once the deadline is imminent.

External help for addictions

If procrastination is mainly due to a single distraction that is always on your mind, it may be a sign of an addiction. For example, constantly thinking about a computer game or staying up late playing a computer game, to the extent that it seriously affects your ability to work, may be a symptom of an addiction, and getting out of it may be easier with external help.

Discipline and structure

Most of the time procrastination is not due to an addiction, but simply due to a lack of self-discipline and structure. The good thing is that those things can be learned. It is mostly a matter of getting into new habits, which most young software engineers pick up more or less automatically while working alongside more senior ones.

Hopefully these tips can help you stay on track and ensure you do everything you are expected to do with clear focus, and on time!

31 August, 2025 12:00AM

August 30, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Bruteforcing pwgen passwords

I needed to bruteforce some passwords that I happened to know were generated with the default mode (“pronounceable”) of pwgen, so I spent a fair amount of time writing software to help. It went through a whole lot of iterations and ended up being more efficient than I had ever assumed would be possible (although it's still nowhere near as efficient as it should ideally be). So now I'm sharing it with you. If you have IPv6 and can reach git.sesse.net, that is.

I'm pasting the entire README below. Remember to use it for ethical purposes.

Introduction
============

pwbrute creates all possible pwgen passwords (default tty settings, no -s).
It matches pwgen 2.08. It supports ordering them by most common first.
Note that pwgen before 2.07 also supported a special “non-tty mode”
that was even less secure (no digits, no uppercase) which is not supported here.

To get started, do

   g++ -std=c++20 -O2 -o pwbrute pwbrute.cc -ljemalloc
  ./pwbrute --raw --sort --expand --verbose > passwords.txt

wait for an hour or two and you're left with 276B passwords in order
(about 2.5TB). (You can run without -ljemalloc, but the glibc malloc
makes pwbrute take about 50% more time.)

pwbrute is not a finished, polished product. Do not expect this to be
suitable for inclusion in e.g. a Linux distribution.


A brief exposition of pwgen's security
======================================

pwgen is a program that is fairly widely used in Linux/UNIX systems
to generate “pronounceable” (and thus supposedly easier-to-remember)
passwords. On the surface of it, the default 8-letter passwords with
uppercase letters, lowercase letters and digits would have a password
space of

  62^8 = 218,340,105,584,896 ~= 47.63 bits

This isn't enough to save you from password cracking against fast hashes
(e.g. NTLM), but it's enough for almost everything else.

However, pwgen (without -s) does by design not use this entire space.
It builds passwords from a list of 40 “phonemes” (a, ae, ah, ai, b,
c, ch, ...) in sequence, with some rules of which can come after each
others (e.g. the combination f-g is disallowed, since any consonant
phoneme must be followed by a vowel or sometimes a digit), and sometimes
digits. Furthermore, some phonemes may be uppercased (only first letter,
in case of two-letter phonemes). In all, these restrictions mean that
the number of producable passwords drop to

  307,131,320,668 ~= 38.16 bits

Furthermore, if a password does not contain at least one uppercase letter
and one digit, it is rejected. This doesn't affect that many passwords,
but it's still down to

  276,612,845,450 ~= 38.00 bits
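
As a quick sanity check, the bit values can be recomputed from the password
counts quoted above with a few lines of C++:

  #include <cmath>
  #include <cstdio>

  int main() {
      const double full     = std::pow(62.0, 8);  // naive 62^8 keyspace
      const double produced = 307131320668.0;     // all producible passwords
      const double accepted = 276612845450.0;     // after digit/uppercase rejection
      std::printf("62^8       : %.0f = 2^%.2f\n", full,     std::log2(full));
      std::printf("producible : %.0f = 2^%.2f\n", produced, std::log2(produced));
      std::printf("accepted   : %.0f = 2^%.2f\n", accepted, std::log2(accepted));
  }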

You would believe that this means that to get to a 50% chance of cracking
a password, you'd need to test about ~138 billon passwords; however, the
effective entropy is much, much worse than that:

First, consider that digits are inserted (at valid points) only with
30% probability, and phonemes are uppercased (at valid points) only
with 20% probability. This means that a password like “Ahdaiy7i” is
_much_ more likely than e.g. “EXuL8OhP” (five uppercase letters),
even though both are possible to generate.

Furthermore, when building up the password from left to right, every
letter is not equally likely -- every _phoneme_ is equally likely.
Since at any given point, (e.g.) “ai” is as likely as “a”, a lot fewer
rolls of the dice are required to get to eight letters if the password
contains many diphthongs (two-letter phonemes). This makes them vastly
overrepresented. E.g., the specific password “aechae0A” has three diphthongs
and a probability of about 1 in 12 million of being generated, while
“Oozaey7Y” has only two diphthongs (but an extra capital letter) and a
probability of about 1 in 9.33 _billion_!

In all, this means that to get to 50% probability of cracking a given
pwgen password (assuming you know that it was indeed generated with
pwgen, without -s), you need to test about 405 million passwords.
Note that pwgen gives out a list of passwords and lets the user choose,
which may make this easier or harder; I've had real-world single-password
cracks that fell after only ~400k attempts (~2% probability if the user
has chosen at random, but they most likely picked one that looked more
beautiful to them somehow).

This is all known; I reported the limited keyspace in 2004 (Debian bug
#276976), and Solar Designer reported the poor entropy in CVE-2013-4441.
(I discovered the entropy issues independently from them a couple of
months later, then discovered that it was already known, and didn't
publish.) However, to the best of my knowledge, pwbrute is the first
public program that will actually generate the most likely passwords
efficiently for you.

Needless to say, I cannot recommend using pwgen's phoneme-based
passwords for anything that needs to stay secure. (I will not make
concrete recommendations beyond that; a lot of literature exists
on the subject.)


Speeding up things
==================

Very few users would want the entire set of passwords, given that the
later ones are incredibly unlikely (e.g., AB0AB0AB has a chance of about
2^-52.155, or 1 in 5 quadrillion). To not get all, you can use e.g.
-c -40, which will produce only those with more than approx. 2^-40 probability
before final rejection (roughly ~6B passwords).

(NOTE: Since the calculated probability is before final rejection of those
without a digit or uppercase letter, they will not sum to 1, but something
less; approx. 0.386637 for the default 8-letter passwords, or 2^-1.3709.
Take this into account when reading all text below.)

pwbrute is fast but not super-fast; it can generate about 80M passwords/sec
(~700 MB/sec) to stdout, of course depending on your CPUs. The expansion phase
generally takes nearly all the time; if your cracker could somehow accept the
unexpanded patterns (i.e., without --expand) for free, pwbrute would basically
be infinitely fast. (It would be possible to microoptimize the expansion,
perhaps to 1B passwords/sec/core if pulling out all the stops, but at some point,
it starts becoming a problem related to pipe I/O performance, not candidate
generation.)

Thus, if your cracker is very fast (e.g. hashcat cracking NTLM), it's suboptimal
to try to limit yourself to only pwbrute-created passwords. It's much, much
faster to just create a bunch of legal prefixes and then let hashcat try all
of them, even though this will test some “impossible” passwords.
For instance:

  ./pwbrute --first-stage-len 5 --raw > start5.pwd
  ./hashcat.bin -O -m 1000 ntlm.pwd -w 3 -a 6 start5.pwd -1 '?l?u?d' '?1?1?1'

The “combination” mode in hashcat is also not always ideal; consider using
rules instead.

If you need longer passwords than 8 characters, you may want to split the job
into multiple parts. For this, you can combine --first-stage-len with --prefix
to generate passwords in two stages, e.g. first generate all valid 3-letter
prefixes (“bah” is valid, “bbh” is not) and then for each prefix generate
all possible passwords.  This requires much less RAM, can go in parallel,
and is pretty efficient.

For instance, this will create all passwords up to probability 2^-30,
over 16 cores, in a form that doesn't use too much RAM:

  ./pwbrute -f 3 -r -s -e | parallel -j 16 "./pwbrute -p {} -c -30 -s 2>/dev/null | zstd -6 > up-to-30-{}.pwd.zst"

You can then use the included merge.cc utility to merge the sorted files
into a new sorted one (this requires not using pwbrute --raw, since merge
wants the probabilities to merge correctly):

  g++ -O2 -o merge merge.cc -lzstd
  ./merge up-to-30-*.pwd.zst | pv | pzstd -6 > up-to-30.pwd.zst

merge is fairly fast, but not infinitely so. Sorry.

Beware, zstd uses some decompression buffers that can be pretty big per-file
and there are lots of files, so if you put the limit lower than -30,
consider merging in multiple phases or giving -M to zstd, unless you want to
say hello to the OOM killer half-way into your merge.

As long as you give the --sort option to pwbrute, it is designed to give exactly
the same output in the same order every time (at the expense of a little bit of
speed during the pattern generation phase). This means that you can safely resume
an aborted generation or cracking job using the --skip=NUM flag, without worrying
that you'd lose some candidates.

Here are some estimated numbers for various probability cutoffs, and how much
of the probability space they cover (after correction for rejected passwords):

  p >= 2^-25:           78,000 passwords   (  0.00% coverage,   0.63% probability)
  p >= 2^-26:          171,200 passwords   (  0.00% coverage,   1.12% probability)
  p >= 2^-27:        3,427,100 passwords   (  0.00% coverage,   9.35% probability)
  p >= 2^-28:        5,205,200 passwords   (  0.00% coverage,  12.01% probability)
  p >= 2^-29:        8,588,250 passwords   (  0.00% coverage,  14.17% probability)
  p >= 2^-30:       24,576,550 passwords   (  0.01% coverage,  19.23% probability)
  p >= 2^-31:       75,155,930 passwords   (  0.03% coverage,  27.58% probability)
  p >= 2^-32:      284,778,250 passwords   (  0.10% coverage,  43.81% probability)
  p >= 2^-33:      540,418,450 passwords   (  0.20% coverage,  55.14% probability)
  p >= 2^-34:      808,534,920 passwords   (  0.29% coverage,  60.49% probability)
  p >= 2^-35:    1,363,264,200 passwords   (  0.49% coverage,  66.28% probability)
  p >= 2^-36:    2,534,422,340 passwords   (  0.92% coverage,  72.36% probability)
  p >= 2^-37:    5,663,431,890 passwords   (  2.05% coverage,  80.54% probability)
  p >= 2^-38:   11,178,389,760 passwords   (  4.04% coverage,  87.75% probability)
  p >= 2^-39:   16,747,555,070 passwords   (  6.05% coverage,  91.55% probability)
  p >= 2^-40:   25,139,913,440 passwords   (  9.09% coverage,  94.25% probability)
  p >= 2^-41:   34,801,107,110 passwords   ( 12.58% coverage,  95.91% probability)
  p >= 2^-42:   52,374,739,350 passwords   ( 18.93% coverage,  97.38% probability)
  p >= 2^-43:   78,278,619,550 passwords   ( 28.30% coverage,  98.51% probability)
  p >= 2^-44:  111,967,613,850 passwords   ( 40.48% coverage,  99.25% probability)
  p >= 2^-45:  147,452,759,450 passwords   ( 53.31% coverage,  99.64% probability)
  p >= 2^-46:  186,012,691,450 passwords   ( 67.25% coverage,  99.86% probability)
  p >= 2^-47:  215,059,885,450 passwords   ( 77.75% coverage,  99.94% probability)
  p >= 2^-48:  242,726,285,450 passwords   ( 87.75% coverage,  99.98% probability)
  p >= 2^-49:  257,536,845,450 passwords   ( 93.10% coverage,  99.99% probability)
  p >= 2^-50:  268,815,845,450 passwords   ( 97.18% coverage, 100.00% probability)
  p >= 2^-51:  273,562,845,450 passwords   ( 98.90% coverage, 100.00% probability)
  p >= 2^-52:  275,712,845,450 passwords   ( 99.67% coverage, 100.00% probability)
  p >= 2^-53:  276,512,845,450 passwords   ( 99.96% coverage, 100.00% probability)
         all:  276,612,845,450 passwords   (100.00% coverage, 100.00% probability)


License
=======

pwbrute is Copyright (C) 2025 Steinar H. Gunderson.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

30 August, 2025 09:56AM

Utkarsh Gupta

FOSS Activites in August 2025

Here’s my monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

Debian 13 was released! Woot!

Whilst I didn’t get a chance to do much, here’s still a few things that I worked on:

  • Helped Anshul with Golang 1.25 packaging and upload.
  • Assisted Anshul in fixing Golang bugs in the stable release via a -pu.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did, here’s a quick TL;DR of what I did:

  • Released Questing snapshot 4! \o/
  • Prepared for 25.10 Beta, held weekly release syncs, et al.
  • Granted FFe and triaged a bunch of other bugs from both the Release team and Archive Admin POV. :)
  • Got a recognition award for helping Chlo with Google Guest Agent packages.
  • Preparing for a round of internal review, 360s, and trying not to be sick. :)

Debian (E)LTS

This month I have worked 16.00 hours on Debian Long Term Support (LTS) and 4.50 hours on its sister Extended LTS project and did the following things:

  • [LTS] Prepared the LTS update for wordpress, bumping the package from 5.7.11 to 5.7.13.
    • Prepared an update for stable, too, and pinged Craig. Haven’t heard yet.
    • Got incredibly sick so will carry on the coordination work and release the updates to all the releases. Everything’s mostly ready and tested.
    • Gave Salvatore a quick heads up via IRC.
  • [E/LTS] Frontdesk duty from 28th July to 04th August.
  • [E/LTS] Helped Daniel Leidert by showing him around as he did his first frontdesk rota. Yay! 🎉
    • We paired on an hour long meets call and discussed various toolings and workflows.
    • Pair-reviewed a few CVEs together.
    • Also discussed how to triage newly supported packages for ELTS, too!
  • [LTS] Attended the monthly LTS meeting on Jitsi. Summary here.
    • [ELTS] Raised questions about installing debusine on Ubuntu.
      • Still trying to play around to get a bit more comfortable before starting to do actual uploads there.
  • [LTS] Helping a few folks - like assisting Lee to see if we have a reproducer for CVE-2025-27613 for git, et al.
  • [Stable] Been working on fixing 2 packages:
    • ruby-graphql: The Debian Security team asked to fix that via p-u so prepared a patch update.
    • ruby-saml: The update is finally ready but not tested yet - should be a quick one though.
    • Got incredibly sick and couldn’t move things forward but will take care of the work in the following month.

Until next time.
:wq for today.

30 August, 2025 05:41AM