December 12, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCCTZ 0.2.13 on CRAN: Maintenance

A new release 0.2.13 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times) and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now several other packages (four the last time we counted) include its sources too. Not ideal, but beyond our control.

This version includes mostly routine package maintenance as well as one small contributed code improvement. The changes since the last CRAN release are summarised below.

Changes in version 0.2.13 (2024-12-11)

  • No longer set compilation standard as recent R version set a sufficiently high minimum

  • Qualify a call to cctz::format (Michael Quinn in #44)

  • Routine updates to continuous integration and badges

  • Switch to Authors@R in DESCRIPTION

Courtesy of my CRANberries, there is a diffstat report relative to the previous version. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 December, 2024 04:04AM

December 11, 2024

Divine Attah-Ohiemi

From Sisterly Wisdom to Debian Dreams: My Outreachy Journey

Discovering Open Source: How I Got Introduced
Hey there! I’m Divine Attah-Ohiemi, a sophomore studying Computer Science. My journey into the world of open source was anything but grand. It all started with a simple question to my sister: “How do people get jobs without experience?” Her answer? Open source! I dove into this vibrant community, and it felt like discovering a hidden treasure chest filled with knowledge and opportunities.

Choosing Debian: Why This Community?
Why Debian, you ask? Well, I applied to Outreachy twice, and both times, I chose Debian. It’s not just my first operating system; it feels like home. The Debian community is incredibly welcoming, like a big family gathering where everyone supports each other. Whether I was updating my distro or poring over documentation, the care and consideration in this community were palpable. It reminded me of the warmth of homeschooling with relatives. Plus, knowing that Debian's name comes from its creator Ian and his wife Debra adds a personal touch that makes me feel even more honored to contribute to making the website better!

Why I Applied to Outreachy: What Inspired Me
Outreachy is my golden ticket to the open source world! As a 19-year-old, I see this internship as a unique opportunity to gain invaluable experience while contributing to something meaningful. It’s the perfect platform for me to learn, grow, and connect with like-minded individuals who share my passion for technology and community.

I’m excited for this journey and can’t wait to see where it takes me! 🌟

11 December, 2024 02:25PM by Divine Attah-Ohiemi

December 10, 2024

Stephan Lachnit

Creating a bootable USB stick that can still be used as normal storage

As someone who daily drives rolling release operating systems, having a bootable USB stick with a live install of Debian is essential. It has saved me at least a couple of times, allowing me to revert broken packages and make my device bootable again. However, creating a bootable USB stick usually uses the entire storage of the USB stick, which seems unnecessary given that USB sticks easily have 64GiB or more these days while live ISOs still don’t use more than 8GiB.

In this how-to, I will explain how to create a bootable USB stick that also holds a FAT32 partition, which allows the USB stick to be used as normal storage.

Creating the partition table

The first step is to create the partition table on the new drive. There are several tools to do this; I recommend the ArchWiki page on this topic for details. For best compatibility, MBR should be used instead of the more modern GPT. For simplicity I just went with GParted since it has an easy GUI, but feel free to use any other tool (a command-line sketch with parted follows the notes below). The layout should look like this:

Type  │ Partition │ Suggested size
──────┼───────────┼───────────────
EFI   │ /dev/sda1 │           8GiB
FAT32 │ /dev/sda2 │      remainder

Notes:

  • The disk names are just an example and have to be adjusted for your system.
  • Don’t set disk labels; they don’t appear on the new install anyway, and some UEFIs might not like them on your boot partition.
  • If you used GParted, create the EFI partition as FAT32 and set the esp flag.
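
If you prefer the command line over GParted, a minimal sketch with parted could look like this. The device name is only an example and must be adjusted, double-check that you are wiping the right drive, and note that some parted versions want the boot flag instead of esp on MBR tables:

sudo parted /dev/sda -- mklabel msdos                   # MBR partition table
sudo parted /dev/sda -- mkpart primary fat32 1MiB 8GiB  # boot partition
sudo parted /dev/sda -- mkpart primary fat32 8GiB 100%  # data partition
sudo parted /dev/sda -- set 1 esp on                    # mark partition 1 as the EFI system partition
sudo mkfs.fat -F 32 /dev/sda1
sudo mkfs.fat -F 32 /dev/sda2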

Mounting the ISO and USB stick

The next step is to download your ISO and mount it. I personally use a Debian Live ISO for this. To mount the ISO, use the loop mounting option:

mkdir -p /tmp/mnt/iso
sudo mount -o loop /path/to/image.iso /tmp/mnt/iso

Similarly, mount your USB stick:

mkdir -p /tmp/mnt/usb
sudo mount /dev/sda1 /tmp/mnt/usb

Copy the ISO contents

To copy the contents from the ISO to the USB stick, use this command:

sudo cp -r -L /tmp/mnt/iso/. /tmp/mnt/usb

The -L option is required to dereference the symbolic links present in the ISO, since FAT32 does not support symbolic links. Note that you might get warnings about cyclic symbolic links. Nothing can be done to fix those, except hoping that they won’t cause a problem. For me they never were a problem, though.

Finishing touches

Unmount the ISO and USB stick:

sudo umount /tmp/mnt/usb
sudo umount /tmp/mnt/iso

Now the USB stick should be bootable and have a usable data partition that can be used on essentially every operating system. I recommend testing that the stick is bootable to make sure it actually works in case of an emergency.
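
A rough way to do that test without rebooting is QEMU with UEFI firmware. This is just a sketch; it assumes the ovmf package is installed (providing /usr/share/ovmf/OVMF.fd) and that /dev/sda is still the stick:

sudo qemu-system-x86_64 -m 2G -bios /usr/share/ovmf/OVMF.fd -drive file=/dev/sda,format=raw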

10 December, 2024 02:53PM

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Too many open files in Minikube Pod

Right now I am playing with minikube to run a three-node highly available Kubernetes control plane. I am using the docker driver of minikube, so each Kubernetes node component runs inside a docker container instead of a full-blown VM.

In my experience this works better than Kind, as with Kind you cannot correctly restart a cluster deployed in highly available mode.

This is the topology of the cluster:

$ minikube node list 
minikube        192.168.49.2
minikube-m02    192.168.49.3
minikube-m03    192.168.49.4

Each Kubernetes node is actually a docker container:

$ docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS         PORTS                                                                                                                                  NAMES
977046487e5e   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32812->22/tcp, 127.0.0.1:32811->2376/tcp, 127.0.0.1:32810->5000/tcp, 127.0.0.1:32809->8443/tcp, 127.0.0.1:32808->32443/tcp   minikube-m03
8be3549f0c4c   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32807->22/tcp, 127.0.0.1:32806->2376/tcp, 127.0.0.1:32805->5000/tcp, 127.0.0.1:32804->8443/tcp, 127.0.0.1:32803->32443/tcp   minikube-m02
4b39f1c47c23   gcr.io/k8s-minikube/kicbase:v0.0.45   "/usr/local/bin/entr…"   31 minutes ago   Up 6 minutes   127.0.0.1:32802->22/tcp, 127.0.0.1:32801->2376/tcp, 127.0.0.1:32800->5000/tcp, 127.0.0.1:32799->8443/tcp, 127.0.0.1:32798->32443/tcp   minikube

The full list of pods running in the cluster looks like this:

$ minikube kubectl -- get pods --all-namespaces
kube-system   coredns-6f6b679f8f-85n9l               0/1     Running   1 (44s ago)   3m32s
kube-system   coredns-6f6b679f8f-pnhxv               0/1     Running   1 (44s ago)   3m32s
kube-system   etcd-minikube                          1/1     Running   1 (44s ago)   3m37s
kube-system   etcd-minikube-m02                      1/1     Running   1 (42s ago)   3m21s
...
kube-system   kube-proxy-84gm6                       0/1     Error     1 (31s ago)   3m4s

There is one container in Error status, so let us check its logs:

$ minikube kubectl -- logs -n kube-system kube-proxy-84gm6
E1210 11:50:42.117036       1 run.go:72] "command failed" err="failed complete: too many open files"

This can be fixed by increasing the inotify limits (watches and instances) on the host:

# cat > /etc/sysctl.d/minikube.conf <<EOF
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
EOF
# sysctl --system
...
* Applying /etc/sysctl.d/minikube.conf ...
* Applying /etc/sysctl.conf ...
...
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
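
The minikube node containers share the host’s kernel, so the new limits are immediately visible inside them as well, which is why changing the sysctl on the host is enough. A quick check (assuming the default node container name):

$ docker exec minikube cat /proc/sys/fs/inotify/max_user_watches
524288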

Restart the cluster

$ minikube stop
$ minikube start 
$ minikube kubectl -- get pods --all-namespaces | grep Error
$

10 December, 2024 12:10PM by Manu

Russ Allbery

2024 book haul

I haven't made one of these posts since... last year? Good lord. I've therefore already read and reviewed a lot of these books.

Kemi Ashing-Giwa — The Splinter in the Sky (sff)
Moniquill Blackgoose — To Shape a Dragon's Breath (sff)
Ashley Herring Blake — Delilah Green Doesn't Care (romance)
Ashley Herring Blake — Astrid Parker Doesn't Fail (romance)
Ashley Herring Blake — Iris Kelly Doesn't Date (romance)
Molly J. Bragg — Scatter (sff)
Sarah Rees Brennan — Long Live Evil (sff)
Michelle Browne — And the Stars Will Sing (sff)
Steven Brust — Lyorn (sff)
Miles Cameron — Beyond the Fringe (sff)
Miles Cameron — Deep Black (sff)
Haley Cass — Those Who Wait (romance)
Sylvie Cathrall — A Letter to the Luminous Deep (sff)
Ta-Nehisi Coates — The Message (non-fiction)
Julie E. Czerneda — To Each This World (sff)
Brigid Delaney — Reasons Not to Worry (non-fiction)
Mar Delaney — Moose Madness (sff)
Jerusalem Demsas — On the Housing Crisis (non-fiction)
Michelle Diener — Dark Horse (sff)
Michelle Diener — Dark Deeds (sff)
Michelle Diener — Dark Minds (sff)
Michelle Diener — Dark Matters (sff)
Elaine Gallagher — Unexploded Remnants (sff)
Bethany Jacobs — These Burning Stars (sff)
Bethany Jacobs — On Vicious Worlds (sff)
Micaiah Johnson — Those Beyond the Wall (sff)
T. Kingfisher — Paladin's Faith (sff)
T.J. Klune — Somewhere Beyond the Sea (sff)
Mark Lawrence — The Book That Wouldn't Burn (sff)
Mark Lawrence — The Book That Broke the World (sff)
Mark Lawrence — Overdue (sff)
Mark Lawrence — Returns (sff collection)
Malinda Lo — Last Night at the Telegraph Club (historical)
Jessie Mihalik — Hunt the Stars (sff)
Samantha Mills — The Wings Upon Her Back (sff)
Lyda Morehouse — Welcome to Boy.net (sff)
Cal Newport — Slow Productivity (non-fiction)
Naomi Novik — Buried Deep and Other Stories (sff collection)
Claire O'Dell — The Hound of Justice (sff)
Keanu Reeves & China Miéville — The Book of Elsewhere (sff)
Kit Rocha — Beyond Temptation (sff)
Kit Rocha — Beyond Jealousy (sff)
Kit Rocha — Beyond Solitude (sff)
Kit Rocha — Beyond Addiction (sff)
Kit Rocha — Beyond Possession (sff)
Kit Rocha — Beyond Innocence (sff)
Kit Rocha — Beyond Ruin (sff)
Kit Rocha — Beyond Ecstasy (sff)
Kit Rocha — Beyond Surrender (sff)
Kit Rocha — Consort of Fire (sff)
Geoff Ryman — HIM (sff)
Melissa Scott — Finders (sff)
Rob Wilkins — Terry Pratchett: A Life with Footnotes (non-fiction)
Gabrielle Zevin — Tomorrow, and Tomorrow, and Tomorrow (mainstream)

That's a lot of books, although I think I've already read maybe a third of them? Which is better than I usually do.

10 December, 2024 04:09AM

December 09, 2024

hackergotchi for Gunnar Wolf

Gunnar Wolf

Some tips for those who still administer Drupal7-based sites

A bit of history: Drupal at my workplace (and in Debian)

My main day-to-day responsibility in my workplace is, and has been for 20 years, to take care of the network infrastructure for UNAM’s Economics Research Institute. One of the most visible parts of this responsibility is to ensure we have a working Web presence, and that it caters for the needs of our academic community.

I joined the Institute in January 2005. Back then, our designer pushed static versions of our webpage, completely built on her computer. This was standard practice at the time, and lasted through some redesigns, but I soon started advocating for the adoption of a Content Management System. After evaluating some alternatives, I recommended adopting Drupal. It took us quite a while to make the change: even though I clearly recall starting work toward adopting it as early as 2006, according to the Internet Archive, we switched to a Drupal-backed site around June 2010. We started using it somewhere in version 6’s lifecycle.

As for my Debian work, by late 2012 I started getting involved in the maintenance of the drupal7 package, and by April 2013 I became its primary maintainer. I kept the drupal7 package up to date in Debian until ≈2018; the supported build methods for Drupal 8 are not compatible with Debian (mainly, bundling third-party libraries and updating them without coordination with the rest of the ecosystem), so towards the end of 2016, I announced I would not package Drupal 8 for Debian.

By March 2016, we migrated our main page to Drupal 7. By then, we already had several other sites for our academics’ projects, but my narrative follows our main Web site. I did manage to migrate several Drupal 6 (D6) sites to Drupal 7 (D7); it was quite an involved process, never transparent to the user, and we did face the backlash of long downtimes (or partial downtimes, with sites only half-available) from many of our users. For our main site, we took the opportunity to do a complete redesign and deployed a fully new site.

You might note that March 2016 is after the release of D8 (November 2015). I don’t recall many of the specifics for this decision, but if I’m not mistaken, building the new site was a several-months-long process — not only for the technical work of setting it up, but for the legwork of getting all of the needed information from the different areas that need to be represented in the Institute. Not only that: Drupal sites often include tens of contributed themes and modules; the technological shift the project underwent between its 7 and 8 releases was so deep that modules took a long time to become available for the new release, if they became available at all — many themes and modules were outright dumped.

Naturally, the Drupal Foundation wanted to evolve and deprecate the old codebase. But the pain of migrating from D7 to D8 is too big, and many sites have remained under version 7 — eight years after D8’s release, almost 40% of Drupal installs are for version 7, and a similar proportion runs a currently-supported release (10 or 11). And while the Drupal Foundation did a great job at providing very-long-term support for D7, I understand the burden is becoming too much, so close to a year ago (and after pushing back D7’s end-of-life date several times), they finally announced that support will end this upcoming January 5.

Drupal 7 must go!

I found the following usage graphs quite interesting: the usage statistics for all Drupal versions follow a very positive slope, peaking around 2014 during the best years of D7, and somewhat stagnating afterwards, staying since 2015 at the 25000–28000 sites mark (I’m very tempted to copy the graphs, but builtwith’s terms of use are very clear in not allowing it). There is a sharp drop in the last year — I attribute it to the people that are leaving D7 for other technologies after its end-of-life announcement. This becomes clearer looking only at D7’s usage statistics: D7 peaks at ≈15000 installs in 2016, stays there for close to 5 years, and then drops sharply to under 7500 sites in the span of one year.

D8 has a more “regular” rise, peak and fall, peaking at ~8500 between 2020 and 2021, and is down to close to 2500 for some months already; D9 has a very brief peak of almost 9000 sites in 2023 and is now close to half of that. Currently, the Drupal king appears to be D10, still on a positive slope and with over 9000 sites. Drupal 11 is still just a blip in builtwith’s radar, with… 3 registered sites as of September 2024 :-Þ

After writing this last paragraph, I came across the statistics found on the Drupal webpage; the methodology for acquiring its data is completely different: while builtwith’s methodology is their trade secret, you can read more about how Drupal’s data is gathered (and agree or disagree with it 😉), but at least you have a page detailing 12 years so far of reported data, producing the following graph (which can be shared under the CC BY-SA license 😃):

Drupal usage statistics by version 2013–2024

This graph is disaggregated into minor versions, and I don’t want to come up with yet another graph for it 😉 but it supports (most of) the narrative I presented above… although I do miss the recent drop builtwith reported in D7’s numbers!

And what about Backdrop?

During the D8 release cycle, a group of Drupal developers were not happy with the depth of the architectural changes that were being adopted, particularly the transition to the Symfony PHP component framework, and forked the D7 codebase to create the Backdrop CMS, a modern version of Drupal that keeps the known and tested architecture it had. The Backdrop developers keep working closely with the Drupal community, and although its usage numbers are way smaller than Drupal’s, it seems to be sustainable and lively. Of course, as with the numbers I presented in the previous section, you can see in builtwith that Backdrop’s numbers are way, way lower.

I have found it to be a very warm and welcoming community, eager to receive new members. And, thanks to its contributed D2B Migrate module, I found it is quite easy to migrate a live site from Drupal 7 to Backdrop.

Migration by playbook!

So… Well, I’m an academic. And (if it’s not obvious to you after reading so far 😉), one of the things I must do in my job is to write. So I decided to write an article for Cuadernos Técnicos Universitarios de la DGTIC, a young journal in our university for showcasing technical academic work, inviting my colleagues to consider Backdrop for their D7 sites. And now that my article got accepted and published, I’m happy to share it with you — of course, if you can read Spanish 😉 But anyway…

Given that I have several sites to migrate, and that I’m trying to get my colleagues to follow suit, I decided to automate the migration by writing an Ansible playbook to do the heavy lifting. Of course, the playbook’s users will probably need to tweak it a bit to their personal needs. I’m also far from an Ansible expert, so I’m sure there is ample room for improvement in my style.

But it works. Quite well, I must add.

But with this size of database…

I did stumble across a big pebble, though. I am working on the migration of one of my users’ sites, and found that its database is… huge. I checked the mysqldump output, and it came to close to 3GB of data. And given that D2B_migrate is meant to work via a Web interface (my playbook works around it by using a client I wrote with Perl’s WWW::Mechanize), I repeatedly stumbled over PHP’s maximum POST size, maximum upload size, maximum memory size…

I asked for help in Backdrop’s Zulip chat site, and my attention was steered away from fixing PHP towards something more obvious: Why is the database so large? So I took a quick look at the database (or rather: my first look was at the database server’s filesystem usage). MariaDB stores each table as a separate file on disk (at least with the default innodb_file_per_table setting), so I looked for the nine largest tables:

# ls -lhS|head
total 3.8G
-rw-rw---- 1 mysql mysql 2.4G Dec 10 12:09 accesslog.ibd
-rw-rw---- 1 mysql mysql 224M Dec  2 16:43 search_index.ibd
-rw-rw---- 1 mysql mysql 220M Dec 10 12:09 watchdog.ibd
-rw-rw---- 1 mysql mysql 148M Dec  6 14:45 cache_field.ibd
-rw-rw---- 1 mysql mysql  92M Dec  9 05:08 aggregator_item.ibd
-rw-rw---- 1 mysql mysql  80M Dec 10 12:15 cache_path.ibd
-rw-rw---- 1 mysql mysql  72M Dec  2 16:39 search_dataset.ibd
-rw-rw---- 1 mysql mysql  68M Dec  2 13:16 field_revision_field_idea_principal_articulo.ibd
-rw-rw---- 1 mysql mysql  60M Dec  9 13:19 cache_menu.ibd

A single table, the access log, is over 2.4GB long. The next largest tables hold search index, log and cache data; I can perfectly live without their contents in our new site! But I don’t want to touch the slightest bit of this site until I’m satisfied with the migration process, so I found a way to exclude those tables in a non-destructive way: given that D2B_migrate works with a mysqldump output, and given that said output locks each table before inserting its data and unlocks it once done, I can just do the following:

$ perl -e '$output = 1; while (<>) { $output=0 if /^LOCK TABLES `(accesslog|search_index|watchdog|cache_field|cache_path)`/; $output=1 if /^UNLOCK TABLES/; print if $output}' < /tmp/d7_backup.sql  > /tmp/d7_backup.eviscerated.sql; ls -hl /tmp/d7_backup.sql /tmp/d7_backup.eviscerated.sql
-rw-rw-r-- 1 gwolf gwolf 216M Dec 10 12:22 /tmp/d7_backup.eviscerated.sql
-rw------- 1 gwolf gwolf 2.1G Dec  6 18:14 /tmp/d7_backup.sql

Five seconds later, I’m done! The database is now a tenth of its size, and D2B_migrate is happy to take it. And I’m a big step closer to ending my reliance on (this bit of) legacy code for my highly-visible sites 😃
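
As an aside: if regenerating the dump is an option, mysqldump’s --ignore-table flag reaches a similar result at the source. Unlike the filter above, it skips the table definitions entirely, so the (empty) tables would have to be recreated on the target; database and table names below are only illustrative:

$ mysqldump --ignore-table=drupal7.accesslog --ignore-table=drupal7.search_index --ignore-table=drupal7.watchdog --ignore-table=drupal7.cache_field --ignore-table=drupal7.cache_path drupal7 > /tmp/d7_backup.slim.sql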

09 December, 2024 07:55PM

Thorsten Alteholz

My Debian Activities in November 2024

Debian LTS

This was my hundred-twenty-fifth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 3968-1] netatalk security update to fix four CVEs related to heap buffer overflow and writing arbitrary files. The patches have been prepared by the maintainer.
  • [DLA 3976-1] tgt update to fix one CVE related to not using a proper seed for rand()
  • [DLA 3977-1] xfpt update to fix one CVE related to a stack-based buffer overflow
  • [DLA 3978-1] editorconfig-core update to fix two CVEs related to buffer overflows.

I also continued to work on a fix for glewlwyd, which is more difficult than expected. Besides that, I started to work on ffmpeg and haproxy.

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-sixth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1259-1] editorconfig-core security update for two CVEs in Buster to fix buffer overflows.

I also started to work on a fix for kmail-account-wizard. Unfortunately preparing a testing environment takes some time and I did not finish testing this month. Besides that, I started to work on ffmpeg and haproxy.

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Matomo

Unfortunately I didn’t find any time to work on this topic.

Debian Astro

This month I uploaded new packages or new upstream or bugfix versions of:

I also sponsored an upload of calceph.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:

I also did NMUs of opensta, kdrill, glosstex, irsim, pagetools, afnix and cpm to fix some RC bugs.

FTP master

This month I accepted 266 and rejected 16 packages. The overall number of packages that got accepted was 269.

09 December, 2024 06:51PM by alteholz

Paul Wise

FLOSS Activities November 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Communication

  • Respond to queries from Debian users and contributors on IRC

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

09 December, 2024 02:07AM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: OpenMPI transitions, cPython 3.12.7+ update uploads, Python 3.13 Transition, and more! (by Anupa Ann Joseph, Stefano Rivera)

Debian Contributions: 2024-11

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Transition management, by Emilio Pozuelo Monfort

Emilio has been helping finish the mpi-defaults switch to mpich on 32-bit architectures, and the openmpi transitions. This involves filing bugs for the reverse dependencies, doing NMUs, and requesting removals for outdated (Not Built from Source) binaries on 32-bit architectures where openmpi is no longer available. Those transitions got entangled with a few others, such as the petsc stack, and were blocking many packages from migrating to testing. These transitions were completed in early December.

cPython 3.12.7+ update uploads, by Stefano Rivera

Python 3.12 had failed to build on mips64el, due to an obscure dh_strip failure. The mips64el porters never figured it out, but the missing build on mips64el was blocking migration to Debian testing. After waiting a month, enough changes had accumulated in the upstream 3.12 maintenance git branch that we could apply them in the hope of changing the output enough to avoid breaking dh_strip. This worked.

Of course there were other things to deal with too. A test started failing due to a Debian-specific patch we carry for python3.x-minimal, and it needed to be reworked. And Stefano forgot to strip the trailing + from PY_VERSION, which confuses some python libraries. This always requires another patch when applying git updates from the maintenance branch. Stefano added a build-time check to catch this mistake in the future. Python 3.12.7 migrated.

Python 3.13 Transition, by Stefano Rivera and Colin Watson

During November the Python 3.13-add transition started. This is the first stage of supporting a new version of Python in the Debian archive (after preparatory work), adding it as a new supported but non-default version. All packages with compiled Python extensions need to be re-built to add support for the new version.

We have covered the lead-up to this transition in the past. Due to preparation, many of the failures we hit were expected and we had patches waiting in the bug tracker. These could be NMUed to get the transition moving. Others had been known about but hadn’t been worked on, yet.

Some other packages ran into new issues, as we got further into the transition than we’d been able to in preparation. The whole Debian Python team has been helping with this work.

The rebuild stage of the 3.13-add transition is now over, but many packages need work before britney will let python3-defaults migrate to testing.

Limiting build concurrency based on available RAM, by Helmut Grohne

In recent years, the concurrency of CPUs has been increasing, as has the demand for RAM by linkers. What has not been increasing as quickly is the RAM supply in typical machines. As a result, we more frequently run into situations where package builds exhaust memory when building at full concurrency. Helmut initiated a discussion about generalizing an approach to this in Debian packages, researching existing code that limits concurrency as well as possible extensions to debhelper and dpkg to provide concurrency limits based on available system RAM. Thus far there is consensus on the need for a more general solution, but ideas are still being collected for the precise solution.
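
For context, the knob that exists today is fixed rather than RAM-aware: a builder can already cap parallelism by hand for a memory-hungry package, for example:

DEB_BUILD_OPTIONS="parallel=2" dpkg-buildpackage -us -uc

The discussion is about deriving such a limit automatically from the available memory instead of hard-coding it per build.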

MiniDebConf Toulouse at Capitole du Libre

The whole Freexian Collaborator team attended MiniDebConf Toulouse, part of the Capitole du Libre event. Several members of the team gave talks:

Stefano and Anupa worked as part of the video team, streaming and recording the event’s talks.

Miscellaneous contributions

  • Stefano looked into packaging the latest upstream python-falcon version in Debian, in support of the Python 3.13 transition. This appeared to break python-hug, which is sadly looking neglected upstream, and the best course of action is probably its removal from Debian.

  • Stefano uploaded videos from various 2024 Debian events to PeerTube and YouTube.

  • Stefano and Santiago visited the site for DebConf 2025 in Brest, after the MiniDebConf in Toulouse, to meet with the local team and scout out the venue.

    The on-going DebConf 25 organization work of last month also included handling the logo and artwork call for proposals.

  • Stefano helped the press team to edit a post for bits.debian.org on OpenStreetMap’s migration to Debian.

  • Carles implemented multiple language support on po-debconf-manager and tested it using Portuguese-Brazilian during MiniDebConf Toulouse. The system was also tested and improved by reviewing more than 20 translations to Catalan, creating merge requests for those packages, and providing user support to new users. Additionally, Carles implemented better status transitions, configuration keys management and other small improvements.

  • Helmut sent 32 patches for cross build failures. The wireplumber one was an interactive collaboration with Dylan Aïssi.

  • Helmut continued to monitor the /usr-move, sent a patch for lib64readline8 and continued several older patch conversations. lintian now reports some aliasing issues in unstable.

  • Helmut initiated a discussion on the semantics of *-for-host packages. More feedback is welcome.

  • Helmut improved the crossqa.debian.net infrastructure to fail running lintian less often in larger packages.

  • Helmut continued maintaining rebootstrap mostly dropping applied patches and continuing discussions of submitted patches.

  • Helmut prepared a non-maintainer upload of gzip for several long-standing bugs.

  • Colin came up with a plan for resolving the multipart vs. python-multipart name conflict, and began work on converting reverse-dependencies.

  • Colin upgraded 42 Python packages to new upstream versions. Some were complex: python-catalogue had some upstream version confusion, pydantic and rpds-py involved several Rust package upgrades as prerequisites, and python-urllib3 involved first packaging python-quart-trio and then vendoring an unpackaged test-dependency.

  • Colin contributed Incus support to needrestart upstream.

  • Lucas set up a machine to do a rebuild of all ruby reverse dependencies to check what will be broken by adding ruby 3.3 as an alternative interpreter. The tool used for this is mass-rebuild and the initial rebuilds have already started. The ruby interpreter maintainers are planning to experiment with debusine next time.

  • Lucas is organizing a Debian Ruby sprint towards the end of January in Paris. The plan of the team is to finish any missing bits of Ruby 3.3 transition at the time, try to push Rails 7 transition and fix RC bugs affecting the ruby ecosystem in Debian.

  • Anupa attended a Debian Publicity team meeting in-person during MiniDebCamp Toulouse.

  • Anupa moderated and posted in the Debian Administrator group in LinkedIn.

09 December, 2024 12:00AM by Anupa Ann Joseph, Stefano Rivera

December 08, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

pinp 0.0.11 on CRAN: Maintenance

A new version of our pinp package arrived on CRAN today, and is the first release in four years. The pinp package allows for snazzier one or two column Markdown-based pdf vignettes, and is now used by a few packages. A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.

pinp vignette

This release contains no new features or new user-facing changes but reflects the standard package and repository maintenance over the four-year window since the last release: updating of actions, updating of URLs and addressing small packaging changes spotted by ever-more-vigilant R checking code.

The NEWS entry for this release follows.

Changes in pinp version 0.0.11 (2024-12-08)

  • Standard package maintenance for continuous integration, URL updates, and packaging conventions

  • Correct two minor nags in the Rd file

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 December, 2024 09:18PM

hackergotchi for Paulo Henrique de Lima Santana

Paulo Henrique de Lima Santana

Bits from MiniDebConf Toulouse 2024

Intro

I always find it amazing the opportunities I have thanks to my contributions to the Debian Project. I am happy to receive this recognition through the help I receive with travel to attend events in other countries.

This year, two MiniDebConfs were scheduled for the second half of the year in Europe: the traditional edition in Cambridge in the UK, and a new edition in Toulouse in France. After weighing the difficulties and advantages of attending each of them, I decided to choose Toulouse, mainly because it was cheaper and because it was in November, giving me more time to plan the trip. I contacted the current DPL Andreas Tille explaining my desire to attend the event and he kindly approved my request for Debian to pay for the tickets. Thanks again to Andreas!

MiniDebConf Toulouse 2024 was held on November 16th and 17th (Saturday and Sunday) and took place in one of the rooms of Capitole du Libre, a traditional Free Software event in the city. Before MiniDebConf, the team organized a MiniDebCamp on November 14th and 15th at a coworking space.

The whole experience promised to be incredible, and it was! From visiting a city in France for the first time, to attending a local Free Software event, and sharing four days with people from the Debian community from various countries.

Travel and the city

My plan was to leave Belo Horizonte on Monday, pass through São Paulo, and arrive in Toulouse on Tuesday night. I was going to spend the whole of Wednesday walking around the city and then participate in the MiniDebCamp on Thursday.

But the flight that was supposed to leave São Paulo in the early hours of Monday to Tuesday was cancelled due to a problem with the airplane, and I spent all of Tuesday waiting. I was rebooked on another flight that left in the evening and arrived in Toulouse on Wednesday afternoon. Even though I was very tired from the trip, I still took advantage of the end of the day to walk around the city. But it was a shame to have lost an entire day of sightseeing.

On Thursday I left early in the morning to walk around a little more before going to the MiniDebCamp venue. I walked around a lot and saw several tourist attractions. The city is really very beautiful, as they say, especially the houses and buildings made of pink bricks. I was impressed by the narrow and winding streets; at one point it seemed like I was walking through a maze. I would arrive at a corner and find 5 streets crossing in different directions.

The riverbank that runs through the city is very beautiful and people spend their evenings there just hanging out. There was a lot of history around there.

I stayed in an Airbnb a 25-minute walk from the coworking space and only 10 minutes from the event venue. It was a very spacious apartment that was much cheaper than a hotel.

MiniDebConf Toulouse 2024

MiniDebConf Toulouse 2024

MiniDebCamp

I arrived at the coworking space where the MiniDebCamp was being held and met up with several friends. I also met some new people, talked about the translation work we do in Brazil, and other topics.

We already knew that the organization would pay for lunch for everyone during the two days of MiniDebCamp, and at a certain point they told us that we could go to the room (which was downstairs from the coworking space) to have lunch. They set up a table with quiches, breads, charcuterie and LOTS of cheese :-) There were several types of cheese and they were all very good. I just found it a little strange because I’m not used to having cheese for lunch :-)

MiniDebConf Toulouse 2024

MiniDebConf Toulouse 2024

In the evening, we went as a group to dinner at a restaurant in front of the Capitolium, the city’s main tourist attraction.

On the second day, in the morning, I walked around the city a bit more, then went to the coworking space and had another incredible cheese table for lunch.

Video Team

One of my ideas for going to Toulouse was to be able to help the video team in setting up the equipment for broadcasting and recording the talks. I wanted to follow this work from the beginning and learn some details, something I can’t do before the DebConfs because I always arrive after the people have already set up the infrastructure. And later reproduce this work in the MiniDebConfs in Brazil, such as the one in Maceió that is already scheduled for May 1-4, 2025.

As I had agreed with the people from the video team that I would help set up the equipment, on Friday night we went to the University and stayed in the room working. I asked several questions about what they were doing, about the equipment, and I was able to clear up several doubts. Over the next two days I was handling one of the cameras during the talks. And on Sunday night I helped put everything away.

Thanks to olasd, tumbleweed and ivodd for their guidance and patience.

MiniDebConf Toulouse 2024

The event in general

There was also a meeting with some members of the publicity team who were there with the DPL. We went to a cafeteria and talked mainly about areas that could be improved in the team.

The talks at MiniDebConf were very good and the recordings are also available here.

I ended up not watching any of the talks from the general schedule at Capitole du Libre because they were in French. It’s always great to see free software events abroad to learn how they are done there and to bring some of those experiences to our events.

I hope that MiniDebConf in Toulouse will continue to take place every year, or that the French community will hold the next edition in another city and I will be able to join again :-) If everything goes well, in July next year I will return to France to join DebConf25 in Brest.

MiniDebConf Toulouse 2024

More photos

08 December, 2024 10:00AM

Russ Allbery

Review: Why Buildings Fall Down

Review: Why Buildings Fall Down, by Matthys Levy & Mario Salvadori

Illustrator: Kevin Woest
Publisher: W.W. Norton
Copyright: 1992
Printing: 1994
ISBN: 0-393-31152-X
Format: Trade paperback
Pages: 314

Why Buildings Fall Down is a non-fiction survey of the causes of structure collapses, along with some related topics. It is a sequel of sorts to Why Buildings Stand Up by Mario Salvadori, which I have not read. Salvadori was, at the time of writing, Professor Emeritus of Architecture at Columbia University (he died in 1997). Levy is an award-winning architectural engineer, and both authors were principals at the structural engineering firm Weidlinger Associates. There is a revised and updated 2002 edition, but this review is of the original 1992 edition.

This is one of those reviews that comes with a small snapshot of how my brain works. I got fascinated by the analysis of the collapse of Champlain Towers South in Surfside, Florida in 2021, thanks largely to a random YouTube series on the tiny channel of a structural engineer. Somewhere in there (I don't remember where, possibly from that channel, possibly not) I saw a recommendation for this book and grabbed a used copy in 2022 with the intent of reading it while my interest was piqued. The book arrived, I didn't read it right away, I got distracted by other things, and it migrated to my shelves and sat there until I picked it up on an "I haven't read nonfiction in a while" whim.

Two years is a pretty short time frame for a book to sit on my shelf waiting for me to notice it again. The number of books that have been doing that for several decades is, uh, not small.

Why Buildings Fall Down is a non-technical survey of structure failures. These are mostly buildings, but also include dams, bridges, and other structures. It's divided into 18 fairly short chapters, and the discussion of each disaster is brisk and to the point. Most of the structures discussed are relatively recent, but the authors talk about the Meidum Pyramid, the Parthenon (in the chapter on intentional destruction by humans), and the Pavia Civic Tower (in the chapter about building death from old age). If you are someone who has already been down the structural failure rabbit hole, you will find chapters on the expected disasters like the Tacoma Narrows Bridge collapse and the Hyatt Regency walkway collapse, but there are a lot of incidents here, including a short but interesting discussion of the Leaning Tower of Pisa in the chapter on problems caused by soil properties.

What you're going to get, in other words, is a tour of ways in which structures can fail, which is precisely what was promised by the title. This wasn't quite what I was expecting, but now I'm not sure why I was expecting something different. There is no real unifying theme here; sometimes the failure was an oversight, sometimes it was a bad design, sometimes it was a last-minute change, and sometimes it was something unanticipated. There are a lot of factors involved in structure design and any of them can fail. The closest there is to a common pattern is a lack of redundancy and sufficient safety factors, but that lack of redundancy was generally not deliberate and therefore this is not a guide to preventing a collapse. The result is a book that feels a bit like a grab-bag of structural trivia that is individually interesting but only occasionally memorable.

The writing style I suspect will be a matter of taste, but once I got used to it, I rather enjoyed it. In a co-written book, it's hard to separate the voices of the authors, but Salvadori wrote most of the chapter on the law in the first person and he's clearly a character. (That chapter is largely the story of two trials he testified in, which, from his account, involved him verbally fencing with lawyers who attempted to claim his degrees from the University of Rome didn't count as real degrees.) If this translates to his speaking style, I suspect he was a popular lecturer at Columbia.

The explanations of the structural failures are concise and relatively clear, although even with Kevin Woest's diagrams, it's hard to capture the stresses and movement in a written description. (I've found from watching YouTube videos that animations, or even annotations drawn while someone is talking, help a lot.) The framing discussion, well, sometimes that is bombastic in a way that I found amusing:

But we, children of a different era, do not want our lives to be enclosed, to be shielded from the mystery. We are eager to participate in it, to gather with our brothers and sisters in a community of thought that will lift us above the mundane. We need to be together in sorrow and in joy. Thus we rarely build monolithic monuments. Instead, we build domes.

It helps that passages like this are always short and thus don't wear out their welcome. My favorite line in the whole book is a throwaway sentence in a discussion of building failures due to explosions:

With a similar approach, it can be estimated that the chance of an explosion like that at Forty-fifth Street was at most one in thirty million, and probably much less. But this is why life is dangerous and always ends in death.

Going hard, structural engineering book!

It's often appealing to learn about things from their failures because the failures are inherently more dramatic and thus more interesting, but if you were hoping for an introduction to structural engineering, this is probably not the book you want. There is an excellent and surprisingly engaging appendix that covers the basics of structural analysis in 45 pages, but you would probably be better off with Why Buildings Stand Up or another architecture or structural engineering textbook (or maybe a video course). The problem with learning by failure case study is that all the case studies tend to blend together, despite the authors' engaging prose, and nearly every collapse introduces a new structural element with new properties and new failure modes and only the briefest of explanations. This book might make you a slightly more informed consumer of the news, but for most readers I suspect it will be a collection of forgettable trivia told in an occasionally entertaining style.

I think the book I wanted to read was something that went deeper into the process of forensic engineering, not just the outcomes. It's interesting to know what the cause of a failure was, but I'm more interested in how one goes about investigating a failure. What is the process, how do you organize the investigation, and how does the legal system around engineering failures work? There are tidbits and asides here, but this book is primarily focused on the structural analysis and elides most of the work done to arrive at those conclusions.

That said, I was entertained. Why Buildings Fall Down is a bit dated — the opening chapter on airplanes hitting buildings reads much differently now than when it was written in 1992, and I'm sure it was updated in the 2002 edition — but it succeeds in being clear without being soulless or sounding like a textbook. I appreciate an occasional rant about nuclear weapons in a book about architecture. I'm not sure I really recommend this, but I had a good time with it.

Also, I'm now looking for opportunities to say "this is why life is dangerous and always ends in death," so there is that.

Rating: 6 out of 10

08 December, 2024 04:04AM

December 07, 2024

Dominique Dumont

New cme command to update Debian Standards-Version field

Hi

While updating my Debian packages, I often have to update a field in the debian/control file.

This field is named Standards-Version and it declares which version of the Debian policy the package complies with. When updating this field, one must follow the upgrading checklist.

That being said, I maintain a lot of similar packages and I often have to update this Standards-Version field.

This field can be updated manually with cme fix dpkg (see Managing Debian packages with cme). But this command may make other changes and does not commit the result.
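
For comparison, the manual route looks roughly like this (the commit step, with whatever message you prefer, is left to you):

$ cme fix dpkg
$ git commit -am 'control: declare compliance with Debian policy 4.7.0'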

So I’ve created a new update-standards-version cme script that:

  • updates the Standards-Version field
  • commits the change

For instance:

$ cme run update-standards-version 
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Connecting to api.ftp-master.debian.org to check 31 package versions. Please wait...
Got info from api.ftp-master.debian.org for 31 packages.
Warning in 'source Standards-Version': Current standards version is '4.7.0'. Please read https://www.debian.org/doc/debian-policy/upgrading-checklist.html for the changes that may be needed on your package
to upgrade it from standard version '4.6.2' to '4.7.0'.

Offending value: '4.6.2'

Changes applied to dpkg-control configuration:
- source Standards-Version: '4.6.2' -> '4.7.0'
[master 552862c1] control: declare compliance with Debian policy 4.7.0
 1 file changed, 1 insertion(+), 1 deletion(-)

Here’s the generated commit. Note that the generated log mentions the new policy version:

$ git show
commit 552862c1f24479b1c0c8c35a6289557f65e8ff3b (HEAD -> master)
Author: Dominique Dumont <dod[at]debian.org>
Date:   Sat Dec 7 19:06:14 2024 +0100

    control: declare compliance with Debian policy 4.7.0

diff --git a/debian/control b/debian/control
index cdb41dc0..e888012e 100644
--- a/debian/control
+++ b/debian/control
@@ -48,7 +48,7 @@ Build-Depends-Indep: dh-sequence-bash-completion,
                      libtext-levenshtein-damerau-perl,
                      libyaml-tiny-perl,
                      po-debconf
-Standards-Version: 4.6.2
+Standards-Version: 4.7.0
 Vcs-Browser: https://salsa.debian.org/perl-team/modules/packages/libconfig-model-perl
 Vcs-Git: https://salsa.debian.org/perl-team/modules/packages/libconfig-model-perl.git
 Homepage: https://github.com/dod38fr/config-model/wiki

Notes:

  • this script can run only if there are no pending changes. Please commit or stash any changes before running this script.
  • this script requires:
    • cme >= 1.041
    • libconfig-model-perl >= 2.155
    • libconfig-model-dpkg-perl >= 3.006

I hope this will be useful to all my fellow Debian developers to reduce the boring parts of packaging activities.

All the best

07 December, 2024 05:52PM by dod

Russ Allbery

Review: Dark Minds

Review: Dark Minds, by Michelle Diener

Series: Class 5 #3
Publisher: Eclipse
Copyright: 2016
ISBN: 0-6454658-5-2
Format: Kindle
Pages: 328

Dark Minds is the third book in the self-published Class 5 science-fiction romance series. It is a sequel to Dark Deeds and should not be read out of order, but as with the other books of the series, it follows new protagonists.

Imogen, like another Earth woman before her, was kidnapped by a Class 5 for experimentation by the Tecran. She was subsequently transferred to a research facility and has been held in cages and secret military bases for over three months. As the story opens, she had been imprisoned in a Tecran transport for a couple of weeks when the transport is taken by the Krik who kill all of the Tecran crew. Next stop: another Class 5.

That same Class 5 is where the Krik are taking Camlar Kalor and his team. Cam is a Grih investigator from the United Council, sent to look into reports of a second Earth woman rescued from a Garmman ship. (The series reader will recognize this as the plot of Dark Deeds.) Now he's trapped with the crews of a bunch of other random ships in the cargo hold of a Class 5 with Krik instead of Tecran roaming the halls, apparently at the behest of the ship's banned AI, and a mysterious third Earth woman already appears to be befriending it.

Imogen is the woman Fiona saw signs of in Dark Deeds. She's had a rougher time than the protagonists of the previous two books in this series, and she's been dropped into a less stable situation. The Class 5 she's brought to at the start of the story is far more suspicious (with quite a lot of cause) and somewhat more hostile than the AIs we've encountered previously. The rest of the story formula is roughly the same, though: hunky Grih officer with a personality completely indistinguishable from the hunky Grih officers in the previous two books, an AI with a sketchy concept of morality that desperately needs an ally, the Grih obsession with singing, the eventual discovery of useful armor and weaponry that Imogen can use to surprise people, and more political maneuvering over the Sentient Beings Agreement.

This entry in the series mostly abandons the Grih shock and horror at how badly the Earth women have been treated. This makes sense, given how dangerous Earth women have proven over the course of this series, and I like that Diener is changing the political dynamics as the story develops. I do sometimes miss that appalled anger of Dark Horse, but Dark Minds focuses more on the politics and corridor fighting of a tense multi-sided stand-off.

I found the action more gripping in Dark Minds than in Dark Deeds, and I liked Imogen more as a character than Fiona. She doesn't have the delightfully calm competence of Rose from Dark Horse, but she's a bit more hardened, a bit more canny, and is better at taking control of situations. I also like that Diener avoids simplistically pairing Earth women off with Class 5s. The series plot is progressing faster than I had expected, and that gives this book a somewhat different shape than the previous ones.

Cam is probably the least interesting of the men in this series so far and appears to exist solely to take up the man-shaped hole in the plot. This is not a great series for gender roles; thankfully, the romance is a small part of the plot and largely ignorable. The story is about the women and the AIs, and all of the women and most of the AIs of the previous books make an appearance. It's clear they're forming an alliance whether the Grih like it or not, and that part of the story was very satisfying.

Up to this book, this series had been all feel-good happy endings. I will risk the small spoiler and warn that this is not true to the same degree here, so you may not want to read this one if you want something entirely fluffy, light, and positive (inasmuch as a series involving off-screen experimentation on humans can be fluffy, light, and positive). That caught me by surprise in a way I didn't entirely like, and I wish Diener had stuck with the entirely positive tone.

Other than that, though, this was fun, light, readable entertainment. It's not going to win any literary awards, it's formulaic, the male protagonist comes from central casting, and the emphasis by paragraph break is still a bit grating in places, but I will probably pick up the next book when I'm in the mood for something light. Dark Minds is an improvement over book two, which bodes well for the rest of the series.

Followed by Dark Matters.

Rating: 7 out of 10

07 December, 2024 05:22AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

lintian.debian.org: Episode IV – A New Hope

After weeks – dare I say months – of work, it is finally done. lintian.debian.org is back online!

Screenshot of the new lintian.debian.org website

Many, many thanks to everyone who worked hard to make this possible:

  • Thanks to Nicolas Peugnet, the author of lintian-ssg, who handed us this custom static site generator on a silver platter. I'm happy I didn't have to code this myself :)
  • Thanks to Otto Kekäläinen, maintainer of the lintian-ssg package in Debian, who worked in tandem with Nicolas to iron out problems.
  • Thanks to Philipp Kern, who did the work on the DSA side to put the website back online.

All in all, I did very little (mostly coordinating these fine folks) and they should get the credit for this very useful service being back.

07 December, 2024 01:45AM by Louis-Philippe Véronneau

December 06, 2024

hackergotchi for Bálint Réczey

Bálint Réczey

Firebuild 0.8.3 is out with 100+ fixes and experimental macOS support!

The new Firebuild release contains plenty of small fixes and a few notable improvements.

Experimental macOS support

The most frequently asked question from people getting to know Firebuild was if it worked on their Mac and the answer sadly used to be that well, it did, but only in a Linux VM. This was far from what they were looking for. 🙁

Linux and macOS have common UNIX roots, but porting Firebuild to macOS involved bigger challenges, like ensuring that dyld(1), macOS’s dynamic loader, initializes the preloaded interceptor library early enough to catch all interesting calls, and avoiding anything that uses malloc() or thread-local variables, which are not yet set up at that point.

Preloading libraries on Linux is really easy: running LD_PRELOAD=my_lib.so ls just works if the library exports the symbols to be interposed, while macOS employs multiple lines of defense to prevent applications from using unknown libraries. Firebuild’s guide for making DYLD_INSERT_LIBRARIES honored on Macs can be helpful with other projects as well that rely on injecting libraries.
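
A rough sketch of the difference (library and program names are placeholders; note that macOS’s System Integrity Protection strips the DYLD_* variables for binaries in protected locations such as /bin and /usr/bin):

LD_PRELOAD=./my_lib.so ls
DYLD_INSERT_LIBRARIES=./my_lib.dylib ./my_program
# depending on how the interposition is implemented, macOS may also need:
# DYLD_FORCE_FLAT_NAMESPACE=1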

Since GitHub’s Arm64 macOS runners don’t allow intercepting binaries with arm64e ABI yet, Firebuild’s Apple Silicon tests are run at Bitrise, who are proud to be first to provide the latest Xcode stacks and were also quick to make the needed changes to their infrastructure to support Firebuild (thanks! ❤).

Firebuild on macOS can already accelerate simple projects and rebuild itself with Xcode. Since Xcode introduces a lot of nondeterminism to the build, Firebuild can’t shine in acceleration with Xcode yet, but can provide nice reports to show which part of the build is the most time consuming and how each sub-command is called.

If you would like to try Firebuild on macOS please compile it from the GitHub repository for now. Precompiled binaries will be distributed on the Mac App Store and via CI providers. Contact us to get notified when those channels become available.

Dealing with the ‘Epochalypse’

Glibc’s API provides many functions with time parameters, and some of those functions are intercepted by Firebuild. Time parameters used to be passed as 32-bit values on 32-bit systems, preventing them from accurately representing timestamps after the year 2038, which is known as the Y2038 problem or the Epochalypse.

To deal with the problem, glibc 2.34 started providing new function symbol variants with 64-bit time parameters, e.g. clock_gettime64() in addition to clock_gettime(). The new 64-bit variants are used when compiling consumers of the API with _TIME_BITS=64 defined.

Processes intercepted by Firebuild may have been compiled with or without _TIME_BITS=64, thus libfirebuild now provides both variants on affected systems running glibc >= 2.34 to work safely with binaries using either 64-bit or 32-bit time representation.
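As a rough sketch of what this means for consumers of the API (these are generic example compiler invocations, not Firebuild’s actual build flags), a 32-bit build opts into the 64-bit symbols like this:

# with the defines, calls such as clock_gettime() bind to the 64-bit symbol variants
# (glibc requires _FILE_OFFSET_BITS=64 whenever _TIME_BITS=64 is set)
gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 -o prog_time64 prog.c

# without them, the same source links against the legacy 32-bit symbols
gcc -m32 -o prog_time32 prog.c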

Many Linux distributions have already stopped supporting 32-bit architectures, but Debian and Ubuntu still support armhf, for example, where the Y2038 problem still applies. Both Debian and Ubuntu performed a transition rebuilding every library (and their reverse dependencies) with -D_FILE_OFFSET_BITS=64 set where the libraries exported symbols that changed when switching to 64-bit time representation (thanks to Steve Langasek for driving this!). Thanks to the transition most programs are ready for 2038, but interposer libraries are trickier to fix, and if you maintain one it might be a good idea to check that it works well with both 32-bit and 64-bit binaries. Faketime, for example, is not fixed yet; see #1064555.

Select passed through environment variables with regular expressions

Firebuild filters out most of the environment variables set when starting a build to make the build more reproducible and achieve higher cache hit rate. Extra environment variables to pass through can be specified on the command line one by one, but with many similarly named variables this may become hard to maintain. With regular expressions this just became easier:

firebuild -o 'env_vars.pass_through += "MY_VARS_.*"' my_build_command

If you are not interested in acceleration and just would like to explore what the build does by generating a report, you can simply pass through all variables:

firebuild -r -o 'env_vars.pass_through += ".*"' my_build_command

Other highlights from the 0.8.3 release

  • Fixed and nicer report in Chrome and other WebKit based browsers
  • Support GLibc 2.39 by intercepting pidfd_spawn() and pidfd_spawnp()
  • Even faster Rust build acceleration

For all the changes please check out the release page on GitHub! 🚀

(This post is also published on The Firebuild blog.)

06 December, 2024 09:53PM by Réczey Bálint

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 14.2.2-1 on CRAN: Small Upstream Fixes

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1197 other packages on CRAN, downloaded 37.5 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 605 times according to Google Scholar.

Conrad released a minor version 14.2.2 yesterday. This followed a bit of recent work a few of us did in the ensmallen and mlpack repositories following the 14.2.0 release. Use of the (member functions) .min(index) and .max(index) was deprecated in Armadillo in favor of .index_min() and .index_max(). By now ensmallen and mlpack have been updated at CRAN. To add some spice, CRAN emailed that the (very much unreleased as of now, but likely coming next spring) gcc-15 was unhappy with RcppArmadillo due to some Armadillo code. This is likely related to the listed gcc-15 C++ change about “Qualified name lookup failure into the current instantiation”. Anyway, Conrad fixed it within days, and that change too is part of this new version (as is a small behaviour normalization between the two indexing methods that matters in case of ties; this was in 14.2.1).

The changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.2.2-1 (2024-12-05)

  • Upgraded to Armadillo release 14.2.2 (Smooth Caffeine)

    • Workarounds for regressions in pre-release versions of GCC 15

    • More selective detection of symmetric/hermitian matrices by various functions

Changes in RcppArmadillo version 14.2.1-1 (2024-11-24) (GitHub Only)

  • Upgraded to Armadillo release 14.2.1 (Smooth Caffeine)

    • Fix for index_min() and index_max() to ensure that the first index of equal extremum values is found

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

06 December, 2024 06:34PM

Reproducible Builds (diffoscope)

diffoscope 284 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 284. This version includes the following changes:

[ Chris Lamb ]
* Simplify tests_quines.py::test_{differences,differences_deb} to use
  assert_diff and not mangle the expected test output.
* Update some tests to support file(1) version 5.46.
  (Closes: reproducible-builds/diffoscope#395)

You can find out more by visiting the project homepage.

06 December, 2024 12:00AM

December 05, 2024

Reproducible Builds

Reproducible Builds in November 2024

Welcome to the November 2024 report from the Reproducible Builds project!

Our monthly reports outline what we’ve been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security where relevant. As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. Reproducible Builds mourns the passing of Lunar
  2. Introducing reproduce.debian.net
  3. New landing page design
  4. SBOMs for Python packages
  5. Debian updates
  6. Reproducible builds by default in Maven 4
  7. PyPI now supports digital attestations
  8. “Dependency Challenges in OSS Package Registries”
  9. Zig programming language demonstrated reproducible
  10. Website updates
  11. Upstream patches
  12. Misc development news
  13. Reproducibility testing framework

Reproducible Builds mourns the passing of Lunar

The Reproducible Builds community sadly announced it has lost its founding member, Lunar. Jérémy Bobbio aka ‘Lunar’ passed away on Friday November 8th in palliative care in Rennes, France.

Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. He was the author of our earliest status reports and many of our key tools in use today are based on his design. Lunar’s creativity, insight and kindness were often noted.

You can view our full tribute elsewhere on our website. He will be greatly missed.


Introducing reproduce.debian.net

In happier news, this month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project.

rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

In November, reproduce.debian.net began rebuilding Debian unstable on the amd64 architecture, but throughout the MiniDebConf, it had attempted to rebuild 66% of the official archive. From this, it could be determined that it is currently possible to bit-for-bit reproduce and corroborate approximately 78% of the actual binaries distributed by Debian — that is, using the .buildinfo files hosted by Debian itself.

reproduce.debian.net also contains instructions on how to set up one’s own rebuilderd instance, and we very much invite everyone with a machine to spare to set up their own version and to share the results. Whilst rebuilderd is still in development, it has been used to reproduce Arch Linux since 2019. We are especially looking for installations targeting Debian architectures other than i386 and amd64.


New landing page design

As part of a very productive partnership with the Sovereign Tech Fund and Neighbourhoodie, we are pleased to unveil our new homepage/landing page.

We are very happy with our collaboration with both STF and Neighbourhoodie (including many changes not directly related to the website), and look forward to working with them in the future.

SBOMs for Python packages

The Python Software Foundation has announced a new “cross-functional project for SBOMs and Python packages”. Seth Michael Larson writes that the project is “specifically looking to solve these issues”:

  • Enable Python users that require SBOM documents (likely due to regulations like CRA or SSDF) to self-serve using existing SBOM generation tools.
  • Solve the “phantom dependency” problem, where non-Python software is bundled in Python packages but not recorded in any metadata. This makes the job of software composition analysis (SCA) tools difficult or impossible.
  • Make the adoption work by relevant projects such as build backends, auditwheel-esque tools, as minimal as possible. Empower users who are interested in having better SBOM data for the Python projects they are using to be able to contribute engineering time towards that goal.

A GitHub repository for the initiative is available, and there are a number of queries, comments and remarks on Seth’s Discourse forum post.


Debian updates

There was significant development within Debian this month. Firstly, at the recent MiniDebConf in Toulouse, France, Holger Levsen gave a Debian-specific talk on rebuilding packages distributed from ftp.debian.org — that is to say, how to reproduce the results from the official Debian build servers:

Holger described the talk as follows:

For more than ten years, the Reproducible Builds project has worked towards reproducible builds of many projects, and for ten years now we have built Debian packages twice—with maximal variations applied—to see if they can still be built reproducibly.

For about a month now, we’ve also been rebuilding packages trying to exactly match the builds being distributed via ftp.debian.org. This talk will describe the setup and the lessons learned so far, why the results currently are what they are (spoiler: they are less than 30% reproducible), and what we can do to fix that.

The Debian Project Leader, Andreas Tille, was present at the talk and remarked later in his Bits from the DPL update that:

It might be unfair to single out a specific talk from Toulouse, but I’d like to highlight the one on reproducible builds. Beyond its technical focus, the talk also addressed the recent loss of Lunar, whom we mourn deeply. It served as a tribute to Lunar’s contributions and legacy. Personally, I’ve encountered packages maintained by Lunar and bugs he had filed. I believe that taking over his packages and addressing the bugs he reported is a meaningful way to honor his memory and acknowledge the value of his work.

Holger’s slides and video in .webm format are available.


Next, rebuilderd is the server used to monitor package repositories of Linux distributions and attempt to reproduce the observed results. This month, version 0.21.0 was released, most notably with improved support for binNMUs by Jochen Sprickerhof and an update of the rebuilderd-debian.sh integration to the latest debrebuild version by Holger Levsen. There has also been significant work to get the rebuilderd package into the Debian archive; in particular, both rust-rebuilderd-common version 0.20.0-1 and rust-rust-lzma version 0.6.0-1 were packaged by kpcyrd and uploaded by Holger Levsen.

Related to this, Holger Levsen submitted three additional issues against rebuilderd as well:

  • rebuildctl should be more verbose when encountering issues. []
  • Please add an option to use randomised queues. []
  • Scheduling and re-scheduling multiple packages at once. []

… and lastly, Jochen Sprickerhof submitted an issue requesting that rebuilderd download the source package in addition to the .buildinfo file [], and kpcyrd also submitted and fixed an issue surrounding dependencies and clarifying the license [].


Separate to this, back in 2018, Chris Lamb filed a bug report against the sphinx-gallery package as it generates unreproducible content in various ways. This month, however, Dmitry Shachnev finally closed the bug, listing the multiple sub-issues that were part of the problem and how they were resolved.


Elsewhere, Roland Clobus posted to our mailing list this month, asking for input on a bug in Debian’s ca-certificates-java package. The issue is that the Java key management tools embed timestamps in its output, and this output ends up in the /etc/ssl/certs/java/cacerts file on the generated ISO images. A discussion resulted from Roland’s post suggesting some short- and medium-term solutions to the problem.


Holger Levsen uploaded some packages with reproducibility-related changes:


Lastly, 12 reviews of Debian packages were added, 5 were updated and 21 were removed this month adding to our knowledge about identified issues in Debian.


Reproducible builds by default in Maven 4

On our mailing list this month, Hervé Boutemy reported the latest release of Maven (4.0.0-beta-5) has reproducible builds enabled by default. In his mailing list post, Hervé mentions that “this story started during our Reproducible Builds summit in Hamburg”, where he created the upstream issue that builds on a “multi-year” effort to have Maven builds configured for reproducibility.


PyPI now supports digital attestations

Elsewhere in the Python ecosystem and as reported on LWN and elsewhere, the Python Package Index (PyPI) has announced that it has finalised support for PEP 740 (“Index support for digital attestations”).

Trail of Bits, who performed much of the development work, has an in-depth blog post about the work and its adoption, as well as what is left undone:

One thing is notably missing from all of this work: downstream verification. […]

This isn’t an acceptable end state (cryptographic attestations have defensive properties only insofar as they’re actually verified), so we’re looking into ways to bring verification to individual installing clients. In particular, we’re currently working on a plugin architecture for pip that will enable users to load verification logic directly into their pip install flows.

There was an in-depth discussion on LWN’s announcement page, as well as on Hacker News.


Dependency Challenges in OSS Package Registries

At BENEVOL, the Belgium-Netherlands Software Evolution workshop in Namur, Belgium, Tom Mens and Alexandre Decan presented their paper, “An Overview and Catalogue of Dependency Challenges in Open Source Software Package Registries”.

The abstract of their paper is as follows:

While open-source software has enabled significant levels of reuse to speed up software development, it has also given rise to the dreadful dependency hell that all software practitioners face on a regular basis. This article provides a catalogue of dependency-related challenges that come with relying on OSS packages or libraries. The catalogue is based on the scientific literature on empirical research that has been conducted to understand, quantify and overcome these challenges. []

A PDF of the paper is available online.


Zig programming language demonstrated reproducible

Motiejus Jakšty posted an interesting and practical blog post on his successful attempt to reproduce the Zig programming language without using the pre-compiled binaries checked into the repository, and despite the circular dependency inherent in its bootstrapping process.

As a summary, Motiejus concludes that:

I can now confidently say (and you can also check, you don’t need to trust me) that there is nothing hiding in zig1.wasm [the checked-in binary] that hasn’t been checked-in as a source file.

The full post is full of practical details, and includes a few open questions.


Website updates

Notwithstanding the significant change to the landing page (screenshot above), there were an enormous number of changes made to our website this month. This included:

  • Alex Feyerke and Mariano Giménez:

  • Bernhard M. Wiedemann:

    • Update the “System images” page to document the e2fsprogs approach. []
  • Chris Lamb:

  • FC (Fay) Stegerman:

    • Replace more inline markdown with HTML on the “Success stories” page. []
    • Add some links, fix some other links and correct some spelling errors on the “Tools” page. []
  • Holger Levsen:

    • Add a historical presentation (“Reproducible builds everywhere eg. in Debian, OpenWrt and LEDE”) from October 2016. []
    • Add jochensp and Oejet to the list of known contributors. [][]
  • Julia Krüger:

  • Ninette Adhikari & hulkoba:

    • Add/rework the list of success stories into a new page that clearly shows milestones in Reproducible Builds. [][][][][][]
  • Philip Rinn:

    • Import 47 historical weekly reports. []
  • hulkoba:

    • Add alt text to almost all images (!). [][]
    • Fix a number of links on the “Talks”. [][]
    • Avoid so-called ‘ghost’ buttons by not using <button> elements as links, as the affordance of a <button> implies an action with (potentially) a side effect. [][]
    • Center the sponsor logos on the homepage. []
    • Move publications and generate them instead from a data.yml file with an improved layout. [][]

    • Make a large number of small but impactful stylistic changes. [][][][]

    • Expand the “Tools” page to include a number of missing tools, fix some styling issues and fix a number of stale/broken links. [][][][][][]


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Misc development news


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related changes:

    • Create and introduce a new reproduce.debian.net service and subdomain []
    • Make a large number of documentation changes relevant to rebuilderd. [][][][][]
    • Explain a temporary workaround for a specific issue in rebuilderd. []
    • Setup another rebuilderd instance on the o4 node and update installation documentation to match. [][]
    • Make a number of helpful/cosmetic changes to the interface, such as clarifying terms and adding links. [][][][][]
    • Deploy configuration to the /opt and /var directories. [][]
    • Add an infancy (or ‘alpha’) disclaimer. [][]
    • Add more notes to the temporary rebuilderd documentation. []
    • Commit an nginx configuration file for reproduce.debian.net’s “Stats” page. []
    • Commit a rebuilder-worker.conf configuration for the o5 node. []
  • Debian-related changes:

    • Grant jspricke and jochensp access to the o5 node. [][]
    • Build the qemu package with the nocheck build flag. []
  • Misc changes:

    • Adapt the update_jdn.sh script for new Debian trixie systems. []
    • Stop installing the PostgreSQL database engine on the o4 and o5 nodes. []
    • Prevent accidental reboots of the o4 node because of a long-running job owned by josch. [][]

In addition, Mattia Rizzolo addressed a number of issues with reproduce.debian.net [][][][]. And lastly, both Holger Levsen [][][][] and Vagrant Cascadian [][][][] performed node maintenance.


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

05 December, 2024 12:47PM

Russ Allbery

Review: Paladin's Hope

Review: Paladin's Hope, by T. Kingfisher

Series: The Saint of Steel #3
Publisher: Red Wombat Studio
Copyright: 2021
ISBN: 1-61450-613-2
Format: Kindle
Pages: 303

Paladin's Hope is a fantasy romance novel and the third book of The Saint of Steel series. Each book of that series features different protagonists in closer to the romance series style than the fantasy series style and stands alone reasonably well. There are a few spoilers for the previous books here, so you probably want to read the series in order.

Galen is one of the former paladins of the Saint of Steel, left bereft and then adopted by the Temple of the Rat after their god dies. Even more than the paladin protagonists of the previous two books, he reacted very badly to that death and has ongoing problems with nightmares and going into berserker rages when awakened. As the book opens, he's the escort for a lich-doctor named Piper who is examining a corpse found in the river.

The last of the five was the only one who did not share a certain martial quality. He was slim and well-groomed and would be considered handsome, but he was also extraordinarily pale, as if he lived his life underground.

It was this fifth man who nudged the corpse with the toe of his boot and said, "Well, if you want my professional opinion, this great goddamn hole in his chest is probably what killed him."

As it turns out, slim and well-groomed and exceedingly pale is Galen's type.

This is another paladin romance, this time between two men. It's almost all romance; the plot is barely worth mentioning. About half of the book is an exploration of a puzzle dungeon of the sort that might be fun in a video game or tabletop RPG, but that I found rather boring and monotonous in a novel. This creates a lot more room for the yearning and angst.

Kingfisher tends towards slow-burn romances. This romance is a somewhat faster burn than some of her other books, but instead implodes into one of the most egregiously stupid third-act break-ups that I've read in a romance plot. Of all the Kingfisher paladin books, I think this one was hurt the most by my basic difference in taste from the author. Kingfisher finds constant worrying and despair over being good enough for the romantic partner to be an enjoyable element, and I find it incredibly annoying. I think your enjoyment of this book will heavily depend on where you fall on that taste divide.

The saving grace of this book are the gnoles, who are by far the best part of this world. Earstripe, a gnole constable, is the one who found the body that the book opens with and he drives most of the plot, such that it is. He's also the source of the best banter in the book, which is full of pointed and amused gnole observations about humans and their various stupidities. Given that I was also grumbling about human stupidities for most of the book, the gnole viewpoint and I got along rather well.

"God's stripes." Earstripe shook his head in disbelief. "Bone-doctor would save some gnole, yes? If some gnole was hurt."

"Of course," said Piper. "If I could."

"And tomato-man would save some gnole?" He swung his muzzle toward Galen. "If some gnome needed big human with sword?"

"Yes, of course."

Earstripe spread his hands, claws gleaming. "A gnole saves some human. Same thing." He took a deep breath, clearly choosing his words carefully. "A gnole's compassion does not require fur."

We learn a great deal more about gnole culture, all of which I found fascinating, and we get a rather satisfying amount of gnole acerbic commentary. Kingfisher is very good at banter, and dialogue in general, which also smoothes over the paucity of detailed plot. There was no salvaging the romance, at least for me, but I did at least like Piper, and Galen wasn't too bad when he wasn't being annoyingly self-destructive.

I had been wondering a little if gay romance would, like sapphic romance, avoid my dislike of heterosexual gender roles. I think the jury is still out, but it did not work in this book because Galen is so committed to being the self-sacrificing protector who is unable to talk about his feelings that he single-handedly introduced a bunch of annoying pieces of the male gender role anyway. I will have to try that experiment with a book that doesn't involve hard-headed paladins.

I have yet to read a bad T. Kingfisher novel, but I thought this one was on the weaker side. The gnoles are great and kept me reading, but I wish there had been a more robust plot, a lot less of the romance, and no third-act break-up. As is, I recommend the other Saint of Steel books over this one. Ah well.

Followed by Paladin's Faith.

Rating: 6 out of 10

05 December, 2024 03:56AM

December 04, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

corels 0.0.5 on CRAN: Maintenance

An updated version of the corels package is now on CRAN! The ‘Certifiably Optimal RulE ListS (Corels)’ learner provides interpretable decision rules with an optimality guarantee—a nice feature which sets it apart in machine learning. You can learn more about corels at its UBC site.

The changes concern mostly maintenance for both the repository (such as continuous integration setup, badges, documentation links, …) and the package level (such as removing the no-longer-required C++ compilation standard setter, which now emits a NOTE at CRAN).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 December, 2024 11:13PM

Scarlett Gately Moore

Hacked and tis the season for surgeries

I am still here. Sadly while I battle this insane infection from my broken arm I got back in July, the hackers got my blog. I am slowly building it back up. Further bad news is I have more surgeries, first one tomorrow. Furthering my current struggles I cannot start my job search due to hospitalization and recovery. Please consider a donation. https://gofund.me/6e99345d

On the open source work front, I am still working on stuff, mostly snaps ( Apps 24.08.3 released )

Thank you everyone that voted me into the Ubuntu Community Council!

I am trying to stay positive, but it seems I can’t catch a break. I will have my computer in the hospital and will work on what I can. Have a blessed day and see you soon.

Scarlett

04 December, 2024 05:30PM by sgmoore

Sven Hoexter

Looking at x509 Certificate Chains

Sometimes you have to look at the content of x509 certificate chains. Usually one finds them PEM encoded and concatenated in a text file.

Since the openssl x509 subcommand only decodes the first certificate it will find in a file, I did something like this:

csplit -z -f 'cert' fullchain.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
for x in cert*; do openssl x509 -in $x -noout -text; done

Apparently that's the "wrong" way, and the more appropriate way is using the openssl crl2pkcs7 subcommand, even though we do not try to parse a revocation list here.

  openssl crl2pkcs7 -nocrl -certfile fullchain.pem | \
  openssl pkcs7 -print_certs -noout

Learned that one in a webinar presented by Victor Dukhovni. If you're new to the topic, it's worth watching.

04 December, 2024 05:23PM

Enrico Zini

How to right click

I climbed on top of a mountain with a beautiful view, and when I started readying my new laptop for a work call (as one does on top of mountains), I realised that I couldn't right click and it kind of spoiled the mood.

Clicking on the bottom right corner of my touchpad left-clicked. Clicking with two fingers left-clicked. Alt-clicking, Super-clicking, Control-clicking, left clicked.

There are two ways to simulate mouse buttons with touchpads in Wayland:

  • clicking on different areas at the bottom of the touchpad
  • double or triple-tapping, as long as the fingers are not too far apart

Skippable digression:

I'm not sure why Gnome insists on following Macs for defaults, which is what people with non-Mac hardware are less likely to be used to.

In my experience, Macs are as arbitrarily awkward to use as anything else, but they managed to build a community where if you don't understand how it works you get told you're stupid. All other systems (including Gnome) have communities where instead you get told (as is generally the case) that the system design is stupid, which at least gives you some amount of validation in your suffering.

Oh well.

How to configure right click

Surprisingly, this is not available in Gnome Shell settings. It can be found in gnome-tweaks: under "Keyboard & Mouse", "Mouse Click Emulation", one can choose between "Fingers" or "Area".

I tried both and went for "Area": I use right-drag a lot to resize windows, and I couldn't find a way, at least with this touchpad, to make it work consistently in "Fingers" mode.
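If I remember correctly, the same switch is also exposed as a GSettings key, so it can be flipped from a terminal without opening gnome-tweaks (key name and values from memory, so double-check with gsettings range first):

# emulate right click via the bottom-right area of the touchpad
gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'

# or via a two-finger click
gsettings set org.gnome.desktop.peripherals.touchpad click-method 'fingers'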

04 December, 2024 04:51PM

hackergotchi for Bits from Debian

Bits from Debian

"Ceratopsian" will be the default theme for Debian 13

The theme "Ceratopsian" by Elise Couper has been selected as the default theme for Debian 13 "trixie". The theme is inspired by Trixie's (the fictional character from Toy Story) frill and is also influenced by a previously used theme called "futurePrototype" by Alex Makas.

Ceratopsian wallpaper. Click to see the whole theme proposal

Ceratopsian Website banner. Click to see the whole theme proposal

After the Debian Desktop Team made the call for proposing themes, a total of six choices were submitted. The desktop artwork poll was open to the public, and we received 2817 responses ranking the different choices, of which Ceratopsian has been ranked as the winner among them.

We'd like to thank all the designers that have participated and have submitted their excellent work in the form of wallpapers and artwork for Debian 13.

Congratulations, Elise, and thank you very much for your contribution to Debian!

04 December, 2024 12:30PM by Jonathan Carter

December 03, 2024

Russ Allbery

Review: Astrid Parker Doesn't Fail

Review: Astrid Parker Doesn't Fail, by Ashley Herring Blake

Series: Bright Falls #2
Publisher: Berkley Romance
Copyright: November 2022
ISBN: 0-593-33644-5
Format: Kindle
Pages: 365

Astrid Parker Doesn't Fail is a sapphic romance novel and a sequel to Delilah Green Doesn't Care. This is a romance style of sequel, which means that it spoils the previous book but involves a different set of protagonists, one of whom was a supporting character in the previous novel.

I suppose the title is a minor spoiler for Delilah Green Doesn't Care, but not one that really matters.

Astrid Parker's interior design business is in trouble. The small town of Bright Falls doesn't generate a lot of business, and there are limits to how many dentist office renovations that she's willing to do. The Everwood Inn is her big break: Pru Everwood has finally agreed to remodel and, even better, Innside America wants to feature the project. The show always works with local designers, and that means Astrid. National TV exposure is just what she needs to turn her business around and avoid an unpleasant confrontation with her domineering, perfectionist mother.

Jordan Everwood is an out-of-work carpenter and professional fuck-up. Ever since she lost her wife, nothing has gone right either inside or outside of her head. Now her grandmother is renovating the favorite place of her childhood, and her novelist brother had the bright idea of bringing her to Bright Falls to help with the carpentry work. The remodel and the HGTV show are the last chance for the inn to stay in business and stay in the family, and Jordan is terrified that she's going to fuck that up too. And then she dumps coffee all over the expensive dress of a furious woman in a designer dress because she wasn't watching where she was going, and that woman turns out to be the designer of the Everwood Inn renovation. A design that Jordan absolutely loathes.

The reader met Astrid in Delilah Green Doesn't Care (which you definitely want to read first). She's a bit better than she was there, but she's still uptight and unhappy and determined not to think too hard about why. When Jordan spills coffee down her favorite dress in their first encounter, shattering her fragile professional calm, it's not a meet-cute. Astrid is awful to her. Her subsequent regret, combined with immediately having to work with her and the degree to which she finds Jordan surprisingly attractive (surprising in part because Astrid thinks she's straight), slowly crack open Astrid's too-controlled life.

This book was, once again, just compulsively readable. I read most of it the same day that I started it, staying up much too late, and then finished it the next day. It also once again made me laugh in delight at multiple points. I am a sucker for stories about someone learning how to become a better person, particularly when it involves a release of anxiety, and oh my does Blake ever deliver on that. Jordan's arc is more straightforward than Astrid's — she just needs to get her confidence back — but her backstory is a lot more complex than it first appears, including a morally ambiguous character who I would hate in person but who I admired as a deft and tricky bit of characterization.

The characters from Delilah Green Doesn't Care of course play a significant role. Delilah in particular is just as much of a delight here as she was in the first book, and I enjoyed seeing the development of her relationship with her step-sister. But the new characters, both the HGTV film crew and the Everwoods, are also great. I think Blake has a real knack for memorable, distinct supporting characters that add a lot of depth to the main romance plot.

I thought this book was substantially more sex-forward than Delilah Green Doesn't Care, with some lust at first or second sight, a bit more physical description of bodies, and an extended section in the middle of the book that's mostly about sex. If this is or is not your thing in romance novels, you may have a different reaction to this book than the previous one.

There is, unfortunately, another third-act break-up, and this one annoyed me more than the one in Delilah Green Doesn't Care because it felt more unnecessary and openly self-destructive. The characters felt like they were headed towards a more sensible and less dramatic resolution, and then that plot twist caught me by surprise in an unpleasant way. After two books, I'm getting the sense that Blake has a preferred plot arc, at least in this series, and I wish she'd varied the story structure a bit more. Still, the third-act conflict was somewhat believable and the resolution was satisfying enough to salvage it.

If it weren't for some sour feelings about the shape of that plot climax, I would have said that I liked this book even better than Delilah Green Doesn't Care, and that's a high bar. This series is great, and I will definitely be reading the third one. I'm going to be curious how that goes since it's about Iris, who so far has worked better for me as a supporting character than a protagonist. But Blake has delivered compulsively readable and thoroughly enjoyable books twice now, so I'm definitely here for the duration.

If you like this sort of thing, I highly recommend this whole series.

Followed by Iris Kelly Doesn't Date in the romance series sense, but as before this book is a complete story with a satisfying ending.

Rating: 9 out of 10

03 December, 2024 03:26AM

December 02, 2024

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

This is bits from DPL for November.

MiniDebConf Toulouse

I had the pleasure of attending the MiniDebConf in Toulouse, which featured a range of engaging talks, complementing those from the recent MiniDebConf in Cambridge. Both events were preceded by a DebCamp, which provided a valuable opportunity for focused work and collaboration.

DebCamp

During these events, I participated in numerous technical discussions on topics such as maintaining long-neglected packages, team-based maintenance, FTP master policies, Debusine, and strategies for separating maintainer script dependencies from runtime dependencies, among others. I was also fortunate that members of the Publicity Team attended the MiniDebCamp, giving us the opportunity to meet in person and collaborate face-to-face.

Independent of the ongoing lengthy discussion on the Debian Devel mailing list, I encountered the perspective that unifying Git workflows might be more critical than ensuring all packages are managed in Git. While I'm uncertain whether these two questions--adopting Git as a universal development tool and agreeing on a common workflow for its use--can be fully separated, I believe it's worth raising this topic for further consideration.

Attracting newcomers

In my own talk, I regret not leaving enough time for questions--my apologies for this. However, I want to revisit the sole question raised, which essentially asked: Is the documentation for newcomers sufficient to attract new contributors? My immediate response was that this question is best directed to new contributors themselves, as they are in the best position to identify gaps and suggest improvements that could make the documentation more helpful.

That said, I'm personally convinced that our challenges extend beyond just documentation. I don't get the impression that newcomers are lining up to join Debian only to be deterred by inadequate documentation. The issue might be more about fostering interest and engagement in the first place.

My personal impression is that we sometimes fail to convey that Debian is not just a product to download for free but also a technical challenge that warmly invites participation. Everyone who respects our Code of Conduct will find that Debian is a highly diverse community, where joining the project offers not only opportunities for technical contributions but also meaningful social interactions that can make the effort and time truly rewarding.

In several of my previous talks (you can find them on my talks page –just search for "team," and don't be deterred if you see "Debian Med" in the title; it's simply an example), I emphasized that the interaction between a mentor and a mentee often plays a far more significant role than the documentation the mentee has to read. The key to success has always been finding a way to spark the mentee's interest in a specific topic that resonates with their own passions.

Bug of the Day

In my presentation, I provided a brief overview of the Bug of the Day initiative, which was launched with the aim of demonstrating how to fix bugs as an entry point for learning about packaging. While the current level of interest from newcomers seems limited, the initiative has brought several additional benefits.

I must admit that I'm learning quite a bit about Debian myself. I often compare it to exploring a house's cellar with a flashlight –you uncover everything from hidden marvels to things you might prefer to discard. I've also come across traces of incredibly diligent people who have invested their spare time polishing these hidden treasures (what we call NMUs). The janitor, a service in Salsa that automatically updates packages, fits perfectly into this cellar metaphor, symbolizing the ongoing care and maintenance that keep everything in order. I hadn't realized the immense amount of silent work being done behind the scenes--thank you all so much for your invaluable QA efforts.

Reproducible builds

It might be unfair to single out a specific talk from Toulouse, but I'd like to highlight the one on reproducible builds. Beyond its technical focus, the talk also addressed the recent loss of Lunar, whom we mourn deeply. It served as a tribute to Lunar's contributions and legacy. Personally, I've encountered packages maintained by Lunar and bugs he had filed. I believe that taking over his packages and addressing the bugs he reported is a meaningful way to honor his memory and acknowledge the value of his work.

Advent calendar bug squashing

I’d like to promote an idea originally introduced by Thorsten Alteholz, who in 2011 proposed a Bug Squashing Advent Calendar for the Debian Med team. (For those unfamiliar with the concept of an Advent Calendar, you can find an explanation on Wikipedia.) While the original version included a fun graphical element —which we’ve had to set aside due to time constraints (volunteers, anyone?)— we’ve kept the tradition alive by tackling one bug per day from December 1st to 24th each year. This initiative helps clean up issues that have accumulated over the year.

Regardless of whether you celebrate the concept of Advent, I warmly recommend this approach as a form of continuous bug-squashing party for every team. Not only does it contribute to the release readiness of your team’s packages, but it’s also an enjoyable and bonding activity for team members.

Best wishes for a cheerful and productive December

Andreas.

02 December, 2024 11:00PM by Andreas Tille

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

anytime 0.3.10 on CRAN: Multiple Enhancements

A new release of the anytime package arrived on CRAN today—the first in well over four years. The package is fairly feature-complete, and code and functionality remain mature and stable, of course.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … input format to either POSIXct (when called as anytime) or Date objects (when called as anydate) – and to do so without requiring a format string, as well as accommodating different formats in one input vector. See the anytime page, or the GitHub repo for a few examples, and the beautiful documentation site for all documentation.

This release slowly matured over four years. It combines a number of strictly internal repository maintenance changes (such as updates to continuous integration) with small enhancements (adding, for example, some new formats, responding better to an error condition, treating logical input as an error) and a relaxation of the C++ compilation standard. While we once needed to impose C++11, that is no longer necessary: R itself is quite proactive (the last two releases already defaulted to C++17, suitable compiler permitting), so we can now drop this constraint. The documentation site is new, as are some other small changes. See the full list of changes which follows.

Changes in anytime version 0.3.10 (2024-12-02)

  • A new documentation site was added.

  • Continuous Integration now uses run.sh from r-ci with bspm

  • Logical input vectors are now recognised as an error (#121)

  • Additional dot-separated format '%Y.%m.%d' is supported

  • Other small updates were made throughout the package

  • No longer set a C++ compilation standard as the default choices by R are sufficient for the package

  • Switch Rcpp include file to Rcpp/Lightest

  • We recommend ~/.R/Makevars compiler flag options -Wno-ignored-attributes -Wno-nonnull -Wno-parentheses

  • The tinytest runner was simplified

  • NA values from conversion now trigger a warning

Courtesy of my CRANberries, there is also a diffstat report of changes relative to the previous release. The issue tracker off the GitHub repo can be used for questions and comments. More information about the package is at the package page, the GitHub repo and the documentation site. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 December, 2024 10:01PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

jungle/acid/etc

I thought it had been a full year since I last shared a playlist, but it's been two! I had a plan to produce more, but it seems I haven't. Instead here's a few tracks I've discovered recently which share a common theme.

In August I stumbled across a Sound on Sound video interviewing Pete Cannon, who creates authentically old-school Jungle music using tools and techniques from the time, including AKAI samplers and the Commodore Amiga computer.

Here are three tracks that I found since then: some 8-bit Amiga-jungle, some slower-paced acid house from someone ostensibly based on Whitley Bay, and a darker piece I heard on the radio.

02 December, 2024 10:00PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Graph for my furusato tax.

Graph for my furusato tax. Exceeding 150万 (1.5 million yen) will inevitably exceed the 50万 limit for ichiji-shotoku (occasional income), so I added some rough calculation there. graph.

02 December, 2024 05:39AM by Junichi Uekawa

Charles

Hello World

Or how it took more than a year for me to set up this website -

As the computer science tradition demands, we must start with a Hello World.

Though I have to say this hello world took quite a long time to reach the internet. I’ve been thinking about setting up this website for way over a year, but there are always too many things to decide - what Static Site Generator will I use? Where should I get a domain from? Which registrar would be better now? What if I want to set up a mail server, is it good enough? Oh, and what about the theme, which one to choose? Can I get one simple enough to not fetch javascript or css from external sources?

This was taking so long that even my friends were saying “Please, just share your screen and let’s do it now!”. Well, rejoice friends, now it’s done!

02 December, 2024 02:18AM

December 01, 2024

hackergotchi for Guido Günther

Guido Günther

Free Software Activities November 2024

Another short status update of what happened on my side last month. The larger blocks are the Phosh 0.43 release, the initial file chooser portal, phosh-osk-stub now handling digit, number, phone and PIN input purpose via special layouts as well as Phoc mostly catching up with wlroots 0.18 and the current development version targeting 0.19.

phosh

  • When taking a screenshot via keybinding or power button long press save screenshots to clipboard and disk (MR)
  • Robustify Screenshot CI job (MR)
  • Update CI pipeline (MR)
  • Fix notifications banners that aren't tall enough not being shown (MR). Another 4y old bug hopefully out of the way.
  • Add rfkill mock and docs (MR). Useful for HKS testing.
  • Release 0.43~rc1 and 0.43
  • Drop libsoup workaround (MR)
  • Ensure notification only takes its actual height (MR)

phoc

  • Move wlroots 0.18 update forward (MR). Needs a bit more work before we can make it default.
  • Catch up with wlroots development branch (MR) allowing us to test current wlroots again.
  • Some of the above already applies to main so schedule it for 0.44 (MR)

phosh-mobile-settings

  • Don't mark release notes as translatable to save some i18n effort (MR)
  • Release 0.43~rc1 and 0.43.0

libphosh-rs

phosh-osk-stub

  • Add layouts for PIN, number and phone input purpose (MR)
  • Release 0.43~rc1
  • Ensure translation get picked up, various cleanups and release 0.43.0 (MR)
  • Make desktop file match app-id (MR)

phosh-tour

  • Fix typo and reduce number of strings to translate (MR)
  • Add translator comments (MR). This, the above and additional fixes in p-m-s were prompted by i18n feedback from Alexandre Franke, thanks a lot!
  • Release 0.43.0

pfs

  • Initial version of the adaptive file chooser dialog using gtk-rs. See demo.
  • Allow to activate via double click (for non-touch use) (MR)

xdg-desktop-portal-phosh

  • Use pfs to provide a file chooser portal (MR)

meta-phosh

  • Slightly improve point release handling (MR)
  • Improve string freeze announcements and add phosh-tour (MR)

Debian

  • Upload Phosh 0.43.0~rc1 and 0.43.0 (MR, MR, MR, MR, MR, MR, MR, MR, MR, MR, MR)
  • meta-phosh: Add Recommend: for xdg-desktop-portal-phosh (MR)
  • phosh-osk-data got accepted, create repo, brush up packaging and upload to unstable (MR
  • phosh-osk-stub: Recommend data packager (MR
  • Phosh: drop reverts (MR)
  • varnam-schemes: Fix autopkgtest (MR)
  • varnam-schemes: Improve packaging (MR)
  • Prepare govarnam 1.9.1 (MR)

Calls

  • ussd: Set input purpose and switch to AdwDialog (MR, Screenshot)

libcall-ui

  • Drop libhandy leftover (MR)

git-buildpackage

  • Improve docs and cleanup markdown (MR)
  • Mention gbp push in intro (MR)
  • Use application instead of productname entities to improve reading flow (MR)

wlroots

  • Drop mention of wlr_renderer_begin_with_buffer (MR)

python-dbusmock

  • Add mock for gsd-rfkill (MR)

xdg-spec

  • Sync notification categories with the portal spec (MR)
  • Add categories for SMS (MR)
  • Add a pubdate so it's clear the specs aren't stale (MR) (got fixed in a different and better way, thanks Matthias!)

ashpd

  • Allow to set filters in file chooser portal demo (MR)

govarnam

  • Robustify file generation (MR)

varnam-schemes

  • Unbreak tests on non intel/amd architectures (e.g. arm64) (MR)

Reviews

This is not code by me but reviews I did on other peoples code. The list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions!

  • flathub: livi runtime and gst update (MR)
  • phosh: Split linters into their own test suite (MR)
  • phosh; QuickSettings follow-up (MR)
  • phosh: Accent color fixes (MR)
  • phosh: Notification animation (MR)
  • phosh: end-session dialog timeout fix (MR)
  • phosh: search daemon (MR)
  • phosh-ev: Migrate to newer gtk-rs and async_channel (MR)
  • phosh-mobile-settings: Update gmobile (MR)
  • phosh-mobile-settings: Make panel-switcher scrollable (MR)
  • phosh-mobile-settings: i18n comments (MR)
  • gbp doc updates (MR)
  • gbp handle suite names with number prefix (MR)
  • Debian libvirt dependency changes (MR
  • Chatty: misc improvements (MR
  • iio-sensor-proxy: buffer driver without trigger (MR)
  • gbp doc improvements (MR)
  • gbp: More doc improvements (MR)
  • gbp: Clean on failure (MR)
  • gbp: DEP naming consistency (MR)

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

Comments?

Join the Fediverse thread

01 December, 2024 06:55PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in November 2024

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Conferences

I attended MiniDebConf Toulouse 2024, and the MiniDebCamp before it. Most of my time was spent with the Freexian folks working on debusine; Stefano gave a talk about its current status with a live demo (frantically fixed up over the previous couple of days, as is traditional) and with me and others helping to answer questions at the end. I also caught up with some people I haven’t seen in ages, ate a variety of delicious cheeses, and generally had a good time. Many thanks to the organizers and sponsors!

After the conference, Freexian collaborators spent a day and a half doing some planning for next year, and then went for an afternoon visiting the Cité de l’espace.

Rust team

I upgraded these packages to new upstream versions, as part of upgrading pydantic and rpds-py:

  • rust-archery
  • rust-jiter (noticing an upstream test bug in the process)
  • rust-pyo3 (fixing CVE-2024-9979)
  • rust-pyo3-build-config
  • rust-pyo3-ffi
  • rust-pyo3-macros
  • rust-pyo3-macros-backend
  • rust-regex
  • rust-regex-automata
  • rust-serde
  • rust-serde-derive
  • rust-serde-json
  • rust-speedate
  • rust-triomphe

Python team

Last month, I mentioned that we still need to work out what to do about the multipart vs. python-multipart name conflict in Debian (#1085728). We eventually managed to come up with an agreed plan; Sandro has uploaded a renamed binary package to experimental, and I’ve begun work on converting reverse-dependencies (asgi-csrf, fastapi, python-curies, and starlette done so far). There’s a bit more still to do, but I expect we can finish it soon.

I fixed problems related to adding Python 3.13 support in:

I fixed some packaging problems that resulted in failures any time we add a new Python version to Debian:

I fixed other build/autopkgtest failures in:

I packaged python-quart-trio, needed for a new upstream version of python-urllib3, and contributed a small packaging tweak upstream.

I backported a twisted fix that caused problems in other packages, including breaking debusine‘s tests.

I disentangled some upstream version confusion in python-catalogue, and upgraded to the current upstream version.

I upgraded these packages to new upstream versions:

Other small fixes

I contributed Incus support to needrestart upstream.

In response to Helmut’s Cross building talk at MiniDebConf Toulouse, I fixed libfilter-perl to support cross-building (5b4c2e10, f9788c27).

I applied a patch to move aliased files from / to /usr in iprutils (#1087733).

I adjusted debconf to use the new /usr/lib/apt/apt-extracttemplates path (#1087523).

I upgraded putty to 0.82.

01 December, 2024 03:00PM by Colin Watson

hackergotchi for Junichi Uekawa

Junichi Uekawa

Lots of travel and back to Tokyo.

Lots of travel and back to Tokyo. Then I got sick. Trying to work on my bass piece, but it's really hard and I am having a hard time getting it into a reasonable shape. Discussions on the Debconf 2026 bid. Hoping it will materialize soon.

01 December, 2024 06:36AM by Junichi Uekawa

hackergotchi for Sandro Knauß

Sandro Knauß

QML Dependency tracking in Debian

Library dependency tracking in Debian works by resolving symbol usage to a library and adding that library to the list of dependencies; this has been working for years now. The KDE community nowadays creates more and more QML-based applications. Unfortunately QML is an interpreted language, which means missing QML dependencies only become an issue at runtime.

To fix this I created dh_qmldeps, which searches for QML dependencies at build time and will fail if it can't resolve a QML dependency.

I didn't create my own QML interpreter; I just use qmlimportscanner behind the scenes and process its output further to resolve the QML modules to Debian packages.

The workflow is as follows:

The package compiles normally and is split into its binary packages. Then dh_qmldeps scans through the package contents to find QML content (.qml files, or qmldir files for QML modules). All files found are scanned by qmlimportscanner; the output is a list of QML modules the package depends on. As QML modules have a standardized file path, we can ask the Debian system which packages ship this file path. We end up with a list of Debian packages in the variable ${qml6:Depends}. This variable can be attached to the list of dependencies of the scanned package. A maintainer can also lower some dependencies to Recommends or Suggests, if needed.
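If you want to see what the helper would resolve, you can do the lookup by hand. This is just a rough sketch; the qmlimportscanner location, the import path and the example module are assumptions that may differ on your system:

# list the QML modules imported by the sources below the current directory
/usr/lib/qt6/libexec/qmlimportscanner -rootPath . \
    -importPath /usr/lib/x86_64-linux-gnu/qt6/qml

# ask dpkg which package ships the file path of a resolved module
dpkg -S /usr/lib/x86_64-linux-gnu/qt6/qml/QtQuick/Controls/qmldir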

You can find the source code on Salsa, and the usage documentation is at https://qt-kde-team.pages.debian.net/dh_qmldeps.html.

The last weeks I now enabled dh_qmldeps for newly every package, that creates a QML6 module package. So the first bugs are solved and it should be usable for more packages.

By scanning through all the code with qmlimportscanner, I found several imports of non-existent QML modules:

  • import QtQuick3DPrivate qt6-multimedia - no Private QML module QTBUG-131753.
  • import QtQuickPrivate qt6-graphs - no Private QML module QTBUG-131754.
  • import QtQuickTimeline qt6-quicktimeline - the correct QML name is QtQuick.Timeline QTBUG-131755.
  • import QtQuickControls2 qt6-webengine - looks like a porting bug as the QML6 modules name is QtQuick.Controls QTBUG-131756.
  • import QtGraphicalEffects kquickimageeditor - the correct name for QML6 is qt5compat.graphicaleffects; probably nobody noticed as it is only an example kquickimageeditor!7.

YEAH - the first milestone is reached. We are able to simply handle QML modules.

But for QML applications there is still room for improvement. In apps, the QML files are embedded inside the executable. Additionally, applications create internal QML modules that are shipped directly in the same executable. I am still searching for a good way to analyse an executable to get a list of its internal QML modules and a list of included QML files. Any ideas are welcome :)

As a workaround, dh_qmldeps currently scans all QML files inside the application source code.

01 December, 2024 12:00AM by Sandro Knauß

November 30, 2024

Dima Kogan

Strava track filtering validation

After years of seeing people's strava tracks, I became convinced that they insufficiently filter the data, resulting in over-estimating the effort. Today I did a bit of lazy analysis, and half-confirmed this: in the one case I looked at, strava reported reasonable elevation gain numbers, but greatly overestimated the distance traveled.

I looked at a single gps track of a long bike ride. This was uploaded to strava manually, as a .gpx file. I can imagine that different things happen if you use the strava app or some device that integrates with the service (the filtering might happen before the data hits the server, and the server could decide to not apply any more filtering).

I processed the data with a simple hysteretic filter, ignoring small changes in position and elevation, trying out different thresholds in the process. I completely ignore the timestamps, and only look at the differences between successive points. This handles the usual GPS noise; it does not handle GPS jumps, which I completely ignore in this analysis. Ignoring these would produce inflated elevation/gain numbers, but I'm working with a looong track, so hopefully this is a small effect.

Clearly this is not scientific, but it's something.

The code

Parsing .gpx is slow (this is a big file), so I cache that into a .vnl:

import sys
import gpxpy

filename_in  = 'INPUT.gpx'
filename_out = 'OUTPUT.vnl'

with open(filename_in, 'r') as f:
    gpx = gpxpy.parse(f)

f_out = open(filename_out, 'w')

tracks = gpx.tracks
if len(tracks) != 1:
    print("I want just one track", file=sys.stderr)
    sys.exit(1)
track = tracks[0]

segments = track.segments
if len(segments) != 1:
    print("I want just one segment", file=sys.stderr)
    sys.exit(1)
segment = segments[0]

time0 = segment.points[0].time
print("# time lat lon ele_m")
for p in segment.points:
    print(f"{(p.time - time0).seconds} {p.latitude} {p.longitude} {p.elevation}",
          file = f_out)

And I process this data with the different filters (this is a silly Python loop, and is slow):

#!/usr/bin/python3

import sys
import numpy as np
import numpysane as nps
import gnuplotlib as gp
import vnlog
import pyproj

geod = None
def dist_ft(lat0,lon0, lat1,lon1):

    global geod
    if geod is None:
        geod = pyproj.Geod(ellps='WGS84')
    return \
        geod.inv(lon0,lat0, lon1,lat1)[2] * 100./2.54/12.




f = 'OUTPUT.vnl'

track,list_keys,dict_key_index = \
    vnlog.slurp(f)

t      = track[:,dict_key_index['time' ]]
lat    = track[:,dict_key_index['lat'  ]]
lon    = track[:,dict_key_index['lon'  ]]
ele_ft = track[:,dict_key_index['ele_m']] * 100./2.54/12.



@nps.broadcast_define( ( (), ()),
                       (2,))
def filter_track(ele_hysteresis_ft,
                 dxy_hysteresis_ft):

    dist        = 0.0
    ele_gain_ft = 0.0

    lon_accepted = None
    lat_accepted = None
    ele_accepted = None

    for i in range(len(lat)):

        if ele_accepted is not None:
            dxy_here  = dist_ft(lat_accepted,lon_accepted, lat[i],lon[i])
            dele_here = np.abs( ele_ft[i] - ele_accepted )

            if dxy_here < dxy_hysteresis_ft and dele_here < ele_hysteresis_ft:
                continue

            if ele_ft[i] > ele_accepted:
                ele_gain_ft += dele_here;

            dist += np.sqrt(dele_here * dele_here +
                            dxy_here  * dxy_here)

        lon_accepted = lon[i]
        lat_accepted = lat[i]
        ele_accepted = ele_ft[i]

    # lose the last point. It simply doesn't matter

    dist_mi = dist / 5280.
    return np.array((ele_gain_ft, dist_mi))




Nele_hysteresis_ft    = 20
ele_hysteresis0_ft    = 5
ele_hysteresis1_ft    = 100
ele_hysteresis_ft_all = np.linspace(ele_hysteresis0_ft,
                                    ele_hysteresis1_ft,
                                    Nele_hysteresis_ft)

Ndxy_hysteresis_ft = 20
dxy_hysteresis0_ft = 5
dxy_hysteresis1_ft = 1000
dxy_hysteresis_ft  = np.linspace(dxy_hysteresis0_ft,
                                 dxy_hysteresis1_ft,
                                 Ndxy_hysteresis_ft)


# shape (Nele,Ndxy,2)
gain,distance = \
    nps.mv( filter_track( nps.dummy(ele_hysteresis_ft_all,-1),
                          dxy_hysteresis_ft),
            -1,0 )


# Stolen from mrcal
def options_heatmap_with_contours( plotoptions, # we update this on output

                                   *,
                                   contour_min           = 0,
                                   contour_max,
                                   contour_increment     = None,
                                   do_contours           = True,
                                   contour_labels_styles = 'boxed',
                                   contour_labels_font   = None):
    r'''Update plotoptions, return curveoptions for a contoured heat map'''

    gp.add_plot_option(plotoptions,
                       'set',
                       ('view equal xy',
                        'view map'))

    if do_contours:
        if contour_increment is None:
            # Compute a "nice" contour increment. I pick a round number that gives
            # me a reasonable number of contours

            Nwant = 10
            increment = (contour_max - contour_min)/Nwant

            # I find the nearest 1eX or 2eX or 5eX
            base10_floor = np.power(10., np.floor(np.log10(increment)))

            # Look through the options, and pick the best one
            m   = np.array((1., 2., 5., 10.))
            err = np.abs(m * base10_floor - increment)
            contour_increment = -m[ np.argmin(err) ] * base10_floor

        gp.add_plot_option(plotoptions,
                           'set',
                           ('key box opaque',
                            'style textbox opaque',
                            'contour base',
                            f'cntrparam levels incremental {contour_max},{contour_increment},{contour_min}'))

        if contour_labels_font is not None:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%d" font "{contour_labels_font}"' )
        else:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%.0f"' )

        plotoptions['cbrange'] = [contour_min, contour_max]

        # I plot 3 times:
        # - to make the heat map
        # - to make the contours
        # - to make the contour labels
        _with = np.array(('image',
                          'lines nosurface',
                          f'labels {contour_labels_styles} nosurface'))
    else:
        gp.add_plot_option(plotoptions, 'unset', 'key')
        _with = 'image'

    using = \
        f'({dxy_hysteresis0_ft}+$1*{float(dxy_hysteresis1_ft-dxy_hysteresis0_ft)/(Ndxy_hysteresis_ft-1)}):' + \
        f'({ele_hysteresis0_ft}+$2*{float(ele_hysteresis1_ft-ele_hysteresis0_ft)/(Nele_hysteresis_ft-1)}):3'
    plotoptions['_3d']     = True
    plotoptions['_xrange'] = [dxy_hysteresis0_ft,dxy_hysteresis1_ft]
    plotoptions['_yrange'] = [ele_hysteresis0_ft,ele_hysteresis1_ft]
    plotoptions['ascii']   = True # needed for using to work

    gp.add_plot_option(plotoptions, 'unset', 'grid')

    return \
        dict( tuplesize=3,
              legend = "", # needed to force contour labels
              using = using,
              _with=_with)




contour_granularity = 1000
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(gain) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(gain) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(gain,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Elevation gain (ft)",
        wait = True,
        **curveoptions,
        **plotoptions,
        title    = 'Computed gain vs filtering parameters')


contour_granularity = 10
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(distance) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(distance) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(distance,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Distance (miles)",
        wait = True,
        **curveoptions,
        **plotoptions,
        title    = 'Computed distance vs filtering parameters')

Results: gain

Strava says the gain was 46307ft. The analysis says:

strava-gain.png

strava-gain-zoom.png

These show the filtered gain for different values of the distance and gain hysteresis thresholds. The same data is shown at different zoom levels. There's no sweet spot, but we get 46307ft with a reasonable amount of filtering. Maybe 46307ft is a bit low even.

Results: distance

Strava says the distance covered was 322 miles. The analysis says:

strava-distance.png

strava-distance-zoom.png

Once again, there's no sweet spot, but we get 322 miles only if we apply no filtering at all. That's clearly too high, and is not reasonable. From the map (and from other people's strava routes) the true distance is closer to 305 miles. Why those people's strava numbers are more believable is anybody's guess.

30 November, 2024 10:48PM by Dima Kogan

Enrico Zini

New laptop setup

My new laptop Framework (Framework Laptop 13 DIY Edition (AMD Ryzen™ 7040 Series)) arrived, all the hardware works out of the box on Debian Stable, and I'm very happy indeed.

This post has the notes of all the provisioning steps, so that I can replicate them again if needed.

Installing Debian 12

Debian 12's installer just worked, with Secure Boot enabled no less, which was nice.

The only glitch was an argument with the guided partitioner, which was uncooperative: I have been hit before by a /boot partition that was too small, and I wanted 1G of EFI and 1G of boot, while the partitioner decided that 512MB were good enough. Frustratingly, there was no way of changing that, nor did I find how to get more than 1G of swap, as I wanted enough swap to fit RAM for hibernation.

I let it install the way it pleased, then I booted into grml for a round of gparted.

The tricky part of that was resizing the root btrfs filesystem, which is in an LV, which is in a VG, which is in a PV, which is in LUKS. Here's a cheatsheet.

Shrink partitions:

  • mount the root filesystem in /mnt
  • btrfs filesystem resize 6G /mnt
  • umount the root filesystem
  • lvresize -L 7G vgname/lvname
  • pvresize --setphysicalvolumesize 8G /dev/mapper/pvname
  • cryptsetup resize --device-size 9G name

note that I used an increasing size because I don't trust that each tool has a way of representing sizes that aligns to the byte. I'd be happy to find out that they do, but didn't want to find out the hard way that they didn't.

Resize with gparted:

Move and resize partitions at will. Shrinking first means it all takes a reasonable time, and you won't have to wait almost an hour for a terabyte-sized empty partition to be carefully moved around. Don't ask me why I know.

Regrow partitions:

  • cryptsetup resize name
  • pvresize /dev/mapper/pvname
  • lvresize -l +100%FREE vgname/lvname
  • mount the root filesystem in /mnt
  • btrfs filesystem resize max /mnt
  • umount the root filesystem

Setup gnome

When I get a new laptop I have a tradition of trying to make it work with Gnome and Wayland, which has normally ended in frustration and a swift move to X11 and Xfce: I have a lot of long-time muscle memory involved in how I use a computer, and it needs to fit like prosthetics. I can learn to do a thing or two in a different way, but any papercut that makes me break flow and that I cannot fix will soon become a dealbreaker.

This applies to Gnome as present in Debian Stable.

General Gnome settings tips

I can list all available settings with:

gsettings list-recursively

which is handy for grepping things like hotkeys.

I can manually set a value with:

gsettings set <schema> <key> <value>

and I can reset it to its default with:

gsettings reset <schema> <key>

Some applications like Gnome Terminal use "relocatable schemas", and in those cases you also need to specify a path, which can be discovered using dconf-editor:

gsettings set <schema>:<path> <key> <value>

Install appindicators

First things first: apt install gnome-shell-extension-appindicator, log out and in again: the Gnome Extension manager won't see the extension as available until you restart the whole session.

I have no idea why that is so, and I have no idea why a notification area is not present in Gnome by default, but at least now I can get one.

Fix font sizes across monitors

My laptop screen and monitor have significantly different DPIs, so:

gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"

And in Settings/Displays, set a reasonable scaling factor for each display.

Disable Alt/Super as hotkey for the Overlay

Seeing my whole screen reorganize and reshuffle every time I accidentally press Alt leaves me disoriented and seasick:

gsettings set org.gnome.mutter overlay-key ''

Focus-follows-mouse and Raise-or-lower

My desktop is like my desk: messy and cluttered. I have lots of overlapping windows and I switch between them by moving the focus with the mouse, and when the visible part is not enough I have a handy hotkey mapped to raise-or-lower to bring forward what I need and send back what I don't need anymore.

Thankfully Gnome can be configured that way, with some work:

  • In gnome-shell settings, keyboard, shortcuts, windows, set "Raise window if covered, otherwise lower it" to "Super+Escape"
  • In gnome-tweak-tool, Windows, set "Focus on Hover"

This almost worked, but sometimes it didn't do what I wanted, like when I expected a window to come to the front but another window disappeared instead. I eventually figured out that by default Gnome delays focus changes by a perceivable amount, which is evidently too slow for the way I move around windows.

The amount cannot be shortened, but it can be removed with:

gsettings set org.gnome.shell.overrides focus-change-on-pointer-rest false

Mouse and keyboard shortcuts

Gnome has lots of preconfigured sounds, shortcuts, animations and other distractions that I do not need. They also either interfere with key combinations I want to use in terminals, or cause accidental window moves or resizes that make me break flow, or otherwise provide sensory overstimulation that really does not work for me.

It was a lot of work, and these are the steps I used to get rid of most of them.

Disable Super+N combinations that accidentally launch a questionable choice of programs:

for i in `seq 1 9`; do gsettings set org.gnome.shell.keybindings switch-to-application-$i '[]'; done

Gnome-Shell settings:

  • Multitasking:
    • disable hot corner
    • disable active edges
    • set a fixed number of workspaces
    • workspaces on all displays
    • switching includes apps from current workspace only
  • Sound:
    • disable system sounds
  • Keyboard
    • Compose Key set to Caps Lock
    • View and Customize Shortcuts:
      • Launchers
        • launch help browser: remove
      • Navigation
        • move to workspace on the left: Super+Left
        • move to workspace on the right: Super+Right
        • move window one monitor …: remove
        • move window one workspace to the left: Shift+Super+Left
        • move window one workspace to the right: Shift+Super+Right
        • move window to …: remove
        • switch system …: remove
        • switch to …: remove
        • switch windows …: disabled
      • Screenshots
        • Record a screenshot interactively: Super+Print
        • Take a screenshot interactively: Print
        • Disable everything else
      • System
        • Focus the active notification: remove
        • Open the application menu: remove
        • Restore the keyboard shortcuts: remove
        • Show all applications: remove
        • Show the notification list: remove
        • Show the overview: remove
        • Show the run command prompt: remove (the default Gnome launcher is not for me); alternatively bind it to Super+F2, or remove it and leave launching to the terminal
      • Windows
        • Close window: remove
        • Hide window: remove
        • Maximize window: remove
        • Move window: remove
        • Raise window if covered, otherwise lower it: Super+Escape
        • Resize window: remove
        • Restore window: remove
        • Toggle maximization state: remove
    • Custom shortcuts
      • xfrun4, launching xfrun4, bound to Super+F2
  • Accessibility:
    • disable "Enable animations"

gnome-tweak-tool settings:

  • Keyboard & Mouse
    • Overview shortcut: Right Super. This cannot be disabled, but since my keyboard doesn't have a Right Super button, that's good enough for me. Oddly, I cannot find this in gsettings.
  • Window titlebars
    • Double-Click: Toggle-Maximize
    • Middle-Click: Lower
    • Secondary-Click: Menu
  • Windows
    • Resize with secondary click

Gnome Terminal settings:

Thankfully 10 years ago I took notes on how to customize Gnome Terminal, and they're still mostly valid:

  • Shortcuts

    • New tab: Super+T
    • New window: Super+N
    • Close tab: disabled
    • Close window: disabled
    • Copy: Super+C
    • Paste: Super+V
    • Search: all disabled
    • Previous tab: Super+Page Up
    • Next tab: Super+Page Down
    • Move tab…: Disabled
    • Switch to tab N: Super+Fn (only available after disabling overview)
    • Switch to tab N with Alt+Fn cannot be configured in the UI: Alt+Fn is detected as simply Fn. It can however be set with gsettings:

      for i in `seq 1 12`; do gsettings set org.gnome.Terminal.Legacy.Keybindings:/org/gnome/terminal/legacy/keybindings/ switch-to-tab-$i "<Alt>F$i"; done

  • Profile

    • Text
      • Sound: disable terminal bell

Other hotkeys that got in my way and had to disable the hard way:

for n in `seq 1 12`; do gsettings set org.gnome.mutter.wayland.keybindings switch-to-session-$n '[]'; done
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-down '[]'
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-up '[]'
gsettings set org.gnome.desktop.wm.keybindings panel-main-menu '[]'
gsettings set org.gnome.desktop.interface menubar-accel '[]'

Note that even after removing F10 from being bound to menubar-accel, and with the only remaining gsettings binding to F10 being:

$ gsettings list-recursively|grep F10
org.gnome.Terminal.Legacy.Keybindings switch-to-tab-10 '<Alt>F10'

I still cannot quit Midnight Commander using F10 in a terminal, as that moves the focus to the window title bar. This looks like a Gnome bug, and a very frustrating one for me.

Appearance

Gnome-Shell settings:

  • Appearance:
    • dark mode

gnome-tweak-tool settings:

  • Fonts
    • Antialiasing: Subpixel
  • Top Bar
    • Clock/Weekday: enable (why is this not a default?)

Gnome Terminal settings:

  • General
    • Theme variant: Dark (somehow it wasn't picked up from the system settings)
  • Profile
    • Colors
      • Background: #000

Other decluttering and tweaks

Gnome Shell Settings:

  • Search
    • disable application search
  • Removable media
    • set everything to "ask what to do"
  • Default applications
    • Web: Chromium
    • Mail: mutt
    • Calendar: sadly khal is not an option
    • Video: mpv
    • Photos: Geeqie

Set a delay between screen blank and lock: when the screen goes blank, it is important for me to be able to say "nope, don't blank yet!", and maybe switch on caffeine mode during a presentation without needing to type my password in front of cameras. No UI for this, but at least gsettings has it:

gsettings set org.gnome.desktop.screensaver lock-delay 30

Extensions

I enabled the Applications Menu extension, since it's impossible to find less famous applications in the Overview without knowing in advance how they're named in the desktop. This stole a precious hotkey, which I had to disable in gsettings:

gsettings set org.gnome.shell.extensions.apps-menu apps-menu-toggle-menu '[]'

I also enabled:

  • Removable Drive Menu: why is this not on by default?
  • Workspace Indicator
  • Ubuntu Appindicators (apt install gnome-shell-extension-appindicator and restart Gnome)

I didn't go and look for Gnome Shell extensions outside what is packaged in Debian, as I'm very wary about running JavaScript code randomly downloaded from the internet with full access to my data and desktop interaction.

I also checked that the Gnome Shell Extensions web page still complains about the missing "GNOME Shell integration" browser extension, because the web browser shouldn't be allowed to download random JavaScript from the internet and run it with full local access.

Yuck.

Run program dialog

The default run program dialog is almost, but not quite, totally useless to me, as it does not provide completion, not even just for executable names in path, and so it ends up being faster to open a new terminal window and type in there.

It's possible, in Gnome Shell settings, to bind a custom command to a key. The resulting keybinding will now show up in gsettings, though it can be located in a more circuitous way by grepping first, and then looking up the resulting path in dconf-editor:

gsettings list-recursively|grep custom-key
org.gnome.settings-daemon.plugins.media-keys custom-keybindings ['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']

I tried out several run dialogs present in Debian, with sad results, possibly due to most of them not being tested on wayland:

  • fuzzel does not start
  • gmrun is gtk2, last updated in 2016, but works fine
  • kupfer segfaults as I type
  • rofi shows, but can't get keyboard input
  • shellex shows a white bar at the top of the screen and lots of errors on stderr
  • superkb wants to grab the screen for hotkeys
  • synapse searched news on the internet as I typed, which is a big no for me
  • trabucco crashes on startup
  • wofi works but very much looks like an acquired taste, though it has some completion that makes it more useful than Gnome's run dialog
  • xfrun4 (package xfce4-appfinder) struggles on wayland, being unable to center its window and with the pulldown appearing elsewhere on the screen, but it otherwise works

Both gmrun and xfrun4 seem like workable options, with xfrun4 being customizable with convenient shortcut prefixes, so xfrun4 it is.

TODO

  • Figure out what is still binding F10 to menu, and what I can do about it
  • Figure out how to reduce the size of window titlebars, which to my taste should be unobtrusive and not take 2.7% of vertical screen size each. There's a minwaita theme which isn't packaged in Debian. There's a User Theme extension, and then the whole theming can of worms to open. For another day.
  • Figure out if Gnome can be convinced to resize popup windows? Take the Gnome Terminal shortcut preferences for example: it takes ⅓ of the vertical screen and can only display ¼ of all available shortcuts, and I cannot find a valid reason why I shouldn't be allowed to enlarge it vertically.
  • Figure out if I can place shortcut launcher icons in the top panel, and how

I'll try to update these notes as I investigate.

Conclusion so far

I now have something that seems to work for me. A few papercuts to figure out still, but they seem manageable.

It all feels a lot harder than it should be: for something intended to be minimal, Gnome defaults feel horribly cluttered and noisy to me, continuously getting in the way of getting things done until tamed into being out of the way unless called for. It felt like a device that boots into flashy demo mode, which needs to be switched off before actual use.

Thankfully it can be switched off, and now I have notes to do it again if needed.

gsettings oddly feels to me like a better UI than the interactive settings managers: it's more comprehensive, more discoverable, more scriptable, and more stable across releases. Most of the Q&A I found on the internet with guidance given on the UI was obsolete, while when given with gsettings command lines it kept being relevant. I also have the feeling that these notes would be easier to understand and follow if given as gsettings invocations instead of descriptions of UI navigation paths.

At some point I'll upgrade to Trixie and reevaluate things, and these notes will be a useful checklist for that.

Fingers crossed that this time I'll manage to stay on Wayland. If not, I know that Xfce is still there for me, and I can trust it to be both helpful and good at not getting in the way of my work.

30 November, 2024 08:13PM

hackergotchi for Aurelien Jarno

Aurelien Jarno

UEFI Unified Kernel Image for Debian Installer on riscv64

On the riscv64 port, the default boot method is UEFI, with U-Boot typically used as the firmware. This approach aligns more closely with other architectures and avoids developing riscv64-specific code. For advanced users, booting using U-Boot and extlinux is possible, thanks to the kernel being built with CONFIG_EFI_STUB=y.

The same applies to the Debian Installer, which is provided as ISO images in various sizes and formats like on other architectures. These images can be put on a USB drive or an SD-card and booted directly from U-Boot in UEFI mode. Some users prefer to use the netboot "image", which in practice consists of a Linux kernel, an initrd, plus a set of Device Tree Blob (DTB) files.

However, booting this in UEFI mode is not straightforward, unless you use a TFTP server, which is also not trivial. Less known to users, there is also a corresponding mini.iso image, which contains all the above plus a bootloader. This offers a simpler alternative for installation, but depending on your (vendor) U-Boot version this still requires going through removable media.

Systemd version 257-rc2 comes with a great new feature, the ability to include multiple DTB files in a single UKI (Unified Kernel Image) file, with systemd-stub automatically loading the appropriate one for the current hardware. A UKI file combines a UEFI boot stub program, a Linux kernel image, an optional initrd, and further resources in a single UEFI PE file. This finally solves the DTB problem in the UEFI world for distributions, as a single image can work on multiple boards.
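
As a rough illustration of what goes into such an image, a UKI can be assembled with systemd's ukify tool along these lines (a sketch only: all file names are placeholders, and the exact ukify invocation for embedding multiple automatically-selected DTBs is not shown):

# Sketch of building a UKI with systemd's ukify; file names are placeholders.
ukify build \
    --linux=vmlinuz \
    --initrd=initrd.gz \
    --devicetree=board.dtb \
    --cmdline="console=ttyS0" \
    --output=mini.efi
# systemd >= 257 additionally allows bundling several DTBs in one UKI, with
# systemd-stub picking the right one for the hardware at boot time.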

Building upon this, debian-installer on riscv64 now also creates a UEFI UKI mini.efi image, which contains systemd-stub, a Linux kernel, an initrd, plus a set of Device Tree Blob (DTB) files. Using this image also ensures that the system is booted in UEFI mode. Booting the installer with it is as simple as:

load mmc 0:1 $kernel_addr_r mini.efi # (can also be done using tftpboot, wget, etc.)
bootefi $kernel_addr_r

Additional parameters can be passed to the image using the U-Boot bootargs environment variable. For instance, to boot in rescue mode:

setenv bootargs "rescue/enable=true"

30 November, 2024 09:41AM by aurel32

November 29, 2024

Raju Devidas

Finding all sub domains of a main domain

Problem: I need to know all the sub domains of a main domain, e.g. example.com has a sub domain dev.example.com; I also want to know the other sub domains.

Solution:

Install the package called sublist3r, written by Ahmed Aboul-Ela

$ sudo apt install sublist3r

run the command

$ sublist3r -d example.com -o subdomains-example.com.txt

                 ____        _     _ _     _   _____
                / ___| _   _| |__ | (_)___| |_|___ / _ __
                \___ \| | | | &apos_ \| | / __| __| |_ \| &apos__|
                 ___) | |_| | |_) | | \__ \ |_ ___) | |
                |____/ \__,_|_.__/|_|_|___/\__|____/|_|

                # Coded By Ahmed Aboul-Ela - @aboul3la

[-] Enumerating subdomains now for example.com
[-] Searching now in Baidu..
[-] Searching now in Yahoo..
[-] Searching now in Google..
[-] Searching now in Bing..
[-] Searching now in Ask..
[-] Searching now in Netcraft..
[-] Searching now in DNSdumpster..
[-] Searching now in Virustotal..
[-] Searching now in ThreatCrowd..
[-] Searching now in SSL Certificates..
[-] Searching now in PassiveDNS..
Process DNSdumpster-8:
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 269, in run
    domain_list = self.enumerate()
                  ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 649, in enumerate
    token = self.get_csrftoken(resp)
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/sublist3r.py", line 644, in get_csrftoken
    token = csrf_regex.findall(resp)[0]
            ~~~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
[!] Error: Google probably now is blocking our requests
[~] Finished now the Google Enumeration ...
[!] Error: Virustotal probably now is blocking our requests
[-] Saving results to file: subdomains-example.com.txt
[-] Total Unique Subdomains Found: 7
AS207960 Test Intermediate - example.com
www.example.com
dev.example.com
m.example.com
products.example.com
support.example.com
m.testexample.com

We can see the subdomains listed at the end of the command output.

enjoy, have fun, drink water!

29 November, 2024 10:31AM by rajudev

hackergotchi for Bits from Debian

Bits from Debian

Debian welcomes its new Outreachy interns

Outreachy logo

Debian continues participating in Outreachy, and we're excited to announce that Debian has selected two interns for the Outreachy December 2024 - March 2025 round.

Patrick Noblet Appiah will work on Automatic Indi-3rd-party driver update, mentored by Thorsten Alteholz.

Divine Attah-Ohiemi will work on Making the Debian main website more attractive by switching to HuGo as site generator, mentored by Carsten Schoenert, Subin Siby and Thomas Lange.


Congratulations and welcome Patrick Noblet Appiah and Divine Attah-Ohiemi!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentor students and outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Join us and help extend Debian! You can follow the work of the Outreachy interns reading their blogs (they are syndicated in Planet Debian), and chat with us in the #debian-outreach IRC channel and mailing list.

29 November, 2024 09:15AM by Nilesh Patra

hackergotchi for Freexian Collaborators

Freexian Collaborators

Tryton 7.0 LTS reaches Debian trixie (by Mathias Behrle, Raphaël Hertzog and Anupa Ann Joseph)

Tryton is a FOSS software suite which is highly modular and scalable. Tryton along with its standard modules can provide a complete ERP solution or it can be used for specific functions of a business like accounting, invoicing etc.

Debian packages for Tryton are being maintained by Mathias Behrle. You can follow him on Mastodon or get his help on Tryton related projects through MBSolutions (his own consulting company).

Freexian has been sponsoring Mathias’s packaging work on Tryton for a while, so that Debian gets all the quarterly bug fix releases as well as the security release in a timely manner.

About Tryton 7.0 LTS

Lately Mathias has been busy packaging Tryton 7.0 LTS. As the “LTS” tag implies, this release is recommended for production deployments since it will be supported until November 2028. This release brings numerous bug fixes, performance improvements and various new features.

As part of this work, 41 new Tryton modules and 3 dependency packages have been added to Debian, significantly broadening the options available to Debian users and improving integration with Tryton systems.

Running different versions of Tryton on different Debian releases

To provide extended compatibility, a dedicated Tryton mirror is being managed and is available at https://debian.m9s.biz/debian/. This mirror hosts backports for all supported Tryton series, ensuring availability for a variety of Debian releases and deployment scenarios.

These initiatives highlight MBSolutions’ technical contributions to the Tryton community, made possible by Freexian’s financial backing. Together, we are advancing the Tryton ecosystem for Debian users.

29 November, 2024 12:00AM by Mathias Behrle, Raphaël Hertzog and Anupa Ann Joseph

November 28, 2024

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (September and October 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Joachim Bauch (fancycode)
  • Alexander Kjäll (capitol)
  • Jan Mojžíš (janmojzis)
  • Xiao Sheng Wen (atzlinux)

The following contributors were added as Debian Maintainers in the last two months:

  • Alberto Bertogli
  • Alexis Murzeau
  • David Heilderberg
  • Xiyue Deng
  • Kathara Sasikumar
  • Philippe Swartvagher

Congratulations!

28 November, 2024 05:00PM by Jean-Pierre Giraud

November 26, 2024

Emanuele Rocca

Building Debian packages The Right Way

There is more than one way to do it, but it seems that The Right Way to build Debian packages today is using sbuild with the unshare backend. The most common backend before the rise of unshare was schroot.

The official Debian Build Daemons have recently transitioned to using sbuild with unshare, providing a strong motivation to consider making the switch. Additionally the new approach means: (1) no need to configure schroot, and (2) no need to run the build as root.

Here are my notes about moving to the new setup, for future reference and in case they may be useful to others.

First I installed the required packages:

apt install sbuild mmdebstrap uidmap

Then I created the following script to update my chroots every night:

#!/bin/bash

for arch in arm64 armhf armel; do
    HOME=/tmp mmdebstrap --quiet --arch=$arch --include=ca-certificates --variant=buildd unstable \
        ~/.cache/sbuild/unstable-$arch.tar http://127.0.0.1:3142/debian
done

In the script, I'm calling mmdebstrap with --quiet because I don't want to get any output on successful execution. The script is running in cron with email notifications, and I only want to get a message if something goes south. I'm setting HOME=/tmp for a similar reason: the unshare user does not have access to my actual home directory, and by default dpkg tries to use $HOME/.dpkg.cfg as the configuration file. By overriding HOME I avoid the following message on standard error:

dpkg: warning: failed to open configuration file '/home/ema/.dpkg.cfg' for reading: Permission denied

Then I added the following to my sbuild configuration file (~/.sbuildrc):

$chroot_mode = 'unshare';
$unshare_tmpdir_template = '/dev/shm/tmp.sbuild.XXXXXXXXXX';

The first option sets the sbuild backend to unshare, whereas unshare_tmpdir_template is needed on Bookworm to ensure that the build process runs in memory rather than on disk for performance reasons. Starting with Trixie, /tmp is by default a tmpfs so the setting won’t be needed anymore.

Packages for different architectures can now be built as follows:

# Tarball used: ~/.cache/sbuild/unstable-arm64.tar
$ sbuild --dist=unstable hello

# Tarball used: ~/.cache/sbuild/unstable-armhf.tar
$ sbuild --dist=unstable --arch=armhf hello

If you have any comments or suggestions about any of this, please let me know.

26 November, 2024 08:34AM by Emanuele Rocca (ema@linux.it)

hackergotchi for Sandro Knauß

Sandro Knauß

Akademy 2024 in Würzburg

In order to prepare for Akademy I started some days before to give my Librem 5 (an Open Hardware phone) another try, and ended up with a non-starting Plasma 6. Actually this issue was known already, but hadn't been addressed. In the end I reached Akademy with my Librem 5 running phosh (which is Gnome based), in order to have something working.

I met Bushan and Bart who took care of it, and after the issue was fixed two days later I could finally install Plasma 6 on it. The last time I tested my Librem 5 with Plasma 5 it felt sluggish and didn't work well. But this time I was impressed how well the system reacts. Sure there are some things here and there, but in the bigger picture it is quite usable. One annoying issue is that the camera only works with one app, and the other issue is the battery capacity: you have to charge it once a day. Because a QR reader that can use the camera is missing, getting data onto the phone was quite challenging. Unfortunately the conference Wifi separated the devices and I couldn't use KDE Connect to transfer data. In the end the only way to import data was taking five photos of the QR code to import my D-Ticket into Itinerary.

Having a device with Plasma Mobile at hand, it was directly used for an experiment: how well does Dolphin work on a Plasma Mobile device? Together with Felix Ernst we tried it out and were quite impressed that Dolphin works very well on Plasma Mobile after some simple modifications to the UI. That resulted in a patch to add a mobile UI for Dolphin !826.

With more time to play with my Librem 5 I also found a bug in KWeather: it is missing a Refresh option when used in a Plasma Mobile environment #493656.

Akademy is a good place to identify and solve some issues. It is always like that: you chat with someone, they can tell you who to ask about your concrete question, and in the end you can solve things that seemed unsolvable in the beginning.

There was also time to look into the travelling app Itinerary. A lot of people are faced with many real-world issues when not in their home town. Itinerary is the best travelling app I know about. It can import nearly every ticket you have, can get location information from restaurant websites and allows routing to that place. It adds a lot of useful information while travelling, like current delays, platform changes, live updates for elevators, weather information at the destination and a station map, and all those features come with a strong focus on privacy.

In detail I found some small things to improve:

  • If you search for a bus ride and enter the correct name of the bus stop, it will still add some walking from and to the station. The issue here is that we use different backends and not all backends share the same geo coordinates. That's why Itinerary needs to add some heuristics to delete those paths. walk to and from the bus stop

  • Instead of displaying just a small station map of one bus stop in the inner city, it showed the complete Würzburg inner city, as there is one big park around the inner city (named "Ringpark").

  • Würzburg has a quite big bus station, but the platform information was missing from the map, so we tweaked the CSS to display the platforms. To be sure that we don't fix only Würzburg, we also checked whether Greifswald and Aix-en-Provence follow the same naming scheme.

I additionally learned that it has a lot of details that help people who have special needs. That is the reason why Daniel Kraut wants to get Itinerary available for iOS. Once Daniel spoke out that he wants to reach this goal, others already started to implement the first steps to build apps for iOS.

This year I was volunteering to help out at Akademy. For me it was a lot of fun to meet everyone at the infodesk or help the speakers set up the beamer and microphone. It is also a good opportunity to meet many new faces and get in contact with them. I also see room for improvement. As we were quite busy at the Welcome Event getting the badges out to everyone, I couldn't answer the questions from newcomers, as the queue was too long. I propose that some people volunteer to be available for questions from newcomers. Often it is hard for newcomers to make their first contact(s) in a new community. There is a lot of space for improvement to make it easier for newcomers to join. Some ideas in my head are: make an event for the newcomers to give them some links into the community and show that everyone is friendly. The tables at the BoFs should form a circle, so everyone can see each other. It was also hard for me to understand everyone, as they mostly spoke towards the front. And then BoFs are sometimes full of very specific words, and if you are not already deep in the topic you are lost. I can see the problem: on the one side, BoFs are also the place where the people who already know the topic want to get things done. On the other side, newcomers join BoFs, are overwhelmed by too many new words, get frustrated and think that they are not welcome. Maybe at least everyone should introduce themselves by name and ask new faces why they joined the BoF, to help them join in.

I'm happy that the food provided for the attendees was very delicious and that I'm not the only one who is mostly vegetarian, with a good number being vegan.

At the conference the KDE Eco initiative really caught me, as I see a lot of new possibilities in giving more reasons to switch to an Open Source system. Natalie's talk was great for seeing how pupils get excited about Open Source and also help their grandparents move to a Linux system. As I will also start to work as a teacher, I really got ideas about what I can do at school. Together with Joseph and Nicole, we finally started to think about how to drive an exploration of what kind of old hardware KDE software still runs on. The ones with the oldest hardware will get an old KDE shirt. For more information see #40.

The conference was very motivating for me; I still had energy in the evening to do some Debian packaging and finally pushed kweathercore to Debian and started to work on KWeather. Now I'm even more interested in the KDE apps targeting the mobile world, as I now have some hardware that can actually use those apps.

I really enjoyed the workshop on how to contribute to Qt by Volker Hilsheimer, especially the way Volker explained things in a very friendly manner, answered every question, and sometimes postponed questions but came back to them later. All in all I now have a good overview of how Qt development works and how I can fix bugs.

The daytrip to Rothenburg ob der Tauber was very interesting for me. It was the first time I visited the village, but in my memory it feels like I know the village already. I grew up reading a lot of comic albums, including the good sci-fi comic album series "Yoko Tsuno" created by the Belgian writer Roger Leloup. Yoko Tsuno is an electronics engineer, raised in Japan but now living in Belgium. In "On the edge of life" she helps her friend Ingard, who actually lives in Rothenburg. Leloup invested a lot of time travelling to make his drawings as accurate as possible.

a comic page with Yoko Tsuno in Rothenburg ob der Tauber

In order to not have a hard cut from Akademy to normal life, I had lunch with Carlos to discuss KDE Neon and how we can improve the interaction with Debian. In the future this should have less friction and make both communities work together more smoothly. Additionally, as I used to develop on KDEPIM with the help of Docker images based on Neon, I asked for a kf6 dev meta package. That should help to get rid of most hand-written lists of dev packages in the Dockerfile, in order to make it simpler for new contributors to start hacking on KDEPIM.

For the rest of the day I finally found time to do the normal tourist stuff: going to the Wine bridge and having a walk to the castle of Würzburg. Unfortunately you hear a lot of car noise up there, but I could finally relax in a Japanese-designed garden.

Finally on Saturday I started my trip back. The trains towards Eberswalde were broken and I needed to find an alternative route. I got a little bit nervous, as it was the first time I travelled with only my Librem 5 and Itinerary, and I needed to reach the next train in less than two minutes. With the indoor maps provided, I could prepare my run through the train station and successfully reached my next train.

By the way, even if you only use KDE software, I would recommend everyone to join Akademy ;)

26 November, 2024 12:00AM by Sandro Knauß

November 24, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

plocate 1.1.23 released

I've just released version 1.1.23 of plocate, almost a year after 1.1.22. The changes are mostly around the systemd unit this time, but perhaps more interestingly is that this is the first release where I don't have the majority of patches; in fact, I don't have any patches at all. All of them came from contributors, many of them through the “just do git push to send me a patch email” interface.

I guess this means that I'll need to actually start streamlining my “git am” workflow… it gets me every time. :-)

24 November, 2024 10:27PM

Edward Betts

A mini adventure at MiniDebConf Toulouse

Last week, I ventured to Toulouse, for a delightful mix of coding, conversation, and crepes at MiniDebConf Toulouse, part of the broader Capitole du Libre conference, akin to the more well-known FOSDEM but with a distinctly French flair.

This was my fourth and final MiniDebConf of the year.

no jet bridge

My trek to Toulouse was seamless. I hopped on a bus from my home in Bristol to the airport, then took a short flight. I luxuriated in seat 1A, making me the first to disembark—a mere ten minutes later, I was already on the bus heading to my hotel.

Exploring the Pink City

pink

img 29

duck shop

Once settled, I wasted no time exploring the charms of Toulouse. Just a short stroll from my hotel, I found myself beside a tranquil canal, its waters mirroring the golden hues of the trees lining its banks. Autumn in Toulouse painted the city in warm oranges and reds, creating a picturesque backdrop that was a joy to wander through. Every corner of the street revealed more of the city's rich cultural tapestry and striking architecture. Known affectionately as 'La Ville Rose' (The Pink City) for its unique terracotta brickwork, Toulouse captivated me with its blend of historical allure and vibrant modern life.

MiniDebCamp

FabLab sign

laptop setup

Prior to the main event, the MiniDebCamp provided two days of hacking at Artilect FabLab—a space as creative as it was welcoming. It was a pleasure to reconnect with familiar faces and forge new friendships.

Culinary delights

lunch 1

img 14

img 15

img 16

img 17

cakes

The hospitality was exceptional. Our lunches boasted a delicious array of quiches, an enticing charcuterie board, and a superb selection of cheeses, all perfectly complemented by exquisite petite fours. Each item was not only a feast for the eyes but also a delight for the palate.

Wine and cheese

wine and cheese 1

wine and cheese 2

Leftovers from these gourmet feasts fuelled our impromptu cheese and wine party on Thursday evening—a highlight where informal chats blended seamlessly with serious software discussions.

The river at night

night river 1

night river 2

night river 3

night river 4

The enchantment of Toulouse doesn't dim with the setting sun; instead, it transforms. My evening strolls took me along the banks of the Garonne, under a sky just turning from twilight to velvet blue. The river, a dark mirror, perfectly reflected the illuminated grandeur of the city's architecture. Notably, the dome of the Hôpital de La Grave stood out, bathed in a warm glow against the night sky. This architectural gem, coupled with the soft lights of the bridge and the serene river, created a breathtaking scene that was both tranquil and awe-inspiring.

Capitole du Libre

making crepes

The MiniDebConf itself, part of the larger Capitole du Libre event, was a fantastic immersion into the world of free software. Unlike the ticket-free FOSDEM, this conference required QR codes for entry and even had bag searches, adding an unusual layer of security for a software conference.

Highlights included the crepe-making by the organisers, reminiscent of street food scenes from larger festivals. The availability of crepes for MiniDebConf attendees and the presence of food trucks added a festive air, albeit with the inevitable long queues familiar to any festival-goer.

vélôToulouse

bike

cyclocity

The city's bike rental system was a boon—easy to use with handy bike baskets perfect for casual city touring. I chose pedal power over electric, finding it a pleasant way to navigate the streets and absorb the city's vibrant atmosphere.

Markets

market

flatbreads

Toulouse's markets were a delightful discovery. From a spontaneous visit to a market near my hotel upon arrival, to cycling past bustling marketplaces, each day presented new local flavours and crafts to explore.

The Za'atar flatbread from a Syrian stall was a particularly memorable lunch pick.

La brasserie Les Arcades

img 25

img 26

img 27

Our conference wrapped up with a spontaneous gathering at La Brasserie Les Arcades in Place du Capitole. Finding a café that could accommodate 30 of us on a Sunday evening without a booking felt like striking gold. What began with coffee and ice cream smoothly transitioned into dinner, where I enjoyed a delicious braised duck leg with green peppercorn sauce. This meal rounded off the trip with lively conversations and shared experiences.

The journey back home

img 30

img 31

img 32

img 33

Returning from Toulouse, I found myself once again in seat 1A, offering the advantage of being the first off the plane, both on departure and arrival. My flight touched down in Bristol ahead of schedule, and within ten minutes, I was on the A1 bus, making my way back into the heart of Bristol.

Anticipating DebConf 25 in Brittany

My trip to Toulouse for MiniDebConf was yet another fulfilling experience; the city was delightful, and the talks were insightful. While I frequently travel, these journeys are more about continuous learning and networking than escape. The food in Toulouse was particularly impressive, a highlight I've come to expect and relish on my trips to France. Looking ahead, I'm eagerly anticipating DebConf in Brest next year, especially the opportunity to indulge once more in the excellent French cuisine and beverages.

24 November, 2024 12:42PM

November 22, 2024

hackergotchi for Matthew Palmer

Matthew Palmer

Your Release Process Sucks

For the past decade-plus, every piece of software I write has had one of two release processes.

Software that gets deployed directly onto servers (websites, mostly, but also the infrastructure that runs Pwnedkeys, for example) is deployed with nothing more than git push prod main. I’ll talk more about that some other day.

Today is about the release process for everything else I maintain – Rust / Ruby libraries, standalone programs, and so forth. To release those, I use the following, extremely intricate process:

  1. Create an annotated git tag, where the name of the tag is the software version I’m releasing, and the annotation is the release notes for that version.

  2. Run git release in the repository.

  3. There is no step 3.

Yes, it absolutely is that simple. And if your release process is any more complicated than that, then you are suffering unnecessarily.

But don’t worry. I’m from the Internet, and I’m here to help.

Sidebar: “annotated what-now?!?”

The annotated tag is one of git's best-kept secrets. They've been available in git for practically forever (I've been using them since at least 2014, which is "practically forever" in software development), yet almost everyone I mention them to has never heard of them.

A “tag”, in git parlance, is a repository-unique named label that points to a single commit (as identified by the commit’s SHA1 hash). Annotating a tag is simply associating a block of free-form text with that tag.

Creating an annotated tag is simple-sauce: git tag -a tagname will open up an editor window where you can enter your annotation, and git tag -a -m "some annotation" tagname will create the tag with the annotation “some annotation”. Retrieving the annotation for a tag is straightforward, too: git show tagname will display the annotation along with all the other tag-related information.
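
Putting those two commands together, a quick example (the tag name and annotation here are made up):

# create an annotated tag named v1.2.3, with the release notes as the annotation
git tag -a -m "Fix the frobnicator, add a --verbose flag" v1.2.3

# display the annotation (and the commit the tag points at)
git show v1.2.3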

Now that we know all about annotated tags, let’s talk about how to use them to make software releases freaking awesome.

Step 1: Create the Annotated Git Tag

As I just mentioned, creating an annotated git tag is pretty simple: just add a -a (or --annotate, if you enjoy typing) to your git tag command, and WHAM! annotation achieved.

Releases, though, typically have unique and ever-increasing version numbers, which we want to encode in the tag name. Rather than having to look at the existing tags and figure out the next version number ourselves, we can have software do the hard work for us.

Enter: git-version-bump. This straightforward program takes one mandatory argument: major, minor, or patch, and bumps the corresponding version number component in line with Semantic Versioning principles. If you pass it -n, it opens an editor for you to enter the release notes, and when you save out, the tag is automagically created with the appropriate name.

Because the program is called git-version-bump, you can call it as a git command: git version-bump. Also, because version-bump is long and unwieldy, I have it aliased to vb, with the following entry in my ~/.gitconfig:

[alias]
    vb = version-bump -n

Of course, you don’t have to use git-version-bump if you don’t want to (although why wouldn’t you?). The important thing is that the only step you take to go from “here is our current codebase in main” to “everything as of this commit is version X.Y.Z of this software”, is the creation of an annotated tag that records the version number being released, and the metadata that goes along with that release.
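
With the alias above, cutting a release looks roughly like this (the version number is illustrative, and the argument order follows the description above):

# bump the minor version; via the vb alias, -n opens $EDITOR for the release notes
git vb minor          # equivalent to: git version-bump -n minor

# double-check the freshly created annotated tag
git show "$(git describe --tags --abbrev=0)"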

Step 2: Run git release

As I said earlier, I’ve been using this release process for over a decade now. So long, in fact, that when I started, GitHub Actions didn’t exist, and so a lot of the things you’d delegate to a CI runner these days had to be done locally, or in a more ad-hoc manner on a server somewhere.

This is why step 2 in the release process is “run git release”. It’s because historically, you can’t do everything in a CI run. Nowadays, most of my repositories have this in the .git/config:

[alias]
    release = push --tags

Older repositories which, for one reason or another, haven’t been updated to the new hawtness, have various other aliases defined, which run more specialised scripts (usually just rake release, for Ruby libraries), but they’re slowly dying out.

The reason why I still have this alias, though, is that it standardises the release process. Whether it’s a Ruby gem, a Rust crate, a bunch of protobuf definitions, or whatever else, I run the same command to trigger a release going out. It means I don’t have to think about how I do it for this project, because every project does it exactly the same way.

The Wiring Behind the Button

It wasn’t the button that was the problem. It was the miles of wiring, the hundreds of miles of cables, the circuits, the relays, the machinery. The engine was a massive, sprawling, complex, mind-bending nightmare of levers and dials and buttons and switches. You couldn’t just slap a button on the wall and expect it to work. But there should be a button. A big, fat button that you could press and everything would be fine again. Just press it, and everything would be back to normal.

  • Red Dwarf: Better Than Life

Once you’ve accepted that your release process should be as simple as creating an annotated tag and running one command, you do need to consider what happens afterwards. These days, with the near-universal availability of CI runners that can do anything you need in an isolated, reproducible environment, the work required to go from “annotated tag” to “release artifacts” can be scripted up and left to do its thing.
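
Whatever you’re releasing, the first step of that scripting tends to look the same: recover the version number and the release notes from the tag that triggered the run. A rough sketch, assuming the vX.Y.Z tag naming convention:

tag=$(git describe --tags --exact-match)            # fails unless HEAD is exactly at a tag
version=${tag#v}                                    # v1.2.3 -> 1.2.3
notes=$(git tag -l --format='%(contents)' "$tag")   # the annotation, i.e. the release notes
printf 'Releasing %s\n\n%s\n' "$version" "$notes"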

What that looks like, of course, will probably vary greatly depending on what you’re releasing. I can’t really give universally-applicable guidance, since I don’t know your situation. All I can do is provide some of my open source work as inspirational examples.

For starters, let’s look at a simple Rust crate I’ve written, called strong-box. It’s a straightforward crate, that provides ergonomic and secure cryptographic functionality inspired by the likes of NaCl. As it’s just a crate, its release script is very straightforward. Most of the complexity is working around Cargo’s inelegant mandate that crate version numbers are specified in a TOML file. Apart from that, it’s just a matter of building and uploading the crate. Easy!

Slightly more complicated is action-validator. This is a Rust CLI tool which validates GitHub Actions and Workflows (how very meta) against a published JSON schema, to make sure you haven’t got any syntax or structural errors. As not everyone has a Rust toolchain on their local box, the release process helpfully builds binaries for several common OSes and CPU architectures that people can download if they choose. The release process in this case is somewhat larger, but not particularly complicated. Almost half of it is actually scaffolding to build an experimental WASM/NPM build of the code, because someone seemed rather keen on that.

Moving away from Rust, and stepping up the meta another notch, we can take a look at the release process for git-version-bump itself, my Ruby library and associated CLI tool which started me down the “Just Tag It Already” rabbit hole many years ago. In this case, since gemspecs are very amenable to programmatic definition, the release process is practically trivial. Remove the boilerplate and workarounds for GitHub Actions bugs, and you’re left with about three lines of actual commands.

These approaches can certainly scale to larger, more complicated processes. I’ve recently implemented annotated-tag-based releases in a proprietary software product that produces Debian/Ubuntu, RedHat, and Windows packages, as well as Docker images, and it takes all of the information it needs from the annotated tag. I’m confident that this approach will successfully serve them as they expand out to build AMIs, GCP machine images, and whatever else they need in their release processes in the future.

Objection, Your Honour!

I can hear the howl of the “but, actuallys” coming over the horizon even as I type. People have a lot of Big Feelings about why this release process won’t work for them. Rather than overload this article with them, I’ve created a companion article that enumerates the objections I’ve come across, and answers them. I’m also available for consulting if you’d like a personalised, professional opinion on your specific circumstances.

DVD Bonus Feature: Pre-releases

Unless you’re addicted to surprises, it’s good to get early feedback about new features and bugfixes before they make it into an official, general-purpose release. For this, you can’t go past the pre-release.

The major blocker to widespread use of pre-releases is that cutting a release is usually a pain in the behind. If you’ve got to edit changelogs, and modify version numbers in a dozen places, then you’re entirely justified in thinking that cutting a pre-release for a customer to test that bugfix that only occurs in their environment is too much of a hassle.

The thing is, once you’ve got releases building from annotated tags, making pre-releases on every push to main becomes practically trivial. This is mostly due to another fantastic and underused Git command: git describe.

How git describe works is, basically, that it finds the most recent commit that has an associated annotated tag, and then generates a string that contains that tag’s name, plus the number of commits between that tag and the current commit, with the current commit’s hash included, as a bonus. That is, imagine that three commits ago, you created an annotated release tag named v4.2.0. If you run git describe now, it will print out v4.2.0-3-g04f5a6f (assuming that the current commit’s SHA starts with 04f5a6f).

You might be starting to see where this is going. With a bit of light massaging (essentially, removing the leading v and replacing the -s with .s), that string can be converted into a version number which, in most sane environments, is considered “newer” than the official 4.2.0 release, but will be superseded by the next actual release (say, 4.2.1 or 4.3.0). If you’re already injecting version numbers into the release build process, injecting a slightly different version number is no work at all.
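
A sketch of that light massaging, again assuming the vX.Y.Z naming convention:

raw=$(git describe --tags)                        # e.g. v4.2.0-3-g04f5a6f
version=$(printf '%s' "${raw#v}" | tr '-' '.')    # -> 4.2.0.3.g04f5a6f
echo "$version"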

Then, you can easily build release artifacts for every commit to main, and make them available somewhere they won’t get in the way of the “official” releases. For example, in the proprietary product I mentioned previously, this involves uploading the Debian packages to a separate component (prerelease instead of main), so that users that want to opt-in to the prerelease channel simply modify their sources.list to change main to prerelease. Management have been extremely pleased with the easy availability of pre-release packages; they’ve been gleefully installing them willy-nilly for testing purposes since I rolled them out.
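
The opt-in itself can be a one-liner on each machine; a sketch, in which the sources.list filename is made up for illustration:

sudo sed -i 's/ main$/ prerelease/' /etc/apt/sources.list.d/example.list
sudo apt update && sudo apt upgrade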

In fact, even while I’ve been writing this article, I was asked to add some debug logging to help track down a particularly pernicious bug. I added the few lines of code, committed, pushed, and went back to writing. A few minutes later (next week’s job is to cut that in-process time by at least half), the person who asked for the extra logging ran apt update; apt upgrade, which installed the newly-built package, and was able to progress in their debugging adventure.

Continuous Delivery: It’s Not Just For Hipsters.

“+1, Informative”

Hopefully, this has spurred you to commit your immortal soul to the Church of the Annotated Tag. You may tithe by buying me a refreshing beverage. Alternately, if you’re really keen to adopt more streamlined release management processes, I’m available for consulting engagements.

22 November, 2024 08:15PM by Matt Palmer (mpalmer@hezmatt.org)

Invalid Excuses for Why Your Release Process Sucks

In my companion article, I made the bold claim that your release process should consist of no more than two steps:

  1. Create an annotated Git tag;

  2. Run a single command to trigger the release pipeline.

As I have been on the Internet for more than five minutes, I’m aware that a great many people will have a great many objections to this simple and straightforward idea. In the interests of saving them a lot of wear and tear on their keyboards, I present this list of common reasons why these objections are invalid.

If you have an objection I don’t cover here, the comment box is down the bottom of the article. If you think you’ve got a real stumper, I’m available for consulting engagements, and if you turn out to have a release process which cannot feasibly be reduced to the above two steps for legitimate technical reasons, I’ll waive my fees.

“But I automatically generate my release notes from commit messages!”

This one is really easy to solve: have the release note generation tool feed directly into the annotation. Boom! Headshot.
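
In practice that can be as small as this sketch, which assumes your commit subjects are good enough to serve as release notes and uses v1.3.0 as a placeholder for the new version:

last=$(git describe --tags --abbrev=0)    # the most recent release tag
git log --format='* %s' "${last}..HEAD" \
  | git tag -a -F - v1.3.0                # -F - reads the annotation from stdin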

“But all these files need to be edited to make a release!”

No, they absolutely don’t. But I can see why you might think you do, given how inflexible some packaging environments can seem, and since “that’s how we’ve always done it”.

Language Packages

Most languages require you to encode the version of the library or binary in a file that you want to revision control. This is teh suck, but I’m yet to encounter a situation that can’t be worked around some way or another.

In Ruby, for instance, gemspec files are actually executable Ruby code, so I call code (that’s part of git-version-bump, as an aside) to calculate the version number from the git tags. The Rust build tool, Cargo, uses a TOML file, which isn’t as easy, but a small amount of release automation is used to take care of that.
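
For the Cargo case, that small amount of release automation might look like this sketch (it assumes a vX.Y.Z tag convention and a single-crate repository):

version=$(git describe --tags --abbrev=0)
version=${version#v}
sed -i "s/^version = \".*\"/version = \"${version}\"/" Cargo.toml
cargo publish --allow-dirty    # --allow-dirty because we just edited Cargo.toml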

Distribution Packages

If you’re building Linux distribution packages, you can easily apply similar automation faffery. For example, Debian packages take their metadata from the debian/changelog file in the build directory. Don’t keep that file in revision control, though: build it at release time. Everything you need to construct a Debian (or RPM) changelog is in the tag – version numbers, dates, times, authors, release notes. Use it for much good.
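
As an illustration rather than a drop-in tool, a release script might synthesise the changelog along these lines (the package name is a placeholder, and the date/author handling is simplified):

tag=$(git describe --tags --abbrev=0)
version=${tag#v}
tagger=$(git tag -l --format='%(taggername) %(taggeremail)' "$tag")
date=$(git tag -l --format='%(taggerdate:rfc2822)' "$tag")
{
  printf 'mypackage (%s) unstable; urgency=medium\n\n' "$version"
  git tag -l --format='%(contents)' "$tag" | sed 's/^/  /'   # indent the release notes
  printf '\n -- %s  %s\n' "$tagger" "$date"
} > debian/changelog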

The Dreaded Changelog

Finally, there’s the CHANGELOG file. If it’s maintained during the development process, it typically has an archive of all the release notes, under version numbers, with an “Unreleased” heading at the top. It’s one more place to remember to have to edit when making that “preparing release X.Y.Z” commit, and it is a gift to the Demon of Spurious Merge Conflicts if you follow the policy of “every commit must add a changelog entry”.

My solution: just burn it to the ground. Add a line to the top with a link to wherever the contents of annotated tags get published (such as GitHub Releases, if that’s your bag) and never open it ever again.

“But I need to know other things about my release, too!”

For some reason, you might think you need some other metadata about your releases. You’re probably wrong – it’s amazing how much information you can obtain or derive from the humble tag – so think creatively about your situation before you start making unnecessary complexity for yourself.

But, on the off chance you’re in a situation that legitimately needs some extra release-related information, here’s the secret: structured annotation. The annotation on a tag can be literally any sequence of octets you like. How that data is interpreted is up to you.

So, require that annotations on release tags use some sort of structured data format (say YAML or TOML – or even XML if you hate your release manager), and mandate that it contain whatever information you need. You can make sure that the annotation has a valid structure and contains all the information you need with an update hook, which can reject the tag push if it doesn’t meet the requirements, and you’re sorted.
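
A sketch of such an update hook, assuming you’ve settled on YAML and have python3 with PyYAML available on the server (lightweight tags slip straight through; it’s a sketch, not a fortress):

#!/bin/sh
# server-side update hook: reject release tags whose annotation isn't valid YAML
refname="$1" ; newrev="$3"
case "$refname" in
  refs/tags/v*)
    # a tag object's annotation is everything after the first blank line
    git cat-file tag "$newrev" | sed '1,/^$/d' \
      | python3 -c 'import sys, yaml; yaml.safe_load(sys.stdin)' \
      || { echo "release tag annotation is not valid YAML" >&2 ; exit 1 ; }
    ;;
esac
exit 0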

“But I have multiple packages in my repo, with different release cadences and versions!”

This one is common enough that I just refer to it as “the monorepo drama”. Personally, I’m not a huge fan of monorepos, but you do you, boo. Annotated tags can still handle it just fine.

The trick is to include the package name being released in the tag name. So rather than a release tag being named vX.Y.Z, you use foo/vX.Y.Z, bar/vX.Y.Z, and baz/vX.Y.Z. The release automation for each package just triggers on tags that match the pattern for that particular package, and limits itself to those tags when figuring out what the version number is.
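
In CI terms, the per-package trigger can be as dumb as this sketch (the package names are placeholders):

tag=$(git describe --tags --exact-match 2>/dev/null) || exit 0   # not a tagged commit: nothing to release
case "$tag" in
  foo/v*) package=foo ; version=${tag#foo/v} ;;
  bar/v*) package=bar ; version=${tag#bar/v} ;;
  *)      exit 0 ;;                                              # not a release tag we recognise
esac
echo "releasing ${package} ${version}"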

“But we don’t semver our releases!”

Oh, that’s easy. The tag pattern that marks a release doesn’t have to be vX.Y.Z. It can be anything you want.

Relatedly, there is a (rare, but existent) need for packages that don’t really have a conception of “releases” in the traditional sense. The example I’ve hit most often is automatically generated “bindings” packages, such as protobuf definitions. The source of truth for these is a bunch of .proto files, but to be useful, they need to be packaged into code for the various language(s) you’re using. But those packages need versions, and while someone could manually make releases, the best option is to build new per-language packages automatically every time any of those definitions change.

The versions of those packages, then, can be datestamps (I like something like YYYY.MM.DD.N, where N starts at 0 each day and increments if there are multiple releases in a single day).

This process allows all the code that needs the definitions to declare the minimum version of the definitions that it relies on, and everything is kept in sync and tracked almost like magic.
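
Generating the next datestamp version is about three lines of shell; a sketch, assuming tags of the form vYYYY.MM.DD.N:

today=$(date +%Y.%m.%d)
count=$(git tag -l "v${today}.*" | wc -l)   # N starts at 0 each day
echo "v${today}.${count}"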

Th-th-th-th-that’s all, folks!

I hope you’ve enjoyed this bit of mild debunking. Show your gratitude by buying me a refreshing beverage, or purchase my professional expertise and I’ll answer all of your questions and write all your CI jobs.

22 November, 2024 08:15PM by Matt Palmer (mpalmer@hezmatt.org)

November 20, 2024

Ian Jackson

The Rust Foundation's 2nd bad draft trademark policy

tl;dr: The Rust Foundation’s new trademark policy still forbids unapproved modifications: this would forbid both the Rust Community’s own development work(!) and normal Free Software distribution practices.

Background

In April 2023 I wrote about the Rust Foundation’s ham-fisted and misguided attempts to update the Rust trademark policy. This turned into drama.

The new draft

Recently, the Foundation published a new draft. It’s considerably less bad, but the most serious problem, which I identified last year, remains.

It prevents redistribution of modified versions of Rust, without pre-approval from the Rust Foundation. (Subject to some limited exceptions.) The people who wrote this evidently haven’t realised that distributing modified versions is how free software development works. Ie, the draft Rust trademark policy even forbids making a github branch for an MR to contribute to Rust!

It’s also very likely unacceptable to Debian. Rust is still on track to repeat the Firefox/Iceweasel debacle.

Below is a copy of my formal response to the consultation. The consultation closes at 07:59:00 UTC tomorrow (21st November), ie, at the end of today (Wednesday) US Pacific time, so if you want to reply, do so quickly.

My consultation response

Hi. My name is Ian Jackson. I write as a Rust contributor and as a Debian Developer with first-hand experience of Debian’s approach to trademarks. (But I am not a member of the Debian Rust Packaging Team.)

Your form invites me to state any blocking concerns. I’m afraid I have one:

PROBLEM

The policy on distributing modified versions of Rust (page 4, 8th bullet) is far too restrictive.

PROBLEM - ASPECT 1

On its face the policy forbids making a clone of the Rust repositories on a git forge, and pushing a modified branch there. That is publicly distributing a modified version of Rust.

I.e., the current policy forbids the Rust community’s own development workflow!

PROBLEM - ASPECT 2

The policy also does not meet the needs of Software-Freedom-respecting downstreams, including community Linux distributions such as Debian.

There are two scenarios (fuzzy, and overlapping) which provide a convenient framing to discuss this:

Firstly, in practical terms, Debian may need to backport bugfixes, or sometimes other changes. Sometimes Debian will want to pre-apply bugfixes or changes that have been contributed by users, and are intended eventually to go upstream, but are not included upstream in official Rust yet. This is a routine activity for a distribution. The policy, however, forbids it.

Secondly, Debian, as a point of principle, requires the ability to diverge from upstream if and when Debian decides that this is the right choice for Debian’s users. The freedom to modify is a key principle of Free Software. This includes making changes that the upstream project disapproves of. Some examples of this, where Debian has made changes, that upstream do not approve of, have included things like: removing user-tracking code, or disabling obsolescence “timebombs” that stop a particular version working after a certain date.

Overall, while alignment in values between Debian and Rust seems to be very good right now, modifiability is a matter of non-negotiable principle for Debian. The 8th bullet point on page 4 of the PDF does not give Debian (and Debian’s users) these freedoms.

POSSIBLE SOLUTIONS

Other formulations, or an additional permission, seem like they would be able to meet the needs of both Debian and Rust.

The first thing to recognise is that forbidding modified versions is probably not necessary to prevent language ecosystem fragmentation. Many other programming languages are distributed under fully Free Software licences without such restrictive trademark policies. (For example, Python; I’m sure a thorough survey would find many others.)

The scenario that would be most worrying for Rust would be “embrace - extend - extinguish”. In projects with a copyleft licence, this is not a concern, but Rust is permissively licenced. However, one way to address this would be to add an additional permission for modification that permits distribution of modified versions without permission, but if the modified source code is also provided, under the original Rust licence.

I suggest therefore adding the following 2nd sub-bullet point to the 8th bullet on page 4:

  • changes which are shared, in source code form, with all recipients of the modified software, and publicly licenced under the same licence as the official materials.

This means that downstreams who fear copyleft have the option of taking Rust’s permissive copyright licence at face value, but are limited in the modifications they may make, unless they rename. Conversely downstreams such as Debian who wish to operate as part of the Free Software ecosystem can freely make modifications.

It also, obviously, covers the Rust Community’s own development work.

NON-SOLUTIONS

Some upstreams, faced with this problem, have offered Debian a special permission: ie, said that it would be OK for Debian to make modifications that Debian wants to. But Debian will not accept any Debian-specific permissions.

Debian could of course rename their Rust compiler. Debian has chosen to rename in the past: infamously, a similar policy by Mozilla resulted in Debian distributing Firefox under the name Iceweasel for many years. This is a PR problem for everyone involved, and results in a good deal of technical inconvenience and makework.

“Debian could seek approval for changes, and the Rust Foundation would grant that approval quickly”. This is unworkable on a practical level - requests for permission do not fit into Debian’s workflow, and the resulting delays would be unacceptable. But, more fundamentally, Debian rightly insists that it must have the freedom to make changes that the Foundation do not approve of. (For example, if a future Rust shipped with telemetry features Debian objected to.)

“Debian and Rust could compromise”. However, Debian is an ideological as well as technological project. The principles I have set out are part of Debian’s Foundation Documents - they are core values for Debian. When Debian makes compromises, it does so very slowly and with great deliberation, using its slowest and most heavyweight constitutional governance processes. Debian is not likely to want to engage in such a process for the benefit of one programming language.

“Users will get Rust from upstream”. This is currently often the case. Right now, Rust is moving very quickly, and by Debian standards is very new. As Rust becomes more widely used, more stable, and more part of the infrastructure of the software world, it will need to become part of standard, stable, reliable, software distributions. That means Debian.

(The consultation was a Google Forms page with a single text field, so the formatting isn’t great. I have edited the formatting very lightly to avoid rendering bugs here on my blog.)




20 November, 2024 12:50PM

Russell Coker

Solving Spam and Phishing for Corporations

Centralisation and Corporations

An advantage of a medium to large company is that it permits specialisation. For example, I’m currently working in the IT department of a medium-sized company, and because we have standardised hardware (Dell Latitude and Precision laptops, Dell Precision Tower workstations, and Dell PowerEdge servers) and I am involved in fixing all Linux compatibility issues on that hardware, I can fix most problems in a small fraction of the time it would take me on a random computer. There is scope for a lot of debate about the extent to which companies should standardise and centralise things. But for computer problems, which can escalate quickly from minor to serious if not approached in the correct manner, it’s clear that a good deal of centralisation is appropriate.

For people doing technical computer work such as programming there’s a large portion of the employees who are computer hobbyists who like to fiddle with computers. But if the support system is run well even they will appreciate having computers just work most of the time and for a large portion of the failures having someone immediately recognise the problem, like the issues with NVidia drivers that I have documented so that first line support can implement workarounds without the need for a lengthy investigation.

A big problem with email in the modern Internet is the prevalence of Phishing scams. The current corporate approach to this is to send out test Phishing email to people and then force computer security training on everyone who clicks on them. One problem with this is that attackers only need to fool one person on one occasion and when you have hundreds of people doing something on rare occasions that’s not part of their core work they will periodically get it wrong. When every test Phishing run finds several people who need extra training it seems obvious to me that this isn’t a solution that’s working well. I will concede that the majority of people who click on the test Phishing email would probably realise their mistake if asked to enter the password for the corporate email system, but I think it’s still clear that this isn’t a great solution.

Let’s imagine for the sake of discussion that everyone in a company was 100% accurate at identifying Phishing email and other scam email, if that was the case would the problem be solved? I believe that even in that hypothetical case it would not be a solved problem due to the wasted time and concentration. People can spend minutes determining if a single email is legitimate. On many occasions I have had relatives and clients forward me email because they are unsure if it’s valid, it’s great that they seek expert advice when they are unsure about things but it would be better if they didn’t have to go to that effort. What we ideally want to do is centralise the anti-Phishing and anti-spam work to a small group of people who are actually good at it and who can recognise patterns by seeing larger quantities of spam. When a spam or Phishing message is sent to 600 people in a company you don’t want 600 people to individually consider it, you want one person to recognise it and delete/block all 600. If 600 people each spend one minute considering the matter then that’s 10 work hours wasted!

The Rationale for Human Filtering

For personal email human filtering usually isn’t viable because people want privacy. But corporate email isn’t private, it’s expected that the company can read it under certain circumstances (in most jurisdictions) and having email open in public areas of the office where colleagues might see it is expected. You can visit gmail.com on your lunch break to read personal email but every company policy (and common sense) says to not have actually private correspondence on company systems.

The amount of time spent by reception staff in sorting out such email would be less than that taken by individuals. When someone sends a spam to everyone in the company instead of 500 people each spending a couple of minutes working out whether it’s legit you have one person who’s good at recognising spam (because it’s their job) who clicks on a “remove mail from this sender from all mailboxes” button and 500 messages are deleted and the sender is blocked.

Delaying email would be a concern. It’s standard practice for CEOs (and C*Os at larger companies) to have a PA receive their email and forward the ones that need their attention. So human vetting of email can work without unreasonable delays. If we had someone checking all email for the entire company probably email to the senior people would never get noticeably delayed and while people like me would get their mail delayed on occasion people doing technical work generally don’t have notifications turned on for email because it’s a distraction and a fast response isn’t needed. There are a few senders where fast response is required, which is mostly corporations sending a “click this link within 10 minutes to confirm your password change” email. Setting up rules for all such senders that are relevant to work wouldn’t be difficult to do.

How to Solve This

Spam and Phishing became serious problems over 20 years ago and we have had 20 years of evolution of email filtering which still hasn’t solved the problem. The vast majority of email addresses in use are run by major managed service providers and they haven’t managed to filter out spam/phishing mail effectively, so I think we should assume that it’s not going to be solved by filtering. There is talk about what “AI” technology might do for filtering spam/phishing, but that same technology can produce better-crafted hostile email to avoid filters.

An additional complication for corporate email filtering is that some criteria that are used to filter personal email don’t apply to corporate mail. If someone sends email to me personally about millions of dollars then it’s obviously not legit. If someone sends email to a company then it could be legit. Companies routinely have people emailing potential clients about how their products can save millions of dollars and make purchases over a million dollars. This is not a problem that’s impossible to solve, it’s just an extra difficulty that reduces the efficiency of filters.

It seems to me that the best solution to the problem involves having all mail filtered by a human. A company could configure their mail server to not accept direct external mail for any employee’s address. Then people could email files to colleagues etc without any restriction but spam and phishing wouldn’t be a problem. The issue is how to manage inbound mail. One possibility is to have addresses of the form it+russell.coker@example.com (for me as an employee in the IT department) and you would have a team of people who would read those mailboxes and forward mail to the right people if it seemed legit. Having addresses like it+russell.coker means that all mail to the IT department would be received into folders of the same account and they could be filtered by someone with a suitable security level, without requiring any special configuration of the mail server. So the person who read the it mailbox would have a folder named russell.coker receiving mail addressed to me. The system could be configured to automate the processing of mail from known good addresses (and even domains), so they could just put in a rule saying that when Dell sends DMARC-authenticated mail to it+$USER it gets immediately directed to $USER. This is the sort of thing that can be automated in the email client (mail filtering is becoming a common feature in MUAs).

For a FOSS implementation of such things the server side of it (including extracting account data from a directory to determine which department a user is in) would be about a day’s work and then an option would be to modify a webmail program to have extra functionality for approving senders and sending change requests to the server to automatically direct future mail from the same sender. As an aside I have previously worked on a project that had a modified version of the Horde webmail system to do this sort of thing for challenge-response email and adding certain automated messages to the allow-list.

The Change

One of the first things to do is configuring the system to add every recipient of an outbound message to the allow list for receiving a reply. Having a script go through the sent-mail folders of all accounts and adding the recipients to the allow lists would be easy and catch the common cases.

But even with processing the sent-mail folders, going from a working system without such things to a system like this will take some time for the initial work of adding addresses to the allow lists, particularly for domain-wide additions of all the sites that send password confirmation messages. You would need rules to redirect inbound mail for the old addresses into the new scheme, and then work through a huge amount of mail that needs to be categorised. If you have 600 employees and the average amount of time taken on the first day is 10 minutes per user then that’s 100 hours of work, 12 work days. If you had everyone from the IT department, reception, and executive assistants working on it that would be viable. After about a week there wouldn’t be much work involved in maintaining it. Then after that it would be a net win for the company.

The Benefits

If the average employee spends one minute a day dealing with spam and phishing email then with 600 employees that’s 10 hours of wasted time per day, effectively wasting one employee’s entire working day! I’m sure that’s the low end of the range; 5 minutes average per day doesn’t seem unreasonable, especially when people are unsure about phishing email and send it to Slack, so multiple employees spend time analysing it. So you could have the equivalent of 5 employees’ work being wasted by hostile email, while avoiding that would take only a fraction of a few people’s time, adding up to less than an hour of total work per day.

Then there’s the training time for phishing mail. Instead of having every employee spend half an hour doing email security training every few months (that’s 300 hours or 7.5 working weeks every time you do it) you just train the few experts.

In addition to saving time there are significant security benefits to having experts deal with possibly hostile email. Someone who deals with a lot of phishing email is much less likely to be tricked.

Will They Do It?

They probably won’t do it any time soon. I don’t think it’s expensive enough for companies yet. Maybe government agencies already have equivalent measures in place, but for regular corporations it’s probably regarded as too difficult to change anything and the costs aren’t obvious. I have been unsuccessful in suggesting that managers spend slightly more on computer hardware to save significant amounts of worker time for 30 years.

20 November, 2024 05:22AM by etbe

Arnaud Rebillout

Installing an older Ansible version via pipx

Latest Ansible requires Python 3.8 on the remote hosts

... and therefore, hosts running Debian Buster are now unsupported.

Monday, I updated the system on my laptop (Debian Sid), and I got the latest version of ansible-core, 2.18:

$ ansible --version | head -1
ansible [core 2.18.0]

To my surprise, Ansible started to fail with some remote hosts:

ansible-core requires a minimum of Python version 3.8. Current version: 3.7.3 (default, Mar 23 2024, 16:12:05) [GCC 8.3.0]

Yep, I do have to work with hosts running Debian Buster (aka. oldoldstable). While Buster is old, it's still out there, and it's still supported via Freexian’s Extended LTS.

How are we going to keep managing those machines? Obviously, we'll need an older version of Ansible.

Pipx to the rescue

TL;DR

pipx install --include-deps ansible==10.6.0
pipx inject ansible dnspython    # for community.general.dig

Installing Ansible via pipx

Lately I discovered pipx and it's incredibly simple, so I thought I'd give it a try for this use-case.

Reminder: pipx allows users to install Python applications in isolated environments. In other words, it doesn't make a mess with your system like pip does, and it doesn't require you to learn how to setup Python virtual environments by yourself. It doesn't ask for root privileges either, as it installs everything under ~/.local/.

First thing to know: pipx install ansible won't cut it, it doesn't install the whole Ansible suite. Instead we need to use the --include-deps flag in order to install all the Ansible commands.

The output should look something like that:

$ pipx install --include-deps ansible==10.6.0
  installed package ansible 10.6.0, installed using Python 3.12.7
  These apps are now globally available
    - ansible
    - ansible-community
    - ansible-config
    - ansible-connection
    - ansible-console
    - ansible-doc
    - ansible-galaxy
    - ansible-inventory
    - ansible-playbook
    - ansible-pull
    - ansible-test
    - ansible-vault
done!  🌟 

Note: at the moment 10.6.0 is the latest release of the 10.x branch, but make sure to check https://pypi.org/project/ansible/#history and install whatever is the latest on this branch. The 11.x branch doesn't work for us, as it's the branch that comes with ansible-core 2.18, and we don't want that.

Next: do NOT run pipx ensurepath, even though pipx might suggest that. This is not needed. Instead, check your ~/.profile, it should contain these lines:

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi

Meaning: ~/.local/bin/ should already be in your path, unless it's the first time you installed a program via pipx and the directory ~/.local/bin/ was just created. If that's the case, you have to log out and log back in.

Now, let's open a new terminal and check if we're good:

$ which ansible
/home/me/.local/bin/ansible

$ ansible --version | head -1
ansible [core 2.17.6]

Yep! And that's working already, I can use Ansible with Buster hosts again.

What's cool is that we can run ansible to use this specific Ansible version, but we can also run /usr/bin/ansible to run the latest version that is installed via APT.

Injecting Python dependencies needed by collections

Quickly enough, I realized something odd: apparently the plugin community.general.dig didn't work anymore. After some research, I found a one-liner to test that:

# Works with APT-installed Ansible? Yes!
$ /usr/bin/ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | SUCCESS => {
    "msg": "151.101.66.132,151.101.2.132,151.101.194.132,151.101.130.132"
}

# Works with pipx-installed Ansible? No!
$ ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | FAILED! => {
  "msg": "An unhandled exception occurred while running the lookup plugin 'dig'.
  Error was a <class 'ansible.errors.AnsibleError'>, original message: The dig
  lookup requires the python 'dnspython' library and it is not installed."
}

The issue here is that we need python3-dnspython, which is installed on my system, but is not installed within the pipx virtual environment. It seems that the way to go is to inject the required dependencies in the venv, which is (again) super easy:

$ pipx inject ansible dnspython
  injected package dnspython into venv ansible
done!  🌟 

Problem fixed! Of course you'll have to iterate to install other missing dependencies, depending on which Ansible external plugins are used in your playbooks.
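
As examples only (which extra packages you need depends entirely on your playbooks), you can inject several at once, and recent pipx versions can list what has been injected so far:

$ pipx inject ansible dnspython netaddr
$ pipx list --include-injected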

Closing thoughts

Hopefully there's nothing left to discover and I can get back to work! If there's more quirks and rough edges, drop me an email so that I can update this blog post.

Let me also credit another useful blog post on the matter: https://unfriendlygrinch.info/posts/effortless-ansible-installation/

20 November, 2024 12:00AM by Arnaud Rebillout

November 19, 2024

hackergotchi for Aurelien Jarno

Aurelien Jarno

AI crawlers should be smarter

It would be fantastic if all those AI companies dedicated some time to making their web crawlers smarter (what about using AI?). Nowadays most of them still stupidly follow every link on a Git frontend.

Hint: Changing the display options does not provide more training data!

19 November, 2024 10:31PM by aurel32

Melissa Wen

Display/KMS Meeting at XDC 2024: Detailed Report

XDC 2024 in Montreal was another fantastic gathering for the Linux Graphics community. It was again a great time to immerse in the world of graphics development, engage in stimulating conversations, and learn from inspiring developers.

Many Igalia colleagues and I participated in the conference again, delivering multiple talks about our work on the Linux Graphics stack and also organizing the Display/KMS meeting. This blog post is a detailed report on the Display/KMS meeting held during this XDC edition.

Short on Time?

  1. Catch the lightning talk summarizing the meeting here (you can even speed it up 2x).
  2. For a quick written summary, scroll down to the TL;DR section.

TL;DR

This meeting took 3 hours and tackled a variety of topics related to DRM/KMS (Linux/DRM Kernel Modesetting):

  • Sharing Drivers Between V4L2 and KMS: Brainstorming solutions for using a single driver for devices used in both camera capture and display pipelines.
  • Real-Time Scheduling: Addressing issues with non-blocking page flips encountering sigkills under real-time scheduling.
  • HDR/Color Management: Agreement on merging the current proposal, with NVIDIA implementing its special cases on VKMS and adding missing parts on top of Harry Wentland’s (AMD) changes.
  • Display Mux: Collaborative design discussions focusing on compositor control and cross-sync considerations.
  • Better Commit Failure Feedback: Exploring ways to equip compositors with more detailed information for failure analysis.

Bringing together Linux display developers in the XDC 2024

While I didn’t present a talk this year, I co-organized a Display/KMS meeting (with Rodrigo Siqueira of AMD) to build upon the momentum from the 2024 Linux Display Next hackfest. The meeting was attended by around 30 people in person and 4 remote participants.

Speakers: Melissa Wen (Igalia) and Rodrigo Siqueira (AMD)

Link: https://indico.freedesktop.org/event/6/contributions/383/

Topics: Similar to the hackfest, the meeting agenda was built over the first two days of the conference and mixed talk follow-ups with new ideas and ongoing community efforts.

The final agenda covered five topics in the scheduled order:

  1. How to share drivers between V4L2 and DRM for bridge-like components (new topic);
  2. Real-time Scheduling (problems encountered after the Display Next hackfest);
  3. HDR/Color Management (ofc);
  4. Display Mux (from Display hackfest and XDC 2024 talk, bringing AMD and NVIDIA together);
  5. (Better) Commit Failure Feedback (continuing the last minute topic of the Display Next hackfest).

Unpacking the Topics

Similar to the hackfest, the meeting agenda evolved over the conference. During the 3 hours of meeting, I coordinated the room and discussion rounds, and Rodrigo Siqueira took notes and also contacted key developers to provide a detailed report of the many topics discussed.

From his notes, let’s dive into the key discussions!

How to share drivers between V4L2 and KMS for bridge-like components.

Led by Laurent Pinchart, we delved into the challenge of creating a unified driver for hardware devices (like scalers) that are used in both camera capture pipelines and display pipelines.

  • Problem Statement: How can we design a single kernel driver to handle devices that serve dual purposes in both V4L2 and DRM subsystems?
  • Potential Solutions:
    1. Multiple Compatible Strings: We could assign different compatible strings to the device tree node based on its usage in either the camera or display pipeline. However, this approach might raise concerns from device tree maintainers as it could be seen as a layer violation.
    2. Separate Abstractions: A single driver could expose the device to both DRM and V4L2 through separate abstractions: drm-bridge for DRM and V4L2 subdev for video. While simple, this approach requires maintaining two different abstractions for the same underlying device.
    3. Unified Kernel Abstraction: We could create a new, unified kernel abstraction that combines the best aspects of drm-bridge and V4L2 subdev. This approach offers a more elegant solution but requires significant design effort and potential migration challenges for existing hardware.

Real-Time Scheduling Challenges

We have discussed real-time scheduling during this year Linux Display Next hackfest and, during the XDC 2024, Jonas Adahl brought up issues uncovered while progressing on this front.

  • Context: Non-blocking page-flips can, on rare occasions, take a long time and, for that reason, get a sigkill if the thread doing the atomic commit is real-time scheduled.
  • Action items:
    • Explore alternative backtraces during the busy wait (e.g., ftrace).
    • Investigate the maximum thread time in busy wait to reproduce issues faced by compositors. Tools like RTKit (mutter) can be used for better control (Michel Dänzer can help with this setup).

HDR/Color Management

This is a well-known topic with ongoing effort on all layers of the Linux Display stack and has been discussed online and in-person in conferences and meetings over the last years.

Here’s a breakdown of the key points raised at this meeting:

  • Talk: Color operations for Linux color pipeline on AMD devices: In the previous day, Alex Hung (AMD) presented the implementation of this API on AMD display driver.
  • NVIDIA Integration: While they agree with the overall proposal, NVIDIA needs to add some missing parts. Importantly, they will implement these on top of Harry Wentland’s (AMD) proposal. Their specific requirements will be implemented on VKMS (Virtual Kernel Mode Setting driver) for further discussion. This VKMS implementation can benefit compositor developers by providing insights into NVIDIA’s specific needs.
  • Other vendors: There is a version of the KMS API applied to the Intel color pipeline. Apart from that, other vendors appear to be comfortable with the current proposal but lack the bandwidth to implement it right now.
  • Upstream Patches: The relevant upstream patches can be found here. [As humorously noted, this series is eagerly awaiting your “Acked-by” (approval).]
  • Compositor Side: The compositor developers have also made significant progress.
    • KDE has already implemented and validated the API through an experimental implementation in Kwin.
    • Gamescope currently uses a driver-specific implementation but has a draft that utilizes the generic version. However, some work is still required to fully transition away from the driver-specific approach. AP: work on porting gamescope to KMS generic API
    • Weston has also begun exploring implementation, and we might see something from them by the end of the year.
  • Kernel and Testing: The kernel API proposal is well-refined and meets the DRM subsystem requirements. Thanks to Harry Wentland’s effort, we already have the API attached to two hardware vendors and IGT tests, and, thanks to Xaver Hugl, a compositor implementation in place.

Finally, there was a strong sense of agreement that the current proposal for HDR/Color Management is ready to be merged. In simpler terms, everything seems to be working well on the technical side - all signs point to merging and “shipping” the DRM/KMS plane color management API!

Display Mux

During the meeting, Daniel Dadap led a brainstorming session on the design of the display mux switching sequence, in which the compositor would arm the switch via sysfs, then send a modeset to the outgoing driver, followed by a modeset to the incoming driver.

  • Context:
  • Key Considerations:
    • HPD Handling: There was a general consensus that disabling HPD can be part of the sequence for internal panels and we don’t need to focus on it here.
    • Cross-Sync: Ensuring synchronization between the compositor and the drivers is crucial. The compositor should act as the “drm-master” to coordinate the entire sequence, but how can this be ensured?
    • Future-Proofing: The design should not assume the presence of a mux. In future scenarios, direct sharing over DP might be possible.
  • Action points:
    • Sharing DP AUX: Explore the idea of sharing DP AUX and its implications.
    • Backlight: The backlight definition represents a problem in the mux switch context, so we should explore some of the current specs available for that.

Towards Better Commit Failure Feedback

In the last part of the meeting, Xaver Hugl asked for better commit failure feedback.

  • Problem description: Compositors currently face challenges in collecting detailed information from the kernel about commit failures. This lack of granular data hinders their ability to understand and address the root causes of these failures.

To address this issue, we discussed several potential improvements:

  • Direct Kernel Log Access: One idea is to directly load relevant kernel logs into the compositor. This would provide more detailed information about the failure and potentially aid in debugging.
  • Finer-Grained Failure Reporting: We also explored the possibility of separating atomic failures into more specific categories. Not all failures are critical, and understanding the nature of the failure can help compositors take appropriate action.
  • Enhanced Logging: Currently, the dmesg log doesn’t provide enough information for user-space validation. Raising the log level to capture more detailed information during failures could be a viable solution.

By implementing these improvements, we aim to equip compositors with the necessary tools to better understand and resolve commit failures, leading to a more robust and stable display system.

A Big Thank You!

Huge thanks to Rodrigo Siqueira for these detailed meeting notes. Also, Laurent Pinchart, Jonas Adahl, Daniel Dadap, Xaver Hugl, and Harry Wentland for bringing up interesting topics and leading discussions. Finally, thanks to all the participants who enriched the discussions with their experience, ideas, and inputs, especially Alex Goins, Antonino Maniscalco, Austin Shafer, Daniel Stone, Demi Obenour, Jessica Zhang, Joan Torres, Leo Li, Liviu Dudau, Mario Limonciello, Michel Dänzer, Rob Clark, Simon Ser and Teddy Li.

This collaborative effort will undoubtedly contribute to the continued development of the Linux display stack.

Stay tuned for future updates!

19 November, 2024 01:00PM

November 18, 2024

hackergotchi for C.J. Collier

C.J. Collier

Managing HPE SAS Controllers

Notes to self. And anyone else who might find them useful. Following are some ssacli commands which I use infrequently enough that they fall out of cache. This may repeat information in other blogs, but since I search my posts first when commands slip my mind, I thought I’d include them here, too.

hpacucli is the wrong command. Use ssacli instead.

$ KR='/usr/share/keyrings/hpe.gpg'
$ for fingerprint in \
  882F7199B20F94BD7E3E690EFADD8D64B1275EA3 \
  57446EFDE098E5C934B69C7DC208ADDE26C2B797 \
  476DADAC9E647EE27453F2A3B070680A5CE2D476 ; do \
    curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x${fingerprint}" \
      | gpg --no-default-keyring --keyring "${KR}" --import ; \
  done
$ gpg --list-keys --no-default-keyring --keyring "${KR}" 
/usr/share/keyrings/hpe.gpg
---------------------------
pub   rsa2048 2012-12-04 [SC] [expired: 2022-12-02]
      476DADAC9E647EE27453F2A3B070680A5CE2D476
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service)

pub   rsa2048 2014-11-19 [SC] [expired: 2024-11-16]
      882F7199B20F94BD7E3E690EFADD8D64B1275EA3
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service) - 1

pub   rsa2048 2015-12-10 [SCEA] [expires: 2025-12-07]
      57446EFDE098E5C934B69C7DC208ADDE26C2B797
uid           [ unknown] Hewlett Packard Enterprise Company RSA-2048-25 
$ echo "deb [signed-by=${KR}] http://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" \
  | sudo dd of=/etc/apt/sources.list.d/hpe.list status=none
$ sudo apt-get update
$ sudo apt-get install -y -qq ssacli > /dev/null 2>&1
$ sudo ssacli ctrl all show status

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK

$ sudo ssacli ctrl all show detail
HPE Smart Array P408i-p SR Gen10 in Slot 3
   Bus Interface: PCI
   Slot: 3
   Serial Number: PFJHD0ARCCR1QM
   RAID 6 Status: Enabled
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 2.65
   Firmware Supports Online Firmware Activation: True
   Driver Supports Online Firmware Activation: True
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60  min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Write Cache Bypass Threshold Size: 1040 KiB
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 10% Read / 90% Write
   Configured Drive Write Cache Policy: Disable
   Unconfigured Drive Write Cache Policy: Default
   Total Cache Size: 2.0
   Total Cache Memory Available: 1.8
   Battery Backed Cache Size: 1.8
   No-Battery Write Cache: Disabled
   SSD Caching RAID5 WriteBack Enabled: True
   SSD Caching Version: 2
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 53
   Cache Module Temperature (C): 43
   Capacitor Temperature  (C): 40
   Number of Ports: 2 Internal only
   Encryption: Not Set
   Express Local Encryption: False
   Driver Name: smartpqi
   Driver Version: Linux 2.1.18-045
   PCI Address (Domain:Bus:Device.Function): 0000:11:00.0
   Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
   Controller Mode: Mixed
   Port Max Phy Rate Limiting Supported: False
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Survival Mode: Enabled
   Host Serial Number: 2M20040D1Q
   Sanitize Erase Supported: True
   Sanitize Lock: None
   Sensor ID: 0
      Location: Capacitor
      Current Value (C): 40
      Max Value Since Power On: 42
   Sensor ID: 1
      Location: ASIC
      Current Value (C): 53
      Max Value Since Power On: 55
   Sensor ID: 2
      Location: Unknown
      Current Value (C): 43
      Max Value Since Power On: 45
   Sensor ID: 3
      Location: Cache
      Current Value (C): 43
      Max Value Since Power On: 44
   Primary Boot Volume: None
   Secondary Boot Volume: None

$ sudo ssacli ctrl all show config

HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)



   Internal Drive Cage at Port 1I, Box 2, OK



   Internal Drive Cage at Port 2I, Box 2, OK


   Port Name: 1I (Mixed)

   Port Name: 2I (Mixed)

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (1.64 TB, RAID 6, OK)

      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 1.2 TB, OK)

   SEP (Vendor ID HPE, Model Smart Adapter) 379  (WWID: 51402EC013705E88, Port: Unknown)

$ sudo ssacli ctrl slot=3 pd 2I:2:7 show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3

   Array A

      physicaldrive 2I:2:7
         Port: 2I
         Box: 2
         Bay: 7
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 1.2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 10000
         Firmware Revision: U850
         Serial Number: KZGN1BDE
         WWID: 5000CCA01D247239
         Model: HGST    HUC101212CSS600
         Current Temperature (C): 46
         Maximum Temperature (C): 51
         PHY Count: 2
         PHY Transfer Rate: 6.0Gbps, Unknown
         PHY Physical Link Rate: 6.0Gbps, Unknown
         PHY Maximum Link Rate: 6.0Gbps, 6.0Gbps
         Drive Authentication Status: OK
         Carrier Application Version: 11
         Carrier Bootloader Version: 6
         Sanitize Erase Supported: False
         Shingled Magnetic Recording Support: None
         Drive Unique ID: 5000CCA01D247238

2024-12-09 update

I was experiencing a disk failure or predictive failure when I walked into my office this afternoon. I checked this blog post and found the command I used to check the status of the SAS array.

cjac@server0:~$ sudo ssacli ctrl all show config

HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)



   Internal Drive Cage at Port 1I, Box 2, OK



   Internal Drive Cage at Port 2I, Box 2, OK


   Port Name: 1I (Mixed)

   Port Name: 2I (Mixed)

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (1.64 TB, RAID 6, OK)

      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, Predictive Failure)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 300 GB, OK)

   SEP (Vendor ID HPE, Model Smart Adapter) 379  (WWID: 51402EC013705E88, Port: Unknown)

At this point I engaged the understudy to have the flashing disk replaced.

cjac@server0:~$ sudo ssacli ctrl all show config
[sudo] password for cjac: 

HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)



   Internal Drive Cage at Port 1I, Box 2, OK



   Internal Drive Cage at Port 2I, Box 2, OK


   Port Name: 1I (Mixed)

   Port Name: 2I (Mixed)

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (1.64 TB, RAID 6, Recovering, 56.43% complete)

      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 300 GB, Rebuilding)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 300 GB, OK)

   SEP (Vendor ID HPE, Model Smart Adapter) 379  (WWID: 51402EC013705E88, Port: Unknown)

In the next and future output, I will elide duplicate lines.

cjac@server0:~$ date ; sudo ssacli ctrl all show config
Mon Dec  9 07:36:44 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, Recovering, 66.85% complete)...
Mon Dec  9 07:39:25 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, Recovering, 68.97% complete) ...
cjac@server0:~$ date ; sudo ssacli ctrl all show config | grep Recovering
Mon Dec  9 07:43:05 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, Recovering, 71.85% complete)
Mon Dec  9 07:43:57 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, Recovering, 72.55% complete)
cjac@server0:~$ sleep 5m ; date ; sudo ssacli ctrl all show config | grep Recovering
Mon Dec  9 07:49:43 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, Recovering, 77.00% complete)
cjac@server0:~$ date ; sudo ssacli ctrl all show config | grep -e 1I:2:2 -e logicaldrive
Mon Dec  9 11:36:14 PM PST 2024 ... logicaldrive 1 (1.64 TB, RAID 6, OK)
physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 300 GB, OK)
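
For future reference, a small loop like the following could automate the polling shown above. This is a minimal sketch using the same ssacli invocation as in the session; the five-minute interval and grep patterns are my own choices, not part of the original output.

# Poll the controller until the rebuild finishes, then print the final
# logical drive status.
while sudo ssacli ctrl all show config | grep -q Recovering ; do
  date ; sudo ssacli ctrl all show config | grep Recovering
  sleep 5m
done
date ; sudo ssacli ctrl all show config | grep logicaldrive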

18 November, 2024 07:21PM by C.J. Collier

hackergotchi for Philipp Kern

Philipp Kern

debian.org now supports Security Key-backed SSH keys

debian.org's infrastructure now supports using Security Key-backed SSH keys. DDs (and guests) can use the mail gateway to add SSH keys of the types sk-ecdsa-sha2-nistp256@openssh.com and sk-ssh-ed25519@openssh.com to their LDAP accounts.

This was done in support of hardening our infrastructure: Hopefully we can require these hardware-backed keys for sensitive machines in the future, to have some assertion that it is a human that is connecting to them.

As some of us shell to machines a little too often, I also wrote a small SSH CA that issues short-lived certificates (documentation). It requires the user to login via SSH using an SK-backed key and then issues a certificate that is valid for less than a day. For cases where you need to frequently shell to a machine or to a lot of machines at once that should be a nice compromise of usability vs. security.
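
For readers unfamiliar with SSH certificates, the general mechanism looks roughly like the following. This is a generic ssh-keygen sketch, not the CA tool described above; the file names and principal are made up for illustration.

# Sign a user's public key with a CA key, valid for 12 hours.
# 'ca_key', 'id_ed25519_sk.pub' and 'someuser' are hypothetical names.
ssh-keygen -s ca_key -I example-identity -n someuser -V +12h id_ed25519_sk.pub
# This produces id_ed25519_sk-cert.pub, which sshd accepts until expiry on
# hosts that trust the CA via the TrustedUserCAKeys option.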

The capabilities of various keys differ a lot and it is not always easy to determine what feature set they support. Generally SK-backed keys work with FIDO U2F keys, if you use the ecdsa key type. Resident keys (i.e. keys stored on the token, to be used from multiple devices) require FIDO2-compatible keys. no-touch-required is its own maze, e.g. the flag is not properly restored today when pulling the public key from a resident key. The latter is also one reason for writing my own CA.

Someone™ should write up a matrix of what is supported where and how. In the meantime it is probably easiest to generate an ed25519 key - or, if that does not work, an ecdsa key - and make a backup copy of the resulting on-disk key file, and copy that around to other devices (or OSes) that require access to the key.
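
As a rough illustration of that advice, generating such keys with a reasonably recent OpenSSH looks something like this (a sketch; exact option support depends on the token and OpenSSH version, and the file names are arbitrary):

# Non-resident ed25519 SK key; fall back to ecdsa-sk if the token or
# server does not support it.
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk
# Resident key stored on the token itself (requires a FIDO2-compatible key):
ssh-keygen -t ed25519-sk -O resident -f ~/.ssh/id_ed25519_sk_resident
# On another device, download the public parts of resident keys from the token:
ssh-keygen -K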

18 November, 2024 04:43PM by Philipp Kern (noreply@blogger.com)

November 14, 2024

Reproducible Builds

Reproducible Builds mourns the passing of Lunar

The Reproducible Builds community sadly announces it has lost its founding member.

Jérémy Bobbio aka ‘Lunar’ passed away on Friday November 8th in palliative care in Rennes, France.

Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. Many of our earliest status reports were written by him and many of our key tools in use today are based on his design.

Lunar was a resolute opponent of surveillance and censorship, and he possessed an unwavering energy that fueled his work on Reproducible Builds and Tor. Without Lunar’s far-sightedness, drive and commitment to enabling teams around him, Reproducible Builds and free software security would not be in the position they are in today. His contributions will not be forgotten, and his high standards and drive will continue to serve as an inspiration to us as well as for the other high-impact projects he was involved in.

Lunar’s creativity, insight and kindness were often noted. He will be greatly missed.


Other tributes:

14 November, 2024 03:00PM

Stefano Zacchiroli

In memory of Lunar

In memory of Lunar

I've had the incredible fortune to share the geek path of Lunar through life on multiple occasions. First, in Debian, beginning some 15+ years ago, where we were fellow developers and participated in many DebConf editions together.

Then, on the deontology committee of Nos Oignons, a non-profit organization initiated by Lunar to operate Tor relays in France. This was with the goal of diversifying relay operators and increasing access to censorship-resistance technology for everyone in the world. It was something truly innovative and unheard of at the time in France.

Later, as a member of the steering committee of Reproducible Builds, a project that Lunar brought to widespread geek popularity with a seminal "Birds of a Feather" session at DebConf13 (and then many other talks with fellow members of the project in the years to come). A decade later, Reproducible Builds is having a major impact throughout the software industry, primarily due to growing fears about the security of the software supply chain.

Finally, we had the opportunity to recruit Lunar a couple of years ago at Software Heritage, where he insisted on working for as long as he was able to, as part of a team he loved and that loved him back. In addition to his numerous technical contributions to the initiative, he also facilitated our first ever multi-day team seminar. The event was so successful that it has been confirmed as a long-awaited yearly recurrence by all team members.

I fondly remember one of the last conversations I had with Lunar, a few months ago, when he told me how proud he was not only of having started Nos Oignons and contributed to the ignition of Reproducible Builds, but specifically about the fact that both initiatives were now thriving without being dependent on him. He was likely thinking about a future world without him, but also realizing how impactful his activism had been on the past and present world.

Lunar changed the world for the better and left behind a trail of love and fond memories.

Che la terra ti sia lieve, compagno.

--- Zack

14 November, 2024 01:56PM

November 13, 2024

Russell Coker

Modern Sleep

Julius wrote an insightful blog post about the “modern sleep” issue with Windows [1]. Basically Microsoft decided that the right way to run laptops is to never entirely sleep, which uses more battery but gives better options for waking up and doing things. I agree with Microsoft in concept, and this is a problem that can be solved. A phone can run for 24+ hours without ever fully sleeping; a laptop has a more power-hungry CPU and peripherals, but also has a much larger battery, so it should be able to do the same. Some of the reviews for Snapdragon Windows laptops claim up to 22 hours of actual work without charging! So having suspend not really stop the system should be fine.

The ability of a phone to never fully sleep is a change in the quality of the usage experience: it means that you can access it and have it respond immediately, and it means that all manner of services can be checked for new updates which may require a notification to the user. The XMPP protocol (AKA Jabber) was invented in 1999, which was before laptops were common, and Instant Message systems were common long before then. But using Jabber or another IM system on a desktop was a very different experience to using it on a laptop, and using it on a phone is different again. “Modern sleep” allows laptops to act like phones in regard to such messaging services. Currently I have Matrix IM clients running on my Android phone and Linux laptop; if I get a notification that needs much typing for a response then I get out my laptop to respond. If I had an ARM based laptop that never fully shut down I would have much less need for Matrix on a phone.

Making “modern sleep” popular will lead to more development of OS software to work with it. For Linux this will hopefully mean that regular Linux distributions (as opposed to Android which while running a Linux kernel is very different to Debian etc) get better support for such things and therefore become more usable on phones. Debian on a Librem 5 or PinePhonePro isn’t very usable due to battery life issues.
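
On current Linux kernels you can already see which flavour of suspend a machine uses. A quick check like the following (paths as exposed by mainline kernels; the available states depend on firmware and hardware) shows whether the system is using suspend-to-idle, the rough equivalent of “modern sleep”, or classic deep sleep:

# List the supported suspend variants; the bracketed one is currently in use.
cat /sys/power/mem_sleep
# Typical output: s2idle [deep]
# Switch to suspend-to-idle for the current boot (root required):
echo s2idle | sudo tee /sys/power/mem_sleep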

A laptop with an LTE card can be used for full mobile phone functionality. With “modern sleep” this is a viable option. I am tempted to make a laptop with an LTE card and a bluetooth headset a replacement for my phone. Some people will say “what if someone tries to call you when it’s not convenient to have your laptop with you”; my response is “what if people learn to not expect me to answer the phone at any time, as they managed that in the 90s”. Seriously, SMS or Matrix me if you want an instant response, and if you want a long chat schedule it via SMS or Matrix.

Dell has some useful advice about how to use their laptops (and probably most laptops from recent times) in this regard [2]. You can’t close the lid before unplugging the power cable; you have to unplug first and then close. You shouldn’t put a laptop in a sealed bag for travel either. This is a terrible situation: you can put a tablet in a bag and don’t need to take any special precautions when unplugging, and laptops should work the same. The end result of what Microsoft, Dell, Intel, and others are doing will be good, but they are making some silly design choices along the way! I blame Intel mostly for selling laptop CPUs with TDPs >40W!

For an amusing take on this, Linus Tech Tips has a video about being forced to use MacBooks by Microsoft’s implementation of Modern Sleep [3].

I’ll try out some ARM laptops in the near future and blog about how well they work on Debian.

13 November, 2024 10:10AM by etbe