January 09, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

RIP vorlon

I was very sad to hear that Steve Langasek, aka vorlon, has passed away from cancer. I hadn't talked to him in many years, but I did meet him at Debconf a couple of times, and more importantly: I was there when he was Release Manager for Debian.

Steve stepped up as one of the RMs at a point where Debian's releases were basically a hell march. Releases would drag on for years, freezes would last forever, and at one point not a single package migrated to testing because of a glibc issue. In that kind of environment, and despite no small amount of toxicity surrounding it all, Steve pulled through and managed not only one, but two releases. If you've only seen the release status of Debian after this period, you won't really know how much must have happened in that decade.

The few times I met Steve, he struck me as not only knowledgeable, but also kind and not afraid to step up for people even if it went against the prevailing winds. I wish we could all learn from that. Rest in peace, Steve; your passing is a huge loss for our communities.

09 January, 2025 08:00PM

Scarlett Gately Moore

KDE: Snaps 24.12.1 Release, Kubuntu Plasma 5.27.12 Call for testers

I have released more core24 snaps to --edge for your testing pleasure. If you find any bugs, please report them at bugs.kde.org and assign them to me. Thanks!
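
If you want to try one of these, switching a snap to the edge channel is a one-liner; kdenlive below is just an example, and any of the snaps listed here works the same way:

sudo snap install kdenlive --edge

or, for a snap you already have installed from stable:

sudo snap refresh kdenlive --channel=edge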

Kdenlive, our amazing video editor!

Haruna is a video player that also supports YouTube!

Kdevelop is our feature-rich development IDE.

KDE applications 24.12.1 release https://kde.org/announcements/gear/24.12.1/

New qt6 ports

  • lokalize
  • isoimagewriter
  • parley
  • kteatime
  • ghostwriter
  • ktorrent
  • kanagram
  • marble

Kubuntu:

We have the Plasma 5.27.12 bugfix release in staging at https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/staging-plasma for noble updates; please test! Do NOT do this on a production system. Thanks!
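
On a disposable test machine, enabling the staging PPA looks roughly like this (a sketch, assuming a Kubuntu noble install):

sudo add-apt-repository ppa:kubuntu-ppa/staging-plasma
sudo apt update
sudo apt full-upgrade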

I hate asking but I am unemployable with this broken arm fiasco and 6 hours a day hospital runs for treatment. If you could spare anything it would be appreciated! https://gofund.me/573cc38e

09 January, 2025 01:24PM by sgmoore

Reproducible Builds

Reproducible Builds in December 2024

Welcome to the December 2024 report from the Reproducible Builds project!

Our monthly reports outline what we’ve been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As ever, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. reproduce.debian.net
  2. debian-repro-status
  3. On our mailing list
  4. Enhancing the Security of Software Supply Chains
  5. diffoscope
  6. Supply-chain attack in the Solana ecosystem
  7. Website updates
  8. Debian changes
  9. Other development news
  10. Upstream patches
  11. Reproducibility testing framework

reproduce.debian.net

Last month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. rebuilderd is our server software, designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

This month, however, we are pleased to announce that not only does the service now produce graphs, the reproduce.debian.net homepage itself has become a “start page” of sorts, and the amd64.reproduce.debian.net and i386.reproduce.debian.net pages have emerged. The first of these rebuilds the amd64 architecture, naturally, but it is also building the Debian packages that are marked with the ‘no architecture’ label, all. The second builder, however, is only rebuilding the i386 architecture.

Both of these services were also switched to reproduce the Debian trixie distribution instead of unstable, starting with 43% of the archive rebuilt and 79.3% of that reproduced successfully. This is very much a work in progress, and we’ll start reproducing Debian unstable soon.

Our i386 hosts are very kindly sponsored by Infomaniak whilst the amd64 node is sponsored by OSUOSL — thank you! Indeed, we are looking for more workers for more Debian architectures; please contact us if you are able to help.


debian-repro-status

Reproducible builds developer kpcyrd has published a client program for reproduce.debian.net (see above) that queries the status of the locally installed packages and rates the system with a percentage score. This tool works analogously to arch-repro-status for the Arch Linux Reproducible Builds setup.

The tool was packaged for Debian and is currently available in Debian trixie: it can be installed with apt install debian-repro-status.
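
A minimal session, assuming a trixie system (the default invocation prints the percentage score described above):

sudo apt install debian-repro-status
debian-repro-status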


On our mailing list

On our mailing list this month:

  • Bernhard M. Wiedemann wrote a detailed post on his “long journey towards a bit-reproducible Emacs package.” In his interesting message, Bernhard goes into depth about the tools that they used and the lower-level technical details of, for instance, compatibility with the glibc version within openSUSE.

  • Shivanand Kunijadar posed a question pertaining to the reproducibility issues with encrypted images. Shivanand explains that they must “use a random IV for encryption with AES CBC. The resulting artifact is not reproducible due to the random IV used.” The message resulted in a handful of replies, hopefully helpful!

  • User Danilo posted an interesting question related to their attempts to achieve reproducible builds for Threema Desktop 2.0. The question resulted in a number of replies attempting to find the right combination of compiler and linker flags (for example).

  • Longstanding contributor David A. Wheeler wrote to our list announcing the release of the “Census III of Free and Open Source Software: Application Libraries” report written by Frank Nagle, Kate Powell, Richie Zitomer and David himself. As David writes in his message, the report attempts to “answer the question ‘what is the most popular Free and Open Source Software (FOSS)?’”.

  • Lastly, kpcyrd followed up on a post from September 2024 which mentioned their desire for “someone” to implement “a hashset of allowed module hashes that is generated during the kernel build and then embedded in the kernel image”, thus enabling a deterministic and reproducible build. However, they are now reporting that “somebody implemented the hash-based allow list feature and submitted it to the Linux kernel mailing list”. Like kpcyrd, we hope it gets merged.


Enhancing the Security of Software Supply Chains: Methods and Practices

Mehdi Keshani of the Delft University of Technology in the Netherlands has published their thesis on “Enhancing the Security of Software Supply Chains: Methods and Practices”. Their introductory summary begins with an outline of software supply chains and the importance of the Maven ecosystem, before outlining the issues that it faces “that threaten its security and effectiveness”. To address these:

First, we propose an automated approach for library reproducibility to enhance library security during the deployment phase. We then develop a scalable call graph generation technique to support various use cases, such as method-level vulnerability analysis and change impact analysis, which help mitigate security challenges within the ecosystem. Utilizing the generated call graphs, we explore the impact of libraries on their users. Finally, through empirical research and mining techniques, we investigate the current state of the Maven ecosystem, identify harmful practices, and propose recommendations to address them.

A PDF of Mehdi’s entire thesis is available to download.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 283 and 284 to Debian:

  • Update copyright years.
  • Update tests to support file 5.46.
  • Simplify tests_quines.py::test_{differences,differences_deb} to use assert_diff directly without mangling the test fixture.


Supply-chain attack in the Solana ecosystem

A significant supply-chain attack impacted Solana, an ecosystem for decentralised applications running on a blockchain.

Hackers targeted the @solana/web3.js JavaScript library and embedded malicious code that extracted private keys and drained funds from cryptocurrency wallets. According to some reports, about $160,000 worth of assets were stolen, not including SOL tokens and other crypto assets.


Website updates

Similar to last month, there was a large number of changes made to our website this month, including:

  • Chris Lamb:

    • Make the landing page hero look nicer when the vertical height component of the viewport is restricted, not just the horizontal width.
    • Rename the “Buy-in” page to “Why Reproducible Builds?”
    • Remove the top black border.
  • Holger Levsen:

  • hulkoba:

    • Remove the sidebar-type layout and move to a static navigation element.
    • Create and merge a new Success stories page, which “highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds. These stories aim to enhance the technical resilience of the initiative by encouraging community involvement and inspiring new contributions.”
    • Further changes to the homepage.
    • Remove the translation icon from the navigation bar.
    • Remove unused CSS styles pertaining to the sidebar.
    • Add sponsors to the global footer.
    • Add extra space on large screens on the Who page.
    • Hide the side navigation on small screens on the Documentation pages.


Debian changes

There were a significant number of reproducibility-related changes within Debian this month, including:

  • Santiago Vila uploaded version 0.11+nmu4 of the dh-buildinfo package. In this release, dh_buildinfo becomes a no-op — i.e. it no longer does anything beyond warning the developer that the dh-buildinfo package is now obsolete. In his upload, Santiago wrote that “We still want packages to drop their [dependency] on dh-buildinfo, but now they will immediately benefit from this change after a simple rebuild.”

  • Holger Levsen filed Debian bug #1091550 requesting a rebuild of a number of packages that were built with a “very old version” of dpkg.

  • Fay Stegerman contributed to an extensive thread on the debian-devel development mailing list on the topic of “Supporting alternative zlib implementations”. In particular, Fay wrote about her results experimenting whether zlib-ng produces identical results or not.

  • kpcyrd uploaded new rust-rebuilderd-worker, rust-derp, rust-in-toto and debian-repro-status packages to Debian, which passed successfully through the so-called NEW queue.

  • Gioele Barabucci filed a number of bugs against the debrebuild component/script of the devscripts package, including:

    • #1089087: Address a spurious extra subdirectory in the build path.
    • #1089201: Extra zero bytes added to .dynstr when rebuilding CMake projects.
    • #1089088: Some binNMUs have a 1-second offset in some timestamps.
  • Gioele Barabucci also filed a bug against the dh-r package to report that the Recommends and Suggests fields are missing from rebuilt R packages. At the time of writing, this bug has no patch and needs some help to make over 350 binary packages reproducible.

  • Lastly, 8 reviews of Debian packages were added, 11 were updated and 11 were removed this month, adding to our knowledge about identified issues.


Other development news

In other ecosystem and distribution news:

  • In openSUSE, Bernhard M. Wiedemann published another report for the distribution. There, Bernhard reports on the success of building ‘R-B-OS’, a partial fork of openSUSE with only 100% bit-reproducible packages. This effort was sponsored by the NLNet NGI0 initiative.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In December, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add a new i386.reproduce.debian.net rebuilder.
    • Make a number of updates to the documentation.
    • Run i386.reproduce.debian.net on a public port to allow external workers.
    • Add a link to the /api/v0/pkgs/list endpoint.
    • Add support for a statistics page.
    • Limit build logs to 20 MiB and diffoscope output to 10 MiB.
    • Improve the frontpage.
    • Explain that we’re testing arch:any and arch:all on the amd64 architecture, but only arch:any on i386.
  • Misc:

    • Remove code for testing Arch Linux, which has moved to reproduce.archlinux.org.
    • Don’t install dstat on Jenkins nodes anymore as it’s been removed from Debian trixie.
    • Prepare the infom08-i386 node to become another rebuilder.
    • Add debug date output for benchmarking the reproducible_pool_buildinfos.sh script.
    • Install installation-birthday everywhere.
    • Temporarily disable automatic updates of pool links on buildinfos.debian.net.
    • Install Recommends by default on Jenkins nodes.
    • Rename rebuilder_stats.py to rebuilderd_stats.py.
    • r.d.n/stats: minor formatting changes.
    • Install files under /etc/cron.d/ with the correct permissions.

… and Jochen Sprickerhof also made a number of changes.

Lastly, Gioele Barabucci also classified packages affected by the 1-second offset issue filed as Debian bug #1089088, Chris Hofstaedtler updated the URL for Grml’s dpkg.selections file, Roland Clobus updated the Jenkins log parser to parse warnings from diffoscope, and Mattia Rizzolo banned a number of bots and crawlers from the service.


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via our usual contact channels.

09 January, 2025 12:00PM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Tracker.debian.org updates, Salsa CI improvements, Coinstallable build-essential, Python 3.13 transition, Ruby 3.3 transition and more! (by Anupa Ann Joseph, Stefano Rivera)

Debian Contributions: 2024-12

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Tracker.debian.org updates, by Raphaël Hertzog

Taking advantage of the end-of-year vacations, Raphaël prepared for tracker.debian.org to be upgraded to Debian 12 bookworm by getting rid of the remnants of python3-django-jsonfield in the code (it was superseded by a Django-native field). Thanks to Philipp Kern from the Debian System Administrators team, the upgrade happened on December 23rd.

Raphaël also improved distro-tracker to better deal with invalid Maintainer fields, which recently caused multiple issues in the regular data updates (#1089985, MR 105). While working on this, he filed #1089648 asking dpkg tools to error out early when maintainers make such mistakes.

Finally, he provided feedback on multiple issues and merge requests (MR 106, issues #21, #76, #77); there seems to be a surge of interest in distro-tracker lately. It would be nice if those new contributors could stick around and help out with the significant backlog of issues (in the Debian BTS, in Salsa).

Salsa CI improvements, by Santiago Ruano Rincón

Given that the Debian buildd network now relies on sbuild using the unshare backend, and that Salsa CI’s reproducibility testing needs to be reworked (#399), Santiago resumed the work of moving the build job to use sbuild. There was some related work a few months ago focused on sbuild with the schroot and sudo backends, but those attempts stalled for different reasons, including discussions around the convenience of the move (#296). Using sbuild with unshare, however, avoids all of the drawbacks that have been identified so far. Santiago is preparing two merge requests: !568 to introduce a new build image, and !569 to move all the extract-source related tasks to the build job. As mentioned in previous reports, this change will make it possible for more projects to use the pipeline to build their packages (see #195).

Additional advantages of this change include a more optimal way to test whether a package builds twice in a row: instead of actually building it twice, the Salsa CI pipeline will configure sbuild to check whether the clean target of debian/rules correctly restores the source tree, saving some CPU cycles by avoiding one build. Also, the Ubuntu-related images won’t be needed anymore, since the build job will create chroots for different distributions and vendors from a single common build image; this will save space in the container registry. More changes are to come, especially those related to handling projects that customize the pipeline and make use of the extract-source job.

Coinstallable build-essential, by Helmut Grohne

Building on the gcc-for-host work of last December, a notable patch turning build-essential into a Multi-Arch: same package became feasible. Whilst the change is small, its implications and foundations are not. We still install crossbuild-essential-$ARCH for cross building and, due to a britney2 limitation, we cannot have it depend on the host’s C library. As a result, there are workarounds in place for sbuild and pbuilder. With build-essential turned Multi-Arch: same, we can express these dependencies directly, as we install build-essential:$ARCH instead. The crossbuild-essential-$ARCH packages will continue to be available as transitional dummy packages.

Python 3.13 transition, by Colin Watson and Stefano Rivera

Building on last month’s work, Colin, Stefano, and other members of the Debian Python team fixed 3.13 compatibility bugs in many more packages, allowing 3.13 to now be a supported but non-default version in testing. The next stage will be to switch to it as the default version, which will start soon. Stefano did some test-rebuilds of packages that only build for the default Python 3 version, to find issues that will block the transition. The default version transition typically shakes out some more issues in applications that (unlike libraries) only test with the default Python version.

Colin also fixed Sphinx 8.0 compatibility issues in many packages, which otherwise threatened to get in the way of this transition.

Ruby 3.3 transition, by Lucas Kanashiro

The Debian Ruby team decided to ship Ruby 3.3 in the next Debian release, and Lucas took the lead on the interpreter transition with the assistance of the rest of the team. In order to understand the impact of the new interpreter on the Ruby ecosystem, ruby-defaults was uploaded to experimental adding ruby3.3 as an alternative interpreter, and a mass rebuild of reverse dependencies was carried out. Initially, a couple of hundred packages were failing to build; after many rounds of rebuilds, adjustments and uploads, we are down to 30 package build failures. Of those, 21 packages were asked to be removed from testing and, for the other 9, bugs were filed. All of the information needed to track this transition is available online. Now we are waiting for the PHP 8.4 transition to finish, to avoid any collision; once it is done, the Ruby 3.3 transition will start in unstable.

Miscellaneous contributions

  • Enrico Zini redesigned the way nm.debian.org stores historical audit logs and personal data backups.
  • Carles Pina submitted a new package (python-firebase-messaging) and prepared updates for python3-ring-doorbell.
  • Carles Pina developed po-debconf-manager further: better state transitions, automated assignment of translators and reviewers on edit, automatic updating of po header files, assorted bug fixes, etc.
  • Carles Pina reviewed, submitted and followed up on debconf template translations (more than 20 packages) and translated some packages (about 5).
  • Santiago continued to work on DebConf 25 organization related tasks, including handling the logo survey and results. Stefano spent time on DebConf 25 too.
  • Santiago continued the exploratory work on Linux livepatching with Emmanuel Arias. Santiago and Emmanuel hit a challenge, since kpatch won’t fully support the kernel in trixie and newer, so they are exploring alternatives such as klp-build.
  • Helmut maintained the /usr-move transition filing bugs in e.g. bubblewrap, e2fsprogs, libvpd-2.2-3, and pam-tmpdir and corresponding on related issues such as kexec-tools and live-build. The removal of the usrmerge package unfortunately broke debootstrap and was quickly reverted. Continued fallout is expected and will continue until trixie is released.
  • Helmut sent patches for 10 cross build failures and worked with Sandro Knauß on stuck Qt/KDE patches related to cross building.
  • Helmut continued to maintain rebootstrap removing the need to build gnu-efi in the process.
  • Helmut collaborated with Emanuele Rocca and Jochen Sprickerhof on an interesting adventure in diagnosing why gcc would FTBFS in recent sbuild.
  • Helmut proposed supporting build concurrency limits in coreutils’s nproc. As it turns out nproc is not a good place for this functionality.
  • Colin worked with Sandro Tosi and Andrej Shadura to finish resolving the multipart vs. python-multipart name conflict, as mentioned last month.
  • Colin upgraded 48 Python packages to new upstream versions, fixing four CVEs and a number of compatibility bugs with recent Python versions.
  • Colin issued an openssh bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange which had been quite broken in bookworm.
  • Stefano fixed a minor bug in debian-reimbursements that was disallowing combination PDFs containing JAL tickets, encoded in UTF-16.
  • Stefano uploaded a stable update to PyPy3 in bookworm, catching up with security issues resolved in cPython.
  • Stefano fixed a regression in eventlet introduced by his Python 3.13 porting patch.
  • Stefano continued discussing a forwarded patch (renaming the sysconfigdata module) with cPython upstream, ending in a decision to drop the patch from Debian. This will need some continued work.
  • Anupa participated in the Debian Publicity team meeting in December, which discussed the team activities done in 2024 and projects for 2025.

09 January, 2025 12:00AM by Anupa Ann Joseph, Stefano Rivera

January 08, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppGetconf 0.0.4 on CRAN: Updates

A minor package update, the first in over six years, for the RcppGetconf package for reading system configuration — not unlike getconf from the libc library — is now on CRAN.

The changes are all minor package maintenance items of keeping URLs, continuous integration, and best practices current. We had two helper scripts use bash in their shebangs, and we just got dinged for one of them. Tedious as this can at times seem, it ensures CRAN packages do in fact compile just about anywhere, which is a Good Thing (TM), so we obliged and updated the package with that change—and all the others that had accumulated over six years. No interface or behaviour changes, “just maintenance” as one does at times.

The short list of changes in this release follows:

Changes in RcppGetconf version 0.0.4 (2025-01-07)

  • Dynamically linked compiled code is now registered in NAMESPACE

  • The continuous integration setup was updated several times

  • The README was updated with current badges and URLs

  • The DESCRIPTION file now uses Authors@R

  • The configure and cleanup scripts use /bin/sh

Courtesy of my CRANberries, there is also a diffstat report of changes relative to the previous release. More about the package is at the local RcppGetconf page and the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

08 January, 2025 09:37PM

John Goerzen

Censorship Is Complicated: What Internet History Says about Meta/Facebook

In light of this week’s announcement by Meta (Facebook, Instagram, Threads, etc), I have been pondering this question: Why am I, a person that has long been a staunch advocate of free speech and encryption, leery of sites that talk about being free speech-oriented? And, more to the point, why am I — a person that has been censored by Facebook for mentioning the Open Source social network Mastodon — not cheering a “lighter touch”?

The answers are complicated, and take me back to the early days of social networking. Yes, I mean the 1980s and 1990s.

Before digital communications, there were barriers to reaching a lot of people. Especially money. This led to a sort of self-censorship: it may be legal to write certain things, but would a newspaper publish a letter to the editor containing expletives? Probably not.

As digital communications started to happen, suddenly people could have their own communities. Not just free from the same kinds of monetary pressures, but free from outside oversight (parents, teachers, peers, community, etc.) When you have a community that the majority of people lack the equipment to access — and wouldn’t understand how to access even if they had the equipment — you have a place where self-expression can be unleashed.

And, as J. C. Herz covers in what is now an unintentional history (her book Surfing on the Internet was published in 1995), self-expression WAS unleashed. She enjoyed the wit and expression of everything from odd corners of Usenet to the text-based open world of MOOs and MUDs. She even talks about groups dedicated to insults (flaming) in positive terms.

But as I’ve seen time and again, if there are absolutely no rules, then whenever a group gets big enough — more than a few dozen people, say — there are troublemakers that ruin it for everyone. Maybe it’s trolling, maybe it’s vicious attacks, you name it — it will arrive and it will be poisonous.

I remember the debates within the Debian community about this. Debian is one of the pillars of the Internet today, a nonprofit project with free speech in its DNA. And yet there were inevitably the poisonous people. Debian took too long to learn that allowing those people to run rampant was causing more harm than good, because having a well-worn Delete key and a tolerance for insults became a requirement for being a Debian developer, and that drove away people that had no desire to deal with such things. (I should note that Debian strikes a much better balance today.)

But in reality, there were never absolutely no rules. If you joined a BBS, you used it at the whim of the owner (the “sysop” or system operator). The sysop may be a 16-yr-old running it from their bedroom, or a retired programmer, but in any case they were letting you use their resources for free and they could kick you off for any or no reason at all. So if you caused trouble, or perhaps insulted their cat, you’re banned. But, in all but the smallest towns, there were other options you could try.

On the other hand, sysops enjoyed having people call their BBSs and didn’t want to drive everyone off, so there was a natural balance at play. As networks like Fidonet developed, a sort of uneasy approach kicked in: don’t be excessively annoying, and don’t be easily annoyed. Like it or not, it seemed to generally work. A BBS that repeatedly failed to deal with troublemakers could risk removal from Fidonet.

On the more institutional Usenet, you generally got access through your university (or, in a few cases, employer). Most universities didn’t really even know they were running a Usenet server, and you were generally left alone. Until you did something that annoyed somebody enough that they tracked down the phone number for your dean, in which case real-world consequences would kick in. A site may face the Usenet Death Penalty — delinking from the network — if they repeatedly failed to prevent malicious content from flowing through their site.

Some BBSs let people from minority communities such as LGBTQ+ thrive in a place of peace from tormentors. A lot of them let people be themselves in a way they couldn’t be “in real life”. And yes, some harbored trolls and flamers.

The point I am trying to make here is that each BBS, or Usenet site, set their own policies about what their own users could do. These had to be harmonized to a certain extent with the global community, but in a certain sense, with BBSs especially, you could just use a different one if you didn’t like what the vibe was at a certain place.

That this free speech ethos survived was never inevitable. There were many attempts to regulate the Internet, and it was thanks to the advocacy of groups like the EFF that we have things like strong encryption and a degree of freedom online.

With the rise of the very large platforms — and here I mean CompuServe and AOL at first, and then Facebook, Twitter, and the like later — the low-friction option of just choosing a different place started to decline. You could participate on a Fidonet forum from any of thousands of BBSs, but you could only participate in an AOL forum from AOL. The same goes for Facebook, Twitter, and so forth. Not only that, but as social media became conceived of as very large sites, it became impossible for a person with enough skill, funds, and time to just start a site themselves. Instead of needing a few thousand dollars of equipment, you’d need tens or hundreds of millions of dollars of equipment and employees.

All that means you can’t really run Facebook as a nonprofit. It is a business. It should be absolutely clear to everyone that Facebook’s mission is not the one they say it is — “[to] give people the power to build community and bring the world closer together.” If that was their goal, they wouldn’t be creating AI users and AI spam and all the rest. Zuck isn’t showing courage; he’s sucking up to Trump and those that will pay the price are those that always do: women and minorities.

Really, the point of any large social network isn’t to build community. It’s to make the owners their next billion. They do that by convincing people to look at ads on their site. Zuck is as much a windsock as anyone else; he will adjust policies in whichever direction he thinks the wind is blowing so as to let him keep putting ads in front of eyeballs, and stomp all over principles — even free speech — doing it. Don’t expect anything different from any large commercial social network either. Bluesky is going to follow the same trajectory as all the others.

The problem with a one-size-fits-all content policy is that the world isn’t that kind of place. For instance, I am a pacifist. There is a place for a group where pacifists can hang out with each other, free from the noise of the debate about pacifism. And there is a place for the debate. Forcing everyone that signs up for the conversation to sign up for the debate is harmful. Preventing the debate is often also harmful. One company can’t square this circle.

Beyond that, the fact that we care so much about one company is a problem on two levels. First, it indicates how susceptible people are to misinformation and such. I don’t have much to offer on that point. Secondly, it indicates that we are too centralized.

We have a solution there: Mastodon. Mastodon is a modern, open source, decentralized social network. You can join any instance, easily migrate your account from one server to another, and so forth. You pick an instance that suits you. There are thousands of others you can choose from. Some aggressively defederate with instances known to harbor poisonous people; some don’t.

And, to harken back to the BBS era, if you have some time, some skill, and a few bucks, you can run your own Mastodon instance.

Personally, I still visit Facebook on occasion because some people I care about are mainly there. But it is such a terrible experience that I rarely do. Meta is becoming irrelevant to me. They are on a path to becoming irrelevant to many more as well. Maybe this is the moment to go “shrug, this sucks” and try something better.

(And when you do, feel free to say hi to me at @jgoerzen@floss.social on Mastodon.)

08 January, 2025 02:59PM by John Goerzen

January 07, 2025

Jonathan Wiltshire

Using TPM for Automatic Disk Decryption in Debian 12

These days it’s straightforward to have reasonably secure, automatic decryption of your root filesystem at boot time on Debian 12. Here’s how I did it on an existing system which already had a stock kernel, secure boot enabled, grub2 and an encrypted root filesystem with the passphrase in key slot 0.

There’s no need to switch to systemd-boot for this setup but you will use systemd-cryptenroll to manage the TPM-sealed key. If that offends you, there are other ways of doing this.

Caveat

The parameters I’ll seal a key against in the TPM include a hash of the initial ramdisk. This is essential to prevent an attacker from swapping the image for one which discloses the key. However, it also means the key has to be re-sealed every time the image is rebuilt. This can be frequent, for example when installing/upgrading/removing packages which include a kernel module. You won’t get locked out (as long as you still have a passphrase in another slot), but will need to re-seal the key to restore the automation.
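
When that happens, re-sealing is a single command. A minimal sketch, assuming the same device and PCR selection used later in this article (systemd-cryptenroll wipes the stale TPM2 slot and enrolls a fresh key in one call):

# systemd-cryptenroll --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=7+8+9+14 /dev/nvme0n1p5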

You can also choose not to include this parameter for the seal, but that opens the door to such an attack.

Caution: these are the steps I took on my own system. You may need to adjust them to avoid ending up with a non-booting system.

Check for a usable TPM device

We’ll bind the secure boot state, kernel parameters, and other boot measurements to a decryption key. Then, we’ll seal it using the TPM. This prevents the disk being moved to another system, the boot chain being tampered with and various other attacks.

# apt install tpm2-tools
# systemd-cryptenroll --tpm2-device list
PATH        DEVICE     DRIVER 
/dev/tpmrm0 STM0125:00 tpm_tis

Clean up older kernels including leftover configurations

I found that previously-removed (but not purged) kernel packages sometimes cause dracut to try installing files to the wrong paths. Identify them with:

# apt install aptitude
# aptitude search '~c'

Change search to purge or be more selective, this part is an exercise for the reader.
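
The blunt version, assuming you have reviewed the search output and nothing in it is still wanted:

# aptitude purge '~c'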

Switch to dracut for initramfs images

Unless you have a particular requirement for the default initramfs-tools, replace it with dracut and customise:

# mkdir /etc/dracut.conf.d
# echo 'add_dracutmodules+=" tpm2-tss crypt "' > /etc/dracut.conf.d/crypt.conf
# apt install dracut

Remove root device from crypttab, configure grub

Remove (or comment) the root device from /etc/crypttab and rebuild the initial ramdisk with dracut -f.

Edit /etc/default/grub and add ‘rd.auto rd.luks=1’ to GRUB_CMDLINE_LINUX. Re-generate the config with update-grub.
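
The resulting GRUB_CMDLINE_LINUX line should look something like this (a sketch; keep any options you already had):

GRUB_CMDLINE_LINUX="rd.auto rd.luks=1"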

At this point it’s a good idea to sanity-check the initrd contents with lsinitrd. Then, reboot using the new image to ensure there are no issues. This will also have up-to-date TPM measurements ready for the next step.
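
For example, to confirm that the crypt and TPM pieces made it into the image (a sketch; exact module and file names vary by dracut version):

# lsinitrd | grep -iE 'tpm|crypt'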

Identify device and seal a decryption key

# lsblk -ip -o NAME,TYPE,MOUNTPOINTS
NAME                                                    TYPE  MOUNTPOINTS
/dev/nvme0n1p4                                          part  /boot
/dev/nvme0n1p5                                          part  
`-/dev/mapper/luks-deff56a9-8f00-4337-b34a-0dcda772e326 crypt 
  |-/dev/mapper/lv-var                                  lvm   /var
  |-/dev/mapper/lv-root                                 lvm   /
  `-/dev/mapper/lv-home                                 lvm   /home

In this example my root filesystem is in a container on /dev/nvme0n1p5. The existing passphrase key is in slot 0.

# systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7+8+9+14 /dev/nvme0n1p5
Please enter current passphrase for disk /dev/nvme0n1p5: ********
New TPM2 token enrolled as key slot 1.

The PCRs I chose (7, 8, 9 and 14) correspond to the secure boot policy, kernel command line (to prevent init=/bin/bash-style attacks), files read by grub including that crucial initrd measurement, and secure boot MOK certificates and hashes. You could also include PCR 5 for the partition table state, and any others appropriate for your setup.
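
If you want to inspect the current values of those PCRs before sealing, the tpm2-tools package installed earlier provides tpm2_pcrread:

# tpm2_pcrread sha256:7,8,9,14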

Reboot

You should now be able to reboot and the root device will be unlocked automatically, provided the secure boot measurements remain consistent.

The key slot protected by a passphrase (mine is slot 0) is now your recovery key. Do not remove it!


Please consider supporting my work in Debian and elsewhere through Liberapay.

07 January, 2025 11:03PM by Jonathan

Thorsten Alteholz

My Debian Activities in December 2024

Debian LTS

This was my hundred-twenty-sixth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

I worked on updates for ffmpeg and haproxy in all releases. Along the way I marked more CVEs as not-affected than I had to fix, so in the end no upload was needed for haproxy at all. Unfortunately testing ffmpeg was not as easy, as the recommended “just look whether mpv can play random videos” approach is not really satisfying. So the upload will happen only in January.

I also wonder whether fixing glewlwyd is really worth the effort, as the software is already EOL upstream.

Debian ELTS

This month was the seventy-seventh ELTS month. During my allocated time I worked on ffmpeg, haproxy, amanda and kmail-account-wizard.

Like in LTS, all CVEs of haproxy and some of ffmpeg could be marked as not-affected, and testing of the other packages was/is not really straightforward. So the final upload will only happen in January as well.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Matomo

Thanks a lot to William Desportes for all fixes of my bad PHP packaging.

Debian Astro

This month I uploaded new packages or new upstream or bugfix versions of:

I again sponsored an upload of calceph.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:

I also sponsored uploads of emacs-lsp-docker, emacs-dape, emacs-oauth2, gpgmngr, libjs-jush.

FTP master

This month I accepted 330 and rejected 13 packages. The overall number of packages that got accepted was 335.

07 January, 2025 12:29PM by alteholz

Enrico Zini

Debugging printing to a remote printer

I upgraded to Debian testing/trixie, and my network printer stopped appearing in print dialogs. These are notes from the debugging session.

Check firewall configuration

I tried out KDE, which installed plasma-firewall, which installed firewalld, which by default closed the ports used for printing.

For extra fun, appindicators are not working in Gnome and so firewall-applet is currently useless, although one can run firewall-config manually, or use the command line, which might be more user-friendly than the UI.

Step 1: change the zone for the home wifi to "Home":

firewall-cmd  --zone home --list-interfaces
firewall-cmd  --zone home --add-interface wlp1s0

Step 2: make sure the home zone can print:

firewall-cmd --zone home --list-services
firewall-cmd --zone home --add-service=ipp
firewall-cmd --zone home --add-service=ipp-client
firewall-cmd --zone home --add-service=mdns

I searched and searched but I could not find out whether ipp is needed, ipp-client is needed, or both are needed.
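
Also note that firewall-cmd changes like the ones above only affect the runtime configuration. Once printing works, persist whichever combination you ended up with:

firewall-cmd --runtime-to-permanent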

Check if avahi can see the printer

Is the printer advertised correctly over mdns?

When it didn't work:

$ avahi-browse -avrt
= wlp1s0 IPv6 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [0]
   txt = []
= wlp1s0 IPv4 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [0]
   txt = []

$ avahi-browse -rt _ipp._tcp
[empty]

When it works:

$ avahi-browse -avrt
= wlp1s0 IPv6 Brother HL-2030 series @ server                Secure Internet Printer local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv6 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [0]
   txt = []
= wlp1s0 IPv4 Brother HL-2030 series @ server                Secure Internet Printer local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv4 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [0]
   txt = []

$ avahi-browse -rt _ipp._tcp
+ wlp1s0 IPv6 Brother HL-2030 series @ server                Internet Printer     local
+ wlp1s0 IPv4 Brother HL-2030 series @ server                Internet Printer     local
= wlp1s0 IPv4 Brother HL-2030 series @ server                Internet Printer     local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv6 Brother HL-2030 series @ server                Internet Printer     local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]

Check if cups can see the printer

From CUPS' Using Network Printers:

$ /usr/sbin/lpinfo --include-schemes dnssd -v

network dnssd://Brother%20HL-2030%20series%20%40%20server._ipp._tcp.local/cups?uuid=

Debugging session interrupted

At this point, the printer appeared.

It could be that:

In the end, debugging failed successfully, and this log now remains as a reference for possible further issues.

07 January, 2025 11:40AM

January 05, 2025

Dominique Dumont

hackergotchi for Jonathan McDowell

Jonathan McDowell

Free Software Activities for 2024

I tailed off on blog posts towards the end of the year; I blame a bunch of travel (personal + business), catching the ‘flu, then December being its usual busy self. Anyway, to try and start off the year a bit better I thought I’d do my annual recap of my Free Software activities.

For previous years see 2019, 2020, 2021, 2022 + 2023.

Conferences

In 2024 I managed to make it to FOSDEM again. It’s a hectic conference, and I know there are legitimate concerns about it being a super spreader event, but it has the advantage of being relatively close and having a lot of different groups of people I want to talk to / see talk at it. I’m already booked to go this year as well.

I spoke at All Systems Go in Berlin about Using TPMs at scale for protecting keys. It was nice to actually be able to talk publicly about some of the work stuff my team and I have been working on. I’d a talk submission in for FOSDEM about our use of attestation and why it’s not necessarily the evil some folk claim, but there were a lot of good talks submitted and I wasn’t selected. Maybe I’ll find somewhere else suitable to do it.

BSides Belfast may or may not count - it’s a security conference, but there’s a lot of overlap with various bits of Free software, so I feel it deserves a mention.

I skipped DebConf for 2024 for a variety of reasons, but I’m expecting to make DebConf25 in Brest, France in July.

Debian

Most of my contributions to Free software continue to happen within Debian.

In 2023 I’d done a bunch of work on retrogaming with Kodi on Debian, so I made an effort to try and keep those bits more up to date, even if I’m not actually regularly using them at present. RetroArch got 1.18.0+dfsg-1 and 1.19.1+dfsg-1 uploads. libretro-core-info got associated 1.18.0-1 and 1.19.0-1 uploads too. I note 1.20.0 has been released recently, so I’ll have to find some time to build the appropriate DFSG tarball and update it.

rcheevos saw 11.2.0-1, 11.5.0-1 + 11.6.0-1 uploaded.

kodi-game-libretro itself had 20.2.7-1 uploaded, then 21.0.7-1. Latest upstream is 22.1.0, but that’s tracking Kodi 22 and we’re still on Kodi 21 so I plan to follow the Omega branch for now. Which I’ve just noticed had a 21.0.8 release this week.

Finally in the games space I uploaded mgba 0.10.3+dfsg-1 and 0.10.3+dfsg-2 for Ryan Tandy, before realising he was already a Debian Maintainer and granting him the appropriate ACL access so he can upload it himself; I’ve had zero concerns about any of his packaging.

The Debian Electronics Packaging Team continues to be home for a bunch of packages I care about. There was nothing big there, for me, in 2024, but a few bits of cleanup here and there.

I seem to have become one of the main uploaders for sdcc - I have some interest in the space, and the sigrok firmware requires it to build, so I at least like to ensure it’s in half decent state. I uploaded 4.4.0+dfsg-1, 4.4.0+dfsg-2, and, just in time to count for 2024, 4.4.0+dfsg-3.

The sdcc 4.4 upload led to some compilation issues for sigrok-firmware-fx2lafw, so I uploaded 0.1.7-2 fixing that, then 0.1.7-3 doing some further cleanups.

OpenOCD had 0.12.0-2 uploaded to disable the libgpiod backend thanks to incompatible changes upstream. There were some in-discussion patches with OpenOCD upstream at the time, but they didn’t seem to be ready yet so I held off on pulling them in. 0.12.0-3 fixed builds with more recent versions of jimtcl. It looks like the next upstream release is about a year away, so Trixie will in all probability ship with 0.12.0 as well.

libjaylink had a new upstream release, so 0.4.0-1 was uploaded. libserialport also had a new upstream release, leading to 0.1.2-1.

I finally cracked and uploaded sg3-utils 1.48-1 into experimental. I’m not the primary maintainer, but 1.46 is nearly 4 years old now and I wanted to get it updated in enough time to shake out any problems before we get to a Trixie freeze.

Outside of team owned packages, libcli had compilation issues with GCC 14, leading to 1.10.7-2. I also added a new package, sedutil 1.20.0-2 back in April; it looks fairly unmaintained upstream (there’s been some recent activity, but it doesn’t seem to be release quality), but there was an outstanding ITP and I’ve some familiarity with the space as we’ve been using it at work as part of investigating TCG OPAL encryption.

I continue to keep an eye on Debian New Members, even though I’m mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions.

Finally the 3 month rotation for Debian Keyring continues to operate smoothly. I dealt with 2023.03.24, 2023.06.24, 2023.09.22 + 2023.11.24.

Linux

I’d a single kernel contribution this year, to Clean up TPM space after command failure. That was based on some issues we saw at work. I’ve another fix in progress that I hope to submit in 2025, but it’s for an intermittent failure so confirming the fix is necessary + sufficient is taking a little while.

Personal projects

I didn’t end up doing much in the way of externally published personal project work in 2024.

Despite the release of OpenPGP v6 in RFC 9580, I did not manage to really work on onak. I started on the v6 support, but have not had sufficient time to complete anything worth pushing externally yet.

listadmin3 got some minor updates based on external feedback / MRs. It’s nice to know it’s useful to other folk even in its basic state.

That wraps up 2024. I’ve got no particular goals for this year at present. Ideally I’d get v6 support into onak, and it would be nice to implement some of the wishlist items people have provided for listadmin3, but I’ll settle for making sure all my Debian packages are in reasonable state for Trixie.

05 January, 2025 04:10PM

Enrico Zini

ncdu on files to back up

I use borg and restic to back up files on my system. Sometimes I run a huge download or clone a large git repo and forget to mark it with CACHEDIR.TAG, and it gets picked up, slowing the backup process and wasting backup space uselessly.
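
For reference, tagging such a directory only takes the fixed signature line from the Cache Directory Tagging specification, which both borg and restic honour when cache exclusion is enabled (the path here is just an example):

printf 'Signature: 8a477f597d28d172789f06886806bc55\n' > ~/big-download/CACHEDIR.TAG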

I would like to occasionally audit the system to have an idea of what is a candidate for backup. ncdu would be great for this, but it doesn't know about backup exclusion filters.

Let's teach it then.

Here's a script that simulates a backup and feeds the results to ncdu:

#!/usr/bin/python3

import argparse
import os
import sys
import time
import stat
import json
import subprocess
import tempfile
from pathlib import Path
from typing import Any

FILTER_ARGS = [
    "--one-file-system",
    "--exclude-caches",
    "--exclude",
    "*/.cache",
]
BACKUP_PATHS = [
    "/home",
]


class Dir:
    """
    Dispatch borg output into a hierarchical directory structure.

    borg prints a flat file list, ncdu needs a hierarchical JSON.
    """

    def __init__(self, path: Path, name: str):
        self.path = path
        self.name = name
        self.subdirs: dict[str, "Dir"] = {}
        self.files: list[str] = []

    def print(self, indent: str = "") -> None:
        for name, subdir in self.subdirs.items():
            print(f"{indent}{name:}/")
            subdir.print(indent + " ")
        for name in self.files:
            print(f"{indent}{name}")

    def add(self, parts: tuple[str, ...]) -> None:
        if len(parts) == 1:
            self.files.append(parts[0])
            return

        subdir = self.subdirs.get(parts[0])
        if subdir is None:
            subdir = Dir(self.path / parts[0], parts[0])
            self.subdirs[parts[0]] = subdir

        subdir.add(parts[1:])

    def to_data(self) -> list[Any]:
        res: list[Any] = []
        st = self.path.stat()
        res.append(self.collect_stat(self.name, st))
        for name, subdir in self.subdirs.items():
            res.append(subdir.to_data())

        dir_fd = os.open(self.path, os.O_DIRECTORY)
        try:
            for name in self.files:
                try:
                    st = os.lstat(name, dir_fd=dir_fd)
                except FileNotFoundError:
                    print(
                        "Possibly broken encoding:",
                        self.path,
                        repr(name),
                        file=sys.stderr,
                    )
                    continue
                if stat.S_ISDIR(st.st_mode):
                    continue
                res.append(self.collect_stat(name, st))
        finally:
            os.close(dir_fd)

        return res

    def collect_stat(self, fname: str, st) -> dict[str, Any]:
        res = {
            "name": fname,
            "ino": st.st_ino,
            "asize": st.st_size,
            "dsize": st.st_blocks * 512,
        }
        if stat.S_ISDIR(st.st_mode):
            res["dev"] = st.st_dev
        return res


class Scanner:
    def __init__(self) -> None:
        self.root = Dir(Path("/"), "/")
        self.data = None

    def scan(self) -> None:
        with tempfile.TemporaryDirectory() as tmpdir_name:
            mock_backup_dir = Path(tmpdir_name) / "backup"
            subprocess.run(
                ["borg", "init", mock_backup_dir.as_posix(), "--encryption", "none"],
                cwd=Path.home(),
                check=True,
            )

            proc = subprocess.Popen(
                [
                    "borg",
                    "create",
                    "--list",
                    "--dry-run",
                ]
                + FILTER_ARGS
                + [
                    f"{mock_backup_dir}::test",
                ]
                + BACKUP_PATHS,
                cwd=Path.home(),
                stderr=subprocess.PIPE,
            )
            assert proc.stderr is not None
            for line in proc.stderr:
                match line[0:2]:
                    case b"- ":
                        path = Path(line[2:].strip().decode())
                    case b"x ":
                        continue
                    case _:
                        raise RuntimeError(f"Unparsable borg output: {line!r}")

                if path.parts[0] != "/":
                    raise RuntimeError(f"Unsupported path: {path.parts!r}")
                self.root.add(path.parts[1:])

    def to_json(self) -> list[Any]:
        return [
            1,
            0,
            {
                "progname": "backup-ncdu",
                "progver": "0.1",
                "timestamp": int(time.time()),
            },
            self.root.to_data(),
        ]

    def export(self):
        return json.dumps(self.to_json()).encode()


def main():
    parser = argparse.ArgumentParser(
        description="Run ncdu to estimate sizes of files to backup."
    )
    parser.parse_args()

    scanner = Scanner()
    scanner.scan()
    # scanner.root.print()
    res = subprocess.run(["ncdu", "-f-"], input=scanner.export())
    sys.exit(res.returncode)


if __name__ == "__main__":
    main()

05 January, 2025 03:09PM

January 04, 2025

Scarlett Gately Moore

KDE: Snap hotfixes and updates

Fixed Okular PDF printing: https://bugs.kde.org/show_bug.cgi?id=498065

Fixed Kwave recording: https://bugs.kde.org/show_bug.cgi?id=442085. Please run sudo snap connect kwave:audio-record :audio-record until auto-connect gets approved here: https://forum.snapcraft.io/t/kde-auto-connect-our-two-recording-apps/44419

New qt6 snaps in --edge until the 24.12.1 release

  • minuet
  • ksystemlog
  • kwordquiz
  • lokalize
  • ksirk
  • ksnakeduel
  • kturtle

I have begun the process of moving to core24; these snaps are currently in --edge until the 24.12.1 release.

Some major improvements come with core24!

Tokodon is our wonderful Mastodon client

I hate asking but I am unemployable with this broken arm fiasco and 6 hours a day hospital runs for treatment. If you could spare anything it would be appreciated! https://gofund.me/573cc38e

04 January, 2025 01:36PM by sgmoore

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

Tips when building debian-installer

Recently, I have been trying to fix the d-i Han-Unification issue for Japanese. This issue had gone unfixed for a long time, since Debian 9 (stretch).

#1037256 - debian-installer: GUI font for Japanese was incorrectly rendered - Debian Bug report logs

To know about how Han-Unification is harmful for Japanese in some cases, See "Your Code Displays Japanese Wrong".


When building d-i (GUI installer), you need to build the build_netboot-gtk target.

But note that you need a recent master branch, because older trees have a nitpick issue with GNU Make 4.4.x.


After that, you need to install the required packages. See the README for details.

apt-get update
apt-get install -y myrepos git libgtk2.0-dev fakeroot
apt-get build-dep -y debian-installer

It seems that Bug#1037256 will be fixed by supporting compressed fonts. I don't know how to take it further myself, but I'm sure that Mr. Cyril Brulebois will handle this issue better. :-)

(As an idea, I thought that creating a fake fontconfig cache when building the image and then decompressing the compressed font dynamically might work, but it didn't.)

If you would like to tackle d-i issues as a newbie, it is better to execute "make reallyclean" before rebuilding the image, so as not to fall into pitfalls; a sketch of the cycle follows below.
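(A minimal sketch of that rebuild cycle; the directory name and the use of fakeroot are assumptions based on the d-i README, so double-check there:)

cd debian-installer/build
make reallyclean
fakeroot make build_netboot-gtk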

04 January, 2025 01:32PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - December 2024

Our Debian User Group met on December 22nd for our last meeting of 2024. I wasn't sure at first it was a good idea, but many people showed up and it was great!

Here's what we did:

pollo:

anarcat:

lelutin:

lavamind:

  • installed Debian on an oooollld (as in, with a modem) laptop
  • debugged a FTBFS on jruby

tvaz:

  • did some simple packaging QA
  • added basic salsa CI and some RFA for a bunch of packages (python-midiutil, antimony, python-pyo, rakarrack, python-pyknon, soundcraft-utils, cecilia, nasty, gnome-icon-theme-nuovo, gnome-extra-icons, gnome-subtitles, timgm6mb-soundfont)

mjeanson and joeDoe:

  • hung out and did some stuff :)

Some of us ended up grabbing a drink after the event at l'Isle de Garde, a pub right next to the venue.

Pictures

This time around, we were hosted by l'Espace des possibles, at their new location (they moved since our last visit). It was great! People liked the space so much we actually discussed going back there more often :)

Group photo at l'Espace des possibles

04 January, 2025 02:15AM by Louis-Philippe Véronneau

January 03, 2025

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

this is bits from DPL for December.

Happy New Year 2025! Wishing everyone health, productivity, and a successful Debian release later in this year.

Strict ownership of packages

I'm glad my last bits sparked discussions about barriers between packages and contributors, summarized temporarily in some post on the debian-devel list. As one participant aptly put it, we need a way to visibly say, "I'll do the job until someone else steps up". Based on my experience with the Bug of the Day initiative, simplifying the process for engaging with packages would significantly help.

Currently we have

  1. NMU. The Developers Reference outlines several preconditions for NMUs, explicitly stating, "Fixing cosmetic issues or changing the packaging style in NMUs is discouraged." This makes NMUs unsuitable for addressing package smells. However, I've seen NMUs used for tasks like switching to source format 3.0 or bumping the debhelper compat level. While it's technically possible to file a bug and then address it in an NMU, the process inherently limits the NMUer's flexibility to reduce package smells.

  2. Package Salvaging. This is another approach for working on someone else's packages, aligning with the process we often follow in the Bug of the Day initiative. The criteria for selecting packages typically indicate that the maintainer either lacks time to address open bugs, has lost interest, or is generally MIA.

Both options have drawbacks, so I'd welcome continued discussion on criteria for lowering the barriers to moving packages to Salsa and modernizing their packaging. These steps could enhance Debian overall and are generally welcomed by active maintainers. The discussion also highlighted that packages on Salsa are often maintained collaboratively, fostering the team-oriented atmosphere already established in several Debian teams.

Salsa

Continuous Integration

As part of the ongoing discussion about package maintenance, I'm considering the suggestion to switch from the current opt-in model for Salsa CI to an opt-out approach. While I fully agree that human verification is necessary when the pipeline is activated, I believe the current option to enable CI is less visible than it should be. I'd welcome a more straightforward approach to improve access to better testing for what we push to Salsa.

Number of packages not on Salsa

In my campaign, I stated that I aimed to reduce the number of packages maintained outside Salsa to below 2,000. As of March 28, 2024, the count was 2,368. As of this writing, the count stands at 1,928 [1], so I consider this promise fulfilled. My thanks go out to everyone who contributed to this effort. Moving forward, I'd like to set a more ambitious goal for the remainder of my term and hope we can reduce the number to below 1,800.

[1] UDD query: SELECT DISTINCT count(*) FROM sources WHERE release = 'sid' and vcs_url not like '%salsa%' ;

Past and future events

Talk at MRI Together

In early December, I gave a short online talk, primarily focusing on my work with the Debian Med team. I also used my position as DPL to advocate for attracting more users and developers from the scientific research community.

FOSSASIA

I originally planned to attend FOSDEM this year. However, given the strong Debian presence there and the need for better representation at the FOSSASIA Summit, I decided to prioritize the latter. This aligns with my goal of improving geographic diversity. I also look forward to opportunities for inter-distribution discussions.

Debian team sprints

Debian Ruby Sprint

I approved the budget for the Debian Ruby Sprint, scheduled for January 2025 in Paris. If you're interested in contributing to the Ruby team, whether in person or online, consider reaching out to them. I'm sure any helping hand would be appreciated.

Debian Med sprint

There will also be a Debian Med sprint in Berlin in mid-February. As usual, you don't need to be an expert in biology or medicine–basic bug squashing skills are enough to contribute and enjoy the friendly atmosphere the Debian Med team fosters at their sprints. For those working in biology and medicine, we typically offer packaging support. Anyone interested in spending a weekend focused on impactful scientific work with Debian is warmly invited.

Again all the best for 2025

Andreas.

03 January, 2025 11:00PM by Andreas Tille

Taavi Väänänen

Automatically updating reverse DNS entries for my Hetzner servers

Some parts of my infrastructure run on Hetzner dedicated servers. Hetzner's management console has an interface to update reverse DNS entries, and I wanted to automate that. Unfortunately there's no option to just delegate the zones to my own authoritative DNS servers. So I did the next best thing, which is updating the Hetzner-managed records with data from my own authoritative DNS servers.

Generating DNS zones the hard way

The first step of automating DNS record provisioning is, well, figuring out which records need to be provisioned. I wanted to re-use my existing automation for generating the record data, instead of coming up with a new system for these records. The basic summary is that there's a Go program creatively named dnsgen that's in charge of generating zone file snippets from various sources (these include Netbox, Kubernetes, PuppetDB and my custom reverse web proxy setup).

Those snippets are combined with Jinja templates to generate full zone files to be loaded to a hidden primary running Bind9 (like all other DNS servers I run). The zone files are then transferred to a fleet of internal authoritative servers as well as my public authoritative DNS server, which in turn transfers them to various other authoritative DNS servers (like ns-global and Traficom anycast) for redundancy.

There's also a bunch of other smaller features, like using Bind views to serve different data to internal and external clients, and resolving external records during record generation time to be used on apex records that would use CNAME records if they could. (The latter is a workaround for Masto.host, the hosting provider we use for Wikis World, not having a stable IPv6 address.) Overall it's a really nice system, and I've spent quite a bit of time on it.

Updating records on Hetzner-managed space

As mentioned above, Hetzner unfortunately does not support custom DNS servers for reverse records on IP space rented from them. But I wanted to reuse my existing, perfectly working DNS record generation setup. So the obvious answer is to (ab)use DNS zone file transfers.

I quickly wrote a few hundred lines of Go to request the zone data and then use the Hetzner robot API to ensure the reverse entries are in sync. The main obstacle hit here was the Hetzner API somehow requiring an "update" call (instead of a "create" one) to create a new record, as the create endpoint was returning an HTTP 400 response no matter what. Once I sorted that out, the script started working fine and created the few dozen missing records. Finally I added a CronJob in my Kubernetes cluster to run the script once in a while.
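(Not the actual script, but a minimal sketch of the zone-transfer side in Go using github.com/miekg/dns; the zone and server below are placeholders:)

// Hedged sketch (not the actual dnsgen/sync code): fetch the PTR
// records of a reverse zone via AXFR using github.com/miekg/dns.
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func fetchPTRs(zone, server string) (map[string]string, error) {
	tr := new(dns.Transfer)
	msg := new(dns.Msg)
	msg.SetAxfr(zone)
	envelopes, err := tr.In(msg, server)
	if err != nil {
		return nil, err
	}
	ptrs := make(map[string]string)
	for env := range envelopes {
		if env.Error != nil {
			return nil, env.Error
		}
		for _, rr := range env.RR {
			if ptr, ok := rr.(*dns.PTR); ok {
				ptrs[ptr.Hdr.Name] = ptr.Ptr
			}
		}
	}
	return ptrs, nil
}

func main() {
	// Placeholder zone and server; each fetched entry would then be
	// compared against what the Hetzner robot API reports and updated
	// only when it differs.
	ptrs, err := fetchPTRs("2.0.0.2.ip6.arpa.", "ns.example.org:53")
	if err != nil {
		panic(err)
	}
	for name, target := range ptrs {
		fmt.Printf("%s -> %s\n", name, target)
	}
}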

Overall this is a big improvement over doing things by hand and didn't require that much effort. The obvious next step would be to expand the script to a tiny DNS server capable of receiving zone update NOTIFYs to make the updates happen real-time. Unfortunately there's now no hiding of the records revealing my ugly hacks clever networking solutions :(

03 January, 2025 12:00AM by Taavi Väänänen (hi@taavi.wtf)

January 02, 2025

Paul Wise

FLOSS Activities December 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

02 January, 2025 10:54AM

hackergotchi for Martin-Éric Racine

Martin-Éric Racine

On the future of i386 on Debian

Before we proceed, let's emphasize a few things:

  • My Testing hardware is i386 simply because I have plenty of leftovers from older days. These are hosts that I can afford to see randomly break due to transitions.
  • Meanwhile, my desktop has been 64-bit for over 10 years, my laptop for a bit less. Basically, my daily activities don't depend on 32-bit hardware remaining supported.
  • I fully agree that there is no sense in making a fresh install on 32-bit hardware nowadays. I therefore support Debian dropping 32-bit architectures from debian-installer.

This being said, I still think that the current approach of keeping i386 among the supported architectures, all while no longer shipping kernels, is entirely the wrong decision. What should instead be done is to keep on shipping i386 kernels for Trixie, but clearly indicate in the Trixie Release Notes that i386 is supported for the last time and thereafter fully demoted to Ports.

02 January, 2025 08:02AM by Martin-Éric (noreply@blogger.com)

hackergotchi for Matthew Garrett

Matthew Garrett

The GPU, not the TPM, is the root of hardware DRM

As part of their "Defective by Design" anti-DRM campaign, the FSF recently made the following claim:
Today, most of the major streaming media platforms utilize the TPM to decrypt media streams, forcefully placing the decryption out of the user's control (from here).
This is part of an overall argument that Microsoft's insistence that only hardware with a TPM can run Windows 11 is with the goal of aiding streaming companies in their attempt to ensure media can only be played in tightly constrained environments.

I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff.

What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU.

Let's back up for a moment. There's multiple different DRM implementations, but the big three are Widevine (owned by Google, used on Android, Chromebooks, and some other embedded devices), Fairplay (Apple implementation, used for Mac and iOS), and Playready (Microsoft's implementation, used in Windows and some other hardware streaming devices and TVs). These generally implement several levels of functionality, depending on the capabilities of the device they're running on - this will range from all the DRM functionality being implemented in software up to the hardware path that will be discussed shortly. Streaming providers can choose what level of functionality and quality to provide based on the level implemented on the client device, and it's common for 4K and HDR content to be tied to hardware DRM. In any scenario, they stream encrypted content to the client and the DRM stack decrypts it before the compressed data can be decoded and played.

The "problem" with software DRM implementations is that the decrypted material is going to exist somewhere the OS can get at it at some point, making it possible for users to simply grab the decrypted stream, somewhat defeating the entire point. Vendors try to make this difficult by obfuscating their code as much as possible (and in some cases putting some of it in-kernel), but pretty much all software DRM is at least somewhat broken and copies of any new streaming media end up being available via Bittorrent pretty quickly after release. This is why higher quality media tends to be restricted to clients that implement hardware-based DRM.

The implementation of hardware-based DRM varies. On devices in the ARM world this is usually handled by performing the cryptography in a Trusted Execution Environment, or TEE. A TEE is an area where code can be executed without the OS having any insight into it at all, with ARM's TrustZone being an example of this. By putting the DRM code in TrustZone, the cryptography can be performed in RAM that the OS has no access to, making the scraping described earlier impossible. x86 has no well-specified TEE (Intel's SGX is an example, but is no longer implemented in consumer parts), so instead this tends to be handed off to the GPU. The exact details of this implementation are somewhat opaque - of the previously mentioned DRM implementations, only Playready does hardware DRM on x86, and I haven't found any public documentation of what drivers need to expose for this to work.

In any case, as part of the DRM handshake between the client and the streaming platform, encryption keys are negotiated with the key material being stored in the GPU or the TEE, inaccessible from the OS. Once decrypted, the material is decoded (again either on the GPU or in the TEE - even in implementations that use the TEE for the cryptography, the actual media decoding may happen on the GPU) and displayed. One key point is that the decoded video material is still stored in RAM that the OS has no access to, and the GPU composites it onto the outbound video stream (which is why if you take a screenshot of a browser playing a stream using hardware-based DRM you'll just see a black window - as far as the OS can see, there is only a black window there).

Now, TPMs are sometimes referred to as a TEE, and in a way they are. However, they're fixed function - you can't run arbitrary code on the TPM, you only have whatever functionality it provides. But TPMs do have the ability to decrypt data using keys that are tied to the TPM, so isn't this sufficient? Well, no. First, the TPM can't communicate with the GPU. The OS could push encrypted material to it, and it would get plaintext material back. But the entire point of this exercise was to avoid the decrypted version of the stream from ever being visible to the OS, so this would be pointless. And rather more fundamentally, TPMs are slow. I don't think there's a TPM on the market that could decrypt a 1080p stream in realtime, let alone a 4K one.

The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry. While the FSF has been focusing on TPMs, GPU vendors have quietly deployed all of this technology without the FSF complaining at all. Microsoft has enthusiastically participated in making hardware DRM on Windows possible, and user freedoms have suffered as a result, but Playready hardware-based DRM works just fine on hardware that doesn't have a TPM and will continue to do so.


02 January, 2025 01:14AM

hackergotchi for Colin Watson

Colin Watson

Free software activity in December 2024

Most of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via Liberapay (thanks!).

OpenSSH

I issued a bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange which was quite broken in bookworm.

base-passwd

A few months ago, the adduser maintainer started a discussion with me (as the base-passwd maintainer) and the shadow maintainer about bringing all three source packages under one team, since they often need to cooperate on things like user and group names. I agreed, but hadn’t got round to doing anything about it until recently. I’ve now officially moved it under team maintenance.

debconf

Gioele Barabucci has been working on eliminating duplicated code between debconf and cdebconf, ultimately with the goal of migrating to cdebconf (which I’m not sure I’m convinced of as a goal, but if we can make improvements to both packages as part of working towards it then there’s no harm in that). I finally got round to reviewing and merging confmodule changes in each of debconf and cdebconf. This caused an installer regression due to a weirdness in cdebconf-udeb’s packaging, which I fixed - sorry about that!

I’ve also been dealing with a few patch submissions that had been in my queue for a long time, but more on that next month if all goes well.

CI issues

I noticed and fixed a problem with Restrictions: needs-sudo in autopkgtest.
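(For context, needs-sudo is one of the restrictions a package can declare in debian/tests/control; a minimal stanza, with a made-up test name, looks like this:)

Tests: as-root-smoke
Depends: @
Restrictions: needs-sudo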

I fixed broken aptly images in the Salsa CI pipeline.

Python team

Last month, I mentioned some progress on sorting out the multipart vs. python-multipart name conflict in Debian (#1085728), and said that I thought we’d be able to finish it soon. I was right! We got it all done this month:

The Python 3.13 transition continues, and last month we were able to add it to the supported Python versions in testing. (The next step will be to make it the default.) I fixed lots of problems in aid of this, including:

Sphinx 8.0 removed some old intersphinx_mapping syntax which turned out to still be in use by many packages in Debian. The fixes for this were individually trivial, but there were a lot of them:
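(The shape of each fix, sketched from memory of the Sphinx 8.0 change: the old positional form of intersphinx_mapping in conf.py has to become the named form.)

# conf.py, before (removed in Sphinx 8.0): bare-URI keys
intersphinx_mapping = {"https://docs.python.org/3": None}

# conf.py, after: named entries mapping to (target, inventory) tuples
intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}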

I found that twisted 24.11.0 broke tests in buildbot and wokkel, and fixed those.

I packaged python-flatdict, needed for a new upstream version of python-semantic-release.

I tracked down a test failure in vdirsyncer (which I’ve been using for some years, but had never previously needed to modify) and contributed a fix upstream.

I fixed some packages to tolerate future versions of dh-python that will drop their dependency on python3-setuptools:

I fixed django-cte to remove a build-dependency on the obsolete python3-nose package.

I added Django 5.1 support to django-polymorphic. (There are a number of other packages that still need work here.)

I fixed various other build/test failures:

I upgraded these packages to new upstream versions:

  • aioftp
  • alot
  • astroid
  • buildbot
  • cloudpickle (fixing a Python 3.13 failure)
  • django-countries
  • django-sass-processor
  • djoser (fixing CVE-2024-21543)
  • ipython
  • jsonpickle
  • lazr.delegates
  • loguru (fixing a Python 3.13 failure)
  • netmiko
  • pydantic
  • pydantic-core
  • pydantic-settings
  • pydoctor
  • pygresql
  • pylint (fixing Python 3.13 failures #1089758 and #1091029)
  • pypandoc (fixing a Python 3.12 warning)
  • python-aiohttp (fixing CVE-2024-52303 and CVE-2024-52304)
  • python-aiohttp-security
  • python-argcomplete
  • python-asyncssh
  • python-click
  • python-cytoolz
  • python-jira (fixing a Python 3.13 failure)
  • python-limits
  • python-line-profiler
  • python-mkdocs
  • python-model-bakery
  • python-pgspecial
  • python-pyramid (fixing CVE-2023-40587)
  • python-pythonjsonlogger
  • python-semantic-release
  • python-utils
  • python-venusian
  • pyupgrade
  • pyzmq
  • quart
  • six
  • sqlparse
  • twisted
  • vcr.py
  • vulture
  • yoyo
  • zope.configuration
  • zope.testrunner

I updated the team’s library style guide to remove material related to Python 2 and early versions of Python 3, which is no longer relevant to any current Python packaging work.

Other Python upstream work

I happened to notice a Twisted upstream issue requesting the removal of the deprecated twisted.internet.defer.returnValue, realized it was still used in many places in Debian, and went on a PR-filing spree informed by codesearch to try to reduce the future impact of such a change on Debian:

Other small fixes

Santiago Vila has been building the archive with make --shuffle (also see its author’s explanation). I fixed associated bugs in cccc (contributed upstream), groff, and spectemu.
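(For anyone unfamiliar: --shuffle appeared in GNU Make 4.4 and randomizes prerequisite ordering, which flushes out undeclared dependencies between targets. A quick illustration, with a made-up seed:)

make --shuffle            # randomize prerequisite order each run
make --shuffle=reverse    # deterministic reversed order
make --shuffle=12345      # fixed seed, to replay a specific failure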

I backported an upstream patch to putty to fix undefined behaviour that affected use of the “small keypad”.

I removed groff’s Recommends: libpaper1 (#1091375, #1091376), since it isn’t currently all that useful and was getting in the way of a transition to libpaper2. I filed an upstream bug suggesting better integration in this area.

02 January, 2025 12:16AM by Colin Watson

January 01, 2025

Tim Retout

Strauss as Pop Music

While watching the Vienna New Year’s Concert today, reading about its perhaps somewhat problematic origins, I was struck by the observation that the Strauss family’s polkas were seen as pop music during their lifetimes, not as serious as the work of proper classical composers, and so it took some time before the Vienna Philharmonic would actually play their work.

(Perhaps the space-themed interval today and the ballet dancers pretending to be a steam train were a continuation of the true spirit of this? It felt very Eurovision.)

I can’t decide if it’s remarkable that this year was the first time a female composer (Constanze Geiger) was represented at this concert, or if that is what you get when you set up a tradition of playing mainly Strauss?

01 January, 2025 11:36PM

Russ Allbery

2024 Book Reading in Review

In 2024, I finished and reviewed 46 books, not counting another three books I've finished but not yet reviewed and which will therefore roll over to 2025. This is slightly fewer books than the last couple of years, but more books than 2021. Reading was particularly spotty this year, with much of the year's reading packed into late November and December.

This was a year in which I figured out I was trying to do too much, but did not finish figuring out what to do about it. Reading and particularly reviewing reflected that, with long silent periods and then attempts to catch up. One of the goals for next year is to find a more sustainable balance for the hobbies in my life, including reading.

My favorite books I read this year were Ashley Herring Blake's Bright Falls sapphic romance trilogy: Delilah Green Doesn't Care, Astrid Parker Doesn't Fail, and Iris Kelly Doesn't Date. These are not perfect books, but they made me laugh, made me cry, and were impossible to put down. My thanks to a video from BookTuber Georgia Marie for the recommendation.

I Shall Wear Midnight was the best of the remaining Pratchett novels. It's the penultimate Tiffany Aching book and, in my opinion, the best. All of the elements of the previous books come together in snarky competence porn that was a delight to read.

The best book I read last year was Mark Lawrence's The Book That Wouldn't Burn, which much to my surprise did not make a single award list for its publication year of 2023. It was a tour de force of world-building that surprised me multiple times. Unfortunately, the sequel was not as good and I fear the series may be heading in the wrong direction. I am attempting to stay hopeful about the upcoming third and concluding book.

I didn't read much non-fiction this year, but the best of what I did read was Zeke Faux's Number Go Up about the cryptocurrency bubble. This book will not change anyone's mind, but it's a readable and entertaining summary of some of the more obvious cryptocurrency scams. I also had enough quibbles with it to write an extended review, which is a compliment of sorts.

The Discworld read-through is done, so I may either start or return to another series re-read in 2025. I have a huge backlog of all sorts of books, though, so we will see how the year goes. As always, I have no specific numeric goals, just a hope that I can make time for regular and varied reading and maintain a rhythm with writing reviews.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

01 January, 2025 08:11PM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities December 2024

Another short status update of what happened on my side last month. The larger blocks are the Phosh 0.44 release and landing the initial Cell Broadcast support in phosh. The rest is all just small bits of bug, fallout/regression fixing here and there.

phosh

  • Fix notification regression and release 0.43.1 (MR), 0.43.1
  • Make notification banner take less vertical space (MR)
  • Allow to unfullscreen apps from the overview (MR)
  • Fix a leak in the tests tripping up our ASAN CI (MR)
  • Use consistent prefix and portal name (MR). This allows us to properly name the portal
  • Undraft the initial Cell Broadcast implementation (MR)
  • Brush up and merge the 1y old background in overview MR (MR)
  • Monitor background file changes (MR)
  • Some style improvements prompted by the above MR plus some other cleanups (MR)
  • Release 0.44~rc1 and 0.44.0
  • Make new headers introduced in 0.44 private (MR)
  • Port prefs to GtkFileDialog (so we use the adaptive portal) (MR)
  • Make fake clock available in regular shell (MR)
  • Enable/disable autoconnect on wwan connection, otherwise they come back on after e.g. resume (MR)
  • Toggle top-bar transparency (MR)
  • Create thumbnails for screenshots (MR)

phoc

  • Don't crash on NULL output when using foreign-toplevel to fullscreen (MR)
  • Allow to force shell-reveal for debugging (MR)
  • Release 0.44~rc1 and 0.44.0
  • Don't forget to reset fullscreen state when tiling (MR)

phosh-mobile-settings

libphosh-rs

  • Update for 0.44~rc1 (MR)
  • Release 0.0.5 (MR)

phosh-osk-stub

  • Release 0.44~rc1
  • Drop experimental status, update screenshots and release 0.44.0

phosh-tour

pfs

  • Allow to sort by modification time (MR)
  • Allow to activate via <return> (MR)
  • Load thumbnails if they exist (MR)
  • Store sort-mode (MR)
  • Tweak file name display a bit (MR)

xdg-desktop-portal-phosh

  • Use phosh as portal name rather than pmp (which is confusing to users) (MR)
  • Update pfs subproject and adjust packaging (MR)
  • Release 0.44~rc1 and 0.44.0
  • Implement r/o mode (MR)

phog

  • Unbreak with recent phoc (MR)

Debian

git-buildpackage

  • Fix ci (MR)
  • Move upstream ci to separate pipeline and run type checks and collect test results (MR)
  • pristine-tar: handle upstream-signatures like import-orig (MR)
  • Run tests before salsa-ci pipeline and enable component tests (MR)
  • Run tests that need network access in CI, use ci-faire, etc (MR)
  • Bundle pipes module to avoid deprecation (MR)
  • Release 0.9.36
  • Fix --export-dir regression (MR)

wlr-randr

  • Document --toggle (MR)

python-dbusmock

  • Add mock for cell broadcast messages (MR)

livi

  • Use AdwAboutDialog (MR)
  • Release 0.3.0 (MR)

Chatty

  • Fix crash when saving attachments in Matrix chats (MR)

feedbackd

  • Add vibrate() API to allow e.g. games more haptic control (MR). This could also be used in browsers to implement the vibration API in e.g. Firefox.
  • Release 0.6.0 (MR)

libadwaita

  • Drop superfluous "makes" (MR)

phosh-ev

  • Add ci: (MR)

Reviews

This is not code by me but reviews of other people's code. The list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions!

  • phosh: Switch to AdwPreferencesDialog (MR)
  • phosh: Visual effect when swiping notification (MR)
  • phosh: Notification banner slide up animation (MR)
  • phosh: Slide down notifications when adding a new one (MR)
  • libphosh-rs: License symlinks (MR)
  • phosh-ev: Support for Nissan (MR) (got merged)
  • Debian: libvirt update (enabling nftables) (MR) (got merged)
  • Debian: libvirt update (disabling nftables again (among other things) (MR)
  • git-buildpackage: uscan --download-version (MR)
  • git-buildpackage: manpage improvements (MR)
  • git-buildpackage: improve intro (MR)
  • git-buildpackage: Add import-ref to gbp(1) (MR: https://salsa.debian.org/agx/git-buildpackage/-/merge_requests/31)

Help Development

Thanks a lot to all those who supported my work on this in 2024. Happy new year!

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 January, 2025 09:09AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

2024 — A Musical Retrospective

Another musical retrospective. If you enjoy this, I also did a 2022 and a 2023 one.

Albums

In 2024, I added 88 new albums to my collection — that's a lot!

This year again, I bought the vast majority of my music on Bandcamp. To be honest, I'm quite distraught by what's become of that website. Although it stays a wonderful place to buy underground music, Songtradr, the new owner of the platform, has been shown to be viciously anti-union.

Money continues to ruin the world, I guess.

Concerts

I continued to go to a lot of concerts in 2024 (25!). Over the past 3 years, I have been going to more and more concerts, and I think I've reached my "peak". An average of one concert every two weeks is quite a lot :)

If you also like music and concerts, but find yourself not going to as many as you would like, the real secret is not to be afraid to go to concerts alone. Going with friends is always fun, but if I restricted myself to only going to concerts in a group, I'd barely see a few each year.

Another piece of good advice is to bring a book or something else[1] to pass the time between sets. It can often take 30-45 minutes between sets for the artists to get their instruments ready, which can get quite boring if you just stand there and wait.

Anyway, here are the concerts I went to in 2024:

  • February 22nd-23rd-24th (Montreal Madhouse 2024): Scorching Tomb, Bruiserweight, Scaramanga, Cloned Apparition, Chain Block, Freezerburn, Béton Armé, Mil-Spec, NUKE, Friction, Reality Denied, SOV, Deathnap, Glint, Mulch, Stigmatism, Plus Minus, Puffer, Deadbolt, Apes, Pale Ache, Total Nada, Verify, Cross Check
  • March 16th: Kavinsky
  • April 11th: Agriculture
  • April 26th-27th (Oi! Fest 2024): Bishops Green, The Partisans, Mess, Fuerza Bruta, Empire Down, Unwanted Noise, Lion's Law, The Oppressed, Ultra Sect, Reckless Upstarts, 21 Gun Salute, Jail
  • May 4th: MASTER BOOT RECORD
  • May 16th: Wayfarer, Valdrin, Sonja
  • May 25th: Union Thugs
  • June 15th: Ultra Razzia, Over the Hill, Street Code, Mortier
  • September 5th-6th (Droogs Fest 2024): Skarface, Inspecter 7, 2 Stone 2 Skank, Francbâtards, Les Happycuriens, Perkele, Blanks 77, Violent Way, La Gachette, Jenny Woo
  • September 16th: Too Many Zoos
  • September 27th: The Slads, Young Blades, New Release, Mortier
  • October 2nd: Amorphis, Dark Tranquility, Fires in the Distance
  • October 7th: Jordi Savall & Hespèrion XXI, accompanied by La Capella Reial de Catalunya
  • October 11th-12th (Revolution Fest 2024): René Binamé, Dirty Old Mat, Union Thugs, Gunh Twei, Vermine Kaos, Inner Terrestrials, Ultra Razzia, Battery March, Uzu, One Last Thread, Years of Lead
  • October 19th (Varning from Montreal XVI): Coupe Gorge, Flash, Imploders, Young Blades, Tenaz, Mötorwölf
  • November 2nd: Kon-Fusion, Union Thugs
  • November 12th: Chat Pile, Agriculture, Traindodge
  • November 25th: Godspeed You! Black Emperor
  • November 27th: Zeal & Ardour, Gaerea, Zetra
  • December 7th: Perestroïka, Priors, White Knuckles, Tenaz

Shout out to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there.

See you all in 2025!


  1. I bought a Miyoo Mini Plus, a handheld Linux console running OnionOS, for that express reason. So far it's been great and I've been very happy to revisit some childhood classics. 

01 January, 2025 05:00AM by Louis-Philippe Véronneau

hackergotchi for Junichi Uekawa

Junichi Uekawa

Happy New Year.

Happy New Year. Spending most of my time in work and family. Kids are taking my time.

01 January, 2025 03:55AM by Junichi Uekawa

Russ Allbery

Review: Driving the Deep

Review: Driving the Deep, by Suzanne Palmer

Series: Finder Chronicles #2
Publisher: DAW
Copyright: 2020
ISBN: 0-7564-1512-8
Format: Kindle
Pages: 426

Driving the Deep is science fiction, a sequel to Finder (not to be confused with Finders, Emma Bull's Finder, or the many other books and manga with the same title). It stands alone and you could start reading here, although there will be spoilers for the first book of the series. It's Suzanne Palmer's second novel.

When Fergus Ferguson was fifteen, he stole his cousin's motorcycle to escape an abusive home, stashed it in a storage locker, and got the hell off of Earth. Nineteen years later, he's still paying for the storage locker and it's still bothering him that he never returned the motorcycle. His friends in the Shipyard orbiting Pluto convince him to go to Earth and resolve this ghost of his past, once and for all.

Nothing for Fergus is ever that simple. When the key he's been carrying all these years fails to open the storage unit, he hacks it open, only to find no sign of his cousin's motorcycle. Instead, the unit is full of expensive storage crates containing paintings by artists like Van Gogh. They're obviously stolen. Presumably the paintings also explain the irate retired police officer who knocks him out and tries to arrest him, slightly after the urgent message from the Shipyard AI telling him his friends are under attack.

Fergus does not stay arrested, a development that will not surprise readers of the previous book. He does end up with an obsessed and increasingly angry ex-cop named Zacker as an unwanted passenger. Fergus reluctantly cuts a deal with Zacker: assist him in finding out what happened to his friends, and Fergus will then go back to Earth and help track down the art thieves who shot Zacker's daughter.

It will be some time before they get back to Earth. Fergus's friends have been abducted by skilled professionals. What faint clues he can track down point to Enceladus, a moon of Saturn with a vast subsurface ocean. One simulation test with a desperate and untrustworthy employer later, Fergus is now a newly-hired pilot of an underwater hauler.

The trend in recent SFF genre novels has been towards big feelings and character-centric stories. Sometimes this comes in the form of found family, sometimes as melodrama, and often now as romance. I am in general a fan of this trend, particularly as a corrective to the endless engineer-with-a-wrench stories, wooden protagonists, and cardboard characters that plagued classic science fiction. But sometimes I want to read a twisty and intelligent plot navigated by a competent but understated protagonist and built around nifty science fiction ideas. That is exactly what Driving the Deep is, and I suspect this series is going to become my go-to recommendation for people who "just want a science fiction novel."

I don't want to overstate this. Fergus is not a blank slate; he gets the benefit of the dramatic improvement in writing standards and characterization in SFF over the past thirty years. He's still struggling with what happened to him in Finder, and the ending of this book is rather emotional. But the overall plot structure is more like a thriller or a detective novel: there are places to go, people to investigate, bases to infiltrate, and captives to find, so the amount of time spent on emotional processing is necessarily limited. Fergus's emotions and characterization are grace notes around the edges of the plot, not its center.

I thoroughly enjoyed this. Palmer has a light but effective touch with characterization and populates the story with interesting and distinguishable characters. The plot has a layered complexity that allows Fergus to keep making forward progress without running out of twists or getting repetitive. The motivations of the villains were not the most original, but they didn't need to be; the fun of the story is figuring out who the villains are and watching Fergus get out of impossible situations with the help of new friends. Finder was a solid first novel, but I thought Driving the Deep was a substantial improvement in both pacing and plot coherence.

If I say a novel is standard science fiction, that sounds like criticism of lack of originality, but sometimes standard science fiction is exactly what I want to read. Not every book needs to do something wildly original or upend my understanding of story. I started reading science fiction because I loved tense adventures on moons of Saturn with intelligent spaceships and neat bits of technology, and they're even better with polished writing, quietly competent characterization, and an understated sense of humor.

This is great stuff, and there are two more books already published that I'm now looking forward to. Highly recommended when you just want a science fiction novel.

Followed by The Scavenger Door.

Rating: 8 out of 10

01 January, 2025 02:36AM

December 31, 2024

hackergotchi for Chris Lamb

Chris Lamb

Favourites of 2024

Here are my favourite books and movies that I read and watched throughout 2024.

It wasn't quite as stellar a year for books as previous years: few of those books that make you want to recommend and/or buy them for all your friends. In subconscious compensation, perhaps, I reread a few classics (e.g. True Grit, Solaris), and I've almost finished my second read of War and Peace.

§

Books

  • Elif Batuman: Either/Or (2022)
  • Stella Gibbons: Cold Comfort Farm (1932)
  • Michel Faber: Under The Skin (2000)
  • Wallace Stegner: Crossing to Safety (1987)
  • Gustave Flaubert: Madame Bovary (1857)
  • Rachel Cusk: Outline (2014)
  • Sara Gran: The Book of the Most Precious Substance (2022)
  • Anonymous: The Railway Traveller’s Handy Book (1862)
  • Natalie Hodges: Uncommon Measure: A Journey Through Music, Performance, and the Science of Time (2022)
  • Gary K. Wolf: Who Censored Roger Rabbit? (1981)

§

Films

Recent releases

     † Seen at a 2023 festival.

Disappointments this year included Blitz (Steve McQueen), Love Lies Bleeding (Rose Glass), The Room Next Door (Pedro Almodóvar) and Emilia Pérez (Jacques Audiard), whilst the worst new film this year was likely The Substance (Coralie Fargeat), followed by Megalopolis (Francis Ford Coppola), Unfrosted (Jerry Seinfeld) and Joker: Folie à Deux (Todd Phillips).


Older releases

ie. Films released before 2023, and not including rewatches from previous years.

Distinctly unenjoyable watches included The Island of Dr. Moreau (John Frankenheimer, 1996), Southland Tales (Richard Kelly, 2006), Any Given Sunday (Oliver Stone, 1999) & The Hairdresser’s Husband (Patrice Leconte, 1990).

On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Solaris (Andrei Tarkovsky, 1972), Blade Runner (Ridley Scott, 1982), Apocalypse Now (Francis Ford Coppola, 1979) and Die Hard (John McTiernan, 1988).


31 December, 2024 03:58PM

Scarlett Gately Moore

KDE: Application snaps 24.12.0 release and more

https://kde.org/announcements/gear/24.12.0

I hope everyone had a wonderful holiday! Your present from me is shiny new application snaps! There are several new qt6 ports in this release. Please visit https://snapcraft.io/store?q=kde

I have also fixed the bug where the Krita snap was unable to open or save files. Please test --edge!

I am continuing work on core24 support and hope to be done before next release.

I do look forward to 2025! Begone 2024!

If you can help with gas, I still have 3 weeks of treatments to go. Thank you for your continued support.

https://gofund.me/573cc38e

31 December, 2024 02:34PM by sgmoore

Russell Coker

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

git.sesse.net goes IPv6-only

Following Dennis Schubert's post on how LLM bots are scraping the Internet continuously at full speed, I decided to take a look at my own server. If you exclude my chess site, which naturally has a lot of unusual organic traffic right now (due to the World Rapid and Blitz Chess Championship; every site update is a long-poll request), I am at a neat 86% of requests matching “crawler” or “bot” in the UA, and then more crawler traffic with spoofed UAs on top of that. (GPTBot at the top, of course, with 25x as many requests as Googlebot; MJ12Bot, which I'd never even heard of, is in second place.)

This is unsustainable, but also perhaps not a big surprise; these companies have tons of VC money (so they don't need to care much about being efficient), an insatiable lust for data, and questionable ethics, and they lack the sophistication in crawling that the traditional web search companies have built up over the last twenty years. So of course they will just follow every link and never really back down. Including, say, trying to fetch every single tree object from all of my Git mirrors, including large repositories like an old VLC fork.

I could add robots.txt, but there are tons of them, and it's honestly not a game I want to spend energy on. So I went for a simple solution: Remove the A record. git.sesse.net is now IPv6-only; if your infrastructure administrator is remotely competent, you'll have IPv6 by now, and if not, interested programmers (the main target for Git repositories, obviously) are probably able to fix that easily enough. (Curiously enough, there are still some sites coming in with bogus “Chrome” user agents over IPv4. Not even respecting DNS timeouts, of course…)
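(In zone-file terms the change is as small as it sounds; a hypothetical sketch using documentation addresses rather than the real ones:)

; before: dual-stack
; git.sesse.net.  3600  IN  A     192.0.2.10
git.sesse.net.    3600  IN  AAAA  2001:db8::10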

We'll see how it turns out; perhaps I'll need to reenable it if there's an influx of legitimate users wanting my software. But TBH, as we go into 2025 and IPv6 turns 30, enough is enough anyway. It will soon be more common to have IPv6 than not to have it (47% and counting), so it's time to move on.

Happy new year!

31 December, 2024 08:48AM

Russ Allbery

Review: Metal from Heaven

Review: Metal from Heaven, by August Clarke

Publisher: Erewhon
Copyright: November 2024
ISBN: 1-64566-099-0
Format: Kindle
Pages: 443

Metal from Heaven is industrial-era secondary-world fantasy with a literary bent. It is a complete story in one book, and I would be very surprised by a sequel. Clarke previously wrote the Scapegracers young-adult trilogy, which got excellent reviews and a few award nominations, as H.A. Clarke. This is his first adult novel.

Know I adore you. Look out over the glow. The cities sundered, their machines inverted, mountains split and prairies blazing, that long foreseen Hereafter crowning fast. This calamity is a promise made to you. A prayer to you, and to your shadow which has become my second self, tucked behind my eye and growing in tandem with me, pressing outwards through the pupil, the smarter, truer, almost bursting reason for our wrath. Do not doubt me. Just look. Watch us rise as the sun comes up over the beauty. The future stains the bleakness so pink. When my violence subsides, we will have nothing, and be champions.

Marney Honeycutt is twelve years old, a factory worker, and lustertouched. She works in the Yann I. Chauncey Ichorite Foundry in Ignavia City, alongside her family and her best friend, shaping the magical metal ichorite into the valuable industrial products of a new age of commerce and industry. She is the oldest of the lustertouched, the children born to factory workers and poisoned by the metal. It has made her allergic, prone to fits at any contact with ichorite, but also able to exert a strange control over the metal if she's willing to pay the price of spasms and hallucinations for hours afterwards.

As Metal from Heaven opens, the workers have declared a strike. Her older sister is the spokesperson, demanding shorter hours, safer working conditions, and an investigation into the health of the lustertouched children. Chauncey's response is to send enforcer snipers to kill the workers, including the entirety of her family.

The girl sang, "Unalone toward dawn we go, toward the glory of the new morning."

An enforcer shot her in the belly, and when she did not fall, her head.

Marney survives, fleeing into the city, swearing an impossible personal revenge against Yann Chauncey. An act of charity gets her a ticket on a train into the countryside. The woman who bought her ticket is a bandit who is on the train to rob it. Marney's ability to control ichorite allows her to help the bandits in return, winning her a place with the Highwayman's Choir who have been preying on the shipments of the rich and powerful and then disappearing into the hills.

The Choir's secret is that the agoraphobic and paranoid Baron of the Fingerbluffs is dead and has been for years. He was killed by his staff, Hereafterist idealists, who have turned his remote territory into an anarchist commune and haven for pirates and bandits. This becomes Marney's home and the Choir becomes her family, but she never forgets her oath of revenge or the childhood friend she left behind in the piles of bodies and to whom this story is narrated.

First, Clarke's writing is absolutely gorgeous.

We scaled the viny mountain jags at Montrose Barony's legal edge, the place where land was and wasn't Ignavia, Royston, and Drustland alike. There was a border but it was diffuse and hallucinatory, even more so than most. On legal papers and state maps there were harsh lines that squashed topography and sanded down the mountains into even hills in planter's rows, but here among the jutting rocks and craggy heather, the ground was lineless.

The rhythm of it, the grasp of contrast and metaphor, the word choice! That climactic word "lineless," with its echo of limitless. So good.

Second, this is the rarest of books: a political fantasy that takes class and religion seriously and uses them for more than plot drivers. This is not at all our world, and the technology level is somewhat ambiguous, but the parallels to the Gilded Age and Progressive Era are unmistakable. The Hereafterists that Marney joins are political anarchists, not in the sense of alternative governance structures and political theory sanitized for middle-class liberals, but in the sense of Emma Goldman and Peter Kropotkin. The society they have built in the Fingerbluffs is temporary, threatened, and contingent, but it is sincere and wildly popular among the people who already lived there.

Even beyond politics, class is a tangible force in this book. Marney is a factory worker and the child of factory workers. She barely knows how to read and doesn't magically learn over the course of the book. She has friends who are clever in the sense rewarded by politics and nobility, who navigate bureaucracies and political nuance, but that is not Marney's world. When, towards the end of the book, she has to deal with a gathering of high-class women, the contrast is stark, and she navigates that gathering only by being entirely unexpected.

Perhaps the best illustration of the subtlety of this is the terminology in the book for lesbian. Marney is a crawly, which is a slur thrown at people like her (and one of the rare fictional slurs that work exactly as the author intended) but is also simply what she calls herself. Whether or not it functions as a slur depends on context, and the context is never hard to understand. The high-class lesbians she meets later are Lunarists, and react to crawly as a vile and insulting word. They use language to separate themselves from both the insult and from the social class that uses it. Language is an indication of culture and manners and therefore of morality, unlike deeds, which admit endless justifications.

Conversation was fleeting. Perdita managed with whomever stood near her, chipper about every prettiness she saw, the flitting butterflies, the dappled light between the leaves, the lushness and the fragrance of untamed land, and her walking companions took turns sharing in her delight. It was infectious, how happy she was. She was going to slaughter millions. She was going to skip like this all the while.

The handling of religion is perhaps even better. Marney was raised a Tullian, which sits alongside two other fleshed-out fictional religions and sketches of several more. Tullians tend to be conservative and patriarchal, and Marney has a realistically complicated relationship with faith: sticking with some Tullian worship practices and gestures because they're part of who she is, feeling a kinship to other Tullians, discarding beliefs that don't fit her, and revising others.

Every major religion has a Hereafterist spin or reinterpretation that upends or reverses the parts of the religion that were used to prop up the existing social order and brings it more in line with Hereafterist ideals. We see the Tullian Hereafterist variation in detail, and as someone who has studied a lot of methods of reinterpreting Christianity, I was impressed by how well Clarke invents both a belief system and its revisionist rewrite. This is exactly how religions work in human history, but one almost never sees this subtlety in fantasy novels.

Marney's allergy to ichorite causes her internal dialogue to dissolve into hallucinatory synesthesia when she's manipulating or exposed to it. Since that's most of the book, substantial portions read like drug trips with growing body horror. I normally hate this type of narration, so it's a sign of just how good Clarke's writing is that I tolerated it and even enjoyed parts. It helps that the descriptions are irreverent and often surprising, full of unexpected metaphors and sudden turns. It's very hard not to quote paragraph after paragraph of this book.

Clarke is also doing a lot with gender that I don't feel qualified to comment in detail on, but it would not surprise me to see this book in the Otherwise Award recommendation list. I can think of three significant male characters, all of whom are well-done, but every other major character is female by at least some gender definition. Within that group, though, is huge gender diversity of the complicated and personal type that doesn't force people into defined boxes. Marney's sexuality is similarly unclassified and sometimes surprising. My one complaint is that I thought the sex scenes (which, to warn, are often graphic) fell into the literary fiction trap of being described so closely and physically that it didn't feel like anyone involved was actually enjoying themselves. (This is almost certainly a matter of personal taste.)

I had absolutely no idea how Clarke was going to end this book, and the last couple of chapters caught me by surprise. I'm still not sure what I think about the climax. It's not the ending that I wanted, but one of the merits of this book is that it never did what I thought I wanted and yet made me enjoy the journey anyway. It is, at least, a genre ending, not a literary ending: The reader gets a full explanation of what is going on, and the setting is not static the way that it so often is in literary fiction. The characters can change the world, for good or for ill. The story felt frustrating and incomplete when I first finished it, but I haven't stopped thinking about this book and I think I like the shape of it a bit more now. It was certainly unexpected, at least by me.

Clarke names Dhalgren as one of their influences in the acknowledgments, and yes, Metal from Heaven is that kind of book. This is the first 2024 novel I've read that felt like the kind of book that should be on award shortlists. I'm not sure it was entirely successful, and there are parts of it that I didn't like or that weren't for me, but it's trying to do something different and challenging and uncomfortable, and I think it mostly worked. And the writing is so good.

She looked like a mythic princess from the old woodcuts, who ruled nature by force of goodness and faith and had no legal power.

Metal from Heaven is not going to be everyone's taste. If you do not like literary fantasy, there is a real chance that you will hate this. I am very glad that I read it, and also am going to take a significant break from difficult books before I tackle another one. But then I'm probably going to try the Scapegracers series, because Clarke is an author I want to follow.

Content notes: Explicit sex, including sadomasochistic sex. Political violence, mostly by authorities. Murdered children, some body horror, and a lot of serious injuries and death.

Rating: 8 out of 10

31 December, 2024 03:12AM

kpcyrd

2024 wrapped

Dear blog. This post is inspired by an old friend of mine who has been writing these for the past few years. I meant to do this for a while now, but ended up not preparing anything, so this post is me writing it from memory. There’s likely stuff I forgot; being gentle with myself, I’ll probably just permit myself to complete this list over the next couple of days.

I hate bragging, I try to not depend on external validation as much as possible, and being the anti-capitalist that I am, I try to be content with knowing I’m “doing good in the background”. I don’t think people owe me for the work I did, I don’t expect anything in return, and it’s my way of giving back to the community and the people around me. Consider us even.

That being said, I:

  • Uploaded 689 packages to Arch Linux
    • Most of which being reproducible, meaning I provably didn’t abuse my position of compiling the binaries
    • 59 of those are signal-desktop
    • 34 of those are metasploit
  • Made 28 commits in Alpine Linux’ aports
    • 24 of those being package releases
  • Made 43 uploads to Debian
    • All of them being related to my work in the debian-rust team, that I’ve been a part of since 2018
  • Made 5 commits in NixOS’ nixpkgs
  • Made 1 commit in homebrew-core
  • Was one of the people involved in rolling out _FORTIFY_SOURCE=3 compiler hardening in Arch Linux, for the entire operating system. I wrote lists, tools, patches and my work got me quoted in an “Additional Considerations” section of the OpenSSF compiler hardening guide for C and C++. There are now more, stricter buffer-overflow checks at runtime that hopefully make your computer harder to exploit in 2025 (a small illustration follows after this list).
  • Was one of the people behind the launch of reproduce.debian.net which is analogous to reproducible.archlinux.org that I also helped create 5 years ago. Reproducing these packages (and allowing anybody else to do the same) proves the binaries have not been backdoored by the build server (or whoever compiled them), and if there’s a backdoor, you can likely find it in the source code.
  • Integrated librustls, a memory safe TLS implementation, into Arch Linux’ C dynamic linking ecosystem and became one of the authors of the rustls curl TLS backend
  • In response to the XZ Jia Tan incident I created whatsrc.org, a source code indexing project. It doesn’t solve anything in itself, but it’s framing the concept of source code inputs and how to reason about them in a way that I consider promising. It also documents and makes it very apparent what specifically is the source code we’re putting into our computers, that would benefit from code reviews.
  • Contributed to the Reproducible Builds mailing list 33 times
  • Volunteered at a soldering workshop for beginners for the 3rd year in a row, with people describing me as a good teacher, giving very calm vibes and having endless patience
  • Reverse engineered the signal username and QR-code feature
  • Rewrote my tooling for apt.vulns.xyz to use repro-env, the .deb files can now be verified through reproducible builds, and I switched to static Rust binaries because I had trouble targeting multiple Debian/Ubuntu releases with my previous tooling
  • Wrote 0 blog posts (besides this one)
  • Wrote 5.937 messages in irc channels
  • Got mentioned 1.664 times on irc
  • Attended FOSDEM, Fusion, the Reproducible Builds summit, Hackjunta 2024#2 and 38c3
  • Made and printed 8 new sticker designs, and a custom hoodie
  • Mastered the art of pragmatic zaza cultivation and processing
  • Got 2 new piercings and 2-3 new tattoos (depending on how you count them)

Thanks to everybody who has been part of my human experience, past or present. Especially those who’ve been closest.

cheers,
kpcyrd ✨

31 December, 2024 12:00AM

December 30, 2024

hackergotchi for Steve Kemp

Steve Kemp

The CP/M emulator runs on Windows, maybe!

Today I made a new release of my CP/M emulator and I think that maybe now it will run on Microsoft Windows. Unfortunately I cannot test it!

A working CP/M implementation needs to provide facilities for reading input from the console, both reading a complete line of text and individual keystrokes. These input functions need to handle several different types of input:

  • Blocking, waiting for input to become available.
  • Non-blocking, returning any pending input if it is available otherwise nothing.
  • With echo, so the user can see what they typed.
  • Without echo, so the keys are returned but not displayed to the user.

In the past we used a Unix-specific approach to handle the enabling and disabling of keyboard echoing (specifically, we executed the stty binary to enable/disable echo), but this release adds a more portable solution based around termbox-go, which is the new default and should allow our emulator to work on Microsoft Windows systems.
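As a rough illustration, and explicitly not code from the emulator itself, here is a minimal sketch of blocking and non-blocking keystroke reads built on termbox-go. Note that termbox puts the terminal into raw mode, so "with echo" becomes a matter of printing the received character back yourself:

package main

import (
    "fmt"
    "time"

    "github.com/nsf/termbox-go"
)

func main() {
    if err := termbox.Init(); err != nil {
        panic(err)
    }
    defer termbox.Close()

    // Blocking read: PollEvent waits until an event arrives.
    if ev := termbox.PollEvent(); ev.Type == termbox.EventKey {
        fmt.Printf("read key: %c\r\n", ev.Ch)
    }

    // Non-blocking read: run PollEvent on a goroutine and give up
    // after a short timeout if no input is pending.
    events := make(chan termbox.Event, 1)
    go func() { events <- termbox.PollEvent() }()
    select {
    case ev := <-events:
        if ev.Type == termbox.EventKey {
            fmt.Printf("pending key: %c\r\n", ev.Ch)
        }
    case <-time.After(10 * time.Millisecond):
        fmt.Print("no pending input\r\n") // raw mode wants \r\n
    }
}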

We always had the ability to select between a number of different output drivers, and as of this release we can now select between multiple input drivers too - with the new portable option being the default. This has been tested on MacOS X systems, as well as GNU/Linux, but sadly I don't have access to Windows to test that.

Fingers crossed it's all good now though, happy new year!

30 December, 2024 08:45PM

Russ Allbery

Review: House in Hiding

Review: House in Hiding, by Jenny Schwartz

Series: Uncertain Sanctuary #2
Publisher: Jenny Schwartz
Copyright: October 2020
Printing: September 2024
ASIN: B0DBX6GP8Z
Format: Kindle
Pages: 196

House in Hiding is the second book of a self-published space fantasy trilogy that started with The House That Walked Between Worlds. I read it as part of the Uncertain Sanctuary omnibus, which is reflected in the sidebar metadata.

At the end of the previous book, Kira had gathered a motley crew for her house and discovered that she had drawn the attention of some rather significant galactic powers. Now, with the help of her new (hopefully) friends, she has to decide what role she's going to play in the galaxy.

Or she can dither a lot, ruminate repeatedly on the same topics, and flail about randomly. That's also an option.

This is slightly unfair. By the second half of the book, the series plot is beginning to cohere around two major problems: what is happening to the magic flows in the universe, and who killed Kira's parents. But apparently there was a limit to my enjoyment of the chaotic decisiveness I praised in my review of the last book, and I hit that limit around the middle of this book. I am interested in the questions of ethics, responsibility, and public image that this series is raising. I'm just not convinced that Schwartz is going to provide satisfying answers.

One thing I do appreciate about this book is that it acknowledges that politics exist and that taking powerful people at face value is a bad idea. You would think that this would be a low bar, and yet it's depressing how many fantasy novels signal the trustworthiness of a character via some variation of "I looked into his eyes and shook his hand," or at least expect readers to be surprised by the inevitable betrayals. Schwartz does not make that mistake; after getting a call from a powerful player in galactic politics, the characters take apart everything that was said while assuming it could be attempted manipulation, which is the correct initial response.

My problem comes after that. I like reading about competent characters with a plan, and these are absurdly powerful but very naive characters with no plan. This is realistic for the situation Kira has been thrust into, but it's not that entertaining to read about.

I think the root of my problem is that there are some fundamental storytelling problems here that Schwartz is struggling to fix. The basic theory of story says that you need a protagonist, a setting, a conflict, and a plot. Schwartz has a good protagonist, one great supporting character and several adequate ones, and an enjoyably weird setting. I think she's working her way up to having a plot, although usually it's best for the plot to show up before the middle book of the series. What she doesn't have is a meaningful conflict. It's not entirely clear to either the reader or to Kira why Kira cares about what's happening.

You would not think this would be a problem given that Kira's parents were murdered before the start of the first book. That's a classic conflict that's driven more books than I think anyone could count. It's not what Kira has cared about up to this point, however; she got away from Earth and has shown no sign of wanting to go back or identify the people who killed her parents, perhaps because she mostly blames herself. Instead, she's stumbling across other problems in the universe that other people would like her to care about. She occasionally feels like she ought to care about them because they involve her new friends or because she wants to be a good person, but they have very little dramatic oomph. "I'm a sorcerer and vaguely want the universe to be a better place" turns out to not work that well as a source of dramatic tension.

This lack of conflict is somewhat fascinating because it's so different than most fantasy novels. If Schwartz were more aware of how oddly disconnected her protagonist is from the story conflict, I think there could be a thoughtful, if odd, psychological novel in here about one's ethical responsibilities if one suddenly had vast power and no strong attachments to the world. Kira does gesture occasionally in that direction, but there's no real meat to her musings. Instead, her lack of motivation is solved through one of the hoariest tropes in fiction: children in danger.

I really want to like this series, and I still love the House, but this book was not good. The romance that I was delighted to not be subjected to in the first book appears to be starting (sigh), the political maneuvering that happens here is only mildly interesting and not believably competent, and the book concludes in Kira making an egregiously and blatantly stupid mistake that should have resulted in one of her friends asking her what the hell she was doing. Some setup happens, and it seems likely that the final book will have a clear conflict and plot, but this middle book was a disappointing mess.

These books are fast to read and lightly entertaining between other things, and the House still has me invested enough in this universe that I'll read the last book in the omnibus. Be warned, though, that the middle book is more a collection of anecdotes than a story, and there's only so much of Kira showing off her power I can take without a conflict and a plot.

Followed by The House That Fought.

Rating: 5 out of 10

30 December, 2024 03:54AM

December 29, 2024

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Accessing Atari ST disk images on Linux

This post leverages support for Atari Hard Disk Interface Partition (AHDI) partition tables in the Linux kernel, activated by default in Debian, and in the parted partition editor.

Accessing the content of a partition using a user mounted loop device

This is the easiest procedure and should be tried first. Depending on whether your Linux kernel has support for AHDI partition tables, and on the size of the FAT filesystem on the partition, this procedure might not work. In that case, try the procedure using mtools further below.

Attach a disk image called hd80mb.image to a loop device:

$ udisksctl loop-setup --file hd80mb.image
Mapped file hd80mb.image as /dev/loop0

Notice how the kernel detected the partition table:

$ dmesg | grep loop0
[160892.151941] loop0: detected capacity change from 0 to 164138
[160892.171061]  loop0: AHDI p1 p2 p3 p4

Inspect the block devices created for each partition:

$ lsblk | grep loop0

If the partitions are not already mounted by udisks2 under /media/, mount them manually:

$ sudo mount /dev/loop0p1 /mnt/
$ ls /mnt/
SHDRIVER.SYS

When you are finished copying data, unmount the partition, and detach the loop device.

$ sudo umount /mnt
$ udisksctl loop-delete --block-device /dev/loop0

Accessing the content of a partition using mtools and parted

This procedure uses the mtools package and the support for the AHDI partition scheme in the parted partition editor.

Display the partition table, with partitions offsets in bytes:

$ parted st_mint-1.5.img -- unit B print
...
Partition Table: atari
Disk Flags: 

Number  Start       End         Size        Type     File system  Flags
 1      1024B       133170175B  133169152B  primary               boot
 2      133170176B  266339327B  133169152B  primary
 3      266339328B  399508479B  133169152B  primary
 4      399508480B  532676607B  133168128B  primary

Set some Atari-friendly mtools options:

$ export MTOOLS_SKIP_CHECK=1
$ export MTOOLS_NO_VFAT=1

List the content of the partition, passing the byte offset of the partition as a parameter. For instance, here we are interested in the second partition, and the parted output above indicates that this partition starts at byte offset 133170176 in the disk image.

$ mdir -s -i st_mint-1.5.img@@133170176
 Volume in drive : has no label
Directory for ::/

demodata          2024-08-27  11:43 
        1 file                    0 bytes

Directory for ::/demodata

We can also use the command mcopy with a similar syntax to copy data from and to the disk image. For instance we copy a file named file.zip to the root directory of the second partition:

$ mcopy -s -i st_mint-1.5.img@@133170176 file.zip ::

Recompiling mtools to access large partitions

With disk images having large AHDI partitions (well, considered large in 1992 …), you might encounter this error:

$ mdir -s -i cecile-falcon-singlepart-1GB.img@@1024
init: sector size too big
Cannot initialize '::'

This error is caused by the non-standard large logical sectors that the TOS uses for large FAT partitions (see the Atari Hard Disk Filesystem reference on page 41, TOS partitions size)

We can inspect the logical sector size using fsck tools:

$ udisksctl loop-setup --file cecile-falcon-singlepart-1GB.img
$ sudo fsck.fat -Anv /dev/loop0p1
fsck.fat 4.2 (2021-01-31)
...
Media byte 0xf8 (hard disk)
16384 bytes per logical sector

To access the partition, you need to patch mtools so that it supports a logical sector size of 16384 bytes: change the MAX_SECTOR macro from 8192 to 16384 in msdos.h in the mtools source and recompile.
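A minimal sketch of the steps, assuming MAX_SECTOR is a plain #define in msdos.h and that you are inside an unpacked mtools source tree (mtools uses a standard autoconf build, which installs to /usr/local by default):

$ sed -i 's/#define MAX_SECTOR 8192/#define MAX_SECTOR 16384/' msdos.h
$ ./configure && make
$ sudo make install

A rebuilt mtools is then able to access the partition: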

$ /usr/local/bin/mdir -s -i cecile-falcon-singlepart-1GB.img@@1024
 Volume in drive : has no label
Directory for ::/

CECILE   SYS      8462 1998-03-27  22:42 
NEWDESK  INF       804 2024-09-09   9:23 
        2 files               9 266 bytes
                      1 072 463 872 bytes free

29 December, 2024 08:26PM by Manu

Russ Allbery

Review: The Last Hour Between Worlds

Review: The Last Hour Between Worlds, by Melissa Caruso

Series: The Echo Archives #1
Publisher: Orbit
Copyright: November 2024
ISBN: 0-316-30364-X
Format: Kindle
Pages: 388

The Last Hour Between Worlds is urban, somewhat political high fantasy with strong fae vibes. It is the first book of a series, but it stands alone quite well.

Kembral Thorne is a Hound, a member of the guild that serves as guards, investigators, and protectors. Kembral's specialty is Echo retrieval: rescues of people and animals who have fallen through a weak spot in reality into one of the strange, dangerous, and malleable layers called Echoes. Kem once rescued a dog from six layers down, an almost unheard-of feat.

Kem is also a new single mother, which means her past two months have been spent in a sleep-deprived haze revolving exclusively around her much-beloved infant. Dona Marjorie Swift's year-turning party is the first time she's been out without Emmi since she gave birth, and she's only there because her sister took the child and practically shoved her out the door. Now, she's desperately trying to remember how to be social and normal, which is not made easier by the unexpected presence of Rika at the party.

Rika Nonesuch is not a Hound. She's a Cat, a member of the guild of thieves and occasional assassins. They are the nemesis of the Hounds, but in a stylized and formalized way in which certain courtesies are expected. (The politics of this don't really make sense; you just have to go with it.) Kem has complicated feelings about Rika's grace, banter, and intoxicating perfume, feelings that she thought might be reciprocated until Rika drugged her during an apparent date and left her buried under a pile of garbage. She was not expecting Rika to be at this party and is definitely not ready to have a conversation with her.

This emotional turmoil is rudely interrupted by the death of nearly everyone at the party via an Echo poison, the appearance of a dark figure driving a black sword into someone, and the descent of the entire party into an Echo.

This was one of those books that kept getting better the farther into the book I read. I was a bit leery at first because the publisher's blurb made it sound more like horror than I prefer, but this is more the disturbing strangeness of fae creatures than the sort of gruesomeness, disgust, or body horror that I find off-putting. Most importantly, the point of this book is not to torture the characters or scare the reader. It's instead structured a bit like a murder mystery, but one whose resolution requires working out obscure fantasy rules and hidden political agendas. One of the currencies in the world of Echoes is blood, but another is emotion, revelation, and the stories that bring both, and Caruso focuses the story more on that aspect than on horrifying imagery.

Rika frowned. "Resolve it? How?"

"I have no idea." I couldn't keep my frustration from leaking through. "Might be that we have to delve deep into our own hearts to confront the unhealed wounds we've carried with us in secret. Might be that we have to say their names backward, or just close our eyes and they'll go away. Echoes never make any damned sense."

Rika made a face. "We'd better not have to confront our unhealed wounds, or I'm leaving you to die."

All of The Last Hour Between Worlds is told in the first person from Kem's perspective, but Rika is the best character in this book. Kem is a rather straightforward, dogged, stubborn protector; Rika is complicated, selfish, conflicted, and considerably more dynamic. The first obvious twist in her background I spotted so long before Kem found out that it was a bit frustrating, but there were multiple satisfying twists after that. As advertised in the blurb, there's a sapphic romance angle here, but it's the sort that comes from a complicated friendship and a lot of mutual respect rather than love at first sight. Some of their relationship conflict is driven by misunderstanding, but the misunderstanding happens before the novel begins, which means the reader doesn't have to sit through the bit where one yells at the characters for being stupid.

It helps that the characters have something concrete to do, and that driving plot problem is multi-layered and satisfying. Each time the party falls through a layer of reality, it's mostly reset to the start of the book, but the word "mostly" is hiding a lot of subtlety. Given the clock at the start of each chapter and the blurb (if one read it), the reader can make a good guess that the plot problem will not be fully resolved until the characters fall quite deep into the Echoes, but the story never felt repetitive the way that some time loop stories can. As the characters gain more understanding, the problems change, the players change, and they have to make several excursions into the surrounding world.

This is the sort of fantasy that feels a bit like science fiction. You're thrown into a world with a different culture and different rules that are foreign to the reader and natural to the characters. Part of the fun of reading is figuring out the rules, history, and backstory while watching the characters try to solve the puzzles they're faced with.

The writing is good but not great. Characterization was good enough for a story primarily focused on action and puzzle-solving, but it was a bit lacking in subtlety. I think Caruso's strengths showed most in the world design, particularly the magic system and the rules followed by the Echo creatures. The excursions outside of the somewhat-protected house struck a balance between eeriness and comprehensibility that reminded me of T. Kingfisher or Sandman. The human politics were unfortunately less successful and rested on some tired centrist cliches. Thankfully, this was not the main point of the story.

I should also warn that there is a lot of talk about babies. Kem's entire identity at the start of the novel, to the point of incessant monologue, is "new mother." This is not a perspective we get very often in fantasy, and Kem eventually finds a steadier balance between her bond with her daughter and the other parts of her life. I think some readers will feel very seen. But Caruso leans hard into maternal bonding. So hard. If you don't want to read about someone who is deliriously obsessed with their new child, you may want to skip this one.

Right after I finished this book, I thought it was amazing. Now that I've had a few days to think about it, the lack of subtlety and the facile human politics brought it down a notch. I'm a science fiction reader at heart, so I loved the slow revelation of mechanics; the reader starts the story by knowing that Kem can "blink step" but not knowing what that means, and by the end of the story one not only knows but has opinions about its limitations, political implications, and interactions with other forms of magic. The Echo worlds are treated similarly, and this type of world-building is my jam. But the cost is that the human characters, particularly the supporting cast, don't get the same focus and therefore are a bit straightforward and obvious. The subplot with Dona Vandelle was particularly annoying.

Ah well. Kem and Rika's relationship did work, and it's the center of the book.

If you like fantasy mechanics but are a bit leery of fae stories because they feel too symbolic or arbitrary, give this a try. It's the most satisfyingly constructed fae story that I've read in a long time. It's not great literary fiction, but it's also not trying to be; it's a puzzle adventure, and a well-executed one. Recommended, and I will definitely be reading the sequel.

Content notes: Lots of violent death and other physical damage, creepy dream worlds with implied but not explicit horror, and rather a lot of blood.

Followed by The Last Soul Among Wolves, not yet published at the time I wrote this review.

Rating: 8 out of 10

29 December, 2024 03:40AM

December 28, 2024

hackergotchi for Thomas Goirand

Thomas Goirand

Running a Lenovo Legion pro 7 laptop under Debian

As I was tired of long build times, I convinced my boss to buy me a Lenovo Legion pro 7. The reason: this laptop has an AMD Ryzen 9 7945HX with 16 cores (32 threads). This greatly reduces the time I spend just waiting for my laptop to compile or run unit tests, especially for big packages like Ceph, OpenVSwitch, and so on.

When buying it, I knew it would not be a good fit for Debian, as this type of laptop is aimed at gaming, and its support under Linux is rather bad. I wish Lenovo had other policies, but that is the way it is: if you're a Linux user, you're apparently not supposed to need a big CPU.

Anyways, I have slowly been able to fix all the issues over this year. In this blog post I'll explain how I fixed each problem, in the hope it can be useful to others, and I'll explain what the src:lenovolegionlinux package (which I now maintain in Debian) does.

Video

The laptop comes with an nVidia RTX-4080 and a Radeon. I quickly tried the Radeon, but couldn't make it work with an external monitor, so I gave up on it, disabled it, and now I'm using the proprietary nVidia driver from non-free. I don't like it: the nVidia card drains too much power, and I don't care at all about 3D acceleration. I would have preferred an Intel board, but there was no choice: all laptops with this kind of CPU come with a gamer's 3D card. Anyways, apart from the power issue, it works out well.

Fan control

This sounds like a non-issue, but it is a huge one. Indeed, without controlling the fan, it is impossible to get the full potential out of the CPUs, which otherwise throttle. One may end up using the laptop at a few hundred MHz instead of 5 GHz+. More on this later.

Sound

It took me a really long time to figure out what to do. While the sound card works out of the box, the issue was that my laptop came with a TI (Texas Instruments) speaker firmware that isn't on by default. I suppose the purpose is to save power when it isn't in use. Anyways, to have sound working in Debian, one needs to run at least kernel 6.10, which for me means running the Bookworm backport, so that there's a kernel module for the speakers. But that's not all. The speakers also need a proprietary firmware in /lib/firmware/TAS2XXX38*.bin. I was able to find that in the ti.com forum, though as I tried so many packages, I wouldn't be able to tell which one was the correct one. Once that was done, the firmware needs to be initialized through the i2c interface. I found a script that does that, which I pushed into my lenovolegionlinux package (see below).

WiFi

WiFi worked out of the box for me; it just wouldn't wake up if I closed the laptop lid. The following, in /etc/modprobe.d/rtw8852be.conf, fixed it for me:

options rtw89_pci disable_aspm_l1=y disable_aspm_l1ss=y
options rtw89_core disable_ps_mode=y

lenovolegionlinux package

I came across https://github.com/johnfanv2/LenovoLegionLinux, which I packaged. The result is now 4 binary packages: lenovolegionlinux-dkms, which provides the kernel module for accessing the fan control, and python3-legion-linux, which provides legion_cli and legion_gui, written in Python, that make it possible to control the kernel module. I often use sudo legion_gui, click on "Other options" and then switch the power profile from quiet to balanced. Many things in this GUI do not work for me, like the fancurve thingy, but they should work for other flavors of Legion laptops. Please feel free to contribute. There's also legiond, which provides a daemon for setting up the fan curve on wake-up. And finally, I pushed my i2c speaker script into a new lenovolegionlinux-sound Debian binary package that I have just uploaded today, in the hope it may be useful for others.

Conclusion

Finally, almost everything is (almost) working as expected. Just my webcam (lsusb says it's a Luxvisions Innotech Limited Integrated Camera) went dark at some point (it did work previously). It now acts as if it were working, but just transmits a black picture. If anyone knows how to fix this, please tell me. Also, I only get 40 minutes of battery time if I'm lucky; I hope this can be fixed. But overall, I'm happy with the laptop.

Thanks to Ding Shenghao for his support of many people in the ti.com forum. Thanks to the people maintaining the LenovoLegionLinux that helped me a lot writing this Debian package.

Please try lenovolegionlinux in Debian and report issues, and help me improve it. It is in Salsa's debian namespace in the hope that others may push contributions.

28 December, 2024 02:55PM by Goirand Thomas

Enrico Zini

Disable spellchecker popup on Android

On Android, there's a spellchecker popup that occasionally appears over the keyboard, getting very annoyingly in the way. See for example this unanswered question with screenshots.

It looks like a feature of the keyboard, but it's not, and so I looked and I looked and I could not find how to turn it off.

The answer is to look for how to disable the spellchecker in the keyboard section of the android system settings, not in the android keyboard app settings.

See for example this answer on stackexchange.

28 December, 2024 12:47PM

December 27, 2024

hackergotchi for Wouter Verhelst

Wouter Verhelst

Writing an extensible JSON-based DSL with Moose

At work, I've been maintaining a perl script that needs to run a number of steps as part of a release workflow.

Initially, that script was very simple, but over time it has grown to do a number of things. And then some of those things did not need to be run all the time. And then we wanted to do this one exceptional thing for this one case. And so on; eventually the script became a big mess of configuration options and unreadable flow, and so I decided that I wanted it to be more configurable. I sat down and spent some time on this, and eventually came up with what I now realize is a domain-specific language (DSL) in JSON, implemented by creating objects in Moose, extensible by writing more object classes.

Let me explain how it works.

In order to explain, however, I need to explain some perl and Moose basics first. If you already know all that, you can safely skip ahead past the "Preliminaries" section that's next.

Preliminaries

Moose object creation, references.

In Moose, creating a class is done something like this:

package Foo;

use v5.40;
use Moose;

has 'attribute' => (
    is  => 'ro',
    isa => 'Str',
    required => 1
);

sub say_something {
    my $self = shift;
    say "Hello there, our attribute is " . $self->attribute;
}

The above is a class that has a single attribute called attribute. To create an object, you use the Moose constructor on the class, and pass it the attributes you want:

use v5.40;
use Foo;

my $foo = Foo->new(attribute => "foo");

$foo->say_something;

(output: Hello there, our attribute is foo)

This creates a new object with the attribute attribute set to foo. The attribute accessor is a method generated by Moose, which functions both as a getter and a setter (though in this particular case we made the attribute "ro", meaning read-only, so while it can be set at object creation time it cannot be changed by the setter anymore). So yay, an object.

And it has methods, things that we set ourselves. Basic OO, all that.

One of the peculiarities of perl is its concept of "lists". Not to be confused with the lists of python -- a concept that is called "arrays" in perl and is somewhat different -- in perl, lists are enumerations of values. They can be used as initializers for arrays or hashes, and they are used as arguments to subroutines. Lists cannot be nested; whenever a hash or array is passed in a list, the list is "flattened", that is, it becomes one big list.

This means that the below script is functionally equivalent to the above script that uses our "Foo" object:

use v5.40;
use Foo;

my %args;

$args{attribute} = "foo";

my $foo = Foo->new(%args);

$foo->say_something;

(output: Hello there, our attribute is foo)

This creates a hash %args wherein we set the attributes that we want to pass to our constructor. We set one attribute in %args, the one called attribute, and then use %args and rely on list flattening to create the object with the same attribute set (list flattening turns a hash into a list of key-value pairs).

Perl also has a concept of "references". These are scalar values that point to other values; the other value can be a hash, a list, or another scalar. There is syntax to create a non-scalar value at assignment time, called anonymous references, which is useful when one wants to remember non-scoped values. By default, references are not flattened, and this is what allows you to create multidimensional values in perl; however, it is possible to request list flattening by dereferencing the reference. The below example, again functionally equivalent to the previous two examples, demonstrates this:

use v5.40;
use Foo;

my $args = {};

$args->{attribute} = "foo";

my $foo = Foo->new(%$args);

$foo->say_something;

(output: Hello there, our attribute is foo)

This creates a scalar $args, which is a reference to an anonymous hash. Then, we set the key attribute of that anonymous hash to foo (note the use of the arrow operator here, which indicates that we want to dereference a reference to a hash), and create the object using that reference, requesting hash dereferencing and flattening by using a double sigil, %$.

As a side note, objects in perl are references too, hence the fact that we have to use the dereferencing arrow to access the attributes and methods of Moose objects.

Moose attributes don't have to be strings or even simple scalars. They can also be references to hashes or arrays, or even other objects:

package Bar;

use v5.40;
use Moose;

extends 'Foo';

has 'hash_attribute' => (
    is => 'ro',
    isa => 'HashRef[Str]',
    predicate => 'has_hash_attribute',
);

has 'object_attribute' => (
    is => 'ro',
    isa => 'Foo',
    predicate => 'has_object_attribute',
);

sub say_something {
    my $self = shift;

    if($self->has_object_attribute) {
        $self->object_attribute->say_something;
    }

    $self->SUPER::say_something unless $self->has_hash_attribute;

    say "We have a hash attribute!"
}

This creates a subclass of Foo called Bar that has a hash attribute called hash_attribute, and an object attribute called object_attribute. Both of them are references; one to a hash, the other to an object. The hash ref is further limited in that it requires that each value in the hash must be a string (this is optional but can occasionally be useful), and the object ref in that it must refer to an object of the class Foo, or any of its subclasses.

The predicates used here are extra subroutines that Moose provides if you ask for them, and which allow you to see if an object's attribute has a value or not.

The example script would use an object like this:

use v5.40;
use Bar;

my $foo = Foo->new(attribute => "foo");

my $bar = Bar->new(object_attribute => $foo, attribute => "bar");

$bar->say_something;

(output: Hello there, our attribute is foo)

This example also shows object inheritance, and methods implemented in child classes.

Okay, that's it for perl and Moose basics. On to...

Moose Coercion

Moose has a concept of "value coercion". Value coercion allows you to tell Moose that if it sees one thing but expects another, it should convert it using a passed subroutine before assigning the value.

That sounds a bit dense without an example, so let me show you how it works. Reimagining the Bar package, we could use coercion to eliminate one object creation step from the creation of a Bar object:

package "Bar";

use v5.40;

use Moose;
use Moose::Util::TypeConstraints;

extends "Foo";

coerce "Foo",
    from "HashRef",
    via { Foo->new(%$_) };

has 'hash_attribute' => (
    is => 'ro',
    isa => 'HashRef',
    predicate => 'has_hash_attribute',
);

has 'object_attribute' => (
    is => 'ro',
    isa => 'Foo',
    coerce => 1,
    predicate => 'has_object_attribute',
);

sub say_something {
    my $self = shift;

    if($self->has_object_attribute) {
        $self->object_attribute->say_something;
    }

    $self->SUPER::say_something unless $self->has_hash_attribute;

    say "We have a hash attribute!"
}

Okay, let's unpack that a bit.

First, we add the Moose::Util::TypeConstraints module to our package. This is required to declare coercions.

Then, we declare a coercion to tell Moose how to convert a HashRef to a Foo object: by using the Foo constructor on a flattened list created from the hashref that it is given.

Then, we update the definition of the object_attribute to say that it should use coercions. This is not the default, because going through the list of coercions to find the right one has a performance penalty, so if the coercion is not requested then we do not do it.

This allows us to simplify declarations. With the updated Bar class, we can simplify our example script to this:

use v5.40;

use Bar;

my $bar = Bar->new(attribute => "bar", object_attribute => { attribute => "foo" });

$bar->say_something

(output: Hello there, our attribute is foo)

Here, the coercion kicks in because the value of object_attribute, which is supposed to be an object of class Foo, is instead a hash ref. Without the coercion, this would produce an error message saying that the type of the object_attribute attribute is not a Foo object. With the coercion, however, the value that we pass to object_attribute is passed to a Foo constructor using list flattening, and then the resulting Foo object is assigned to the object_attribute attribute.

Coercion works for more complicated things, too; for instance, you can use coercion to coerce an array of hashes into an array of objects, by creating a subtype first:

package MyCoercions;
use v5.40;

use Moose;
use Moose::Util::TypeConstraints;

use Foo;

subtype "ArrayOfFoo", as "ArrayRef[Foo]";
subtype "ArrayOfHashes", as "ArrayRef[HashRef]";

coerce "ArrayOfFoo", from "ArrayOfHashes", via { [ map { Foo->create(%$_) } @{$_} ] };

Ick. That's a bit more complex.

What happens here is that we use the map function to iterate over a list of values.

The given list of values is @{$_}, which is perl for "dereference the default value as an array reference, and flatten the list of values in that array reference".

So the ArrayRef of HashRefs is dereferenced and flattened, and each HashRef in the ArrayRef is passed to the map function.

The map function then takes each hash ref in turn and passes it to the block of code that it is also given. In this case, that block is { Foo->create(%$_) }. In other words, we invoke the create factory method with the flattened hashref as an argument. This returns an object of the correct implementation (assuming our hash ref has a type attribute set), and with all attributes of their object set to the correct value. That value is then returned from the block (this could be made more explicit with a return call, but that is optional, perl defaults a return value to the rvalue of the last expression in a block).

The map function then returns a list of all the created objects, which we capture in an anonymous array ref (the [] square brackets), i.e., an ArrayRef of Foo objects, satisfying the Moose type constraint of ArrayRef[Foo].

Usually, I tend to put my coercions in a special-purpose package. Although it is not strictly required by Moose, I find that it is useful to do this, because Moose does not allow a coercion to be defined if a coercion for the same type had already been done in a different package. And while it is theoretically possible to make sure you only ever declare a coercion once in your entire codebase, I find that doing so is easier to remember if you put all your coercions in a specific package.

Okay, now you understand Moose object coercion! On to...

Dynamic module loading

Perl allows loading modules at runtime. In the most simple case, you just use require inside a stringy eval:

my $module = "Foo";
eval "require $module";

This loads "Foo" at runtime. Obviously, the $module string could be a computed value, it does not have to be hardcoded.

There are some obvious downsides to doing things this way, mostly in the fact that a computed value can basically be anything and so without proper checks this can quickly become an arbitrary code vulnerability. As such, there are a number of distributions on CPAN to help you with the low-level stuff of figuring out what the possible modules are, and how to load them.

For the purposes of my script, I used Module::Pluggable. Its API is fairly simple and straightforward:

package Foo;

use v5.40;
use Moose;

use Module::Pluggable require => 1;

has 'attribute' => (
    is => 'ro',
    isa => 'Str',
);

has 'type' => (
    is => 'ro',
    isa => 'Str',
    required => 1,
);

sub handles_type {
    return 0;
}

sub create {
    my $class = shift;
    my %data = @_;

    foreach my $impl($class->plugins) {
        if($impl->can("handles_type") && $impl->handles_type($data{type})) {
            return $impl->new(%data);
        }
    }
    die "could not find a plugin for type " . $data{type};
}

sub say_something {
    my $self = shift;
    say "Hello there, I am a " . $self->type;
}

The new concept here is the plugins class method, which is added by Module::Pluggable, and which searches perl's library paths for all modules that are in our namespace. The namespace is configurable, but by default it is the name of our module. So in the above example, if there were a package "Foo::Bar" that has a subroutine handles_type which returns a truthy value when passed the value of the type key in the hash given to the create subroutine, then the create subroutine creates a new Foo::Bar object with the passed key/value pairs used as attribute initializers.

Let's implement a Foo::Bar package:

package Foo::Bar;

use v5.40;
use Moose;

extends 'Foo';

has 'type' => (
    is => 'ro',
    isa => 'Str',
    required => 1,
);

has 'serves_drinks' => (
    is => 'ro',
    isa => 'Bool',
    default => 0,
);

sub handles_type {
    my $class = shift;
    my $type = shift;

    return $type eq "bar";
}

sub say_something {
    my $self = shift;
    $self->SUPER::say_something;
    say "I serve drinks!" if $self->serves_drinks;
}

We can now indirectly use the Foo::Bar package in our script:

use v5.40;
use Foo;

my $obj = Foo->create(type => "bar", serves_drinks => 1);

$obj->say_something;

output:

Hello there, I am a bar
I serve drinks!

Okay, now you understand all the bits and pieces that are needed to understand how I created the DSL engine. On to...

Putting it all together

We're actually quite close already. The create factory method in the last version of our Foo package allows us to decide at run time which module to instantiate an object of, and to load that module at run time. We can use coercion and list flattening to turn a reference to a hash into an object of the correct type.

We haven't looked yet at how to turn a JSON data structure into a hash, but that bit is actually ridiculously trivial:

use JSON::MaybeXS;

my $data = decode_json($json_string);

Tada, now $data is a reference to a deserialized version of the JSON string: if the JSON string contained an object, $data is a hashref; if the JSON string contained an array, $data is an arrayref, etc.

So, in other words, to create an extensible JSON-based DSL that is implemented by Moose objects, all we need to do is create a system that

  • takes hash refs to set arguments
  • has factory methods to create objects, which

    • uses Module::Pluggable to find the available object classes, and
    • uses the type attribute to figure out which object class to use to create the object
  • uses coercion to convert hash refs into objects using these factory methods

In practice, we could have a JSON file with the following structure:

{
    "description": "do stuff",
    "actions": [
        {
            "type": "bar",
            "serves_drinks": true
        },
        {
            "type": "bar",
            "serves_drinks": false
        }
    ]
}

... and then we could have a Moose object definition like this:

package MyDSL;

use v5.40;
use Moose;

use MyCoercions;

has "description" => (
    is => 'ro',
    isa => 'Str',
);

has 'actions' => (
    is => 'ro',
    isa => 'ArrayOfFoo',
    coerce => 1,
    required => 1,
);

sub say_something {
    my $self = shift;

    say "Hello there, I am described as " . $self->description . " and I am performing my actions: ";

    foreach my $action(@{$self->actions}) {
        $action->say_something;
    }
}

Now, we can write a script that loads this JSON file and create a new object using the flattened arguments:

use v5.40;
use MyDSL;
use JSON::MaybeXS;

my $input_file_name = shift;

my $args = do {
    local $/ = undef;

    open my $input_fh, "<", $input_file_name or die "could not open file";
    <$input_fh>;
};

$args = decode_json($args);

my $dsl = MyDSL->new(%$args);

$dsl->say_something

Output:

Hello there, I am described as do stuff and I am performing my actions:
Hello there, I am a bar
I serve drinks!
Hello there, I am a bar

In some more detail, this will:

  • Read the JSON file and deserialize it;
  • Pass the object keys in the JSON file as arguments to a constructor of the MyDSL class;
  • The MyDSL class then uses those arguments to set its attributes, using Moose coercion to convert the "actions" array of hashes into an array of Foo::Bar objects.
  • Perform the say_something method on the MyDSL object

Once this is written, extending the scheme to also support a "quux" type simply requires writing a Foo::Quux class, making sure it has a method handles_type that returns a truthy value when called with quux as the argument, and installing it into the perl library path. This is rather easy to do.
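For illustration, a minimal Foo::Quux following the pattern of the Foo::Bar class above might look like this (just a sketch; a real class would presumably have more interesting attributes and behaviour):

package Foo::Quux;

use v5.40;
use Moose;

extends 'Foo';

sub handles_type {
    my $class = shift;
    my $type = shift;

    return $type eq "quux";
}

sub say_something {
    my $self = shift;
    $self->SUPER::say_something;
    say "Quux things are happening!";
}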

It can even be extended deeper, too; if the quux type requires a list of arguments rather than just a single argument, it could itself also have an array attribute with relevant coercions. These coercions could then be used to convert the list of arguments into an array of objects of the correct type, using the same schema as above.

The actual DSL is of course somewhat more complex, and also actually does something useful, in contrast to the DSL that we define here which just says things.

Creating an object that actually performs some action when required is left as an exercise to the reader.

27 December, 2024 11:39AM

hackergotchi for Guido Günther

Guido Günther

Phosh 2024 in Retrospect

As in 2023 I took another look back at what changed in Phosh in 2024 and instead of just updating my notes why not again share it here. The Phosh developers focus from day one was to make devices running Phosh daily drivable without having to resort to any proprietary OSes as a fallback. So the past years were often dominated by adding essential features to make that possible and reliable at all.

27 December, 2024 11:21AM

December 26, 2024

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

How to check what matches linux-any?

Usually Architecture: any is recommended in debian/control, except when upstream explicitly doesn't or won't support a given architecture.

In practical use cases, linux-any is useful to exclude the hurd architecture. (Previously it was also useful to exclude kfreebsd.)

Here is a simple script to check whether a specific architecture matches linux-any or not.

2024/12/28: UPDATE

I've got feedback that the following command should be used. (Thanks Cyril Brulebois and Guillem Jover)

dpkg-architecture -L -W linux-any

or

dpkg-architecture --match-wildcard linux-any --list-known

NOTE: the following example is wrong, but I keep it as is, as a record of what I did wrongly:

#!/usr/bin/bash

TARGETS="
amd64
arm64
armel
armhf
i386
mips64el
ppc64el
riscv64
s390x
alpha
hppa
hurd-amd64
hurd-i386
loong64
m68k
powerpc
ppc64
sh4
sparc64
x32
"

for d in $TARGETS; do
    dpkg-architecture -i linux-any -a $d
    if [ $? -eq 0 ]; then
    echo -e "[\e[32m\e[40mOK\e[0m] $d is linux-any (dpkg-architecture -i linux-any -a $d)"
    else
    echo -e "[\e[31m\e[40mNG\e[0m] $d is NOT linux-any (dpkg-architecture -i linux-any -a $d)"
    fi
done

screenshot of shell script

26 December, 2024 12:29PM

December 24, 2024

Divine Attah-Ohiemi

Seamless Transitions: Mastering Apache Redirects for a Smooth Hugo Migration

This week, I dove into setting up redirects with Apache to make the transition to Hugo's multilingual system smoother. The challenge? Ensuring that all those old links still worked while I migrated to the new URL format.

For instance, I needed to redirect:

/es/distrib to /distrib/index.es.html
/es/social_contract to /social_contract.es.html
/es/intro/about to /intro/about.es.html
/da to /index.da.html

To tackle this, I turned to Apache's mod_rewrite. Here’s the magic I came up with in my .htaccess file:

RewriteCond %{REQUEST_URI} ^/([a-z]{2}(?:-[a-z]{2})?)/(.*)$
RewriteCond %{DOCUMENT_ROOT}/$2/index.%1.html -f
RewriteCond %{DOCUMENT_ROOT}/$1/$2 !-d
RewriteRule ^/([a-z]{2}(?:-[a-z]{2})?)/(.*)$ /$2/index.%1.html [last,redirect]

RewriteCond %{REQUEST_URI} ^/([a-z]{2}(?:-[a-z]{2})?)/(.*)$
RewriteCond %{DOCUMENT_ROOT}/$2.%1.html -f
RewriteCond %{DOCUMENT_ROOT}/$1/$2 !-d
RewriteRule ^/([a-z]{2}(?:-[a-z]{2})?)/(.*)$ /$2.%1.html [last,redirect]

What’s happening here? The rules check if the URL starts with a language code (like /es or /da). Then, they verify whether the corresponding HTML file exists. If it does, and the path isn’t a directory, voilà! The user gets redirected to the new format.

It’s a bit of a dance with conditions and rules, but it’s satisfying to see everything working seamlessly. Now, as I continue migrating content, users clicking on old links won’t end up in a digital dead end. It’s all about keeping the flow smooth and maintaining that user experience.
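As a quick sanity check, curl shows where a request lands (a hypothetical local vhost; adjust the host to your setup):

$ curl -sI http://localhost/es/distrib | grep -i '^Location'
Location: http://localhost/distrib/index.es.html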

So, if you’re also juggling multilingual pages and thinking about making the switch to Hugo, don’t underestimate the power of mod_rewrite. It’s your best friend in the world of redirects! Happy coding!

24 December, 2024 03:54PM by Divine Attah-Ohiemi

December 23, 2024

Sahil Dhiman

Debian Mirrors Hierarchy

After finding that AlmaLinux's mirror sync capacity at Tier 0 (or Tier 1, depending on how you look at it) is around 140 Gbps, I wanted to find the source and hierarchy in the Debian mirroring system.

There are two main types of mirrors in Debian - Debian package mirrors (for package installs and updates) and Debian CD mirrors (for ISOs and other media). Let's talk about package mirrors (and their hierarchy) first.

Package mirror hierarchy

The trace file was a good starting point for checking the upstream of a package mirror in Debian. It resides at <URL>/debian/project/trace/_traces and shows the flow of data. Sample trace file from jing.rocks's mirror. It showed that the canonical source for packages is ftp-master.debian.org. Checking via https://db.debian.org/machines.cgi showed that it's fasolo.d.o, hosted at Brown University, US. This serves as the "Master Archive Server", making it a Tier 0 mirror. Its entry mentions that it has 1 Gbps shared LAN connectivity (dated information?) but it only has to push to 3 other machines/sites.
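As an aside, trace files are plain text, so you can inspect them on any package mirror yourself; the file contains the timestamp of the last archive update that reached that mirror. For example (using deb.debian.org purely as an illustration):

$ curl -s https://deb.debian.org/debian/project/trace/ftp-master.debian.org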

Side note - .d.o is .debian.org

As shown on https://mirror-master.debian.org/status/mirror-hierarchy.html, the three sites are:

  • syncproxy2.eu.debian.org ie smit.d.o hosted by University of Twente, Netherlands with 2x10 Gbps connectivity.
  • syncproxy4.eu.debian.org ie schmelzer.d.o hosted by Conova in Austria with 2x10 Gbps connectivity.
  • syncproxy2.wna.debian.org - the https://db.debian.org/machines.cgi entry mentions it being hosted at UBC here, but the IP seems to be pointing to an OSUOSL IP range as of now. IIRC, a few months ago, syncproxy2.wna.d.o was made to point to another host due to some issue (?). mirror-osuosl.d.o seems to be serving as syncproxy2.wna.d.o now. Bandwidth isn't explicitly mentioned, but from my experience of the bandwidth other free software projects hosted at OSUOSL get, it would be at least 10 Gbps and maybe more for Debian.

                     syncproxy2.eu.d.o (NL) ---> to the world
                    /
ftp-master.d.o (US) -- syncproxy4.eu.d.o (AT)  --> to the world 
                    \
                     syncproxy2.wna.d.o (US) --> to the world
A visualisation of the flow of packages from ftp-master.d.o

These form the Debian Tier 1 mirror network, as all the mirrors sync from them. So Debian has at least 50 Gbps+ capacity at Tier 1. A normal Debian user might never directly interact with any of these 3 machines, but every Debian package they run/download/install flows through these machines. Though, I'm unsure what wna stands for in syncproxy2.wna.d.o. NA is probably North America and W is west (coast)? If you know, do let me know.

After Tier 1, there are a few more sync proxies (detailed below). There are at least 45 mirrors at Tier 2, updates for which are directly pushed from the three Tier 1 sync proxies. Most country mirrors, i.e. ftp.<country>.debian.org, are at Tier 2 too (barring a few like ftp.au.d.o, ftp.nz.d.o etc.).

Coming back to Sync proxies at Tier 2:

  • syncproxy3.wna.debian.org - gretchaninov.d.o, which is marked as syncproxy2 on db.d.o (dated information). It's hosted at the University of British Columbia, Canada, where a lot of Debian infrastructure, including Salsa, is hosted.
  • syncproxy.eu.debian.org - Croatian Academic and Research Network managed machine. CNAME directs to debian.carnet.hr.
  • syncproxy.au.debian.org - mirror-anu.d.o hosted by Australian National University with 100Mbps connectivity. Closest sync proxy for all Australian mirrors.
  • syncproxy4.wna.debian.org - syncproxy-aws-wna-01.d.o, hosted in AWS, in the US (according to GeoIP). IPv6 only (CNAME to syncproxy-aws-wna-01.debian.org., which only has an AAAA record, no A record). An m6g.2xlarge instance, which has speeds of up to 10 Gbps.

Coming back to https://mirror-master.debian.org/status/mirror-hierarchy.html, one can see the chain extend till Tier 6, like in the case of this mirror in AU, which should add some latency for updates pushed at ftp-master.d.o to reach them. Ideally, this shouldn't be a problem, as https://www.debian.org/mirror/ftpmirror#when mentions "The main archive gets updated four times a day".

In my case, I get my updates from the NITC mirror, so my updates flow from US > US > TW > IN > me in IN.

CDNs have to internally manage cache purging too, unlike normal mirrors which directly serve static files. Both deb.debian.org (sponsored by Fastly) and cdn-aws.deb.debian.org (sponsored by Amazon CloudFront) sync from the following CDN backends:

See deb.d.o trace file and cdn-aws.deb.d.o trace file.

(Thanks to Philipp Kern for the heads up here.)

CD image mirrors Hierarchy

Till now, I have only talked about Debian package mirrors. When you see a /debian directory on various mirrors, it's usually for package installs and updates. If you want to grab the latest (and greatest) Debian ISO, you go to a Debian CD (as they're still called) mirror site.

casulana.d.o is mentioned as the CD builder site, hosted by Bytemark, while pettersson-ng.d.o is mentioned as the CD publishing server, hosted at the Academic Computer Club in Umeå, Sweden. The primary download site for Debian CDs, https://cdimage.debian.org/debian-cd/ (what you reach when you click download on the debian.org homepage), is hosted here as well. This essentially makes it the Tier 0 mirror for Debian CDs. All Debian CD mirrors are downstream to it.

pettersson-ng.d.o / cdimage.d.o (SE) ---> to the world
A visualisation of the flow of Debian CDs from cdimage.d.o

The Academic Computer Club's mirror setup uses a combination of multiple machines (called frontends and offloading servers) to load balance requests. Their setup documentation is a highly recommended read. Also, in that document, they mention: "All machines are reachable via both IPv4 and IPv6 and connected with 10 or 25 gigabit Ethernet, external bandwidth available is 200 gigabit/s."

For completeness' sake, the following mirrors (or mirror systems) exist for Debian too:

Debian relies heavily on various organizations donating resources (hosting and hardware) to distribute and update Debian. Compiling the above information made me thankful to all these organizations. Many thanks to DSA and the mirror team as well for managing all of this.

I relied heavily on https://db.debian.org/machines.cgi, which seems to be manually updated, so things might have changed along the way. If anything looks amiss, feel free to ping me.

23 December, 2024 03:32PM

hackergotchi for Joey Hess

Joey Hess

the twenty-fifth year of my free software career

I've been lucky to be able to spend twenty! five! years! developing free software and making a living on it, and this was a banner year for that career.

To start with, there was the Distribits conference. There's a big ecosystem of tools and projects that are based on git-annex, especially in scientific data management, and this was the first conference focused on that. Basically every talk involved git-annex in some way. It's been a while since I was at a conference where my software was in the center like that -- reminded me of Debconf days.

I gave a talk on how git-annex was probably basically feature complete. I have been very busy ever since adding new features to it, because in mapping out git-annex's feature set, I discovered new possibilities.

Meeting people and getting a better feel for the shape of that ecosystem, both technically and funding-wise, led to several big developments in funding later in the year. Going into the year, I had an ongoing source of funding from several projects at Dartmouth that use git-annex, but after 10 years, some of that was winding up.

That all came together in my essentially writing a grant proposal to the OpenNeuro project at Stanford, to spend 6 months building out a whole constellation of features. The summer became a sprint to get it all done. Significant amounts of very productive design work were done while swimming in the river. That was great.

(Somehow in there, I ended up onstage at FOSSY in Portland, in a keynote panel on Open Source and AI. This required developing a nuanced understanding of the mess of the OSI's Open Source AI definition, but I was mostly on the panel as the unqualified guy.)

Capping off the year, I have a new maintenance contract with Forschungszentrum Jülich. This covers the typical daily grind kind of tasks, like bug triage, keeping on top of security, release preparation, and updating dependencies, which is the kind of thing I've never been able to find dedicated funding for before.

A career in free software is a succession of hurdles. How to do something new and worthwhile? How to make any income while developing it at all? How to maintain your independent vision when working on it for hire? How to deal with burn-out? How to grow a project to be more than a one-developer affair? And on and on.

How does a free software project keep paying the bills once it's feature complete? Maybe I am starting to get a glimpse of an answer.

23 December, 2024 02:57PM

hackergotchi for Thomas Lange

Thomas Lange

Happy Birthday FAI!

A Brief History of FAI, Which Began 25 Years Ago

On Dec 21st, 1999 version 1.0 of FAI (Fully Automatic Installation) was announced. That was 25 years ago.

Some months before, the computer science department of the University of Cologne had bought a small HPC cluster with 16 nodes (each with dual Pentium II 400 MHz CPUs and 256 MB RAM), and I was too lazy to install those nodes manually. That's why I started the FAI project. With FAI you can install computers in a few minutes, from scratch to a machine with a custom configuration that is ready to go for its users.

At that time Debian 2.1 aka slink was using kernel 2.0.36 and it was the first release using apt. Many things have happened since then.

In the beginning we wrote the first technical report about FAI, and a lot of documentation was added afterwards. I have given more than 45 talks about FAI all over the world. Over the past 25 years, there has been an average of more than one commit per day to the FAI software repository.

Several top500.org HPC clusters were built using FAI and many companies are using FAI for their IT infrastructure or deploying Linux on their products using FAI. An overview of users can be found here.

Some major milestones of FAI are listed in the blog post of the 20th anniversary.

What Happened in the Last 5 Years?

  • Live images can be created
  • Writeable data partition on USB sticks
  • FAIme web service creates custom live ISOs
  • Support for Alpine Linux and Arch Linux package managers
  • Automatic detection of a local config space
  • Live and installation images for Debian on new hardware, using a backports kernel or the Debian testing release
  • The FAIme web service has created more than 30,000 customized ISOs

Currently, I'm preparing for the next FAI release and I still have ideas for new features.

Thanks for all the feedback from you, which helped a lot in making FAI a successful project.

About FAI

FAI is a tool for unattended mass deployment of Linux. It's a system to install and configure Linux systems and software packages on computers as well as virtual machines, from small labs to large-scale infrastructures like clusters and cloud environments. You can take one or more virgin PCs, turn on the power, and after a few minutes the systems are installed and completely configured to your exact needs, without any interaction necessary.

23 December, 2024 11:45AM

Simon Josefsson

OpenSSH and Git on a Post-Quantum SPHINCS+

Are you aware that Git commits and tags may be signed using OpenSSH? Git signatures may be used to improve integrity and authentication of our software supply-chain. Popular signature algorithms include Ed25519, ECDSA and RSA. Did you consider that these algorithms may not be safe if someone builds a large quantum computer?

As you may recall, I have earlier blogged about the efficient post-quantum key agreement mechanism called Streamlined NTRU Prime and its use in SSH and I have attempted to promote the conservatively designed Classic McEliece in a similar way, although it remains to be adopted.

What post-quantum signature algorithms are available? There is an effort by NIST to standardize post-quantum algorithms, and they have a category for signature algorithms. According to Wikipedia, after round three the selected algorithms are CRYSTALS-Dilithium, FALCON and SPHINCS+. Of these, SPHINCS+ appears to be a conservative choice suitable for long-term digital signatures. Can we get this to work?

Recall that Git uses the ssh-keygen tool from OpenSSH to perform signing and verification. To refresh your memory, let’s study the commands that Git uses under the hood for Ed25519. First generate a Ed25519 private key:

jas@kaka:~$ ssh-keygen -t ed25519 -f my_ed25519_key -P ""
Generating public/private ed25519 key pair.
Your identification has been saved in my_ed25519_key
Your public key has been saved in my_ed25519_key.pub
The key fingerprint is:
SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ jas@kaka
The key's randomart image is:
+--[ED25519 256]--+
|    .+=.E ..     |
|     oo=.ooo     |
|    . =o=+o .    |
|     =oO+o .     |
|     .=+S.=      |
|      oo.o o     |
|     . o  .      |
|    ...o.+..     |
|   .o.o.=**.     |
+----[SHA256]-----+
jas@kaka:~$ cat my_ed25519_key
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQAAAJCeDotOng6L
TgAAAAtzc2gtZWQyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQ
AAAEBFRvzgcD3YItl9AMmVK4xDKj8NTg4h2Sluj0/x7aSPlhY/9pnyHM3RY1ExKmPNuBbW
0lc13a/r92dsppC3uIgFAAAACGphc0BrYWthAQIDBAU=
-----END OPENSSH PRIVATE KEY-----
jas@kaka:~$ cat my_ed25519_key.pub 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF jas@kaka
jas@kaka:~$ 

Then let’s sign something with this key:

jas@kaka:~$ echo "Hello world!" > msg
jas@kaka:~$ ssh-keygen -Y sign -f my_ed25519_key -n my-namespace msg
Signing file msg
Write signature to msg.sig
jas@kaka:~$ cat msg.sig 
-----BEGIN SSH SIGNATURE-----
U1NIU0lHAAAAAQAAADMAAAALc3NoLWVkMjU1MTkAAAAgFj/2mfIczdFjUTEqY824FtbSVz
Xdr+v3Z2ymkLe4iAUAAAAMbXktbmFtZXNwYWNlAAAAAAAAAAZzaGE1MTIAAABTAAAAC3Nz
aC1lZDI1NTE5AAAAQLmWsq05tqOOZIJqjxy5ZP/YRFoaX30lfIllmfyoeM5lpVnxJ3ZxU8
SF0KodDr8Rtukg2N3Xo80NGvZOzbG/9Aw=
-----END SSH SIGNATURE-----
jas@kaka:~$

Now let’s create a list of trusted public-keys and associated identities:

jas@kaka:~$ echo 'my.name@example.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF' > allowed-signers
jas@kaka:~$ 

Then let’s verify the message we just signed:

jas@kaka:~$ cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with ED25519 key SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ
jas@kaka:~$ 
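For reference, hooking Git itself up to sign with such a key takes only a few standard Git settings (gpg.format, user.signingkey and gpg.ssh.allowedSignersFile); the paths below are placeholders to adapt to your own setup:

$ git config --global gpg.format ssh
$ git config --global user.signingkey ~/.ssh/my_ed25519_key.pub
$ git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
$ git commit -S -m "a signed commit"
$ git log -1 --show-signature
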

I have implemented support for SPHINCS+ in OpenSSH. This is early work, but I wanted to announce it to get discussion of some of the details going and to make people aware of it.

What better way to demonstrate SPHINCS+ support in OpenSSH than to validate the Git commit that implements it, using itself?

Here is how to proceed. First, get a suitable development environment up and running. I’m using a Debian container launched in a protected environment using podman.

jas@kaka:~$ podman run -it --rm debian:stable

Then install the necessary build dependencies for OpenSSH.

# apt-get update 
# apt-get install git build-essential autoconf libz-dev libssl-dev

Now clone my OpenSSH branch with the SPHINCS+ implementation and build it. You may browse the commit on GitHub first if you are curious.

# cd
# git clone https://github.com/jas4711/openssh-portable.git -b sphincsp
# cd openssh-portable
# autoreconf -fvi
# ./configure
# make

Configure a Git allowed signers list with my SPHINCS+ public key (make sure to keep the public key on one line with the whitespace being one ASCII SPC character):

# mkdir -pv ~/.ssh
# echo 'simon@josefsson.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAECI6eacTxjB36xcPtP0ZyxJNIGCN350GluLD5h0KjKDsZLNmNaPSFH2ynWyKZKOF5eRPIMMKSCIV75y+KP9d6w3' > ~/.ssh/allowed_signers
# git config gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers

Then verify the commit using the newly built ssh-keygen binary:

# PATH=$PWD:$PATH
# git log -1 --show-signature
commit ce0b590071e2dc845373734655192241a4ace94b (HEAD -> sphincsp, origin/sphincsp)
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
Author: Simon Josefsson <simon@josefsson.org>
Date:   Tue Dec 3 18:44:25 2024 +0100

    Add SPHINCS+.

# git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
# 

Yay!

So what are some considerations?

SPHINCS+ comes in many different variants. First it comes with three security levels approximately matching 128/192/256 bit symmetric key strengths. Second choice is between the SHA2-256, SHAKE256 (SHA-3) and Haraka hash algorithms. Final choice is between a “robust” and a “simple” variant with different security and performance characteristics. To get going, I picked the “sphincss256sha256robust” SPHINCS+ implementation from SUPERCOP 20241022. There is a good size comparison table in the sphincsplus implementation, if you want to consider alternative variants.

SPHINCS+ public-keys are really small, as you can see in the allowed signers file. This is really good because they are handled by humans and often by cut’n’paste.

What about private keys? They are slightly longer than Ed25519 private keys but shorter than typical RSA private keys.

# ssh-keygen -t sphincsplus -f my_sphincsplus_key -P ""
Generating public/private sphincsplus key pair.
Your identification has been saved in my_sphincsplus_key
Your public key has been saved in my_sphincsplus_key.pub
The key fingerprint is:
SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg root@ad600ff56253
The key's randomart image is:
+[SPHINCSPLUS 256-+
| .  .o           |
|o . oo.          |
| = .o.. o        |
|o o  o o . .   o |
|.+    = S o   o .|
|Eo=  . + . . .. .|
|=*.+  o . . oo . |
|B+=    o o.o. .  |
|o*o   ... .oo.   |
+----[SHA256]-----+
# cat my_sphincsplus_key.pub 
ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7 root@ad600ff56253
# cat my_sphincsplus_key 
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAYwAAABtzc2gtc3
BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9slu
L/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAQidiIwanYiMGgAAAB
tzc2gtc3BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1
Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAIAbwBxEhA
NYzITN6VeCMqUyvw/59JM+WOLXBlRbu3R8qS7ljc4qFVWUtmhy8B3t9e4jrhdO6w0n5I4l
mnLnBi2hJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpS
vYgZvUkB2WVWGXXZBCfRdQ+wAAABFyb290QGFkNjAwZmY1NjI1MwECAwQ=
-----END OPENSSH PRIVATE KEY-----
# 

Signature size? Now here is the challenge: for this variant the size is around 29 kB, or close to 600 lines of base64 data:

# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | head -10
tree ede42093e7d5acd37fde02065a4a19ac1f418703
parent 826483d51a9fee60703298bbf839d9ce37943474
author Simon Josefsson <simon@josefsson.org> 1733247865 +0100
committer Simon Josefsson <simon@josefsson.org> 1734907869 +0100
gpgsig -----BEGIN SSH SIGNATURE-----
 U1NIU0lHAAAAAQAAAGMAAAAbc3NoLXNwaGluY3NwbHVzQG9wZW5zc2guY29tAAAAQIjp5p
 xPGMHfrFw+0/RnLEk0gYI3fnQaW4sPmHQqMoOxks2Y1o9IUfbKdbIpko4Xl5E8gwwpIIhX
 vnL4o/13rDcAAAADZ2l0AAAAAAAAAAZzaGE1MTIAAHSDAAAAG3NzaC1zcGhpbmNzcGx1c0
 BvcGVuc3NoLmNvbQAAdGDHlobgfgkKKQBo3UHmnEnNXczCMNdzJmeYJau67QM6xZcAU+d+
 2mvhbksm5D34m75DWEngzBb3usJTqWJeeDdplHHRe3BKVCQ05LHqRYzcSdN6eoeZqoOBvR
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | tail -5 
 ChvXUk4jfiNp85RDZ1kljVecfdB2/6CHFRtxrKHJRDiIavYjucgHF1bjz0fqaOSGa90UYL
 RZjZ0OhdHOQjNP5QErlIOcZeqcnwi0+RtCJ1D1wH2psuXIQEyr1mCA==
 -----END SSH SIGNATURE-----

Add SPHINCS+.
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | wc -l
579
# 

What about performance? Verification is really fast:

# time git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ

real	0m0.010s
user	0m0.005s
sys	0m0.005s
# 

On this machine, verifying an Ed25519 signature the same way is actually several times slower, needing around 0.07 seconds.

Signing is slower; it takes a bit over 2 seconds on my laptop.

# echo "Hello world!" > msg
# time ssh-keygen -Y sign -f my_sphincsplus_key -n my-namespace msg
Signing file msg
Write signature to msg.sig

real	0m2.226s
user	0m2.226s
sys	0m0.000s
# echo 'my.name@example.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7' > allowed-signers
# cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with SPHINCSPLUS key SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg
# 

Welcome to our new world of Post-Quantum safe digital signatures of Git commits, and Happy Hacking!

23 December, 2024 12:44AM by simon

December 22, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Kernel adventures: When two rights make a wrong

My 3D printer took me on another adventure recently. Or, well, actually someone else's 3D printer did: It turns out that building a realtime system (with high-speed motors controlling a 300-degree metal rod) by cobbling together a bunch of Python and JavaScript on an anemic Arm SoC with zero resource isolation doesn't always meet those realtime guarantees. So in particular after installing a bunch of plugins, people would report the infamous “MCU timer too close” Klipper error, which essentially means that the microcontroller didn't get new commands in time from the Linux host and shut down as a failsafe. (Understandably, this sucks if it happens in the middle of an eight-hour print. Nobody has really invented a way to reliably resume from these things yet.)

I was wondering whether it was possible to provoke this and then look at what was actually going on in the scheduler; perf sched lets you look at scheduling history on the host, so if I could reproduce the error while collecting data, I could go in afterwards and see what was the biggest CPU hog, or at least that was the theory.

However, to my surprise, perf sched record died with an error essentially saying that the kernel was compiled without ftrace support (which is needed for the scheduler hooks; it's somewhat possible to do without by just doing a regular profile, but that's a different story and much more annoying). Not very surprising, these things tend to run stone-age vendor kernels from some long-forgotten branch with zero security support and seemingly no ftrace.

Now, I did not actually run said vendor kernel; at some point, I upgraded to the latest stable kernel (6.6) from Armbian, which is still far from mainline (for one, it needs to carry out-of-tree drivers to make wireless work at all) but which I trust infinitely more to actually provide updated kernels over time. It doesn't support ftrace either, so I thought the logical step would be to upgrade to the latest “edge” kernel (aka 6.11) and then compile with the right stuff on.

After a couple of hours of compiling (almost nostalgic to have such slow kernel compiles; cross-compiling didn't work for me!), I could boot into the new kernel, and:

[   23.775976] platform 5070400.thermal-sensor: deferred probe pending: platform: wait for supplier 

and then Klipper would refuse to start because it couldn't find the host thermal sensors. (I don't know exactly why it is a hard dependency, but seemingly, it is.) A bit of searching shows that this error message is doubly vexing; it should have said “wait for supplier /i2c@fdd40000/pmic@20/regulators/SWITCH_REG1” or something similar, but ends only in a space and then nothing.

So evidently this has to be something about the device tree (DT), and switching out the new DT for the old one didn't work. Bisecting was also pretty much out of the question (especially with 400+ patches that go on top of the git tree), but after a fair bit of printk debugging and some more reading, I figured out what had happened:

First, the sun8i-thermal driver, which had been carried out-of-tree in Armbian, had gone into mainline. But it was in a slightly different version; while the out-of-tree version used previously (in Armbian's 6.6 kernel) had relied on firmware (run as part of U-Boot, as I understand it) to set a special register bit, the mainline version would be stricter and take care to set it itself. I don't really know what the bit does, short of “if you don't set it, all the values you get back are really crazy”, so this is presumably a good change. So the driver would set a bit in a special memory address somewhere (sidenote: MMIO will always feel really weird to me; like, some part of the CPU has to check all memory accesses in case they're really not to RAM at all?), and for that, the thermal driver would need to take on a DT reference to the allwinner,sram (comma is evidently some sort of hierarchical separator) node so that it could get its address. Like, in case it was moved around in future SoCs or something.

Second, there was an Armbian patch that dealt with exactly these allwinner,sram nodes in another way; it would make sure that references to them would cause devlink references between the nodes. I don't know what those are either, but it seems the primary use case is for waiting: If you have a dependency from A to B, then A's initialization will wait until B is ready. The configuration bit in question is always ready, but I guess it's cleaner somehow, and you get a little symlink somewhere in /sys to explain the relationship, so perhaps it's good? But that's what the error message means; “A: deferred probe pending: wait for supplier B” means that we're not probing for A's existence yet, because it wants B to supply something and B isn't ready yet.

But why is the relationship broken? Well, for that, we need to look at how the code in the patch looks:

        sram_node = of_parse_phandle(np, prop_name, 0);
        sram_node = of_get_parent(sram_node);
        sram_node = of_get_parent(sram_node);

        return sram_node;

And how the device tree is set up in this case (lots of irrelevant stuff removed for clarity):

        bus@1000000 {  /* this works */
                reg = <0x1000000 0x400000>;
                allwinner,sram = <&de3_sram 1>;
        };
        ths: thermal-sensor@5070400 {  /* this doesn't */
                allwinner,sram = <&syscon>;
        };
        syscon: syscon@3000000 {
                sram_c: sram@28000 {
                        de3_sram: sram-section@0 {
                                reg = <0x0000 0x1e000>;
                        };
                };
        };

So that explains it; the code expects that all DT references are to a child of a child of syscon to find the supplier, and just goes up two levels to find it. But for the thermal sensor, the reference is directly to the syscon itself, and it goes up past the root of the tree, which is, well, NULL. And then the error message doesn't have a node name to print out, and the dependency just fails forever.

So that's two presumably good changes that just interacted in a really bad way (in particular, due to too little flexibility in the second one). A small patch later, and the kernel boots with thermals again!
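For illustration, one possible shape of such a fix is sketched below; this is my own reconstruction, not the actual patch, and the by-name check for the syscon node is a hypothetical way to discriminate the two cases:

	struct device_node *node, *parent;

	node = of_parse_phandle(np, prop_name, 0);
	if (!node)
		return NULL;

	/* Hypothetical check: a phandle pointing directly at the syscon
	 * node (as the thermal sensor's does) is already the supplier. */
	if (of_node_name_eq(node, "syscon"))
		return node;

	/* Otherwise the reference is to a child of a child of the syscon,
	 * so going up two levels finds the supplier without ever walking
	 * past the root of the tree. */
	parent = of_get_parent(node);
	of_node_put(node);
	if (!parent)
		return NULL;

	node = parent;
	parent = of_get_parent(node);
	of_node_put(node);

	return parent;
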

Oh, and those scheduling issues I wanted to debug? I never managed to reliably reproduce them; I have seen them, but they're very rare for me. I guess that upstream for the plugins in question just made things a bit less RAM-hungry in the meantime, or that having a newer kernel improves things enough in itself. Shrug. :-)

22 December, 2024 08:50AM

December 21, 2024

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Thug Life

My current playlist is this diorama of Lulu the Piggy channeling Tupac Shakur in a toy vending machine in the basement of New World Mall in Flushing Chinatown.

21 December, 2024 11:06PM by Benjamin Mako Hill

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

anytime 0.3.11 on CRAN: Maintenance

A follow-up release 0.3.11 to the recent 0.3.10 release of the anytime package arrived on CRAN two days ago. The package is fairly feature-complete, and code and functionality remain mature and stable, of course.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … input format to either POSIXct (when called as anytime) or Date objects (when called as anydate) – and to do so without requiring a format string, while accommodating different formats in one input vector. See the anytime page, or the GitHub repo for a few examples, and the beautiful documentation site for all documentation.
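As a quick illustration, a minimal R session along the lines of the package’s documented examples (a sketch, not output reproduced for this post):

library(anytime)
anytime(20160912)        ## numeric input, converted to POSIXct
anydate("2016-Sep-12")   ## character input, converted to Date; no format string needed
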

This release simply skips one test file. CRAN labeled an error ‘M1mac’, yet it did not reproduce on any of the other M1 macOS systems I can access (macbuilder, GitHub Actions), as it appeared related to a local timezone setting I could not reproduce anywhere. So the only way to get rid of the ‘fail’ is to … not run the test. Needless to say, the upload process was a little tedious, as I got the passive-aggressive ‘not responding’ treatment on a first upload and the required email answer it led to. Anyway, after a few days, and even more deep breaths, it is taken care of, and the package’s result standing is now (at least currently) pristinely clean.

The short list of changes follows.

Changes in anytime version 0.3.11 (2024-12-18)

  • Skip a test file

Courtesy of my CRANberries, there is also a diffstat report of changes relative to the previous release. The issue tracker at the GitHub repo can be used for questions and comments. More information about the package is at the package page, the GitHub repo and the documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

21 December, 2024 08:35PM

hackergotchi for Joey Hess

Joey Hess

aiming at December

I have been working all year on a solar upgrade aimed at December. Now here it is, midwinter, and my electric car is charging on a cloudy day from my offgrid solar fence.

I lived happily enough with 1 kilowatt of solar that I installed in 2017. Meanwhile, solar panel prices came down massively, incentives increased and everything came together: This was the year.

In the spring I started clearing forest trees that were leaning over the house, making both a firebreak and a solar field.

In June I picked up a pallet of panels in a box truck.

a porch with a bunch of solar panels, stacked on edge leaning up against the wall. A black and white cat is sprawled in front of them.

In August I bought the EV and was able to charge it offgrid from my old solar system... a few miles per day on the most sunny days.

In September and October I built a solar fence, of my own design.

Me standing in front of the solar fence, which is 10 panels long

For the past several weeks I have been installing additional solar panels on ballasted ground mounts full of gravel. At this point I'm half way through installing my 30 panel upgrade.

The design goal of my 12 kilowatt system is to produce 1 kilowatt of power all day on a cloudy day in midwinter, which allows swapping between major loads (EV charger, hot water heater, etc) on a cloudy day and running everything on a sunny day. So the size of the battery bank doesn't matter much. Batteries are getting cheaper fast too, but they are a wear item, so it's better to oversize the solar system and minimize the battery.

A lot of this is nonstandard and experimental. And that makes sense with the price of solar panels. It costs more to mount solar panels now than the panels are worth. And non-ideal panel orientation isn't a problem when the system is massively overpaneled.

I'm hoping to finish up the install before the end of winter. I have more trees to clear, more ballasted ground mounts to install, and need to come up with something even more experimental for a half dozen or so panels. Using solar panels as mounts for solar panels? Hanging them from trees?

Soon the wan light will fade, time to head off to the solstice party to enjoy the long night, and a bonfire.

Solar fence with some ballasted ground mounts in front of it, late evening light. Old pole mounted solar panels in the foreground are from the 90's.

21 December, 2024 05:00AM

December 20, 2024

hackergotchi for Steve Kemp

Steve Kemp

The CP/M emulator runs on Windows?

Today I made a new release of my CP/M emulator and I think that maybe now it will run on Microsoft Windows. Unfortunately I cannot test it!

A working CP/M implementation needs to provide facilities for reading input from the console, both reading a complete line of text and individual keystrokes. These input functions need to handle several different types of input:

  • Blocking, waiting for input to become available.
  • Non-blocking, returning any pending input if it is available otherwise nothing.
  • With echo, so the user can see what they typed.
  • Without echo, so the keys are returned but not displayed to the user.

In the past we used a Unix-specific approach to handle the enabling and disabling of keyboard echoing (specifically, we executed the stty binary to enable/disable echo), but this release adds a more portable solution based around termbox-go, which is the new default and should allow our emulator to work on Microsoft Windows systems.

We always had the ability to select between a number of different output drivers, and as of this release we can now select between multiple input drivers too - with the new portable option being the default. This has been tested on MacOS X systems, as well as GNU/Linux, but sadly I don't have access to Windows to test that.
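To sketch what the portable approach looks like, here is an illustrative Go fragment against the termbox-go API; it is not the emulator's actual code:

package main

import (
	"fmt"

	"github.com/nsf/termbox-go"
)

func main() {
	// termbox switches the terminal into raw mode on Init, so a key
	// press arrives as an event without being echoed; the program
	// decides whether (and how) to display it.
	if err := termbox.Init(); err != nil {
		panic(err)
	}
	defer termbox.Close()

	// Blocking read of a single keystroke, with no echo.
	ev := termbox.PollEvent()
	if ev.Type == termbox.EventKey {
		fmt.Printf("read key %q (code 0x%x)\r\n", ev.Ch, ev.Key)
	}
}
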

Fingers crossed it's all good now though, happy new year!

20 December, 2024 11:00PM

hackergotchi for Michael Prokop

Michael Prokop

Grml 2024.12 – codename Adventgrenze

Picture with metrics of three user profiles on GitHub.com, with many contributions especially in the last quarter of the year

We did it again™! Just in time, we’re excited to announce the release of Grml stable version 2024.12, code-named ‘Adventgrenze’! (If you’re not familiar with Grml, it’s a Debian-based live system tailored for system administrators.)

This new release is built on Debian trixie, and for the first time, we’re introducing support for 64-bit ARM CPUs (arm64 architecture)!

I’m incredibly proud of the hard work that went into this release. A significant amount of behind-the-scenes effort went into reworking our infrastructure and redesigning the build process. Special thanks to Chris and Darsha – our Grml developer days in November and December were a blast!

For a detailed overview of the changes between releases 2024.02 and 2024.12, check out our official release announcement. And, as always, after a release comes the next one – exciting improvements are already in the works!

BTW: recently we also celebrated 20(!) years of Grml Releases. If you’re a Grml and/or grml-zsh user, please join us in celebrating and send us a postcard!

20 December, 2024 06:05PM by mika

Noah Meyerhans

Local Development VM Management

A coworker asked recently about how people use VMs locally for dev work, so I figured I’d take a few minutes to write up a bit about what I do. There are many use cases for local virtual machines in software development and testing. They’re self-contained, meaning you can make a mess of them without impacting your day-to-day computing environment. They can run different distributions, kernels, and even entirely different operating systems from the one you use regularly. Etc. They’re also cheaper than cloud services and provide finer grained control over the resources.

I figured I’d share a little bit about how I manage different virtual machines in case anybody finds this useful. This is what works for me, but it won’t necessarily work for you, or maybe you’ve already got something better. I’ve found it to be easy to work with, lightweight, and easy to evolve as my needs change.

Use short-lived VMs

Rather than keep a long-lived “development” VM around that you customize over time, I recommend automating the common customizations and provisioning new VMs regularly. If I’m working on reproducing a bug or testing a change prior to submitting it upstream, I’ll do this work in a VM and delete the VM when I’m done. When provisioning VMs this frequently, though, walking through the installation process for every new VM is tedious and a waste of time. Since most of my work is done in Debian, I start with images generated daily by the cloud team. These images are available for multiple releases and architectures. The ‘nocloud’ variant boots to a root prompt and can be useful directly, or the ‘generic’ images can be used for cloud-init based customization.

Automating image preparation

This makefile lets me do something like make image and get a new qcow2 image with the latest build of a given Debian release (sid by default, with others available by specifying DIST).

DATESTAMP=$(shell date +"%Y-%m-%d")
FLAVOR?=generic
ARCH?=$(shell dpkg --print-architecture)
DIST?=sid
RELEASE=$(DIST)
URL_PATH=https://cloud.debian.org/images/cloud/$(DIST)/daily/latest/
ifeq ($(DIST),trixie)
RELEASE=13
endif
ifeq ($(DIST),bookworm)
RELEASE=12
endif
ifeq ($(DIST),bullseye)
RELEASE=11
endif

# The published tarballs are named after the release number ($(RELEASE)),
# not the codename, so the download target must match what curl -LO saves.
debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz:
	curl --fail --connect-timeout 20 -LO \
	    $(URL_PATH)/debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz

$(DIST)-$(FLAVOR)-$(DATESTAMP).qcow2: debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz
	tar xvf debian-$(RELEASE)-$(FLAVOR)-$(ARCH)-daily.tar.xz
	qemu-img convert -O qcow2 disk.raw $@
	rm -f disk.raw
	qemu-img resize $@ 20g
	qemu-img snapshot -c untouched $@

image: $(DIST)-$(FLAVOR)-$(DATESTAMP).qcow2
.PHONY: image

Customize the VM environment with cloud-init

While the ‘nocloud’ images can be useful, I typically find that I want to apply the same modifications to each new VM I launch, and they don’t provide facilities for automating this. The ‘generic’ images, on the other hand, run cloud-init by default. Using cloud-init, I can create my user account, point apt at local mirrors, install my preferred tools, ensure the root filesystem is resized to make full use of the backing storage, etc.
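As a concrete illustration, a hypothetical user_data (cloud-config) file covering those customizations might look like the following; the user name, SSH key, mirror URL, and package list are all placeholders:

#cloud-config
users:
  - name: dev                       # placeholder account
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...example dev@host
apt:
  primary:
    - arches: [default]
      uri: http://mirror.example.com/debian   # point apt at a local mirror
packages:
  - git
  - vim
growpart:                           # grow the root partition to fill the disk
  mode: auto
  devices: ["/"]
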

The cloud-init configuration on the generic images will read from a local config drive, which can contain an ISO9660 (cdrom) filesystem image. This image can be generated from a subdirectory containing the various cloud-init input files using the following make syntax:

IMDS_FILES=$(shell find seedconfig -path '*/.git/*' \
	    -prune -o -type f -name '*.in.json' -print) \
	    seedconfig/openstack/latest/user_data

seed.iso: $(IMDS_FILES)
	genisoimage -V config-2 -o $@ -J -R -m '*~' -m '.git' seedconfig

With the image in place, the VM can be created with

qemu-system-x86_64 -machine q35,accel=kvm \
    -cpu host -m 4g \
    -drive file=${img},index=0,if=virtio,media=disk \
    -drive file=seed.iso,media=cdrom,format=raw,index=2,if=virtio \
    -nic user -nographic

This invokes qemu with the root volume and ISO image attached as disks, uses an emulated “q35” machine with the host’s CPU and KVM acceleration, the userspace network stack, and a serial console. The first time the VM boots, cloud-init will apply the configuration from the cloud-config available in the ISO9660 filesystem.

Alternatives to cloud-init

virt-customize is another tool accomplishing the same type of customization. I use cloud-init because it works directly with cloud providers in addition to local VM images. You could also use something like ansible.

Variations

I have a variant of this that uses a bridged network, which I’ll write more about later. The bridge is nice because it’s more featureful, with full support for IPv6, etc, but it needs a bit more infrastructure in place.

It also can be helpful to use 9p or virtfs to share filesystem state between the host and the VM. I don’t tend to rely on these, and will instead use rsync or TRAMP for moving files around.

Containers are also useful, of course, and there are plenty of times when the full isolation of a VM is not worth the overhead.

20 December, 2024 02:40PM by Noah Meyerhans (frodo+blog@morgul.net)

December 19, 2024

hackergotchi for Gregory Colpart

Gregory Colpart

MiniDebConf Toulouse 2024

After the MiniDebConf Marseille 2019, COVID-19 made it impossible or difficult to organize new MiniDebConfs for a few years. With the gradual resumption of in-person events (like FOSDEM, DebConf, etc.), the idea emerged to host another MiniDebConf in France, but with a lighter organizational load. In 2023, we decided to reach out to the organizers of Capitole du Libre to repeat the experience of 2017: hosting a MiniDebConf alongside their annual event in Toulouse in November. However, our request came too late for 2023. After discussions with Capitole du Libre in November 2023 in Toulouse and again in February 2024 in Brussels, we confirmed that a MiniDebConf Toulouse would take place in November 2024!

We then assembled a small organizing team and got to work: a Call for Papers in May 2024, adding a two-day MiniDebCamp, coordinating with the DebConf video team, securing sponsors, creating a logo, ordering T-shirts and stickers, planning the schedule, and managing registrations. Even with lighter logistics (conference rooms, badges, and catering during the weekend were handled by Capitole du Libre), there was still quite a bit of preparation to do.

On Thursday, November 14, and Friday, November 15, 2024, about forty developers arrived from around the world (France, Spain, Italy, Switzerland, Germany, England, Brazil, Uruguay, India, Brest, Marseille…) to spend two days at the MiniDebCamp in the beautiful collaborative spaces of Artilect in Toulouse city center.

Then, on Saturday, November 16, and Sunday, November 17, 2024, the MiniDebConf took place at ENSEEIHT as part of the Capitole du Libre event. The conference kicked off on Saturday morning with an opening session by Jérémy Lecour, which included a tribute to Lunar (Nicolas Dandrimont). This was followed by Reproducible Builds – Rebuilding What is Distributed from ftp.debian.org (Holger Levsen) and Discussion on My Research Work on Sustainability of Debian OS (Eda). After lunch at the Capitole du Libre food trucks, the intense afternoon schedule began: What’s New in the Linux Kernel (and What’s Missing in Debian) (Ben Hutchings), Linux Live Patching in Debian (Santiago Ruano Rincón), Trixie on Mobile: Are We There Yet? (Arnaud Ferraris), PostgreSQL Container Groups, aka cgroups Down the Road (Cédric Villemain), Upgrading a Thousand Debian Hosts in Less Than an Hour (Jérémy Lecour and myself), and Using Debusine to Automate Your QA (Stefano Rivera & co).

Sunday marked the second day, starting with a presentation on DebConf 25 (Benjamin Somers), which will be held in Brest in July 2025. The morning continued with talks: How LTS Goes Beyond LTS (Santiago Ruano Rincón & Roberto C. Sánchez), Cross-Building (Helmut Grohne), and State of JavaScript (Bastien Roucariès). In the afternoon, there were Lightning Talks, PyPI Security: Past, Present & Future (Salvo “LtWorf” Tomaselli), and the classic Bits from DPL (Andreas Tille), before closing with the final session led by Pierre-Elliott Bécue.

All talks are available on video (a huge thanks to the amazing DebConf video team), and many thanks to our sponsors (Viridien, Freexian, Evolix, Collabora, and Data Bene). A big thank-you as well to the entire Capitole du Libre team for hosting and supporting us… see you in Brest in July 2025!

Articles about (or mentioning) MiniDebConf Toulouse:

19 December, 2024 09:18AM by Gregory Colpart

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Being a bread torus

A concerned nutritional epidemiologist in Tokyo realizes that if you are what you eat, that means…

It’s a similar situation in Seoul, albeit with less oil and more confidence.

19 December, 2024 02:49AM by Benjamin Mako Hill

December 18, 2024

Simon Josefsson

Guix Container Images for GitLab CI/CD

I am using GitLab CI/CD pipelines for several upstream projects (libidn, libidn2, gsasl, inetutils, libtasn1, libntlm, …) and a long-time concern for these has been that there is too little testing on GNU Guix. Several attempts have been made, and earlier this year Ludo’ came really close to finishing this. My earlier effort to idempotently rebuild Debian recently led me to think about re-bootstrapping Debian. Since Debian is a binary distribution, it re-uses earlier binary packages when building new packages. The prospect of re-bootstrapping Debian in a reproducible way by rebuilding all of those packages going back to the beginning of time does not appeal to me. Instead, wouldn’t it be easier to build Debian trixie (or some future release of Debian) from Guix, by creating a small bootstrap sandbox that can start to build Debian packages, and then making sure that the particular Debian release can idempotently rebuild itself in a reproducible way? You will eventually end up with a reproducible and re-bootstrapped Debian, which paves the way for a trustworthy release of Trisquel. Fortunately, such an endeavour appears to offer many rabbit holes. Preparing Guix container images for use in GitLab pipelines is one that I jumped into in the last few days, and just came out of.

Let’s go directly to the point of this article: here is a GitLab pipeline job that runs in a native Guix container image that builds libksba after installing the libgpg-error dependency from Guix using the pre-built substitutes.

test-amd64-latest-wget-configure-make-libksba:
  image: registry.gitlab.com/debdistutils/guix/container:latest
  before_script:
  - lndir /gnu/store/*profile/etc/ /etc
  - rm -f /etc/group
  - groupadd --system guixbuild
  - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
  - export HOME=/
  - export LANG=C.UTF-8
  - guix-daemon --disable-chroot --build-users-group=guixbuild &
  - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
  - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
  - guix describe
  - guix package -i libgpg-error
  - GUIX_PROFILE="//.guix-profile"
  - . "$GUIX_PROFILE/etc/profile"
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

You can put that in a .gitlab-ci.yml and push it to GitLab and you will end up with a nice pipeline job output.
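If you want to poke around in the image interactively before wiring it into a pipeline, something like the following should work, assuming the image is public and pullable from your environment:

$ podman run -it --rm registry.gitlab.com/debdistutils/guix/container:latest
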

As you may imagine, there are several things that are sub-optimal in the before_script above that ought to be taken care of by the Guix container image, and I hope to be able to remove as much of the ugliness as possible. However that doesn’t change that these images are useful now, and I wanted to announce this work to allow others to start testing them and possibly offer help. I have started to make use of these images in some projects, see for example the libntlm commit for that.

You are welcome to join me in the Guix container images for GitLab CI/CD project! Issues and merge requests are welcome – happy hacking folks!

18 December, 2024 06:43PM by simon

December 17, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

BH 1.87.0-1 on CRAN: New Upstream

Boost

Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over one hundred individual libraries. The BH package provides a sizeable subset of header-only libraries for (easier, no linking required) use by R. It is fairly widely used: the (partial) CRAN mirror logs (aggregated from the cloud mirrors) show over 38.5 million package downloads.

Version 1.87.0 of Boost was released last week following the regular Boost release schedule of April, August and December releases. As before, we packaged it almost immediately and started testing following our annual update cycle which strives to balance being close enough to upstream and not stressing CRAN and the user base too much. The reverse depends check revealed six packages requiring changes or adjustments. We opened issue #103 to coordinate the issue (just as we did in previous years). Our sincere thanks to Matt Fidler who fixed two packages pretty much immediately.

As I had not heard back from the other maintainers since filing the issue, I uploaded the package to CRAN suggesting that the coming winter break may be a good opportunity for the four other packages to catch up. CRAN concurred, and 1.87.0-1 is now available there.

There are no other changes apart from cosmetics in the DESCRIPTION file. For once, we did not add any new Boost libraries. The short NEWS entry follows.

Changes in version 1.87.0-1 (2024-12-17)

  • Upgrade to Boost 1.87.0, patched as usual to comment-out diagnostic suppression messages per the request of CRAN

  • Switched to Authors@R

Via my CRANberries, there is a diffstat report relative to the previous release. Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

17 December, 2024 10:34PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

The science of detecting LLM-generated text

This post is a review for Computing Reviews for The science of detecting LLM-generated text, an article published in Communications of the ACM.

While artificial intelligence (AI) applications for natural language processing (NLP) are no longer something new or unexpected, nobody can deny the revolution and hype that started, in late 2022, with the announcement of the first public version of ChatGPT. By then, synthetic translation was well established and regularly used, many chatbots had started attending users’ requests on different websites, voice recognition personal assistants such as Alexa and Siri had been widely deployed, and complaints of news sites filling their space with AI-generated articles were already commonplace. However, the ease of prompting ChatGPT or other large language models (LLMs) and getting extensive answers–its text generation quality is so high that it is often hard to discern whether a given text was written by an LLM or by a human–has sparked significant concern in many different fields. This article was written to present and compare the current approaches to detecting human- or LLM-authorship in texts.

The article presents several different ways LLM-generated text can be detected. The first, and main, taxonomy followed by the authors is whether the detection can be done aided by the LLM’s own functions (“white-box detection”) or only by evaluating the generated text via a public application programming interface (API) (“black-box detection”).

For black-box detection, the authors suggest training a classifier to discern the origin of a given text. Although this works at first, this task is doomed from its onset to be highly vulnerable to new LLMs generating text that will not follow the same patterns, and thus will probably evade recognition. The authors report that human evaluators find human-authored text to be more emotional and less objective, and use grammar to indicate the tone of the sentiment that should be used when reading the text–a trait that has not been picked up by LLMs yet. Human-authored text also tends to have higher sentence-level coherence, with less term repetition in a given paragraph. The frequency distribution for more and less common words is much more homogeneous in LLM-generated texts than in human-written ones.

White-box detection includes strategies whereby the LLMs will cooperate in identifying themselves in ways that are not obvious to the casual reader. This can include watermarking, be it rule based or neural based; in this case, both processes become a case of steganography, as the involvement of a LLM is explicitly hidden and spread through the full generated text, aiming at having a low detectability and high recoverability even when parts of the text are edited.

The article closes by listing the authors’ concerns about all of the above-mentioned technologies. Detecting an LLM, be it with or without the collaboration of the LLM’s designers, is more of an art than a science, and methods deemed as robust today will not last forever. We also cannot assume that LLMs will continue to be dominated by the same core players; LLM technology has been deeply studied, and good LLM engines are available as free/open-source software, so users needing to do so can readily modify their behavior. This article presents itself as merely a survey of methods available today, while also acknowledging the rapid progress in the field. It is timely and interesting, and easy to follow for the informed reader coming from a different subfield.

17 December, 2024 11:23AM

December 16, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#45: Some r-ci Updates

market monitor

Welcome to post 45 in the $R^4 series!

We introduced r-ci in post #32 nearly four years ago. It has found pretty widespread use and adoption, and we received a few kind words then (in the linked issue) and also more recently (in a follow-up comment) from which we merrily quote:

[…] almost 3 years later on and I have had zero problems with this CI setup. For people who want reliable R software, resources like these are invaluable.

And while we followed up with post #41 about r2u for simple continuous integration, we may not have posted when we based r-ci on r2u (for the obvious Linux usage case). So let’s make time now for a (comparatively smaller) update, and updated usage examples.

We made two changes in the last few days. One is an (obvious in hindsight) simplification. Given that the bootstrap step was always executed, and needed no parameters, we pulled it into a new aggregated setup simply called r-ci that includes it, so that it can be omitted as a step in the yaml file. Second, we recently needed Fortran on macOS too, and realized it was not installed by default, so we just added that too.

With that, a real and used example is now as simple as the screenshot to the left (and hence one ‘paragraph’ shorter); a hedged reconstruction of such a file follows below. The trained eye will no doubt observe that there is nothing specific to a given repo. And that is basically the key feature: we can simply copy this file around and get fast, easy, and reliable CI by taking advantage of the underlying robustness of r2u, which resolves all dependencies automagically and reliably. The option to enable macOS is also solid and compelling, as the GitHub runners are fast (but more ‘expensive’ in how they count against the limit of minutes, so again a tradeoff to make), as is the option to run coverage if one so desires. Some of my repos do too.
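For readers who cannot see the screenshot, here is a reconstruction of such a file; the action path and the run.sh sub-commands are from my recollection of the r-ci conventions, so please treat them as assumptions and consult the r-ci website for the canonical version:

name: ci

on:
  push:
  pull_request:

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # aggregated setup: now includes the bootstrap step, so no
      # separate bootstrap entry is needed here
      - uses: eddelbuettel/github-actions/r-ci@master
      - name: Dependencies
        run: ./run.sh install_deps
      - name: Test
        run: ./run.sh run_tests
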

Take a look at the r-ci website, which has more examples for the other supported CI services it can be used with, and feel free to ask questions as issues in the repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. Please report excessive re-aggregation in third-party for-profit settings.

16 December, 2024 10:57PM