April 15, 2023

Simon Josefsson

Sigstore protects Apt archives: apt-verify & apt-sigstore

Do you want your apt-get update to only ever use files whose hash checksums have been recorded in rekor, the globally immutable, tamper-resistant ledger provided by the Sigstore project? Well I thought you’d never ask, but now you can, thanks to my new projects apt-verify and apt-sigstore. I have not done proper stable releases yet, so this is work in progress. To try it out, adapt to the modern era of running random stuff from the Internet as root, and run the following commands. Use a container or virtual machine if you have trust issues.

apt-get install -y apt gpg bsdutils wget
wget -nv -O/usr/local/bin/rekor-cli 'https://github.com/sigstore/rekor/releases/download/v1.1.0/rekor-cli-linux-amd64'
echo afde22f01d9b6f091a7829a6f5d759d185dc0a8f3fd21de22c6ae9463352cf7d  /usr/local/bin/rekor-cli | sha256sum -c
chmod +x /usr/local/bin/rekor-cli
wget -nv -O/usr/local/bin/apt-verify-gpgv https://gitlab.com/debdistutils/apt-verify/-/raw/main/apt-verify-gpgv
chmod +x /usr/local/bin/apt-verify-gpgv
mkdir -p /etc/apt/verify.d
ln -s /usr/bin/gpgv /etc/apt/verify.d
echo 'APT::Key::gpgvcommand "apt-verify-gpgv";' > /etc/apt/apt.conf.d/75verify
wget -nv -O/etc/apt/verify.d/apt-rekor https://gitlab.com/debdistutils/apt-sigstore/-/raw/main/apt-rekor
chmod +x /etc/apt/verify.d/apt-rekor
apt-get update
less /var/log/syslog

If the stars are aligned (and the puppet projects debdistget and debdistcanary have run their GitLab CI/CD pipelines recently enough) you will see successful output from apt-get update, and your syslog will contain debug logs showing the entries from the rekor log for the release index files that you downloaded. See sample outputs in the README.
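
If you want to poke at the ledger by hand, something like the following should work (a rough sketch: the lists glob below is illustrative, and rekor-cli search --sha queries the public rekor instance):

sha=$(sha256sum /var/lib/apt/lists/*_InRelease | head -n1 | cut -d' ' -f1)
rekor-cli search --sha "$sha"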

If you get tired of it, disabling is easy:

chmod -x /etc/apt/verify.d/apt-rekor

Our project currently supports Trisquel GNU/Linux 10 (nabia) & 11 (aramo), PureOS 11 (byzantium), Gnuinos chimaera, Ubuntu 20.04 (focal) & 22.04 (jammy), Debian 10 (buster) & 11 (bullseye), and Devuan GNU+Linux 4.0 (chimaera). Others can be supported too; please open an issue about it, although my focus is on FSDG-compliant distributions and their upstreams.

This is a continuation of my previous work on apt-canary. I realized that it was better to separate out the generic part of apt-canary into my new project apt-verify, which offers a plugin-based method, and then rewrote apt-canary to be one such plugin. apt-sigstore’s apt-rekor is my second plugin for apt-verify.
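
To illustrate the plugin idea, here is a minimal sketch of a do-nothing plugin, assuming (as the gpgv wrapper design suggests) that each executable in /etc/apt/verify.d is invoked with the same arguments gpgv would receive and must exit 0 for the file to be accepted:

#!/bin/sh
# Hypothetical demo plugin: log what is being verified, then accept it.
# Install into /etc/apt/verify.d/ and chmod +x to enable, chmod -x to disable.
logger -t apt-verify-demo "verify arguments: $*"
exit 0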

Due to the design of things, and some current limitations, Ubuntu is the least stable, since they push out new signed InRelease files frequently (mostly due to their use of Phased-Update-Percentage) and the debdistget and debdistcanary CI/CD runs have a hard time keeping up. If you have insight on how to improve this, please comment on the issue tracking the race condition.

There are limitations to what additional safety a rekor-based solution actually provides, but I expect that to improve as I get a cosign-based approach up and running. Currently apt-rekor mostly makes targeted attacks less deniable. With a cosign-based approach, we could design things such that your machine only downloads updates when they have been publicly archived in an immutable fashion, or submitted for validation by a third party such as my reproducible build setup for Trisquel GNU/Linux aramo.

What do you think? Happy Hacking!

15 April, 2023 08:33AM by simon

April 14, 2023

Scarlett Gately Moore

KDE Snaps! Oh so many released! Debian update

KDE Elisa snap

It has been another very busy couple of weeks! I have released many snaps, fixed a few bugs, started the documentation, and have many snaps in progress. So without further ado here is my status update for KDE snaps:

Fixed two very important Krita bugs, including:

https://bugs.kde.org/show_bug.cgi?id=465307

Fixed several other minor bugs and triaged all the snap bugs I could find on bugs.kde.org. Remember to assign snap bugs to me! It makes my life easier and I see them quicker.

I have started some documentation for developers that want to assist in getting their KDE app snapped here: https://invent.kde.org/teams/neon/-/wikis/Snaps

I have released around 40 snaps! Listing them all would make for a very unruly blog post, so I have created a list of my releases here:

https://invent.kde.org/packaging/snapcraft-kde-applications/-/issues/30

I am happy to report there are several first-time releases, including a very kool music app called Elisa!

I have many more snaps in various stages and many more to come. My goal is to get them all done by the end of this three-month period (1.5 months left to get it done), a lofty goal, I know. What can I say, I am ambitious.

Now on to Debian. I am excited to announce that I will soon attain contributor status with Freexian (and hopefully, in time, become a collaborator). I am waiting for my mentor to upload my first security update(s) for Buster LTS. This will be a new type of contribution for me, but I am confident I will do well, and I have new hope that I have found my new forever home. I will be very useful in many areas and I look forward to growing with the company.

In closing, if you can spare some change, we have a few unexpected expenses (doesn’t everyone?!). Anything helps, thank you!

https://gofund.me/2c7b1808

14 April, 2023 06:39PM by sgmoore

Russ Allbery

Review: Babel

Review: Babel, by R.F. Kuang

Publisher: Harper Voyager
Copyright: August 2022
ISBN: 0-06-302144-7
Format: Kindle
Pages: 544

Babel, or the Necessity of Violence: An Arcane History of the Oxford Translators' Revolution, to give it its full title, is a standalone dark academia fantasy set in the 1830s and 1840s, primarily in Oxford, England. The first book of R.F. Kuang's previous trilogy, The Poppy War, was nominated for multiple awards and won the Compton Crook Award for best first novel. Babel is her fourth book.

Robin Swift, although that was not his name at the time, was born and raised in Canton and educated by an inexplicable English tutor his family could not have afforded. After his entire family dies of cholera, he is plucked from China by a British professor and offered a life in England as his ward. What follows is a paradise of books, a hell of relentless and demanding instruction, and an unpredictably abusive emotional environment, all aiming him towards admission to Oxford University. Robin will join University College and the Royal Institute of Translation.

The politics of this imperial Britain are almost precisely the same as in our history, but one of the engines is profoundly different. This world has magic. If words from two different languages are engraved on a metal bar (silver is best), the meaning and nuance lost in translation becomes magical power. With a careful choice of translation pairs, and sometimes additional help from other related words and techniques, the silver bar becomes a persistent spell. Britain's industrial revolution is in overdrive thanks to the country's vast stores of silver and the applied translation prowess of Babel.

This means Babel is also the only part of very racist Oxford that accepts non-white students and women. They need translators (barely) more than they care about maintaining social hierarchy; translation pairs only work when the translator is fluent in both languages. The magic is also stronger when meanings are more distinct, which is creating serious worries about classical and European languages. Those are still the bulk of Babel's work, but increased trade and communication within Europe is eroding the meaning distinctions and thus the amount of magical power. More remote languages, such as Chinese and Urdu, are full of untapped promise that Britain's colonial empire wants to capture. Professor Lovell, Robin's dubious benefactor, is a specialist in Chinese languages; Robin is a potential tool for his plans.

As Robin discovers shortly after arriving in Oxford, he is not the first of Lovell's tools. His predecessor turned against Babel and is trying to break its chokehold on translation magic. He wants Robin to help.

This is one of those books that is hard to review because it does some things exceptionally well and other things that did not work for me. It's not obvious if the latter are flaws in the book or a mismatch between book and reader (or, frankly, flaws in the reader). I'll try to explain as best I can so that you can draw your own conclusions.

First, this is one of the all-time great magical system hooks. The way words are tapped for power is fully fleshed out and exceptionally well-done. Kuang is a professional translator, which shows in the attention to detail on translation pairs. I think this is the best-constructed and explained word-based magic system I've read in fantasy. Many word-based systems treat magic as its own separate language that is weirdly universal. Here, Kuang does the exact opposite, and the result is immensely satisfying.

A fantasy reader may expect exploration of this magic system to be the primary point of the book, however, and this is not the case. It is an important part of the book, and its implications are essential to the plot resolution, but this is not the type of fantasy novel where the plot is driven by character exploration of the magic system. The magic system exists, the characters use it, and we do get some crunchy details, but the heart of the book is elsewhere. If you were expecting the typical relationship of a fantasy novel to its magic system, you may get a bit wrong-footed.

Similarly, this is historical fantasy, but it is the type of historical fantasy where the existence of magic causes no significant differences. For some people, this is a pet peeve; personally, I don't mind that choice in the abstract, but some of the specifics bugged me.

The villains of this book assert that any country could have done what Britain did in developing translation magic, and thus their hoarding of it is not immoral. They are obviously partly lying (this is a classic justification for imperialism), but it's not clear from the book how they are lying. Technologies (and magic here works like a technology) tend to concentrate power when they require significant capital investment, and tend to dilute power when they are portable and easy to teach. Translation magic feels like the latter, but its effect in the book is clearly the former, and I was never sure why.

England is not an obvious choice to be a translation superpower. Yes, it's a colonial empire, but India, southeast Asia, and most certainly Africa (the continent largely not appearing in this book) are home to considerably more languages from more wildly disparate families than western Europe. Translation is not a peculiarly European idea, and this magic system does not seem hard to stumble across. It's not clear why England, and Oxford in particular, is so dramatically far ahead. There is some sign that Babel is keeping the mechanics of translation magic secret, but that secret has leaked, seems easy to develop independently, and is simple enough that a new student can perform basic magic with a few hours of instruction. This does not feel like the kind of power that would be easy to concentrate, let alone to the extreme extent required by the last quarter of this book.

The demand for silver as a base material for translation magic provides a justification for mercantilism that avoids the confusing complexities of currency economics in our actual history, so fine, I guess, but it was a bit disappointing for this great of an idea for a magic system to have this small of an impact on politics.

I'll come to the actual thrust of this book in a moment, but first something else Babel does exceptionally well: dark academia.

The remainder of Robin's cohort at Oxford is Ramy, a dark-skinned Muslim from Calcutta; Victoire, a Haitian woman raised in France; and Letty, the daughter of a British admiral. All of them are non-white except Letty, and Letty and Victoire additionally have to deal with the blatant sexism of the time. (For example, they have to live several miles from Oxford because women living near the college would be a "distraction.")

The interpersonal dynamics between the four are exceptionally well done. Kuang captures the dislocation of going away to college, the unsettled life upheaval that makes it both easy and vital to form suddenly tight friendships, and the way that the immense pressure from classes and exams leaves one so devoid of spare emotional capacity that those friendships become both unbreakable and badly strained. Robin and Ramy almost immediately become inseparable in that type of college friendship in which profound trust and constant companionship happen first and learning about the other person happens afterwards.

It's tricky to talk about this without spoilers, but one of the things Kuang sets up with this friend group is a pointed look at intersectionality. Babel has gotten a lot of positive review buzz, and I think this is one of the reasons why. Kuang does not pass over or make excuses for characters in a place where many other books do. This mostly worked for me, but with a substantial caveat that I think you may want to be aware of before you dive into this book.

Babel is set in the 1830s, but it is very much about the politics of 2022. That does not necessarily mean that the politics are off for the 1830s; I haven't done the research to know, and it's possible I'm seeing the Tiffany problem (Jo Walton's observation that Tiffany is a historical 12th century women's name, but an author can't use it as a medieval name because readers think it sounds too modern). But I found it hard to shake the feeling that the characters make sense of their world using modern analytical frameworks of imperialism, racism, sexism, and intersectional feminism, although without using modern terminology, and characters from the 1830s would react somewhat differently. This is a valid authorial choice; all books are written for the readers of the time when they're published. But as with magical systems that don't change history, it's a pet peeve for some readers. If that's you, be aware that's the feel I got from it.

The true center of this book is not the magic system or the history. It's advertised directly in the title — the necessity of violence — although it's not until well into the book that the reader knows what that means. This is a book about revolution, what revolution means, what decisions you have to make along the way, how the personal affects the political, and the inadequacy of reform politics. It is hard, uncomfortable, and not gentle on its characters.

The last quarter of this book was exceptional, and I understand why it's getting so much attention. Kuang directly confronts the desire for someone else to do the necessary work, the hope that surely the people with power will see reason, and the feeling of despair when there are no good plans and every reason to wait and do nothing when atrocities are about to happen. If you are familiar with radical politics, these aren't new questions, but this is not the sort of thing that normally shows up in fantasy. It does not surprise me that Babel struck a nerve with readers a generation or two younger than me. It captures that heady feeling on the cusp of adulthood when everything is in flux and one is assembling an independent politics for the first time. Once I neared the end of the book, I could not put it down. The ending is brutal, but I think it was the right ending for this book.

There are two things, though, that I did not like about the political arc.

The first is that Victoire is a much more interesting character than Robin, but is sidelined for most of the book. The difference of perspectives between her and Robin is the heart of what makes the end of this book so good, and I wish that had started 300 pages earlier. Or, even better, I wish Victoire had been the protagonist; I liked Robin, but he's a very predictable character for most of the book. Victoire is not; even the conflicts she had earlier in the book, when she didn't get much attention in the story, felt more dynamic and more thoughtful than Robin's mix of guilt and anxiety.

The second is that I wish Kuang had shown more of Robin's intellectual evolution. All of the pieces of why he makes the decisions that he does are present in this book, and Kuang shows his emotional state (sometimes in agonizing detail) at each step, but the sense-making, the development of theory and ideology beneath the actions, is hinted at but not shown. This is a stylistic choice with no one right answer, but it felt odd because so much of the rest of the plot is obvious and telegraphed. If the reader shares Robin's perspective, I think it's easy to fill in the gaps, but it felt odd to read Robin giving clearly thought-out political analyses at the end of the book without seeing the hashing-out and argument with friends required to develop those analyses. I felt like I had to do a lot of heavy lifting as the reader, work that I wish had been done directly by the book.

My final note about this book is that I found much of it extremely predictable. I think that's part of why reviewers describe it as accessible and easy to read; accessibility and predictability can be two sides of the same coin. Kuang did not intend for this book to be subtle, and I think that's part of the appeal. But very few of Robin's actions for the first three-quarters of the book surprised me, and that's not always the reading experience I want. The end of the book is different, and I therefore found it much more gripping, but it takes a while to get there.

Babel is, for better or worse, the type of fantasy where the politics, economics, and magic system exist primarily to justify the plot the author wanted. I don't think the societal position of the Institute of Translation that makes the ending possible is that believable given the nature of the technology in question and the politics of the time, and if you are inclined to dig into the specifics of the world-building, I think you will find it frustrating. Where it succeeds brilliantly is in capturing the social dynamics of hothouse academic cohorts, and in making a sharp and unfortunately timely argument about the role of violence in political change, in a way that the traditionally conservative setting of fantasy rarely does.

I can't say Babel blew me away, but I can see why others liked it so much. If I had to guess, I'd say that the closer one is in age to the characters in the book and to that moment of political identity construction, the more it's likely to appeal.

Rating: 7 out of 10

14 April, 2023 03:10AM

John Goerzen

Easily Accessing All Your Stuff with a Zero-Trust Mesh VPN

Probably everyone is familiar with a regular VPN. The traditional use case is to connect to a corporate or home network from a remote location, and access services as if you were there.

But these days, the notions of “corporate network” and “home network” are less based around physical location. For instance, a company may have no particular office at all, or may have a number of offices plus a number of people working remotely, and so forth. A home network might have, say, a PVR and file server, while highly portable devices such as laptops, tablets, and phones may want to talk to each other regardless of location. For instance, a family member might be traveling with a laptop, another might be at a coffee shop, and those two devices might want to communicate, in addition to talking to the devices at home.

And, in both scenarios, there might be questions about giving limited access to friends. Perhaps you’d like to give a friend access to part of your file server, or as a company, you might have contractors working on a limited project.

Pretty soon you wind up with a mess of VPNs, forwarded ports, and tricks to make it all work. With the increasing prevalence of CGNAT, a lot of times you can’t even open a port to the public Internet. Each application or device probably has its own gateway just to make it visible on the Internet, some of which you pay for.

Then you add on the question of: should you really trust your LAN anyhow? With possibilities of guests using it, rogue access points, etc., the answer is probably “no”.

We can move the responsibility for dealing with NAT, fluctuating IPs, encryption, and authentication, from the application layer further down into the network stack. We then arrive at a much simpler picture for all.

So this page is fundamentally about making the network work, simply and effectively.

How do we make the Internet work in these scenarios?

We’re going to combine three concepts:

  1. A VPN, providing fully encrypted and authenticated communication and stable IPs
  2. Mesh Networking, in which devices automatically discover optimal paths to reach each other
  3. Zero-trust networking, in which we do not need to trust anything about the underlying LAN, because all our traffic uses the secure systems in points 1 and 2.

By combining these concepts, we arrive at some nice results:

  • You can ssh hostname, where hostname is one of your machines (server, laptop, whatever), and as long as hostname is up, you can reach it, wherever it is, wherever you are.
    • Combined with mosh, these sessions will be durable even across moving to other host networks.
    • You could just as well use telnet, because the underlying network should be secure.
  • You don’t have to mess with encryption keys, certs, etc., for every internal-only service. Since IPs are now trustworthy, that’s all you need. hosts.allow could make a comeback (see the sketch after this list)!
  • You have a way of transiting out of extremely restrictive networks. Every tool discussed here has a way of falling back on routing things via a broker (relay) on TCP port 443 if all else fails.
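
For instance, here is a hedged sketch of that hosts.allow revival, assuming Yggdrasil’s 200::/7 mesh range and a tcpd built with the extended options language (adjust the prefix for whatever tool you pick):

sshd : [200::]/7 : ALLOW
sshd : ALL : DENY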

There might sometimes be tradeoffs. For instance:

  • On LANs faster than 1Gbps, performance may degrade due to encryption and encapsulation overhead. However, these tools should let hosts discover the locality of each other and not send traffic over the Internet if the devices are local.
  • With some of these tools, hosts local to each other (on the same LAN) may be unable to find each other if they can’t reach the control plane over the Internet (Internet is down or provider is down)

Some other features that some of the tools provide include:

  • Easy sharing of limited access with friends/guests
  • Taking care of everything you need, including SSL certs, for exposing a certain on-net service to the public Internet
  • Optional routing of your outbound Internet traffic via an exit node on your network. Useful, for instance, if your local network is blocking tons of stuff.

Let’s dive in.

Types of Mesh VPNs

I’ll go over several types of meshes in this article:

  1. Fully decentralized with automatic hop routing

    This model has no special central control plane. Nodes discover each other in various ways, and establish routes to each other. These routes can be direct connections over the Internet, or via other nodes. This approach offers the greatest resilience. Examples I’ll cover include Yggdrasil and tinc.

  2. Automatic peer-to-peer with centralized control

    In this model, nodes, by default, communicate by establishing direct links between them. A regular node never carries traffic on behalf of other nodes. Special-purpose relays are used to handle cases in which NAT traversal is impossible. This approach tends to offer simple setup. Examples I’ll cover include Tailscale, Zerotier, Nebula, and Netmaker.

  3. Roll your own and hybrid approaches

    This is a “grab bag” of other ideas; for instance, running Yggdrasil over Tailscale.

Terminology

For the sake of consistency, I’m going to use common language to discuss things that have different terms in different ecosystems:

  • Every tool discussed here has a way of dealing with NAT traversal. It may assist with establishing direct connections (e.g., STUN), and if that fails, it may simply relay traffic between nodes. I’ll call such a relay a “broker”. This may or may not be the same system that is a control plane for a tool.
  • All of these systems operate over lower layers that are unencrypted. Those lower layers may be a LAN (wired or wireless, which may or may not have Internet access), or the public Internet (IPv4 and/or IPv6). I’m going to call the unencrypted lower layer, whatever it is, the “clearnet”.

Evaluation Criteria

Here are the things I want to see from a solution:

  • Secure, with all communications end-to-end encrypted and authenticated, and prevention of traffic from untrusted devices.
  • Flexible, adapting to changes in network topology quickly and automatically.
  • Resilient, without single points of failure, and with devices local to each other able to communicate even if cut off from the Internet or other parts of the network.
  • Private, minimizing leakage of information or metadata about me and my systems
  • Able to traverse CGNAT without having to use a broker whenever possible
  • A lesser requirement for me, but still a nice to have, is the ability to include others via something like Internet publishing or inviting guests.
  • Fully or nearly fully Open Source
  • Free or very cheap for personal use
  • Wide operating system support, including headless Linux on x86_64 and ARM.

Fully Decentralized VPNs with Automatic Hop Routing

Two systems fit this description: Yggdrasil and tinc. Let’s dive in.

Yggdrasil

I’ll start with Yggdrasil because I’ve written so much about it already. It featured in prior posts such as:

Yggdrasil can be a private mesh VPN, or something more

Yggdrasil can be a private mesh VPN, just like the other tools covered here. It’s unique, however, in that a key goal of the project is to also make it useful as a planet-scale global mesh network. As such, Yggdrasil is a testbed of new ideas in distributed routing designed to scale up to massive sizes and all sorts of connection conditions. As of 2023-04-10, the main global Yggdrasil mesh has over 5000 nodes in it. You can choose whether or not to participate.

Every node in a Yggdrasil mesh has a public/private keypair. Each node then has an IPv6 address (in a private address space) derived from its public key. Using these IPv6 addresses, you can communicate right away.
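
A quick sketch of what that looks like in practice (flag names as in recent Yggdrasil releases; the path is illustrative):

yggdrasil -genconf > /etc/yggdrasil.conf             # generate a keypair and a default config
yggdrasil -useconffile /etc/yggdrasil.conf -address  # print the mesh IPv6 derived from the public key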

Yggdrasil differs from most of the other tools here in that it does not necessarily seek to establish a direct link on the clearnet between, say, host A and host G for them to communicate. It will prefer such a direct link if it exists, but it is perfectly happy if it doesn’t.

The reason is that every Yggdrasil node is also a router in the Yggdrasil mesh. Let’s sit with that concept for a moment. Consider:

  • If you have a bunch of machines on your LAN, but only one of them can peer over the clearnet, that’s fine; all the other machines will discover this route to the world and use it when necessary.
  • All you need to run a broker is just a regular node with a public IP address. If you are participating in the global mesh, you can use one (or more) of the free public peers for this purpose.
  • It is not necessary for every node to know about the clearnet IP address of every other node (improving privacy). In fact, it’s not even necessary for every node to know about the existence of all the other nodes, so long as it can find a route to a given node when it’s asked to.
  • Yggdrasil can find one or more routes between nodes, and it can use this knowledge of multiple routes to aggressively optimize for varying network conditions, including combinations of, say, downloads and low-latency ssh sessions.

Behind the scenes, Yggdrasil calculates optimal routes between nodes as necessary, using a mesh-wide DHT for initial contact and then deriving more optimal paths. (You can also read more details about the routing algorithm.)

One final way that Yggdrasil is different from most of the other tools is that there is no separate control server. No node is “special”, in charge, the sole keeper of metadata, or anything like that. The entire system is completely distributed and auto-assembling.

Meeting neighbors

There are two ways that Yggdrasil knows about peers:

  • By broadcast discovery on the local LAN
  • By listening on a specific port (or being told to connect to a specific host/port)
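
In yggdrasil.conf terms, those two mechanisms correspond roughly to the following (key names from Yggdrasil 0.4; values are illustrative):

MulticastInterfaces: [ { Regex: ".*", Beacon: true, Listen: true } ]  # LAN broadcast discovery
Listen: ["tcp://0.0.0.0:12345"]           # accept peerings on a known port
Peers: ["tcp://peer.example.com:12345"]   # or dial out to a known host/port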

Sometimes this might lead to multiple ways to connect to a node; Yggdrasil prefers the connection auto-discovered by broadcast first, then the lowest-latency of the defined paths. In other words, when your laptops are in the same room as each other on your local LAN, your packets will flow directly between them without traversing the Internet.

Unique uses

Yggdrasil is uniquely suited to network-challenged situations. As an example, in a post-disaster situation, Internet access may be unavailable or flaky, yet there may be many local devices – perhaps ones that had never known of each other before – that could share information. Yggdrasil meets this situation perfectly. The combination of broadcast auto-detection, distributed routing, and so forth, basically means that if there is any physical path between two nodes, Yggdrasil will find and enable it.

Ad-hoc wifi is rarely used because it is a real pain. Yggdrasil actually makes it useful! Its broadcast discovery doesn’t require any IP address provisioned on the interface at all (it just uses the IPv6 link-local address), so you don’t need to figure out a DHCP server or some such. And Yggdrasil will tend to perform routing along the contours of the RF path. So you could have a laptop in the middle of a long span relaying communications between people farther out, because it can see both. Or even a chain of such things.

Yggdrasil: Security and Privacy

Yggdrasil’s mesh is aggressively greedy. It will peer with any node it can find (unless told otherwise) and will find a route to anywhere it can. There are two main ways to make sure you keep unauthorized traffic out: by restricting who can talk to your mesh, and by firewalling the Yggdrasil interface. Both can be used, and they can be used simultaneously.

I’ll discuss firewalling more at the end of this article. Basically, you’ll almost certainly want to do this if you participate in the public mesh, because doing so is akin to having a globally-routable public IP address direct to your device.

If you want to restrict who can talk to your mesh, you just disable the broadcast feature on all your nodes (an empty MulticastInterfaces section in the config) and avoid telling any of your nodes to connect to a public peer. You can set a list of authorized public keys that can connect to your nodes’ listening interfaces, which you’ll probably want to do. You will probably want to either open up some inbound ports (if you can) or set up a node with a known clearnet IP on a place like a $5/mo VPS to help with NAT traversal (again, setting AllowedPublicKeys as appropriate). Yggdrasil doesn’t allow filtering multicast clients by public key, only by network interface, which is why we disable broadcast discovery.

You can easily enough teach Yggdrasil about static internal LAN IPs of your nodes and have things work that way. (Or, set up an internal “gateway” node or two that the clients just connect to when they’re local.) But fundamentally, you need to put a bit more thought into this with Yggdrasil than with the other tools here, which are closed-only.
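
As a concrete sketch, the relevant fragment of such a locked-down yggdrasil.conf might look something like this (key names from Yggdrasil 0.4; the peer URL and key are placeholders):

Peers: ["tls://my-vps.example.com:443"]   # your own clearnet node, not a public peer
MulticastInterfaces: []                   # disable LAN broadcast discovery
AllowedPublicKeys: [
  "0000000000000000000000000000000000000000000000000000000000000000"  # your nodes' keys only
]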

Compared to some of the other tools here, Yggdrasil is better about information leakage; nodes only know details, such as clearnet IPs, of directly-connected peers. You can obtain the list of directly-connected peers of any known node in the mesh – but that list is the public keys of the directly-connected peers, not the clearnet IPs.

Some of the other tools contain a limited integrated firewall of sorts (with limited ACLs and such). Yggdrasil does not, but is fully compatible with on-host firewalls. I recommend these anyway even with many other tools.

Yggdrasil: Connectivity and NAT traversal

Compared to the other tools, Yggdrasil is an interesting mix. It provides a fully functional mesh and facilitates connectivity in situations in which no other tool can. Yet its NAT traversal, while it exists and does work, results in using a broker under some of the more challenging CGNAT situations more often than some of the other tools, which can impede performance.

Yggdrasil’s underlying protocol is TCP-based. Before you run away screaming that it must be slow and unreliable like OpenVPN over TCP – it’s not, and it is even surprisingly good around bufferbloat. I’ve found its performance to be on par with the other tools here, and it works as well as I’d expect even on flaky 4G links.

Overall, the NAT traversal story is mixed. On the one hand, you can run a node that listens on port 443 – and Yggdrasil can even make it speak TLS (even though that’s unnecessary from a security standpoint), so you can likely get out of most restrictive firewalls you will ever encounter. If you join the public mesh, know that plenty of public peers do listen on port 443 (and other well-known ports like 53, plus random high-numbered ones).

If you connect your system to multiple public peers, there is a chance – though a very small one – that some public transit traffic might be routed via it. In practice, public peers hopefully are already peered with each other, preventing this from happening (you can verify this with yggdrasilctl debug_remotegetpeers key=ABC...). I have never experienced a problem with this. Also, since latency is a factor in routing for Yggdrasil, it is highly unlikely that random connections we use are going to be competitive with datacenter peers.

Yggdrasil: Sharing with friends

If you’re open to participating in the public mesh, this is one of the easiest things of all. Have your friend install Yggdrasil, point them to a public peer, give them your Yggdrasil IP, and that’s it. (Well, presumably you also open up your firewall – you did follow my advice to set one up, right?)

If your friend is visiting at your location, they can just hop on your wifi, install Yggdrasil, and it will automatically discover a route to you. Yggdrasil even has a zero-config mode for ephemeral nodes such as certain Docker containers.
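
That zero-config mode is just a command-line flag; a sketch (the -autoconf flag brings the node up with an ephemeral keypair and multicast discovery, no config file needed):

yggdrasil -autoconf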

Yggdrasil doesn’t directly support publishing to the clearnet, but it is certainly possible to proxy (or even NAT) to/from the clearnet, and people do.

Yggdrasil: DNS

There is no particular extra DNS in Yggdrasil. You can, of course, run a DNS server within Yggdrasil, just as you can anywhere else. Personally I just add relevant hosts to /etc/hosts and leave it at that, but it’s up to you.
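
For example (the address here is a made-up one in Yggdrasil’s 200::/7 range):

# /etc/hosts
200:1234:5678:9abc:def0:1234:5678:9abc  myserver.ygg myserver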

Yggdrasil: Source code, pricing, and portability

Yggdrasil is fully open source (LGPLv3 plus additional permissions in an exception) and highly portable. It is written in Go, and has prebuilt binaries for all major platforms (including a Debian package which I made).

There is no charge for anything with Yggdrasil. Listed public peers are free and run by volunteers. You can run your own peers if you like; they can be public and unlisted, public and listed (just submit a PR to get it listed), or private (accepting connections only from certain nodes’ keys). A “peer” in this case is just a node with a known clearnet IP address.

Yggdrasil encourages use in other projects. For instance, NNCP integrates a Yggdrasil node for easy communication with other NNCP nodes.

Yggdrasil conclusions

Yggdrasil is tops in reliability (having no single point of failure) and flexibility. It will maintain opportunistic connections between peers even if the Internet is down. The unique added feature of being able to be part of a global mesh is a nice one. The tradeoffs include being more prone to need to use a broker in restrictive CGNAT environments. Some other tools have clients that override the OS DNS resolver to also provide resolution of hostnames of member nodes; Yggdrasil doesn’t, though you can certainly run your own DNS infrastructure over Yggdrasil (or, for that matter, let public DNS servers provide Yggdrasil answers if you wish).

There is also a need to pay more attention to firewalling or maintaining separation from the public mesh. However, as I explain below, many other options have potential impacts if the control plane, or your account for it, are compromised, meaning you ought to firewall those, too. Still, it may be a more immediate concern with Yggdrasil.

Although Yggdrasil is listed as experimental, I have been using it for over a year and have found it to be rock-solid. They did change how mesh IPs were calculated when moving from 0.3 to 0.4, causing a global renumbering, so just be aware that this is a possibility while it is experimental.

tinc

tinc is the oldest tool on this list; version 1.0 came out in 2003! You can think of tinc as something akin to “an older Yggdrasil without the public option.”

I will be discussing tinc 1.0.36, the latest stable version, which came out in 2019. The development branch, 1.1, has been going since 2011 and had its latest release in 2021. The last commit to the Github repo was in June 2022.

Tinc is the only tool here to support both tun and tap style interfaces. I go into the difference more in the Zerotier review below. Tinc actually provides a better tap implementation than Zerotier, with various sane options for broadcasts, but I still think the call for an Ethernet, as opposed to IP, VPN is small.

To configure tinc, you generate a per-host configuration and then distribute it to every tinc node. It contains a host’s public key. Therefore, adding a host to the mesh means distributing its key everywhere; de-authorizing it means removing its key everywhere. This makes it rather unwieldy.
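
To give a feel for the shape of it, here is a hedged sketch for a mesh named “mesh” (file layout per the tinc 1.0 documentation; names, addresses, and the subnet are placeholders):

# /etc/tinc/mesh/tinc.conf on the laptop
Name = laptop
ConnectTo = vps

# /etc/tinc/mesh/hosts/vps -- this file gets distributed to every node
Address = vps.example.com
Subnet = 192.168.9.1/32
# ...followed by the RSA public key, generated with: tincd -n mesh -K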

tinc can do LAN broadcast discovery and mesh routing, but generally speaking you must manually teach it where to connect initially. Somewhat confusingly, the examples all mention listing a public address for a node. This doesn’t make sense for a laptop, and I suspect you’d just omit it. I think that address is used for something akin to a Yggdrasil peer with a clearnet IP.

Unlike all of the other tools described here, tinc has no tool to inspect the running state of the mesh.

Some of the properties of tinc made it clear I was unlikely to adopt it, so this review wasn’t as thorough as that of Yggdrasil.

tinc: Security and Privacy

As mentioned above, every host in the tinc mesh is authenticated based on its public key. However, to be more precise, this key is validated only at the point it connects to its next hop peer. (To be sure, this is also the same as how the list of allowed pubkeys works in Yggdrasil.) Since IPs in tinc are not derived from their key, and any host can assign itself whatever mesh IP it likes, this implies that a compromised host could impersonate another.

It is unclear whether packets are end-to-end encrypted when using a tinc node as a router. The fact that they can be routed at the kernel level by the tun interface implies that they may not be.

tinc: Connectivity and NAT traversal

I was unable to find much information about NAT traversal in tinc, other than that it does support it. tinc can run over UDP or TCP and auto-detects which to use, preferring UDP.

tinc: Sharing with friends

tinc has no special support for this, and the difficulty of configuration makes it unlikely you’d do this with tinc.

tinc: Source code, pricing, and portability

tinc is fully open source (GPLv2). It is written in C and generally portable. It supports some very old operating systems. Mobile support is iffy.

tinc does not seem to be very actively maintained.

tinc conclusions

I haven’t mentioned performance in my other reviews (see the section at the end of this post). But tinc’s is so poor as to run only about 300Mbps on my 2.5Gbps network. That’s 1/3 the speed of Yggdrasil or Tailscale. Combine that with the unwieldiness of adding hosts and some uncertainties in security, and I’m not going to be using tinc.

Automatic Peer-to-Peer Mesh VPNs with centralized control

These tend to be the options that are most frequently discussed. Let’s look at each of them.

Tailscale

Tailscale is a popular choice in this type of VPN. To use Tailscale, you first sign up on tailscale.com. Then, you install the tailscale client on each machine. On first run, it prints a URL for you to click on to authorize the client to your mesh (“tailnet”). Tailscale assigns a mesh IP to each system. The Tailscale client lets the Tailscale control plane gather IP information about each node, including all detectable public and private clearnet IPs.
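
Getting started looks roughly like this (the install script URL is the one Tailscale documents; tailscale up prints the authorization URL mentioned above):

curl -fsSL https://tailscale.com/install.sh | sh   # official install script
tailscale up                                       # prints a login URL to authorize this node
tailscale ip                                       # show the mesh IPs assigned to this machine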

When you attempt to contact a node via Tailscale, the client will fetch the known contact information from the control plane and attempt to establish a link. If it can contact over the local LAN, it will (it doesn’t have broadcast autodetection like Yggdrasil; the information must come from the control plane). Otherwise, it will try various NAT traversal options. If all else fails, it will use a broker to relay traffic; Tailscale calls a broker a DERP relay server. Unlike Yggdrasil, a Tailscale node never relays traffic for another; all connections are either direct P2P or via a broker.

Tailscale, like several others, is based around Wireguard, though it uses wireguard-go rather than the in-kernel Wireguard.

Tailscale has a number of somewhat unique features in this space:

  • Funnel, which lets you expose ports on your system to the public Internet via the VPN.
  • Exit nodes, which automate the process of routing your public Internet traffic over some other node in the network. This is possible with every tool mentioned here, but Tailscale makes switching it on or off a couple of quick commands away (see the sketch after this list).
  • Node sharing, which lets you share a subset of your network with guests
  • A fantastic set of documentation, easily the best of the bunch.
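
The exit node flow, sketched (flags as in current clients; the exit node may also need to be approved in the admin console, and the IP is a placeholder):

tailscale up --advertise-exit-node     # on the node that will carry the traffic

tailscale up --exit-node=100.64.0.1    # on the client, route Internet traffic via that node
tailscale up --exit-node=              # clearing the flag switches it back off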

Funnel, in particular, is interesting. With a couple of “tailscale serve”-style commands, you can expose a directory tree (or a development webserver) to the world. Tailscale gives you a public hostname, obtains a cert for it, and proxies inbound traffic to you. This is subject to some unspecified bandwidth limits, and you can only choose from three public ports, so it’s not really a production solution – but as a quick and easy way to demonstrate something cool to a friend, it’s a neat feature.
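
A sketch of that flow (serve/funnel syntax has shifted between releases, so treat this as illustrative rather than exact):

tailscale serve https / http://127.0.0.1:3000   # proxy a local dev server onto your tailnet
tailscale funnel 443 on                         # expose that serve target to the public Internet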

Tailscale: Security and Privacy

With Tailscale, as with the other tools in this category, one of the main threats to consider is the control plane. What are the consequences of a compromise of Tailscale’s control plane, or of the credentials you use to access it?

Let’s begin with the credentials used to access it. Tailscale operates no identity system itself, instead relying on third parties. For individuals, this means Google, Github, or Microsoft accounts; Okta and other SAML and similar identity providers are also supported, but this runs into complexity and expense that most individuals don’t want to take on. Unfortunately, all three of those account types often have saved auth tokens in a browser. Personally I would rather have a separate, very secure, login.

If a person does compromise your account or the Tailscale servers themselves, they can’t directly eavesdrop on your traffic because it is end-to-end encrypted. However, assuming an attacker obtains access to your account, they could:

  • Tamper with your Tailscale ACLs, permitting new actions
  • Add new nodes to the network
  • Forcibly remove nodes from the network
  • Enable or disable optional features

Of note is that they cannot just commandeer an existing IP. I would say the riskiest possibility here is that they could add new nodes to the mesh. Because they could also tamper with your ACLs, they could then proceed to attempt to access all your internal services. They could even turn on service collection and have Tailscale tell them what and where all the services are.

Therefore, as with other tools, I recommend a local firewall on each machine with Tailscale. More on that below.
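
A minimal sketch of such a firewall using nftables (the interface name and the single allowed port are assumptions; adapt to taste):

# Allow only ssh in from the mesh; drop everything else arriving on tailscale0.
nft add table inet mesh
nft add chain inet mesh input '{ type filter hook input priority 0 ; policy accept ; }'
nft add rule inet mesh input iifname "tailscale0" tcp dport 22 accept
nft add rule inet mesh input iifname "tailscale0" drop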

Tailscale has a new alpha feature called tailnet lock which helps with this problem. It requires existing nodes in the mesh to sign a request for a new node to join. Although this doesn’t address ACL tampering and some of the other things, it does represent a significant help with the most significant concern. However, tailnet lock is in alpha, only available on the Enterprise plan, and has a waitlist, so I have been unable to test it.

Any Tailscale node can request the IP addresses belonging to any other Tailscale node. The Tailscale control plane captures, and exposes to you, this information about every node in your network: the OS hostname, IP addresses and port numbers, operating system, creation date, last seen timestamp, and NAT traversal parameters. You can optionally enable service data capture as well, which sends data about open ports on each node to the control plane.

Tailscale likes to highlight their key expiry and rotation feature. By default, all keys expire after 180 days, and traffic to and from the expired node will be interrupted until they are renewed (basically, you re-login with your provider and do a renew operation). Unfortunately, the only mention I can see of warning of impending expiration is in the Windows client, and even there you need to edit a registry key to get the warning more than the default 24 hours in advance. In short, it seems likely to cut off communications when it’s most important. You can disable key expiry on a per-node basis in the admin console web interface, and I mostly do, due to not wanting to lose connectivity at an inopportune time.

Tailscale: Connectivity and NAT traversal

When thinking about reliability, the primary consideration here is being able to reach the Tailscale control plane. While it is possible in limited circumstances to reach nodes without the Tailscale control plane, it is “a fairly brittle setup” and notably will not survive a client restart. So if you use Tailscale to reach other nodes on your LAN, that won’t work unless your Internet is up and the control plane is reachable.

Assuming your Internet is up and Tailscale’s infrastructure is up, there is little to be concerned with. Your own comfort level with cloud providers and your Internet should guide you here.

Tailscale wrote a fantastic article about NAT traversal and they, predictably, do very well with it. Tailscale prefers UDP but falls back to TCP if needed. Broker (DERP) servers step in as a last resort, and Tailscale clients automatically select the best ones. I’m not aware of anything that is more successful with NAT traversal than Tailscale. This maximizes the situations in which a direct P2P connection can be used without a broker.

I have found Tailscale to be a bit slow to notice changes in network topology compared to Yggdrasil, and it sometimes needs a kick in the form of restarting the client process to re-establish communications after a network change. However, it’s possible (maybe even probable) that if I’d waited a bit longer, it would have sorted this all out.

Tailscale: Sharing with friends

I touched on the funnel feature earlier. The sharing feature lets you give an invite to an outsider. By default, a person accepting a share can make only outgoing connections to the network they’re invited to, and cannot receive incoming connections from that network – this makes sense. When sharing an exit node, you get a checkbox that lets you share access to the exit node as well. Of course, the person accepting the share needs to install the Tailscale client. The combination of funnel and sharing makes Tailscale the best for ad-hoc sharing.

Tailscale: DNS

Tailscale’s DNS is called MagicDNS. It runs as a layer atop your standard DNS – taking over /etc/resolv.conf on Linux – and provides resolution of mesh hostnames and some other features. This is a concept that is pretty slick.

It also is a bit flaky on Linux; dueling programs want to write to /etc/resolv.conf. I can’t really say this is entirely Tailscale’s fault; they document the problem and some workarounds.

I would love to be able to add custom records to this service; for instance, to override the public IP for a service to use the in-mesh IP. Unfortunately, that’s not yet possible. However, MagicDNS can query existing nameservers for certain domains in a split DNS setup.

Tailscale: Source code, pricing, and portability

Tailscale is almost fully open source and the client is highly portable. The client is open source (BSD 3-clause) on open source platforms, and closed source on closed source platforms. The DERP servers are open source. The coordination server is closed source, although there is an open source coordination server called Headscale (also BSD 3-clause) made available with Tailscale’s blessing and informal support. It supports most, but not all, features in the Tailscale coordination server.

Tailscale’s pricing (which does not apply when using Headscale) provides a free plan for 1 user with up to 20 devices. A Personal Pro plan expands that to 100 devices for $48 per year - not a bad deal at $4/mo. A “Community on Github” plan also exists, and then there are more business-oriented plans as well. See the pricing page for details.

As a small note, I appreciated Tailscale’s install script. It properly added Tailscale’s apt key in a way that it can only be used to authenticate the Tailscale repo, rather than as a systemwide authenticator. This is a nice touch and speaks well of their developers.
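
That scoping is the signed-by pattern; the resulting apt sources line looks something like this (the keyring path matches Tailscale’s Debian instructions; the suite name is a placeholder):

deb [signed-by=/usr/share/keyrings/tailscale-archive-keyring.gpg] https://pkgs.tailscale.com/stable/debian bullseye main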

Tailscale conclusions

Tailscale is tops in sharing and has a broad feature set and excellent documentation. Like other solutions with a centralized control plane, device communications can stop working if the control plane is unreachable, and the threat model of the control plane should be carefully considered.

Zerotier

Zerotier is a close competitor to Tailscale, and is similar to it in a lot of ways. So rather than duplicate all of the Tailscale information here, I’m mainly going to describe how it differs from Tailscale.

The primary difference between the two is that Zerotier emulates an Ethernet network via a Linux tap interface, while Tailscale emulates a TCP/IP network via a Linux tun interface.

However, Zerotier has a number of things that make it a somewhat imperfect Ethernet emulator. For one, it has a problem with broadcast amplification; the machine sending the broadcast sends it to all the other nodes that should receive it (up to a set maximum). I wouldn’t want to have a lot of programs broadcasting on a slow link. While in theory this could let you run Netware or DECNet across Zerotier, I’m not really convinced there’s much call for that these days, and Zerotier is clearly IP-focused as it allocates IP addresses and such anyhow. Zerotier provides special support for emulated ARP (IPv4) and NDP (IPv6). While you could theoretically run Zerotier as a bridge, this eliminates the zero trust principle, and Tailscale supports subnet routers, which provide much of the same feature set anyhow.

A somewhat obscure feature, but possibly useful, is Zerotier’s built-in support for multipath WAN for the public interface. This actually lets you do a somewhat basic kind of channel bonding for WAN.

Zerotier: Security and Privacy

The picture here is similar to Tailscale, with the difference that you can create a Zerotier-local account rather than relying on cloud authentication. I was unable to find as much detail about Zerotier as I could about Tailscale - notably I couldn’t find anything about how “sticky” an IP address is. However, the configuration screen lets me delete a node and assign additional arbitrary IPs within a subnet to other nodes, so I think the assumption here is that if your Zerotier account (or the Zerotier control plane) is compromised, an attacker could remove a legit device, add a malicious one, and assign the previous IP of the legit device to the malicious one. I’m not sure how to mitigate against that risk, as firewalling specific IPs is ineffective if an attacker can simply take them over. Zerotier also lacks anything akin to Tailnet Lock.

For this reason, I didn’t proceed much further in my Zerotier evaluation.

Zerotier: Connectivity and NAT traversal

Like Tailscale, Zerotier has NAT traversal with STUN. However, it looks like it’s more limited than Tailscale’s, and in particular is incompatible with double NAT that is often seen these days. Zerotier operates brokers (“root servers”) that can do relaying, including TCP relaying. So you should be able to connect even from hostile networks, but you are less likely to form a P2P connection than with Tailscale.

Zerotier: Sharing with friends

I was unable to find any special features relating to this in the Zerotier documentation. Therefore, it would be at the same level as Yggdrasil: possible, maybe even not too difficult, but without any specific help.

Zerotier: DNS

Unlike Tailscale, Zerotier does not support automatically adding DNS entries for your hosts. Therefore, your options are approximately the same as Yggdrasil, though with the added option of pushing configuration pointing to your own non-Zerotier DNS servers to the client.

Zerotier: Source code, pricing, and portability

The client ZeroTier One is available on Github under a custom “business source license” which prevents you from using it in certain settings. This license would preclude it being included in Debian. Their library, libzt, is available under the same license. The pricing page mentions a community edition for self hosting, but the documentation is sparse and it was difficult to understand what its feature set really is.

The free plan lets you have 1 user with up to 25 devices. Paid plans are also available.

Zerotier conclusions

Frankly I don’t see much reason to use Zerotier. The “virtual Ethernet” model seems to be a weird hybrid that doesn’t bring much value. I’m concerned about the implications of a compromise of a user account or the control plane, and it lacks a lot of Tailscale features (MagicDNS and sharing). The only thing it may offer in particular is multipath WAN, but that’s esoteric enough – and also solvable at other layers – that it doesn’t seem all that compelling to me. Add to that the strange license and, to me anyhow, I don’t see much reason to bother with it.

Netmaker

Netmaker is one of the projects that is making noise these days. Netmaker is the only one here that is a wrapper around in-kernel Wireguard, which can make a performance difference when talking to peers on a 1Gbps or faster link. Also, unlike other tools, it has an ingress gateway feature that lets people that don’t have the Netmaker client, but do have Wireguard, participate in the VPN. I believe I also saw a reference somewhere to nodes as routers as with Yggdrasil, but I’m failing to dig it up now.

The project is in a bit of an early state; you can sign up for an “upcoming closed beta” with a SaaS host, but really you are generally pointed to self-hosting using the code in the github repo. There are community and enterprise editions, but it’s not clear how to actually choose. The server has a bunch of components: binary, CoreDNS, database, and web server. It also requires elevated privileges on the host, in addition to a container engine. Contrast that to the single binary that some others provide.

It looks like releases are frequent, but sometimes break things, and have a somewhat more laborious upgrade process than most.

I don’t want to spend a lot of time managing my mesh. So because of the server’s heavy requirements, the labor-intensive upgrades, and its taking over iptables and such on the server, I didn’t proceed with a more in-depth evaluation of Netmaker. It has a lot of promise, but for me, it doesn’t seem to be in a state that will meet my needs yet.

Nebula

Nebula is an interesting mesh project that originated within Slack, seems to still be primarily sponsored by Slack, but is also being developed by Defined Networking (though their product looks early right now). Unlike the other tools in this section, Nebula doesn’t have a web interface at all. Defined Networking looks likely to provide something of a SaaS service, but for now, you will need to run a broker (“lighthouse”) yourself; perhaps on a $5/mo VPS.

Due to the poor firewall traversal properties, I didn’t do a full evaluation of Nebula, but it still has a very interesting design.

Nebula: Security and Privacy

Since Nebula lacks a traditional control plane, the root of trust in Nebula is a CA (certificate authority). The documentation gives this example of setting it up:

./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
./nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptop,home,ssh"
./nebula-cert sign -name "server1" -ip "192.168.100.9/24" -groups "servers"
./nebula-cert sign -name "host3" -ip "192.168.100.10/24"

So the cert contains your IP, hostname, and group allocation. Each host in the mesh gets your CA certificate, and the per-host cert and key generated from each of these steps.
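
One step the example above omits is creating the CA itself. Per the Nebula documentation, that is a single command (the name here is just an example label):

./nebula-cert ca -name "My Home Mesh"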

This leads to a really nice security model. Your CA is the gatekeeper to what is trusted in your mesh. You can even have it airgapped or something to make it exceptionally difficult to breach the perimeter.

Nebula contains an integrated firewall. Because the ability to keep out unwanted nodes is so strong, I would say this may be the one mesh VPN you might consider using without bothering with an additional on-host firewall.

You can define static mappings from a Nebula mesh IP to a clearnet IP. I haven’t found much documentation on this, but theoretically, if NAT traversal isn’t required, these static mappings may allow Nebula nodes to reach each other even if the Internet is down. I don’t know if this is truly the case, however.

Nebula: Connectivity and NAT traversal

This is a weak point of Nebula. Nebula sends all traffic over a single UDP port; there is no provision for using TCP. This is an issue at certain hotel and other public networks which open only TCP egress ports 80 and 443.

I couldn’t find a lot of detail on what Nebula’s NAT traversal is capable of, but according to a GitHub issue, this has been a sore spot for years and isn’t as capable as Tailscale’s.

You can designate nodes in Nebula as brokers (relays). The concept is the same as Yggdrasil’s, but less versatile: you have to manually designate which relay to use, and it’s unclear to me what happens if different nodes designate different relays. Keep in mind that this always happens over a UDP port.
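
For illustration, relay designation lives in each node’s config.yml. A minimal sketch, assuming Nebula 1.6 or later and reusing the lighthouse IP from the cert example above:

# config.yml on the relay node itself:
relay:
  am_relay: true

# config.yml on the other nodes:
relay:
  use_relays: true
  relays:
    - 192.168.100.1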

Nebula: Sharing with friends

There is no particular support here.

Nebula: DNS

Nebula has experimental DNS support. In contrast with Tailscale, which has an internal DNS server on every node, Nebula only runs a DNS server on a lighthouse. This means that it can’t forward requests to a DNS server that’s upstream for your laptop’s particular current location. Actually, Nebula’s DNS server doesn’t forward at all. It also doesn’t resolve its own name.
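
For reference, the experimental DNS server is switched on in the lighthouse’s config.yml; a minimal sketch (the host and port values are assumptions taken from common examples):

lighthouse:
  am_lighthouse: true
  serve_dns: true
  dns:
    host: 0.0.0.0
    port: 53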

The Nebula documentation makes reference to using multiple lighthouses, which you may want to do for DNS redundancy or performance, but it’s unclear to me if this would make each lighthouse form a complete picture of the network.

Nebula: Source code, pricing, and portability

Nebula is fully open source (MIT). It consists of a single Go binary and configuration. It is fairly portable.

Nebula conclusions

I am attracted to Nebula’s unique security model. I would probably be more seriously considering it if not for the lack of support for TCP and poor general NAT traversal properties. Its datacenter connectivity heritage does show through.

Roll your own and hybrid

Here is a grab bag of ideas:

Running Yggdrasil over Tailscale

One possibility would be to use Tailscale for its superior NAT traversal, then allow Yggdrasil to run over it. (You will need a firewall to prevent Tailscale from trying to run over Yggdrasil at the same time!) This creates a closed network with all the benefits of Yggdrasil, yet getting the NAT traversal from Tailscale.

Drawbacks might be the overhead of the double encryption and double encapsulation. A good Yggdrasil peer may wind up being faster than this anyhow.
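
A rough sketch of the idea, with assumed interface names, addresses, and ports (Tailscale commonly uses UDP 41641, but the port varies with configuration):

# /etc/yggdrasil.conf on node A: listen on its Tailscale IP (assumed 100.64.0.1)
Listen: ["tcp://100.64.0.1:12345"]

# /etc/yggdrasil.conf on node B: peer with node A over Tailscale
Peers: ["tcp://100.64.0.1:12345"]

# On both: stop Tailscale's own UDP traffic from egressing via Yggdrasil's
# TUN interface (assumed tun0), so the layering only goes one way
ip6tables -A OUTPUT -o tun0 -p udp --dport 41641 -j REJECT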

Public VPN provider for NAT traversal

A public VPN provider such as Mullvad will often offer incoming port forwarding and nodes in many cities. This could be an attractive way to solve a bunch of NAT traversal problems: just use one of those services to get you an incoming port, and run whatever you like over that.

Be aware that a number of public VPN clients have a “kill switch” to prevent any traffic from egressing without using the VPN; see, for instance, Mullvad’s. You’ll need to disable this if you are running a mesh atop it.
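
Sticking with Yggdrasil as the example, the setup might look something like this (the provider endpoint and forwarded port are placeholders):

# /etc/yggdrasil.conf on the node behind the VPN provider, which forwards
# (assumed) TCP port 51234 to this machine:
Listen: ["tcp://0.0.0.0:51234"]

# /etc/yggdrasil.conf on other nodes: peer against the provider's endpoint
Peers: ["tcp://203.0.113.7:51234"]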

Other

Combining with local firewalls

For most of these tools, I recommend using a local firewall in conjunction with them. I have been using firehol and find it to be quite nice. This means you don’t have to trust the mesh, the control plane, or whatever. The catch is that you do need your mesh VPN to provide a strong association between IP address and node. Most, but not all, do.
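
As a sketch of what that can look like with firehol, assuming an Ethernet interface eth0, a mesh TUN interface tun0, and a made-up trusted mesh address (consult the firehol documentation before relying on this):

version 6

# LAN interface: drop by default, accept ssh in, allow anything out
interface eth0 lan
    policy drop
    server ssh accept
    client all accept

# Mesh interface: only accept ssh from one trusted mesh node
interface tun0 mesh
    policy drop
    server ssh accept src "200:1234:5678::2"
    client all accept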

Performance

I tested some of these for performance using iperf3 on a 2.5Gbps LAN. Here are the results. All speeds are in Mbps.

Tool                 iperf3 (default)   iperf3 -P 10   iperf3 -R
Direct (no VPN)      2406               2406           2764
Wireguard (kernel)   1515               1566           2027
Yggdrasil            892                1126           1105
Tailscale            950                1034           1085
Tinc                 296                300            277

You can see that Wireguard was significantly faster than the other options. Tailscale and Yggdrasil were roughly comparable, and Tinc was terrible.
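
The invocations behind those columns were presumably along these lines (the mesh IP is a placeholder):

iperf3 -s                    # on the receiving node
iperf3 -c 192.0.2.10         # default single-stream test
iperf3 -c 192.0.2.10 -P 10   # ten parallel streams
iperf3 -c 192.0.2.10 -R      # reverse direction (server transmits)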

IP collisions

When you are communicating over a network such as these, you need to trust that the IP address you are communicating with belongs to the system you think it does. This protects against two malicious actor scenarios:

  1. Someone compromises one machine on your mesh and reconfigures it to impersonate a more important one
  2. Someone connects an unauthorized system to the mesh, taking over a trusted IP, and uses the privileges of the trusted IP to access resources

To summarize the state of play as highlighted in the reviews above:

  • Yggdrasil derives IPv6 addresses from a public key
  • tinc allows any node to set any IP
  • Tailscale IPs aren’t user-assignable, but the assignment algorithm is unknown
  • Zerotier allows any IP to be allocated to any node at the control plane
  • I don’t know what Netmaker does
  • Nebula IPs are baked into the cert and signed by the CA, but I haven’t verified the enforcement algorithm

So this discussion really only applies to Yggdrasil and Tailscale. tinc and Zerotier lack meaningful IP security, while Nebula expects IP allocations to be handled outside of the tool and baked into the certs (therefore enforcing rigidity at that level).

So the question for Yggdrasil and Tailscale is: how easy is it to commandeer a trusted IP?

Yggdrasil has a brief discussion of this. In short, Yggdrasil offers you both a dedicated IP and a rarely-used /64 prefix which you can delegate to other machines on your LAN. With the dedicated IP, a lot more bits are available for the hash of the node’s public key, making “collisions technically impractical, if not outright impossible.” However, if you use the /64 prefix, a collision may be more possible. Yggdrasil’s hashing algorithm includes some optimizations to make this more difficult. Yggdrasil also includes a genkeys tool that uses more CPU cycles to generate keys that are maximally difficult to collide with.

Tailscale doesn’t document their IP assignment algorithm, but I think it is safe to say that the larger subnet you use, the better. If you try to use a /24 for your mesh, it is certainly conceivable that an attacker could remove your trusted node, then just manually add the 240 or so machines it would take to get that IP reassigned. It might be a good idea to use a purely IPv6 mesh with Tailscale to minimize this problem as well.

So, I think the risk is low in the default configurations of both Yggdrasil and Tailscale (certainly lower than with tinc or Zerotier). You can drive the risk even lower with both.

Final thoughts

For my own purposes, I suspect I will remain with Yggdrasil in some fashion. Maybe I will just take the small performance hit that using a relay node implies. Or perhaps I will get clever and use an incoming VPN port forward or go over Tailscale.

Tailscale was the other option that seemed most interesting. However, living in a region with Internet that goes down more often than I’d like, I would like to just be able to send as much traffic over a mesh as possible, trusting that if the LAN is up, the mesh is up.

I have one thing that really benefits from performance in excess of Yggdrasil or Tailscale: NFS. That’s between two machines that never leave my LAN, so I will probably just set up a direct Wireguard link between them. Heck of a lot easier than trying to do Kerberos!
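
A link like that is only a few lines of wg-quick configuration per side; a minimal sketch for the NFS server end, with placeholder keys and addresses:

# /etc/wireguard/wg0.conf on the NFS server; bring up with: wg-quick up wg0
[Interface]
PrivateKey = <server private key>
Address = 10.8.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <client public key>
AllowedIPs = 10.8.0.2/32

The client side mirrors this, adding an Endpoint line with the server’s LAN address and port.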

Finally, I wrote this intending to be useful. I dealt with a lot of complexity and under-documentation, so it’s possible I got something wrong somewhere. Please let me know if you find any errors.


This blog post is a copy of a page on my website. That page may be periodically updated.

14 April, 2023 02:47AM by John Goerzen

April 13, 2023

Russ Allbery

Review: Once Upon a Tome

Review: Once Upon a Tome, by Oliver Darkshire

Publisher: W.W. Norton & Company
Copyright: 2022
Printing: 2023
ISBN: 1-324-09208-4
Format: Kindle
Pages: 243

The full title page of this book, in delightful 19th century style, is:

Once Upon a Tome: The Misadventures of a Rare Bookseller, wherein the theory of the profession is partially explained, with a variety of insufficient examples, by Oliver Darkshire. Interspersed with several diverting FOOTNOTES of a comical nature, ably ILLUSTRATED by Rohan Eason, PUBLISHED by W.W. Norton, and humbly proposed to the consideration of the public in this YEAR 2023

That may already be enough to give you a feel for this book. Oliver Darkshire works for Sotheran's Rare Books and Prints in London, most notably running their highly entertaining Twitter account. This is his first book.

If you have been hanging out in the right corners of Twitter, you have probably been anticipating the release of this book, and may already have your own copy. If you have not (and to be honest it's increasingly dubious whether there are right corners of Twitter left), you're in for a treat. Darkshire has made Sotheran's a minor Twitter phenomenon due to tweets like this:

CUSTOMER: oh thank heavens I have been searching for a rare book expert with the knowledge to solve my complex problem

ME (extremely and unhelpfully specialized): ok well the words are usually on the inside and I can see that's true here, so that's a good start

I find I know lots of things until anyone asks me about it or there is a question to answer, at which point I know nothing, I am a void, a tragic bucket of ignorance

My hope was that Once Upon a Tome would be the same thing at book length, and I am delighted to report that's exactly what it is. By the time I finished reading the story of Darkshire's early training, I knew I was going to savor every word.

The hardest part, though, lies in recording precisely in what ways a book has survived the ravages of time. An entire lexicon of book-related terminology has evolved over hundreds of years for exactly this purpose — terminology that means absolutely nothing to the average observer. It's traditional to adopt this baroque language when describing your books, for two reasons. The first is that the specific language of the book trade allows you to be exceedingly accurate and precise without using hundreds of words, and the second is that the elegance of it serves to dull the blow a little. Most rare books come with some minor defects, but that doesn't mean one has to be rude about it.

You will learn something about rare book selling in this book, and more about Darkshire's colleagues, but primarily this is a book-length attempt to convey the slightly uncanny experience of working in a rare bookstore in an entertaining way. Also, to be fully accurate, it is an attempt to shift the bookstore sideways in the reader's mind into a fantasy world that mostly but not entirely parallels ours; as the introduction mentions, this is not a strictly accurate day-by-day account of life at the store, and stories have been altered and conflated in the telling.

Rare bookselling is a retail job but a rather strange one, with its own conventions and unusual customers. Darkshire memorably divides rare book collectors into Smaugs and Draculas: Smaugs assemble vast lairs of precious items, Draculas have one very specific interest, and one's success at selling a book depends on identifying which type of customer one is dealing with. Like all good writing about retail jobs, half of the fun is descriptions of the customers.

The Suited Gentlemen turn up annually, smartly dressed in matching suits and asking to see any material we have on Ayn Rand. Faces usually obscured by large dark glasses, they move without making a sound, and only travel in pairs. Sometimes they will bark out a laugh at nothing in particular, as if mimicking what they think humans do.

There are more facets than the typical retail job, though, since the suppliers of the rare book trade (book runners, estate sales, and collectors who have been sternly instructed by spouses to trim down their collections) are as odd and varied as the buyers.

This sort of book rests entirely on the sense of humor of the author, and I thought Darkshire's approach was perfect. He has the knack of poking fun at himself as much as he pokes fun at anyone around him. This book conveys an air of perpetual bafflement at stumbling into a job that suits him as well as this one does, praise of the skills of his coworkers, and gently self-deprecating descriptions of his own efforts. Combine that with well-honed sentences, a flair for brief and memorable description, and an accurate sense of how long a story should last, and one couldn't ask for more from this style of book.

The book rest where the bible would be held (leaving arms free for gesticulation) was carved into the shape of a huge wooden eagle. I’m given to understand this is the kind of eloquent and confusing metaphor one expects in a place of worship, as the talons of the divine descend from above in a flurry of wings and death, but it seemed to alarm people to come face to face with the beaked fury of God as they entered the bookshop.

I've barely scratched the surface of great quotes from this book. If you like rare books, bookstores, or even just well-told absurd stories of working a retail job, read this. It reminds me of True Porn Clerk Stories, except with much less off-putting subject matter and even better writing. (Interestingly to me, it also shares with those stories, albeit for different reasons, a more complicated balance of power between the retail worker and the customer than the typical retail establishment.) My one wish is that I would have enjoyed more specific detail about the rare books themselves, since Darkshire only rarely describes successful retail transactions. But that's only a minor quibble.

This was a pure delight from cover to cover and exactly what I was hoping for when I preordered it. Highly recommended.

Rating: 9 out of 10

13 April, 2023 02:18AM

April 12, 2023

hackergotchi for Thomas Lange

Thomas Lange

FAI creates your own Ubuntu installation ISO

A new service is available on the FAI project website https://fai-project.org/FAIme

Build your own customized installation ISO for Ubuntu!

You can select whether you want to install an Ubuntu LTS 22.04 server or desktop, and whether to enable the Ubuntu Hardware Enablement (HWE) packages. Different disk partitioning schemes, including LVM, are available; you can select a language and keyboard layout, and add your own list of packages, which are then automatically installed from the customized installation media. It's possible to upload your public ssh key or specify your GitHub account name for easy access to the root account.

Advanced users may upload a postinst script, which can be executed during the first boot of the computer. It's also possible to add options for grub.

12 April, 2023 07:25PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

blog after death

I've been pondering what should happen to personal websites once the owner has passed, or otherwise "moved on". Some time ago I stumbled across a blog by Kev Quirk, who wrote

I’d need to come up with contingency plans for... This website, and any other websites I own/manage

I thought it was a strange idea, to have a contingency plan for a personal site to survive its writer. Even for a site such as my own, which (as sites go) would be trivial to host/mirror (since it's static): who would want to do that? Why would they do that?

On the other hand, whether something I've written is useful or not is largely independent of whether I'm alive and well. And if anything I've written is useful independently from me (to pick a recent example, my notes on imaging optical media), should it persist somewhere?

I've long thought of personal sites much like any website, which is to say, implicitly eternal, despite mountains of evidence that practically the reverse is true. I was influenced a long time ago by the article Cool URIs don't change. Thinking more about "digital legacy" led me to start believing that personal sites are ultimately not the right place for any kind of content that should have some persistence. (You might well have started with that assumption!)

Yesterday via Planet Debian I read Aurélien Jarno's recent blog post where they have deprecated all of their personal site bar their blog. Quoting Aurélien:

Wikipedia is a much better platform for sharing knowledge than random websites.

This struck me as an interesting idea. Wikipedia is a reasonable place for a lot of material that might otherwise be hosted on personal sites, and as a "living website", content can be adapted, corrected, etc. over time. Wikipedia is clearly not the right place for much other material that might exist on personal sites (it's not the right place for my aforementioned article). I'm not sure where might be. Perhaps we, as a culture, need to move away from the notion of personal sites, and devise a more collective concept. Or perhaps it's no bad thing that, without intervention, this stuff disappears.

12 April, 2023 03:14PM

Andrej Shadura

Connecting lights to a Swytch e-bike kit

Last year I purchased an e-bike upgrade kit for my mother in law. We decided to install it on a bicycle she originally bought back in the 80s, which I fixed and refurbished a couple of years ago and used until September 2022 when I bought myself a Dutch Cortina U4.

When I used this bicycle, I installed a lightweight Shutter Precision dynamo hub and compatible lights, XLC at the front, Büchel at the back. Unfortunately, since Swytch is a front wheel with a built-in electric motor, these lights no longer have a dynamo to connect to, and Swytch doesn’t have a dedicated connector for lights. I asked the manufacturer for more documentation or schematics, but they refused to provide any.

Luckily, a Canadian member of the Pedelecs forum managed to reverse-engineer the Swytch connector pinouts, which gave me an idea on how to proceed. Unfortunately, that meant that I had to replace both lights, and by trial and error I found specific models that worked. Before the Axa lights, I also tried Büchel’s Tivoli e-bike light, but it didn’t work because the voltage was too low:

Büchel Tivoli light that didn’t work

Once I knew what to do, the rest was super easy 🙂

So, here we go:

  1. Get a 3-pin (yellow) male connector with a cable, e.g. off AliExpress. Only two wires will be used: the white one is +4.2V (4-point-2, not 42!), and the black one is earth. This will go into the throttle port. If you actually have a throttle, you need some sort of Y-splitter, but I don’t, so this was not an issue for me. (However, I bought both sides (M and F), just to be sure.)

    Cable for the throttle port

  2. Purchase an Axa 606 E6-48 front light. The 606 comes in two versions, for dynamos and for e-bikes; use the one for e-bikes. Despite being officially rated for 6-48V, these lights work quite well off 4.2V too.

    Axa 606 E6-48 light

  3. Purchase an Axa Spark Steady rear light. This light works with both AC and DC (just like the 606, the official rating is 6-48V), and works off 4.2V without an issue.

    Axa Spark light

  4. Wire the lights up. I used tiny wire terminals to join the wires, but I’m sure there are better options too. Insulate them well, and make sure the red wire from the throttle connector is insulated too. I used a bunch of shrink tube and black insulation tape. Since the voltage is not wildly different from what the dynamo hub produced (although that was AC, not DC), I was able to reuse the cable I had already routed to the rear carrier.

  5. Lights go on automatically as soon as you touch the power button on the battery pack, and stay on until the battery pack is switched off completely. I was considering adding a handlebar switch, but since I lost the only one I had, I had to do without.

Front light
Rear light

The side effect of using the Axa Spark at the rear is that it has a capacitor inside and keeps going for a couple more minutes after the battery pack is off — I haven’t decided whether that’s a benefit or a drawback 😄

12 April, 2023 01:09PM by Andrej Shadura

Jamie McClelland

Doing whatever Gmail says

As we slowly move our members to our new email infrastructure, an unexpected twist turned up: One member reported getting the Gmail warning:

Be careful with this message. The sender hasn’t authenticated this message so Gmail can’t verify that it actually came from them.

They have their email delivered to May First, but have configured Gmail to pull in that email using the “Check mail from other accounts” feature. It worked fine on our old infrastructure, but started giving this message when we transitioned.

A further twist: he only receives this warning on email sent by other people in his organization. In other words, email sent via May First gets flagged, while email sent from elsewhere does not.

These Gmail messages typically warn users about email that has failed (or lacks) both SPF and DKIM. However, before diving into the technical details, my first thought was: why is Gmail giving a warning on a message that wasn’t even delivered to them? It’s always nice to get confirmation from others that this is totally wrong behavior. Unfortunately, when it’s Gmail, it doesn’t matter if they are wrong. We all have to deal with it.

So next, I decided to investigate why this message failed both DKIM (digital signature) and SPF (ensuring the message was sent from an authorized server).

Examining the headers immediately turned up the SPF failure:

Authentication-Results: mx.google.com;
       spf=fail (google.com: domain of xxx@xxx.org does not designate n.n.n.n as permitted sender) smtp.mailfrom=xxx@xxx.org

The IP address Google checked to ensure the message was sent by an authorized server is the IP address of our internal mail filter server in our new email infrastructure. That’s the last hop before delivery to the user’s mailbox, so that’s the last hop Gmail sees. This is why Gmail is totally wrong to run this check: all email messages retrieved via their mail fetching service are going to fail the SPF test because Gmail has no way of knowing what the actual last hop is.

So why did this problem only show up after we transitioned to our new infrastructure? Because our old infrastructure had a single mail server handling everything for each user: that one server was both the MX and the relay server, so it was already included in the SPF record.

And why does this only affect mail sent via May First and not other domains?

Because we add our DKIM signature to outgoing email, not to email delivered internally. Therefore, these messages both fail the SPF check and also don’t have a DKIM signature. Other messages have a DKIM signature.

Ugggg. So what do we do now? Clearly, something dumb and simple is in order: I added the IP addresses of our internal filter servers to our global SPF record.
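
In practice that just means a few more ip4 (or ip6) mechanisms in the domain’s TXT record; schematically, with placeholder addresses:

example.org. IN TXT "v=spf1 mx ip4:198.51.100.10 ip4:198.51.100.11 ~all"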

Someday, years from now, after Gmail is long gone (or has fixed this dumb behavior), when I’m doing whatever retired people like me do, someone will notice that our internal filter server IPs are included in our SPF record. Hopefully they will fix the problem, but instead they’ll probably think: no idea why these are here - something will probably break if I remove them.

12 April, 2023 12:27PM

hackergotchi for Aurelien Jarno

Aurelien Jarno

Backup server upgraded to Bookworm

A few months ago, I switched my backup server to an ODROID-M1 SBC. It uses a RK3568 SoC with a quad-core Cortex-A55 and AES extensions (useful for disk encryption), and I added a 2 TB NVMe SSD to the M2 slot. It also has a SATA connector, but the default enclosure does not have space for 2.5" drives. It's not the fastest SBC, but it runs stably and quite well as a backup server, and it's fanless and low-power (less than 2 W idle). Support for the SoC was added recently to the Linux kernel (it's used by various SBCs); however, the device tree for the ODROID-M1 was missing, so I contributed it based on the vendor one, and also submitted a few small fixes.

All the changes ended up in the Bookworm kernel, and with the Bookworm release approaching, I decided it was a good moment to upgrade it. It went quite well, and now I can enjoy running dist-upgrade like on other stable servers without having to care about the kernel. I am currently using Borg as backup software, but the upgrade also gave access to a newer Restic version supporting compression (a must-have for me), so I may give it a try.
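
If I do, compression amounts to initializing a version 2 repository and passing a flag at backup time; a rough sketch with a placeholder repository path (Restic 0.14 or later):

restic -r /srv/restic-repo init --repository-version 2
restic -r /srv/restic-repo backup --compression max /home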

12 April, 2023 09:07AM by aurel32

Russ Allbery

Review: The Last Hero

Review: The Last Hero, by Terry Pratchett

Illustrator: Paul Kidby
Series: Discworld #27
Publisher: Harper
Copyright: 2001, 2002
ISBN: 0-06-050777-2
Format: Graphic novel
Pages: 176

The Last Hero is the 27th Discworld novel and part of the Rincewind subseries. This is something of a sequel to Interesting Times and relies heavily on the cast that was built up in previous books. It's not a good place to start with the series.

At last, the rare Rincewind novel that I enjoyed. It helps that Rincewind is mostly along for the ride.

Cohen the Barbarian and his band of elderly heroes have decided they're tired of enjoying their spoils and are going on a final adventure. They're going to return fire to the gods, in the form of a giant bomb. The wizards in Ankh-Morpork get wind of this and realize that an explosion at the Hub where the gods live could disrupt the magical field of the entire Disc, effectively destroying it. The only hope seems to be to reach Cori Celesti before Cohen and head him off, but Cohen is already almost there. Enter Lord Vetinari, who has Leonard of Quirm design a machine that will get them there in time by slingshotting under the Disc itself.

First off, let me say how much I love the idea of returning fire to the gods with interest. I kind of wish Pratchett had done more with their motivations, but I was laughing about that through the whole book.

Second, this is the first of the illustrated Discworld books that I've read in the intended illustrated form (I read the paperback version of Eric), and this book is gorgeous. I enjoyed Paul Kidby's art far more than I had expected to. His style is what I will call, for lack of better terminology due to my woeful art education, "highly detailed caricature." That's not normally a style that clicks with me, but it works incredibly well for Discworld.

The Last Hero is richly illustrated, with some amount of art, if only a subtle background behind the text, on nearly every page. There are several two-page spreads, but oddly I thought those (including the parody of The Scream on the cover) were the worst art of the book. None of them did much for me. The best art is in the figure studies and subtle details: Leonard of Quirm's beautiful calligraphy, his numerous sketches, the labeled illustration of the controls of the flying machine, and the portraits of Cohen's band and the people they encounter. The edition I got is printed on lovely, thick glossy paper, and the subtle art texture behind the writing makes this book a delight to read. I'm not sure if, like Eric, this book comes in other editions, but if so, I highly recommend getting or finding the high-quality illustrated edition for the best reading experience.

The plot, like a lot of the Rincewind books, doesn't amount to much, but I enjoyed the mission to intercept Cohen. Leonard of Quirm is a great character, and the slow revelation of his flying machine design (which I will not spoil) is a delightful combination of Leonardo da Vinci parody, Discworld craziness, and NASA homage. Also, the Librarian is involved, which always improves a Discworld book. (The Luggage, sadly, is not; I would have liked to have seen a richly-illustrated story about it, but it looks like I'll have to find the illustrated version of Eric for that.)

There is one of Pratchett's philosophical subtexts here, about heroes and stories and what it means for your story to live on. To be honest, it didn't grab me; it's mostly subtext, and this particular set of characters weren't quite introspective enough to make the philosophy central to the story. Also, I was perhaps too sympathetic to Cohen's goals, and thus not very interested in anyone successfully stopping him. But I had a lot more fun with this one than I usually do with Rincewind books, helped considerably by the illustrations. If you've been skipping Rincewind books in your Discworld read-through and have access to the illustrated edition of The Last Hero, consider making an exception for this one.

Followed by The Amazing Maurice and His Educated Rodents in publication order and, thematically, by Unseen Academicals.

Rating: 7 out of 10

12 April, 2023 02:29AM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Debian Developer Survey Results, DebConf updates, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Results of the Debian Developer Survey, by Roberto C. Sánchez

In 2022, Freexian polled Debian Developers about the usage of money in Debian. More than 200 Debian Developers graciously participated, providing useful and constructive answers. Roberto and Utkarsh have worked on reviewing this feedback and summarizing it in a report recently published and announced to the project.

DebConf 23 Website, by Stefano Rivera

In preparation for DebConf 23, Stefano did some work on the DebConf website’s registration system. To support an expected large number of local registration requests, and a limited venue size, Stefano added a review system for registration requests.

There was also some infrastructure work for the website framework. We use the same framework for miniconfs and DebConf, but without the full registration system. Since last DebConf, we have migrated from a pure-JS toolchain for the static assets, to django-compressor, to be friendlier to contributors and have a simpler dependency setup. This required some updates in the full-DebConf registration system that hadn’t been noticed yet in miniDebConfs. Finally, with Utkarsh, we started to wind up the DebConf 22 travel bursary reimbursement process.

Debian Reimbursements Web App Progress, by Stefano Rivera

In a project funded by Freexian’s Project Funding initiative, Stefano made some more progress on the Debian Reimbursements Web App. The first rough implementation of the core request lifecycle is almost complete: receipts can be collected and itemized, and the request can be submitted for reimbursement.

Debian Printing, by Thorsten Alteholz

Due to the upcoming release, only bug-fixing uploads are allowed in this part of the release cycle, and Thorsten did uploads of three Debian Printing packages.

The upload of hplip was rather straightforward and five bugs could be closed.

cups-filters suddenly started to FTBFS and thus got an RC bug. It failed due to a compile error in a header file of some dependency. Luckily the maintainer of that dependency knew that his package now needed C++17, so the fix was to just remove an old compile flag that forced the compiler to use C++0x. This flag was once progressive, but nowadays it is more of a hindrance than a help.

The third package upload was for cups, which got some translation updates. Unfortunately this was the most tricky one as some translations did not appear in the binary packages. After debugging for some time, it turned out that the handling of links did not work properly. Now the version in Bookworm will be the cups version with the most translated man pages ever.

Miscellaneous contributions

  • Stefano Rivera updated a few Python modules in the Debian Python Team, to the latest upstream versions.
  • Stefano Rivera reviewed the current patch series applied to Python 3.12, as an Arch package maintainer had noticed that we dropped a patch by mistake, and reinstated it.
  • Anton Gladky prepared an upload of a newer version (9.2.6) of the VTK library and uploaded it to experimental due to the freeze. VTK is the Visualization Toolkit, a library used mostly in scientific and engineering applications to visualize complex objects. The transition of dependent packages is planned for the “after-release” phase.
  • Helmut Grohne, in the continual effort to improve Debian’s cross-build support, provided 22 cross-build patches to packages in the archive.

12 April, 2023 12:00AM by Utkarsh Gupta

April 11, 2023

hackergotchi for Aurelien Jarno

Aurelien Jarno

New website, or kind of...

For over 15 years, I've hardly made any updates to my website, and it remains low on my priority list. So I made a radical decision to replace it entirely with my blog. The content of the website has been reduced to just two additional pages.

But nothing has been lost: nowadays, Wikipedia is a much better platform for sharing knowledge than random websites. And it happens that they already cover all that was on my website about subaquatic diving in French. They also offer a multitude of resources in electronics, including the topics that were on my website: LCD displays, I²C bus, barcodes, parallel ports, serial ports, and DCF77 reception.

Finally if you're in need of Debian QEMU images for various architectures, I recommend the Debian Quick Image Baker pre-baked images page instead.

11 April, 2023 08:29AM by aurel32

Russ Allbery

Review: Circe

Review: Circe, by Madeline Miller

Publisher: Little, Brown and Company
Copyright: April 2018
Printing: 2020
ISBN: 0-316-55633-5
Format: Kindle
Pages: 421

Circe is the story of the goddess Circe, best known as a minor character in Homer's Odyssey. Circe was Miller's third book if you count the short novella Galatea. She wrote it after Song of Achilles, a reworking of part of the Iliad, but as with Homer, you do not need to read Song of Achilles first.

You will occasionally see Circe marketed or reviewed as a retelling of the Odyssey, but it isn't in any meaningful sense. Odysseus doesn't make an appearance until nearly halfway through the book, and the material directly inspired by the Odyssey is only about a quarter of the book. There is nearly as much here from the Telegony, a lost ancient Greek epic poem that we know about only from summaries by later writers and which picks up after the end of the Odyssey.

What this is, instead, is Circe's story, starting with her childhood in the halls of Helios, the Titan sun god and her father. She does not have a happy childhood; her voice is considered weak by the gods (Homer describes her as having "human speech"), and her mother and elder siblings are vicious and cruel. Her father is high in the councils of the Titans, who have been overthrown by Zeus and the other Olympians. She is in awe of him and sits at his feet to observe his rule, but he's a petty tyrant who cares very little about her. Her only true companion is her brother Aeëtes.

The key event of the early book comes when Prometheus is temporarily chained in Helios's halls after stealing fire from the gods and before Zeus passes judgment on him. A young Circe brings him something to drink and has a brief conversation with him. That's the spark for one of the main themes of this book: Circe slowly developing a conscience and empathy, neither of which are common among Miller's gods. But it's still a long road from there to her first meeting with Odysseus.

One of the best things about this book is the way that Miller unravels the individual stories of Greek myth and weaves them into a chronological narrative of Circe's life. Greek mythology is mostly individual stories, often contradictory and with only a loose chronology, but Miller pulls together all the ones that touch on Circe's family and turns them into a coherent history. This is not easy to do, and she makes it feel effortless. We get a bit of Jason and Medea (Jason is as dumb as a sack of rocks, and Circe can tell there's already something not right with Medea), the beginnings of the story of Theseus and Ariadne, and Daedalus (one of my favorite characters in the book) with his son Icarus, in addition to the stories more directly associated with Circe (a respinning of Glaucus and Scylla from Ovid's Metamorphoses that makes Circe more central). By the time Odysseus arrives on Circe's island, this world feels rich and full of history, and Circe has had a long and traumatic history that has left her suspicious and hardened.

If you know some Greek mythology already, seeing it deftly woven into this new shape is a delight, but Circe may be even better if this is your first introduction to some of these stories. There are pieces missing, since Circe only knows the parts she's present for or that someone can tell her about later, but what's here is vivid, easy to follow, and recast in a narrative structure that's more familiar to modern readers. Miller captures the larger-than-life feel of myth while giving the characters an interiority and comprehensible emotional heft that often gets summarized out of myth retellings or lost in translation from ancient plays and epics, and she does it without ever calling the reader's attention to the mechanics.

The prose, similarly, is straightforward and clear, getting out of the way of the story but still providing a sense of place and description where it's needed. This book feels honed, edited and streamlined until it maintains an irresistible pace. There was only one place where I felt like the story dragged (the raising of Telegonus), and then mostly because it's full of anger and anxiety and frustration and loss of control, which is precisely what Miller was trying to achieve. The rest of the book pulls the reader relentlessly forward while still delivering moments of beauty or sharp observation.

My house was crowded with some four dozen men, and for the first time in my life, I found myself steeped in mortal flesh. Those frail bodies of theirs took relentless attention, food and drink, sleep and rest, the cleaning of limbs and fluxes. Such patience mortals must have, I thought, to drag themselves through it hour after hour.

I did not enjoy reading about Telegonus's childhood (it was too stressful; I don't like reading about characters fighting in that way), but apart from that, the last half of this book is simply beautiful. By the time Odysseus arrives, we're thoroughly in Circe's head and agree with all of the reasons why he might receive a chilly reception. Odysseus talks the readers around at the same time that he talks Circe around. It's one of the better examples of writing intelligent, observant, and thoughtful characters that I have read recently. I also liked that Odysseus has real flaws, and those flaws do not go away even when the reader warms to him.

I'll avoid saying too much about the very end of the book to avoid spoilers (insofar as one can spoil Greek myth, but the last quarter of the book is where I think Miller adds the most to the story). I'll just say that both Telemachus and Penelope are exceptional characters while being nothing like Circe or Odysseus, and watching the characters tensely circle each other is a wholly engrossing reading experience. It's a much more satisfying ending than the Telegony traditionally gets (although I have mixed feelings about the final page).

I've mostly talked about the Greek mythology part of Circe, since that's what grabbed me the most, but it's quite rightly called a feminist retelling and it lives up to that label with the same subtlety and skill that Miller brings to the prose and characterization. The abusive gender dynamics of Greek myth are woven into the narrative so elegantly you'd think they were always noted in the stories. It is wholly satisfying to see Circe come into her own power in a defiantly different way than that chosen by her mother and her sister. She spends the entire book building an inner strength and sense of herself that allows her to defend her own space and her own identity, and the payoff is pure delight. But even better are the quiet moments between her and Penelope.

"I am embarrassed to ask this of you, but I did not bring a black cloak with me when we left. Do you have one I might wear? I would mourn for him."

I looked at her, as vivid in my doorway as the moon in the autumn sky. Her eyes held mine, gray and steady. It is a common saying that women are delicate creatures, flowers, eggs, anything that may be crushed in a moment’s carelessness. If I had ever believed it, I no longer did.

"No," I said. "But I have yarn, and a loom. Come."

This is as good as everyone says it is. Highly recommended for the next time you're in the mood for a myth retelling.

Rating: 8 out of 10

11 April, 2023 04:20AM

April 10, 2023

Simon Josefsson

Trisquel is 42% Reproducible!

The absolute number may not be impressive, but what I hope is at least a useful contribution is that there actually is a number on how much of Trisquel is reproducible. Hopefully this will inspire others to help improve the actual metric.

tl;dr: go to reproduce-trisquel.

When I set about to understand how Trisquel worked, I identified a number of things that would improve my confidence in it. The lowest hanging fruit for me was to manually audit the package archive, and I wrote a tool called debdistdiff to automate this for me. That led me to think about apt archive transparency more generally. I have done some further work in that area (hint: apt-verify) that deserves its own blog post eventually. Most of apt archive transparency is futile if we don’t trust the intended packages that are in the archive. One way to measurably increase trust in the packages is to provide reproducible builds of them, which should by now be an established best practice. Code review is still important, but since it will never provide positive guarantees, we need other processes that can identify sub-optimal situations automatically. The way reproducible builds easily identify negative results is what I believe has driven much of their success: the results are tangible and measurable. The field of software engineering is in need of more such practices.

The design of my setup to build Trisquel reproducibly is as follows.

  • The project debdistget is responsible for downloading Release/Packages files (which are the most relevant files from dists/) from apt archives, and works by committing them into GitLab-hosted git repositories. I maintain several such repositories for popular apt archives, including for Trisquel and its upstream Ubuntu. GitLab invokes a scheduled pipeline to do the downloading, and there are some race conditions here.
  • The project debdistdiff is used to produce the list of added and modified packages, which are the input to actually knowing which packages to reproduce. It publishes a human-readable summary of differences for several distributions, including Trisquel vs Ubuntu. Early on I decided that rebuilding all of the upstream Ubuntu packages is out of scope for me: my personal trust in the official Debian/Ubuntu apt archives is greater than my trust in the added/modified packages in Trisquel.
  • The final project reproduce-trisquel puts the pieces together briefly as follows, everything being driven from its .gitlab-ci.yml file.
    • There is a (manually triggered) job generate-build-image to create a build image to speed up CI/CD runs, using a simple Dockerfile.
    • There is a (manually triggered) job generate-package-lists that uses debdistdiff to generate and store package lists and puts its output in lists/. The reason this is manually triggered right now is a race condition.
    • There is a (scheduled) job that does two things: from the package lists, the script generate-ci-packages.sh builds a GitLab CI/CD instruction file ci-packages.yml that describes jobs for each package to build. The second part is generate-readme.sh, which re-generates the project’s README.md based on the build logs and diffoscope outputs that are stored in the git repository.
    • Through the ci-packages.yml file, there is a large number of jobs that are dynamically defined, which currently are manually triggered to not overload the build servers. The script build-package.sh is invoked and attempts to rebuild a package, and stores build log and diffoscope output in the git project itself.
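
To make that last step concrete, a generated entry in ci-packages.yml might look roughly like this hypothetical job (the package name is made up; per the description above, the real script also stores logs and diffoscope output back in the repository):

build-hello:
  stage: build
  when: manual
  script:
    - ./build-package.sh hello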

I did not expect to be able to use the GitLab shared runners to do the building, however they turned out to work quite well and I postponed setting up my own runner. There is a manually curated lists/disabled-aramo.txt with some packages that either required too much disk space or took over two hours to build. Today I finally took the time to set up a GitLab runner using podman running Trisquel aramo, and I expect to complete builds of the remaining packages soon; one of my Dell R630 servers with 256GB RAM and dual 2680v4 CPUs should deliver sufficient performance.

Current limitations and ideas on further work (most are filed as project issues) include:

  • We don’t support *.buildinfo files. As far as I am aware, Trisquel does not publish them for their builds. Improving this would be a first step forward; anyone able to help? Compare buildinfo.debian.net. For example, many packages differ only in their NT_GNU_BUILD_ID symbol inside the ELF binary; see the example diffoscope output for libgpg-error. By poking around in jenkins.trisquel.org I managed to discover that Trisquel built initramfs-tools in the randomized path /build/initramfs-tools-bzRLUp, and hard-coding that path allowed me to reproduce that package. I expect the same to hold for many other packages. Turning that failure into a success moved the needle from 42% to 43% reproducibility, but I didn’t let that stand in the way of a good headline.
  • The mechanism to download the Release/Package-files from dists/ is not fool-proof: we may not capture all ever published such files. While this is less of a concern for reproducibility, it is more of a concern for apt transparency. Still, having Trisquel provide a service similar to snapshot.debian.org would help.
  • Having at least one other CPU architecture would be nice.
  • Due to lack of time and mental focus, handling incremental updates of new versions of packages is not yet working. This means we only ever build one version of a package, and never discover any newly published versions of the same package. Now that Trisquel aramo is released, the expected rate of new versions should be low, but new versions still appear due to security updates or backports.
  • Porting this to test supposedly FSDG-compliant distributions such as PureOS and Gnuinos should be relatively easy. I’m also looking at Devuan because of Gnuinos.
  • The elephant in the room is how reproducible Ubuntu is in the first place.

Happy Easter Hacking!

10 April, 2023 05:36PM by simon

hackergotchi for Gunnar Wolf

Gunnar Wolf

Twenty years

Twenty years… A seemingly big, very round number, at least for me.

I can recall several very well-known songs mentioning this timespan:

  • «It was twenty years ago today Sgt. Pepper taught the band to play», sang four youth idols in 1967, for whom said timespan was not-quite-but-almost their full lives so far.
  • «Si las cosas que uno quiere se pudieran alcanzar, tú me quisieras lo mismo que veinte años atrás» (if what one wants could be achieved, you would love me the same as twenty years ago), says a heartbroken song by María Teresa Vera where she is resigned not to recover the love of a former lover.
  • «Volver con la frente marchita, las nieves del tiempo platearon mi sien. Sentir que es un soplo la vida, que veinte años no es nada, que febril la mirada, errante en las sombras te busca y te nombra» (To return, with a withered forehead, the snows of time have silvered my temples. To feel that life is but a wind blow, that twenty years is like nothing, how feverish the look, wandering in the shadows, it looks for you and names you) says one of the best known tangos, written by Carlos Gardel, Fernando Maldonado and Alfredo Le Pera, where the singer returns after 20 years, tired and beaten, but still with some hope of finding his long-lost love.

A quick Internet search yields many more… And yes, in human terms… 20 years is quite a big deal. And, of course, I have been long waiting for the right time to write this post.

Because twenty years ago, I got the mail.

Of course, the mail notifying me I had successfully finished my NM process and, as of April 2003, could consider myself to be a full-fledged Debian Project member.

Maybe by sheer chance it was today also that we spent the evening at Max’s house – I never worked directly with Max, but we both worked at Universidad Pedagógica Nacional at the same time back then.

But… Of course, a single twentyversary is not enough!

I don’t have the exact date, but I guess I might be off by some two or three months due to other things I remember from back then.

This year, I am forty years old as an Emacs and TeX user!

Back in 1983, on Friday nights, I went with my father to IIMAS (where I’m currently affiliated as a PhD student, and where he was a researcher between 1971 and the mid-1990s) and used the computer — one of the two big computers they had in the Institute. And what could a seven-year-old boy do? Of course… use the programs this great Foonly F2 system had: Emacs and TeX (this was still before LaTeX).

40 years… And I still use the same base tools for my daily work, day in, day out.

10 April, 2023 06:01AM

Russell Coker

BTRFS Rebuild Time

In February I replaced a Dell T320 server with a HP Z640 workstation for a home server/workstation [1]. The T320 has 8*3.5″ drive bays which I had used to put 3*4TB disks in a BTRFS RAID-10 array for 6TB of usable capacity. The Z640 has only 2*3.5″ bays and 4*2.5″ bays, so one option I could have taken was to buy a 4TB 2.5″ SSD and keep the same 3*4TB array as before. Instead I chose to use an 8TB disk I had spare in an array with one of the original 4TB disks and some extra on NVMe devices (the system has 2*1TB NVMe devices which are used as a 380G RAID-1 for the root filesystem and the rest for the storage array). It’s nice how BTRFS allows putting any storage you have in a RAID-10 configuration.

Unfortunately it seems that I chose the wrong 4TB disk to use for this as it failed three days ago. It gave thousands of read and write errors and Linux decided that the drive no longer existed. I tried rebooting the system to get it in the BTRFS array again but it failed again and failed so quickly that it wasn’t even possible to use the data on it as part of a RAID rebuild. So I removed that disk and put in one of the other 4TB disks.

As the array comprises an 8TB disk and 3 other devices that don’t add up to 8TB, the layout has the 8TB disk holding one copy of everything and the other devices holding parts of it. So the rebuild process consisted of copying data from the 8TB disk to the 4TB disk. For a RAID array run in the manner of Linux software RAID, the rebuild of a RAID-1 involves a linear copy of data, which is the optimal case for hard disks; copying 4TB of data in that manner would have an average speed of a bit over 100MB/s and take about 11 hours. With BTRFS the source disk has to be updated for each block that is recreated, so the process was bottlenecked on writing to the 8TB disk. It took 2 days 23 hours to complete. The process involved reading 3,478,031MB and writing 4,405,545MB. The system was live for the process and some cron jobs etc were writing to the array, but in the 12 hours since the rebuild completed the array has had 7,038MB written. So presumably during the rebuild about 42G of actual data were written to the array, and the other 4.3TB written to the 8TB disk came from the process of copying 3.5TB from it to another device. Iostat reported an average of 645.36 TPS for the duration of the rebuild, which seems like a decent number for a hard drive, and that the drive had 99%+ of its IO capacity used throughout.

While waiting for this to complete I wrote a blog post about storage trends [2]. One thing I didn’t mention in that post is that if you are the type of person who checks the rebuild process fifty times a day, then that should be counted as part of the cost of using slow storage. If, instead of an 8TB disk plus some SSD storage, I had used 2*4TB disks and 1*4TB SSD as I had considered doing, then instead of having 3.8TB on one device I would have had about 2.5TB, and the reconstruct would probably have taken 2/3 of the time. If I had moved the array to 3*4TB SSDs then it would have taken a small fraction of the time.

One thing to note is that I made a mistake in this operation by removing the failed device instead of doing a “btrfs replace” operation which can be significantly faster. If I had correctly done this then I would have written a blog post about the rebuild taking 2 days or something, the issues of hard drives being slow and me compulsively checking the progress would still apply.
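
For reference, the faster path looks roughly like this (device names and mount point are placeholders; if the failed device has already disappeared, its devid can be given instead of the device node):

btrfs replace start /dev/sdX /dev/sdY /mnt/array
btrfs replace status /mnt/array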

10 April, 2023 05:19AM by etbe

April 09, 2023

hackergotchi for Marco d'Itri

Marco d'Itri

Installing Debian 12 on a Banana Pi M5

I recently bought a Banana Pi BPI-M5, which uses the Amlogic S905X3 SoC: these are my notes about installing Debian on it.

While this SoC is supported by the upstream U-Boot, it is not supported by the Debian U-Boot package, so debian-installer does not work. Do not be fooled by seeing the DTB file for this exact board being distributed with debian-installer: all DTB files are, and it does not mean that the board is supposed to work.

As I documented in #1033504, the Debian kernels are currently missing some patches needed to support the SD card reader.

I started by downloading an Armbian Banana Pi image and booted it from an SD card. From there I partitioned the eMMC, which always appears as /dev/mmcblk1:

parted /dev/mmcblk1
(parted) mklabel msdos
(parted) mkpart primary ext4 4194304B -1
(parted) align-check optimal 1
mkfs.ext4 /dev/mmcblk1p1

Make sure to leave enough space before the first partition, or else U-Boot will overwrite it: as is common for many ARM SoCs, U-Boot lives somewhere in the gap between the MBR and the first partition.

I looked at Armbian's /usr/lib/u-boot/platform_install.sh and installed U-Boot by manually copying it to the eMMC:

dd if=/usr/lib/linux-u-boot-edge-bananapim5_22.08.6_arm64/u-boot.bin of=/dev/mmcblk1 bs=1 count=442
dd if=/usr/lib/linux-u-boot-edge-bananapim5_22.08.6_arm64/u-boot.bin of=/dev/mmcblk1 bs=512 skip=1 seek=1

Beware: Armbian's U-Boot 2022.10 is buggy, so I had to use an older image.

I did not want to install a new system, so I copied over my old Cubieboard install:

mount /dev/mmcblk1p1 /mnt/
rsync -xaHSAX --delete --numeric-ids root@old-server:/ /mnt/ --exclude='/tmp/*' --exclude='/var/tmp/*'

Since the Cubieboard has a 32 bit CPU and the Banana Pi requires an arm64 kernel I enabled the architecture and installed a new kernel:

dpkg --add-architecture arm64
apt update
apt install linux-image-arm64
apt purge linux-image-6.1.0-6-armmp linux-image-armmp

At some point I will cross-grade the entire system.

Even though ttyS0 exists, it is not the serial console; that appears as ttyAML0 instead. Nowadays systemd automatically starts a getty if the serial console is enabled on the kernel command line, so I just had to disable the old manually-configured getty:

systemctl disable serial-getty@ttyS0.service

I wanted to have a fully working flash-kernel, so I used Armbian's boot.scr as a template to create /etc/flash-kernel/bootscript/bootscr.meson and then added a custom entry for the Banana Pi to /etc/flash-kernel/db:

Machine: Banana Pi BPI-M5
Kernel-Flavors: arm64
DTB-Id: amlogic/meson-sm1-bananapi-m5.dtb
U-Boot-Initrd-Address: 0x0
Boot-Initrd-Path: /boot/uInitrd
Boot-Initrd-Path-Version: yes
Boot-Script-Path: /boot/boot.scr
U-Boot-Script-Name: bootscr.meson
Required-Packages: u-boot-tools
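Since the rescue and Armbian environments may not expose the right device-tree model, one way to pin the machine name before regenerating the boot files is flash-kernel's /etc/flash-kernel/machine override; a sketch, to be verified against the flash-kernel documentation:

echo 'Banana Pi BPI-M5' > /etc/flash-kernel/machine
flash-kernel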

All things considered, I do not think I would recommend that Debian users buy Amlogic-based boards, since there are many other better supported SoCs.

09 April, 2023 01:47PM

April 08, 2023


Evgeni Golov

Running autopkgtest with Docker inside Docker

While I am not the biggest fan of Docker, I must admit it has quite some reach across various service providers and can often be seen as an API for running things in isolated environments.

One such service provider is GitHub when it comes to their Actions service.

I have no idea what isolation technology GitHub uses on the outside of Actions, but inside you just get an Ubuntu system and can run whatever you want via Docker, as that comes pre-installed and pre-configured. This especially means you can run things inside vanilla Debian containers that are free from any GitHub or Canonical modifications one might not want ;-)

So, if you want to run, say, lintian from sid, you can define a job to do so:

  lintian:
    runs-on: ubuntu-latest
    container: debian:sid
    steps:
      - [ do something to get a package to run lintian on ]
      - run: apt-get update
      - run: apt-get install -y --no-install-recommends lintian
      - run: lintian --info --display-info *.changes

This will run on Ubuntu ("latest" right now means 22.04 for GitHub), but then use Docker to run the debian:sid container and execute all further steps inside it. Pretty short and straightforward, right?

Now, lintian does static analysis of the package; it doesn't need to install it. What if we want to run autopkgtest, which performs tests on an actually installed package?

autopkgtest comes with various "virt servers", which provide isolation of the testbed so that it does not interfere with the host system. The simplest available virt server, autopkgtest-virt-null, doesn't actually provide any isolation, as it runs things directly on the host system. This might seem fine when executed inside an ephemeral container in a CI environment, but it also means that multiple tests have the potential to influence each other, as there is no way to revert the testbed to a clean state. For that, there are other, "real", virt servers available: chroot, lxc, qemu, docker and many more. They all have one thing in common: to use them, one needs to somehow provide an "image" (a prepared chroot, a tarball of a chroot, a vm disk, a container, …, you get it) to operate on, and most either bring a tool to create such an "image" or rely on a "registry" (online repository) to provide them.

Most users of autopkgtest on GitHub (that I could find with their terrible search) are using either the null or the lxd virt servers. Probably because these are dead simple to set up (null) or the most "native" (lxd) in the Ubuntu environment.

As I wanted to execute multiple tests that for sure would interfere with each other, the null virt server was out of the discussion pretty quickly.

The lxd one also felt odd, as it meant I'd need to set up lxd (doable in a few commands, but still) and it would need to download stuff from Canonical, incurring costs (which I couldn't care less about) and taking time (which I do care about!).

Enter autopkgtest-virt-docker, which recently was added to autopkgtest! No need to set things up, as GitHub already did all the Docker setup for me, and almost no waiting time to download the containers, as GitHub does heavy caching of stuff coming from Docker Hub (or at least it feels like that).

The only drawback? It was added in autopkgtest 5.23, which Ubuntu 22.04 doesn't have. "We need to go deeper" and run autopkgtest from a sid container!

With this idea, our current job definition would look like this:

  autopkgtest:
    runs-on: ubuntu-latest
    container: debian:sid
    steps:
      - [ do something to get a package to run autopkgtest on ]
      - run: apt-get update
      - run: apt-get install -y --no-install-recommends autopkgtest autodep8 docker.io
      - run: autopkgtest *.changes --setup-commands="apt-get update" -- docker debian:sid

(--setup-commands="apt-get update" is needed as the container comes with an empty apt cache and wouldn't be able to find dependencies of the tested package)

However, this will fail:

# autopkgtest *.changes --setup-commands="apt-get update" -- docker debian:sid
autopkgtest [10:20:54]: starting date and time: 2023-04-07 10:20:54+0000
autopkgtest [10:20:54]: version 5.28
autopkgtest [10:20:54]: host a82a11789c0d; command line:
  /usr/bin/autopkgtest bley_2.0.0-1_amd64.changes '--setup-commands=apt-get update' -- docker debian:sid
Unexpected error:
Traceback (most recent call last):
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 829, in mainloop
    command()
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 758, in command
    r = f(c, ce)
        ^^^^^^^^
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 692, in cmd_copydown
    copyupdown(c, ce, False)
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 580, in copyupdown
    copyupdown_internal(ce[0], c[1:], upp)
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 607, in copyupdown_internal
    copydown_shareddir(sd[0], sd[1], dirsp, downtmp_host)
  File "/usr/share/autopkgtest/lib/VirtSubproc.py", line 562, in copydown_shareddir
    shutil.copy(host, host_tmp)
  File "/usr/lib/python3.11/shutil.py", line 419, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/lib/python3.11/shutil.py", line 258, in copyfile
    with open(dst, 'wb') as fdst:
         ^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/autopkgtest-virt-docker.shared.kn7n9ioe/downtmp/wrapper.sh'
autopkgtest [10:21:07]: ERROR: testbed failure: unexpected eof from the testbed

Running the same thing locally of course works, so there has to be something special about the setup at GitHub. But what?!

A bit of digging revealed that autopkgtest-virt-docker tries to use a shared directory (using Docker's --volume) to exchange things with the testbed (for the downtmp-host capability). As my autopkgtest is running inside a container itself, nothing it tells the Docker daemon to mount will actually be visible to it.

In retrospect this makes total sense, and autopkgtest-virt-docker has a switch to "fix" the issue: --remote, as the Docker daemon is technically remote when viewed from the place autopkgtest runs at.

I'd argue this is not a bug in autopkgtest(-virt-docker), as the situation is actually catered for. There is even some auto-detection of "remote" daemons in the code, but it doesn't "know" how to detect the case where the daemon socket is mounted (vs being set as an environment variable). I've opened an MR (assume remote docker when running inside docker) which should detect the case of running inside a Docker container, which kind of implies the daemon is remote.

Not sure the patch will be accepted (it is a band-aid after all), but in the meantime I am quite happy with using --remote and so could you ;-)
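For completeness, the working invocation then only differs in the extra virt-server option; a sketch, with the exact argument placement to be checked against autopkgtest-virt-docker(1):

autopkgtest *.changes --setup-commands="apt-get update" -- docker --remote debian:sid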

08 April, 2023 05:39PM by evgeni


April 07, 2023

Petter Reinholdtsen

rtlsdr-scanner, software defined radio frequency scanner for Linux - nice free software

Today I finally found time to track down a useful radio frequency scanner for my software defined radio. Just for fun I tried to locate the radios used in the area, and a good start would be to scan all the frequencies to see what is in use. I had tried to find a suitable program earlier, but ran out of time before I managed to find one. This time I was more successful, and after a few false leads I found a description of rtlsdr-scanner over at the Kali site, and was able to track down the Kali package git repository to build a deb package for the scanner. Sadly the package is missing from the Debian project itself, at least in Debian Bullseye. Two runtime dependencies, python-visvis and python-rtlsdr, had to be built and installed separately. Luckily 'gbp buildpackage' handled them just fine and no further packages had to be manually built. The end result worked out of the box after installation.

My initial scans for FM channels worked just fine, so I knew the scanner was functioning. But when I tried to scan every frequency from 100 to 1000 MHz, the program stopped unexpectedly near completion. After some debugging I discovered that the USB software radio I used rejected frequencies above 948 MHz, triggering an unreported exception that broke the scan. Changing the scan to end at 957 worked better. I similarly found the lower limit to be around 15, and ended up with the following full scan:

Saving the scan did not work, but exporting it as a CSV file worked just fine. I ended up with around 477k CSV lines with the dB level for each frequency.

The save failure seems to be a missing UTF-8 encoding issue in the Python code. I will see if I can find time to send a patch upstream later to fix this exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/rtlsdr_scanner/main_window.py", line 485, in __on_save
    save_plot(fullName, self.scanInfo, self.spectrum, self.locations)
  File "/usr/lib/python3/dist-packages/rtlsdr_scanner/file.py", line 408, in save_plot
    handle.write(json.dumps(data, indent=4))
TypeError: a bytes-like object is required, not 'str'
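Judging from the traceback, save_plot() writes a str to a file handle opened in binary mode, so a plausible one-line fix in rtlsdr_scanner/file.py (untested; upstream may prefer opening the file in text mode with an explicit encoding instead) would be:

handle.write(json.dumps(data, indent=4).encode('utf-8'))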

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

07 April, 2023 09:10PM


Matthew Palmer

Database Encryption: If It's So Good, Why Isn't Everyone Doing It?

A word cloud of organisations reported to have had data breaches in 2022: just some of the organisations that leaked data in 2022.

It seems like just about every day there’s another report of another company getting “hacked” and having its sensitive data (or, worse, the sensitive data of its customers) stolen. Sometimes, people’s most intimate information gets dumped for the world to see. Other times it’s “just” used for identity theft, extortion, and other crimes. In the least worst case, the attacker gets cold feet, but people suffer stress and inconvenience from having to replace identity documents.

A great way to protect information from being leaked is to encrypt it. We encrypt data while it’s being sent over the Internet (with TLS), and we encrypt it when it’s “at rest” (with disk or volume encryption). Yet, everyone’s data seems to still get stolen on a regular basis. Why?

Because the data is kept online in an unencrypted form, sitting in the database while it's being used. This means that attackers can just connect to the database, or trick the application into dumping the database, and all the data is just lying there, waiting to be misused.

It’s Not the Devs’ Fault, Though

You may be thinking that leaving an entire database full of sensitive data unencrypted seems like a terrible idea. And you’re right: it is a terrible idea. But it’s seemingly unavoidable.

The problem is that in order to do what a database does best (query, sort, and aggregate data), it needs to be able to know what the data is. When you encrypt data, however, all the database sees is a locked box.

A locked box: not very useful for a database.

The database can’t tell what’s in the locked box – whether it’s a number equal to 42, or a date that’s less than 2023-01-01, or a string that contains the substring “foo”. Every value is just an opaque blob of “stuff”, and the database is rendered completely useless.

Since modern applications usually rely pretty heavily on their database, it's essentially impossible to build an application if you've turned your database into a glorified flat file by encrypting everything in it. Thus, it's hardly surprising that developers have to leave the data lying around unencrypted, for anyone to come along and take.
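To make the locked-box problem concrete, here is a small illustrative Python sketch, using the third-party cryptography package rather than anything Enquo-specific, showing that conventional encryption hides both equality and ordering from the database:

from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())

a = f.encrypt(b"42")
b = f.encrypt(b"42")
c = f.encrypt(b"41")

print(a == b)  # False: the same plaintext encrypts to two different blobs
print(a < c)   # arbitrary: ciphertext ordering says nothing about 42 vs 41

A WHERE clause over such values can only ever see opaque blobs.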

Introducing Enquo

I said before that having data unencrypted in a database is seemingly unavoidable. The "seemingly" is there because some innovative cryptographic techniques can make it possible to query encrypted data.

Andy Dwyer being amazed: indeed.

The purpose of the Enquo project is to provide a common set of cryptographic primitives that implement ENcrypted QUery Operations (ie “Enquo”), and integrate those operations into databases, ORMs, and anywhere else that could benefit. The end goal is to provide the ability to encrypt all the data stored in any database server, while still allowing the data to be queried and aggregated.

So far, the project consists of these components:

  • the enquo-core library, that implements queryable encrypted integers, dates, and text in Rust and Ruby;
  • a PostgreSQL extension, pg_enquo, that allows PostgreSQL to query encrypted data; and
  • a Rails ActiveRecord extension, ActiveEnquo, that augments ActiveRecord to do the encryption/decryption required.

Support for other languages and ORMs is designed to be as straightforward as possible, and integration with other databases is mostly dependent on their own extensibility.

The project’s core tenets emphasise both uncompromising security, and a friendly developer experience.

Naturally, all Enquo code is open source, released under the MIT licence.

Would You Like To Know More?

"Desire to know more intensifies": everyone who uses a database...

If all this sounds relevant to your interests:

  1. If you use Ruby on Rails and PostgreSQL, you’re halfway home already. Follow the ActiveEnquo getting started tutorial and see how much of your data Enquo can already protect. When you find data you want to encrypt but can’t, tell me about it.

    • If you use Ruby and PostgreSQL with another ORM, such as Sequel, writing a plugin to support Enquo shouldn’t be too difficult. The ActiveEnquo code should give you a good start. If you get stuck, get in touch.
  2. If you use PostgreSQL with another programming language, tell me what language you use and we’ll work together to get bindings for that library created.

  3. If you use another database server, support is coming for your database of choice eventually, but at present there’s no timeline on support. On the off chance that you happen to be a hard-core database hacking expert, and would like to work on getting Enquo support in your preferred database server, I’d love to talk to you.

07 April, 2023 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

April 06, 2023

Reproducible Builds

Reproducible Builds in March 2023

Welcome to the March 2023 report from the Reproducible Builds project.

In these reports we outline the most important things that we have been up to over the past month. As a quick recap, the motivation behind the reproducible builds effort is to ensure no malicious flaws have been introduced during the compilation and distribution processes. It does this by ensuring identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

If you are interested in contributing to the project, please do visit our Contribute page on our website.


News

There was progress towards making the Go programming language reproducible this month; the overall goal remains to make the Go binaries distributed by Google and by Arch Linux (and others) bit-for-bit identical. These changes could become part of the upcoming version 1.21 release of Go. An issue in the Go issue tracker (#57120) is being used to follow and record progress on this.


Arnout Engelen updated our website to add and update reproducibility-related links for NixOS to reproducible.nixos.org. []. In addition, Chris Lamb made some cosmetic changes to our presentations and resources page. [][]


Intel published a guide on how to reproducibly build their Trust Domain Extensions (TDX) firmware. TDX here refers to an Intel technology that combines their existing virtual machine and memory encryption technology with a new kind of virtual machine guest called a Trust Domain. This runs the CPU in a mode that protects the confidentiality of its memory contents and its state from any other software.


A reproducibility-related bug from early 2020 in the GNU GCC compiler has been fixed. The issue was that if GCC was invoked via the as frontend, the -ffile-prefix-map option was being ignored. We were tracking this in Debian via the build_path_captured_in_assembly_objects issue. It has now been fixed and will be reflected in GCC version 13.
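For illustration (hello.c is a hypothetical input file), the flag rewrites the build path that would otherwise be captured in the generated object:

gcc -g -ffile-prefix-map=$(pwd)=. -c hello.c -o hello.o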


Holger Levsen will present at foss-north 2023 in April of this year in Gothenburg, Sweden on the topic of Reproducible Builds, the first ten years.


Anthony Andreoli, Anis Lounis, Mourad Debbabi and Aiman Hanna of the Security Research Centre at Concordia University, Montreal published a paper this month entitled On the prevalence of software supply chain attacks: Empirical study and investigative framework:

Software Supply Chain Attacks (SSCAs) typically compromise hosts through trusted but infected software. The intent of this paper is twofold: First, we present an empirical study of the most prominent software supply chain attacks and their characteristics. Second, we propose an investigative framework for identifying, expressing, and evaluating characteristic behaviours of newfound attacks for mitigation and future defense purposes. We hypothesize that these behaviours are statistically malicious, existed in the past, and thus could have been thwarted in modernity through their cementation x-years ago. []


On our mailing list this month:

  • Mattia Rizzolo is asking everyone in the community to save the date for the 2023 Reproducible Builds summit, which will take place between October 31st and November 2nd at Dock Europe in Hamburg, Germany. Separate announcement(s) to follow. []

  • ahojlm posted a message announcing a new project which is “the first project offering bootstrappable and verifiable builds without any binary seeds.” That is to say, a way of providing a verifiable path towards a trusted software development platform without relying on pre-provided binary code, in order to protect against various forms of compiler backdoors. The project’s homepage is hosted on Tor (mirror).


The minutes and logs from our March 2023 IRC meeting have been published. In case you missed this one, our next IRC meeting will take place on Tuesday 25th April at 15:00 UTC on #reproducible-builds on the OFTC network.


… and as a Valentine's Day present, Holger Levsen wrote on his blog on 14th February to express his thanks to OSUOSL for their continuous support of reproducible-builds.org. []



Debian

Vagrant Cascadian developed an easier setup for testing Debian packages which uses sbuild’s “unshare mode” along with reprotest, our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. []


Over 30 reviews of Debian packages were added, 14 were updated and 7 were removed this month, all adding to our knowledge about identified issues. A number of issues were updated as well, including Holger Levsen updating build_path_captured_in_assembly_objects to note that it has been fixed for GCC 13 [] and Vagrant Cascadian adding new issues to mark packages where the build path is being captured via the Rust toolchain [] as well as a new categorisation for where virtual packages have nondeterministic versioned dependencies [].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

In addition, Vagrant Cascadian filed a bug with a patch to ensure GNU Modula-2 supports the SOURCE_DATE_EPOCH environment variable.
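For context, SOURCE_DATE_EPOCH is the Reproducible Builds specification for pinning embedded timestamps; a typical, illustrative way to honour it when driving a build is:

export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
make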


Testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In March, the following changes were made by Holger Levsen:

  • Arch Linux-related changes:

    • Build Arch packages in /tmp/archlinux-ci/$SRCPACKAGE instead of /tmp/$SRCPACKAGE. []
    • Start 2/3 of the builds on the o1 node, the rest on o2. []
    • Add graphs for Arch Linux (and OpenWrt) builds. []
    • Toggle Arch-related builders to debug why a specific node overloaded. [][][][]
  • Node health checks:

    • Detect SetuptoolsDeprecationWarning tracebacks in Python builds. []
    • Detect failures to perform chdist calls. [][]
  • OSUOSL node migration.

    • Install megacli packages that are needed for hardware RAID. [][]
    • Add health check and maintenance jobs for new nodes. []
    • Add mail config for new nodes. [][]
    • Handle a node running in the future correctly. [][]
    • Migrate some nodes to Debian bookworm. []
    • Fix nodes health overview for osuosl3. []
    • Make sure the /srv/workspace directory is owned by the jenkins user. []
    • Use .debian.net names everywhere, except when communicating with the outside world. []
    • Grant fpierret access to a new node. []
    • Update documentation. []
    • Misc migration changes. [][][][][][][][]
  • Misc changes:

    • Enable fail2ban everywhere and monitor it with munin [].
    • Gracefully deal with non-existing Alpine schroots. []

In addition, Roland Clobus is continuing his work on reproducible Debian ISO images:

  • Add/update openQA configuration [], and use the actual timestamp for openQA builds [].
  • Moved adding the user to the docker group from the janitor_setup_worker script to the (more general) update_jdn.sh script. []
  • Use the (short-term) ‘reproducible’ source when generating live-build images. []

diffoscope development

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats as well. This month, Mattia Rizzolo released version 238, and Chris Lamb released versions 239 and 240. Chris Lamb also made the following changes:

  • Fix compatibility with PyPDF 3.x, and correctly restore test data. []
  • Rework PDF annotation handling into a separate method. []

In addition, Holger Levsen performed a long-overdue overhaul of the Lintian overrides in the Debian packaging [][][][], and Mattia Rizzolo updated the packaging to silence an include_package_data=True warning [], fixed the build under Debian bullseye [], fixed a tool name in the list of tools permitted to be absent during package build tests [] and documented sending out an email upon  [].

In addition, Vagrant Cascadian updated diffoscope in GNU Guix to versions 238 [] and 239 []. Vagrant also updated reprotest to version 0.7.23. []


Other development work

Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.




If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via IRC (#reproducible-builds on the OFTC network) or our mailing list.

06 April, 2023 01:52PM


Dirk Eddelbuettel

RcppArmadillo 0.12.2.0.0 on CRAN: New Upstream Minor


Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1052 other packages on CRAN, downloaded 28.6 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 522 times according to Google Scholar.

This release brings a new upstream release 12.2.0 made by Conrad a day or so ago. We prepared the usual release candidate, tested on the over 1000 reverse depends, found no issues and sent it to CRAN. Where it got tested again and was auto-processed smoothly by CRAN.

The release actually has a relatively small set of changes, as it is a first follow-up release in the 12.2.* series.

Changes in RcppArmadillo version 0.12.2.0.0 (2023-04-04)

  • Upgraded to Armadillo release 12.2.0 (Cortisol Profusion Deluxe)

    • more efficient use of FFTW3 by fft() and ifft()

    • faster in-place element-wise multiplication of sparse matrices by dense matrices

    • added spsolve_factoriser class to allow reuse of sparse matrix factorisation for solving systems of linear equations

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

06 April, 2023 12:22AM

Michael Ablassmeier

tracking changes between pypi package releases

I wondered if there is some tracking for differences between packages published on PyPI, something that stores this information in a format similar to debdiff.

I failed to find something on the web, so I created a little utility which watches the PyPI changelog for new releases and fetches the new and old versions.

It uses diffoscope to create reports on the published releases and automatically pushes them to a github repository:

https://github.com/pypi-diff

Is it useful? I don't know. It may be handy for code review, or maybe for running different security scanners on it to identify accidentally pushed keys or other sensitive data.

Currently it's pushing the changes for every released package every 10 minutes; let's see how far this can go, just for fun :-)
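For the curious, the watching side can be done entirely with PyPI's public XML-RPC changelog feed. A minimal sketch (not the actual pypi-diff code; the tuple layout and action strings should be double-checked against the PyPI documentation):

import time
import xmlrpc.client

# Poll PyPI's XML-RPC changelog feed for new release events.
client = xmlrpc.client.ServerProxy("https://pypi.org/pypi")
serial = client.changelog_last_serial()
while True:
    time.sleep(600)  # every 10 minutes, as above
    for name, version, ts, action, event_serial in client.changelog_since_serial(serial):
        serial = max(serial, event_serial)
        if action == "new release":
            print(name, version)  # hand the pair off to diffoscope here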

06 April, 2023 12:00AM

April 05, 2023

Valhalla's Things

Fitting Top and Camisole

Posted on April 5, 2023

A woman wearing a simple, long sleeved, fitted white top; the fabric is somewhat transparent and the outline of a camisole can be seen.

For this summer, I’ve just made a nice sleeveless dress, but that doesn’t mean that I’m planning to go around with bare arms like, I don’t know, a peasant or even somebody with no health issues, perish the thought!

Instead, at the end of last season I’ve bought a remnant of white ramie / viscose jersey that is a bit too transparent to be decent when worn on its own, but should still give some protection from the sun without being uncomfortable in the heat, with the intent to make myself a new top.

Because of the transparency I wasn’t sure whether to actually use it for a top, or just to make camisoles, but with the decency preserved by the dress the choice was made: I had enough fabric for a top, a camisole and maybe something small.

I already have a trusted pattern for the top and camisole, but they still had to be published, so I took care to write down instructions and take step-by-step pictures; maybe the white fabric isn’t the best, but it’s better than nothing, and I can still take better pictures the next time I make another one. They are of course on my pattern website: top and camisole.

some fabric (the top pictured above) crunched up in a loose ball, less than 15 cm in diameter (there is a ruler for scale).

Since the fabric was bought online as a remnant, I didn’t exactly know what to expect, and was pleasantly surprised by how soft it is: it feels like touching a cloud.

This however means that it felt like working with a cloud, and, well, let’s just say I’m happy that both patterns were quite simple and I didn’t have to deal with fiddly bits.

I was also not so pleasantly surprised by the fact that part of the fabric had a few small holes as if the end of the roll had been caught on something; I was able to cut everything around the holes, other than a small bit that I hadn’t noticed and had to be mended. It’s not a big deal, but I suspect it’s a sign that this fabric may not be as sturdy as it could have been, and that there will be more mending in the future as I wear it.

And then, when I had finished the set I was faced with another problem: taking pictures. For one thing, worn on their own they aren’t exactly decent, and then there was the fact that after a week of late spring weather in March, as I was working on summer clothing the temperature dropped and there was even a hint of rain.


I solved this by wearing the new set on top of another set of fitting top and camisole, with matching leggings. Not exactly something I would wear on the regular streets, but good enough for a picture of underwear.

A woman wearing a white camisole on top of a black top and leggings.

Still, the pictures were taken in quite a hurry, because even if I wasn’t completely freezing, I was still pretty cold.

Anyway, I’m off to find some other piece of summer wear to make, hoping that it will bring proper rain. :)

05 April, 2023 12:00AM

April 03, 2023


Adnan Hodzic

“Kubernetes, Resistance is Futile” – KubeCon & KCD 2023 talk

A story to tell: as part of my career at ING with my previous team, MLP (Machine Learning Platform), I spent over 2 years leading...

The post “Kubernetes, Resistance is Futile” – KubeCon & KCD 2023 talk appeared first on FoolControl: Phear the penguin.

03 April, 2023 02:16PM by Adnan Hodzic

Russ Allbery

Review: The Nordic Theory of Everything

Review: The Nordic Theory of Everything, by Anu Partanen

Publisher: Harper
Copyright: 2016
Printing: June 2017
ISBN: 0-06-231656-7
Format: Kindle
Pages: 338

Anu Partanen is a Finnish journalist who immigrated to the United States. The Nordic Theory of Everything, subtitled In Search of a Better Life, is an attempt to explain the merits of Finnish approaches to government and society to a US audience. It was her first book.

If you follow US policy discussion at all, you have probably been exposed to many of the ideas in this book. There was a time when the US left was obsessed with comparisons between the US and Nordic countries, and while that obsession has faded somewhat, Nordic social systems are still discussed with envy and treated as a potential model. Many of the topics of this book are therefore predictable: parental leave, vacation, health care, education, happiness, life expectancy, all the things that are far superior in Nordic countries than in the United States by essentially every statistical measure available, and which have been much-discussed.

Partanen brings two twists to this standard analysis. The first is that this book is part memoir: she fell in love with a US writer and made the decision to move to the US rather than asking him to move to Finland. She therefore experienced the transition between social and government systems first-hand and writes memorably on the resulting surprise, trade-offs, anxiety, and bafflement. The second, which I've not seen previously in this policy debate, is a fascinating argument that Finland is a far more individualistic country than the United States precisely because of its policy differences.

Most people, including myself, assumed that part of what made the United States a great country, and such an exceptional one, was that you could live your life relatively unencumbered by the downside of a traditional, old-fashioned society: dependency on the people you happened to be stuck with. In America you had the liberty to express your individuality and choose your own community. This would allow you to interact with family, neighbors, and fellow citizens on the basis of who you were, rather than on what you were obligated to do or expected to be according to old-fashioned thinking.

The longer I lived in America, therefore, and the more places I visited and the more people I met — and the more American I myself became — the more puzzled I grew. For it was exactly those key benefits of modernity — freedom, personal independence, and opportunity — that seemed, from my outsider’s perspective, in a thousand small ways to be surprisingly missing from American life today. Amid the anxiety and stress of people’s daily lives, those grand ideals were looking more theoretical than actual.

The core of this argument is that the structure of life in the United States essentially coerces dependency on other people: employers, spouses, parents, children, and extended family. Because there is no universally available social support system, those relationships become essential for any hope of a good life, and often for survival. If parents do not heavily manage their children's education, there is a substantial risk of long-lasting damage to the stability and happiness of their life. If children do not care for their elderly parents, they may receive no care at all. Choosing not to get married often means choosing precarity and exhaustion because navigating society without pooling resources with someone else is incredibly difficult.

It was as if America, land of the Hollywood romance, was in practice mired in a premodern time when marriage was, first and foremost, not an expression of love, but rather a logistical and financial pact to help families survive by joining resources.

Partanen contrasts this with what she calls the Nordic theory of love:

What Lars Trägårdh came to understand during his years in the United States was that the overarching ambition of Nordic societies during the course of the twentieth century, and into the twenty-first, has not been to socialize the economy at all, as is often mistakenly assumed. Rather the goal has been to free the individual from all forms of dependency within the family and in civil society: the poor from charity, wives from husbands, adult children from parents, and elderly parents from their children. The express purpose of this freedom is to allow all those human relationships to be unencumbered by ulterior motives and needs, and thus to be entirely free, completely authentic, and driven purely by love.

She sees this as the common theme through most of the policy differences discussed in this book. The Finnish approach is to provide neutral and universal logistical support for most of life's expected challenges: birth, child-rearing, education, health, unemployment, and aging. This relieves other social relations — family, employer, church — of the corrosive strain of dependency and obligation. It also ensures people's basic well-being isn't reliant on accidents of association.

If the United States is so worried about crushing entrepreneurship and innovation, a good place to start would be freeing start-ups and companies from the burdens of babysitting the nation’s citizens.

I found this fascinating as a persuasive technique. Partanen embraces the US ideal of individualism and points out that, rather than being collectivist as the US right tends to assume, Finland is better at fostering individualism and independence because the government works to remove unnecessary premodern constraints on individual lives. The reason why so many Americans are anxious and frantic is not a personal failing or bad luck. It's because the US social system is deeply hostile to healthy relationships and individual independence. It demands a constant level of daily problem-solving and crisis management that is profoundly exhausting, nearly impossible to navigate alone, and damaging to the ideal of equal relationships.

Whether this line of argument will work is another question, and I'm dubious for reasons that Partanen (probably wisely) avoids. She presents the Finnish approach as a discovery that the US would benefit from, and the US approach as a well-intentioned mistake. I think this is superficially appealing; almost all corners of US political belief at least give lip service to individualism and independence. However, advocates of political change will eventually need to address the fact that many US conservatives see this type of social coercion as an intended feature of society rather than a flaw.

This is most obvious when one looks at family relationships. Partanen treats the idea that marriage should be a free choice between equals rather than an economic necessity as self-evident, but there is a significant strain of US political thought that embraces punishing people for not staying within the bounds of a conservative ideal of family. One will often find, primarily but not exclusively among the more religious, a contention that the basic unit of society is the (heterosexual, patriarchal) family, not the individual, and that the suffering of anyone outside that structure is their own fault. Not wanting to get married, be the primary caregiver for one's parents, or abandon a career in order to raise children is treated as malignant selfishness and immorality rather than a personal choice that can be enabled by a modern social system.

Here, I think Partanen is accurate to identify the Finnish social system as more modern. It embraces the philosophical concept of modernity, namely that social systems can be improved and social structures are not timeless. This is going to be a hard argument to swallow for those who see the pressure towards forming dependency ties within families as natural, and societal efforts to relieve those pressures as government meddling. In that intellectual framework, rather than an attempt to improve the quality of life, government logistical support is perceived as hostility to traditional family obligations and an attempt to replace "natural" human ties with "artificial" dependence on government services. Partanen doesn't attempt to have that debate.

Two other things struck me in this book. The first is that, in Partanen's presentation, Finns expect high-quality services from their government and work to improve it when it falls short. This sounds like an obvious statement, but I don't think it is in the context of US politics, and neither does Partanen. She devotes a chapter to the topic, subtitled "Go ahead: ask what your country can do for you."

This is, to me, one of the most frustrating aspects of US political debate. Our attitude towards government is almost entirely hostile and negative even among the political corners that would like to see government do more. Failures of government programs are treated as malice, malfeasance, or inherent incompetence: in short, signs the program should never have been attempted, rather than opportunities to learn and improve. Finland had mediocre public schools, decided to make them better, and succeeded. The moment US public schools start deteriorating, we throw much of our effort into encouraging private competition and dismantling the public school system.

Partanen doesn't draw this connection, but I see a link between the US desire for market solutions to societal problems and the level of exhaustion and anxiety that is so common in US life. Solving problems by throwing them open to competition is a way of giving up, of saying we have no idea how to improve something and are hoping someone else will figure it out for a profit. Analyzing the failures of an existing system and designing incremental improvements is hard and slow work. Throwing out the system and hoping some corporation will come up with something better is disruptive but easy.

When everyone is already overwhelmed by life and devoid of energy to work on complex social problems, it's tempting to give up on compromise and coalition-building and let everyone go their separate ways on their own dime. We cede the essential work of designing a good society to start-ups. This creates a vicious cycle: the resulting market solutions are inevitably gated by wealth and thus precarious and artificially scarce, which in turn creates more anxiety and stress. The short-term energy savings from not having to wrestle with a hard problem is overwhelmed by the long-term cost of having to navigate a complex and adversarial economic relationship.

That leads into the last point: schools. There's a lot of discussion here about school quality and design, which I won't review in detail but which is worth reading. What struck me about Partanen's discussion, though, is how easy the Finnish system is to use. Finnish parents just send their kids to the most convenient school and rarely give that a second thought. The critical property is that all the schools are basically fine, and therefore there is no need to place one's child in an exceptional school to ensure they have a good life.

It's axiomatic in the US that more choice is better. This is a constant refrain in our political discussion around schools: parental choice, parental control, options, decisions, permission, matching children to schools tailored for their needs. Those choices are almost entirely absent in Finland, at least in Partanen's description, and the amount of mental and emotional energy this saves is astonishing. Parents simply don't think about this, and everything is fine.

I think we dramatically underestimate the negative effects of constantly having to make difficult decisions with significant consequences, and drastically overstate the benefits of having every aspect of life be full of major decision points. To let go of that attempt at control, however illusory, people have to believe in a baseline of quality that makes the choice less fraught. That's precisely what Finland provides by expecting high-quality social services and working to fix them when they fall short, an effort that the United States has by and large abandoned.

A lot of non-fiction books could be turned into long articles without losing much substance, and I think The Nordic Theory of Everything falls partly into that trap. Partanen repeats the same ideas from several different angles, and the book felt a bit padded towards the end. If you're already familiar with the policy comparisons between the US and Nordic countries, you will have seen a lot of this before, and the book bogs down when Partanen strays too far from memoir and personal reactions. But the focus on individualism and eliminating dependency is new, at least to me, and is such an illuminating way to look at the contrast that I think the book is worth reading just for that.

Rating: 7 out of 10

03 April, 2023 02:45AM


Matt Brown

Retrospective: Mar 2023

The key decision I made mid-March was to commit to pursuing ventilation monitoring as my primary product development focus.

Prior to that decision, I hoped to use my writing plan to drive a breadth-first survey of the opportunities for each of my product ideas before deciding which had the best business potential to focus on first. Two factors changed my mind:

  1. As noted last month, I’m finding the writing process much slower and harder than I expected – the survey across all the ideas may not complete until mid-year or later!
  2. I’ve realised that having begun building co2mon.nz last year, to stop work on the project at this point would leave me feeling that I had not done justice to developing the product and testing the market - seeing it to a conclusion is important to me.

This decision is an explicit choice to prioritize seeing a project through to a conclusion (successful, or otherwise) regardless of whether or not it has the highest potential of the various ideas I could invest time into. I’m comfortable making that trade-off in this instance, but I am going to bound my time investment to two months. I’ll evaluate at the end of May whether I’m seeing sufficient traction and potential to justify continuing further with the idea.

I had only one fully uninterrupted work week in March due to a combination of days out due to school trips, two LandSAR call outs and various farm maintenance tasks. April will be similarly disrupted given school holidays and a planned family trip to Brisbane. Sharpening my focus feels particularly necessary given this reality to ensure I’m not spread overly thin.

Goal Scoring

See last month’s retrospective for a refresher on my scoring methodology.

Consulting - 4/10

Goal: Execute a series of successful consulting engagements, building a reputation for myself and leaving happy customers willing to provide testimonials that support a pipeline of future opportunities.

Consulting hours were down from February, hitting only 31% of target this month as the client didn’t make use of all the hours I had allocated for them. I didn’t invest any time in advertising my services or developing new clients or projects over the month, which will now become a priority for April.

Product Development - 4/10

Goal: Grow my product development skill set by taking several ideas to MVP stage with customer feedback received, and launch at least one product which generates revenue and has growth potential.

With the new focus entirely on co2mon.nz, I spent a lot of time re-working and developing my thinking around how I want to take this forward, specifically trying to analyse where I see an opportunity in the market. After attending a workshop on finding product-market fit using quantifiable metrics at the Southern SaaS conference this month, I’ve realised that much of the time I spent on this analysis was too insular and focused on my own observations: I need to get out, talk to a lot more people, and get more feedback on their needs and understanding of the space instead. Seems obvious in retrospect!

I also spent a few days beginning to build another batch of prototype CO2 monitors so I have some units to use for experimentation and testing with potential customers as I get out and have those conversations. I can probably build one or two more batches of prototype monitors before needing to look at PCB assembly in earnest.

Professional Network Development - 8/10

Goal: To build a professional relationship with at least 30 new people this year.

This goal continues to be my highlight, with 8 new contacts added this month and catch-ups with 4 existing people I had not spoken to for a while. I joined the KiwiSaaS central community and attended the Southern SaaS conference this month as well, which has been time well spent given the workshop learnings discussed above.

Writing - 3/10

Goal: To publish a high-quality piece of writing on this site at least once a week.

I published a single post, the first half of my updated ventilation monitoring business plan. I continue to find the writing process much harder and slower than I hoped or expected and remain well below my target publishing rate, but one post is better than zero!

I tested working with an editor I contracted via UpWork who provided some very useful feedback on the structure of my writing which helped to unblock some of my progress. I plan to continue doing this for at least a few more posts.

Community - 5/10

Goal: To support the growth of my local technical community by volunteering my experience and knowledge with others through activities such as mentoring, conference talks and similar.

Feedback

As always, I’d love to hear from you if you have thoughts or feedback triggered by anything I’ve written above.

03 April, 2023 02:24AM

April 02, 2023

Thorsten Alteholz

My Debian Activities in March 2023

FTP master

This month I accepted 78 and rejected 12 packages. The overall number of packages that got accepted was 78.

I still love this calm and peaceful time now within the Debian project, when everybody only cares for RC bugs and NEW does not grow.

Debian LTS

This was my one-hundred-and-fifth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14h.

During that time I uploaded:

  • [DLA 3358-1] mpv security update for one CVE
  • [DLA 3372-1] xorg-server embargoed security update for one CVE
  • [DLA 3374-1] libmicrohttpd security update for one CVE
  • [DLA 3378-1] duktape security update for one CVE
  • [1033759] pu-bug for duktape/bullseye

I also participated in the monthly LTS-meeting.

Last but not least I did some days of frontdesk duties and took care of issues on security-master.

Debian ELTS

This month was the fifty-sixth ELTS month.

  • [ELA-821-1] xorg-server embargoed security update of Jessie and Stretch for one CVE
  • [ELA-824-1] libmicrohttpd security update of Jessie and Stretch for one CVE

The duktape update in Stretch is more complicated than expected and I could not finish it this month.

I also started to work on openssl1.0.

Last but not least I did some days of frontdesk duties.

Debian Printing

This month I uploaded new versions or improved packages of:

  • hplip (bug fixing)
  • cups (update translations)

The unblock bug for hplip has already been processed; the unblock bug for cups is still waiting. Hopefully the last-minute work of the translators was not wasted.

Parts of this work are generously funded by Freexian!

Other stuff

Looking at my notes, there is nothing to be reported here.

02 April, 2023 10:57AM by alteholz


Steinar H. Gunderson

Nearest multiple of a double

This problem came up the other day; given doubles (or floats, for that matter) a >= 0 and b > 0, find the multiples of b that are closest to a, in both directions.

I was surprised that I could not find anything about this online; perhaps I just don't know the right term, but there is usually a pretty vast and completely inscrutable literature about anything related to floating point, usually ending in some magic incantation you can just paste in (or 2000 lines of FORTRAN). I didn't really need a perfect answer, but it would be fun to reason about, so why not?

The obvious lower = floor(a / b) * b is of course possible, but pretty bad; if a is large and b is small, you instantly go out of range and get infinities. And there are edge cases where, due to rounding, your lower becomes higher than a. So while this is a simple solution, it's far from perfect.

The expression lower = a - fmod(a, b) seems far better. Perhaps surprisingly, the answer from fmod() is exact; it does not need any rounding (assuming a competent libc). This is possible to see if you divide by the common 2^exponent factor (where the exponent may be negative, but definitely is a whole number); this will essentially leave you with A % B for two natural numbers, which cannot possibly have more leading digits than B, which was already representable as a double. So while the fmod() result may be denormal, it cannot overflow or lose any precision. And of course, the answer of the subtraction will be correctly rounded by the CPU itself.

The upper end is a bit more tricky, and here's where I left the realm of perfection. You cannot just do upper = lower + b; if a is large and b is small, the double rounding can give you a number that's a tiny bit off. An associative reordering like upper = a + (b - fmod(a, b)) fixes that specific case, but creates new problems; for instance, when b is much bigger than a, this essentially reduces to a + (b - a), which should be equivalent to b, but isn't always. (I tried Kahan summation, but alas, it didn't help.)

There are ways of summing up series of floating-point numbers exactly, but they seem to be quite hairy. If one has access to extended precision and the obscure “round to odd” rounding mode, there is a paper explaining how to avoid the problematic double-rounding. And there is even a 33-page paper solving the general problem of adding any number of numbers together and getting the correctly rounded answer.

I didn't really bother peering into any of that, though, because it seems that we can cheat; if we just sort the numbers [a, b, -fmod(a, b)] by increasing magnitude and add from left to right, it appears that… we get the right result? I don't have a proof here, and of course I can't try the entire space, but I haven't found a counterexample yet. I would assume there's something about the signs and the fact that fmod(a, b) is smaller than both a and b, but I don't know.
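Putting it all together, here is a small Python sketch of the scheme (Python uses the same IEEE doubles, and math.fmod is the C fmod); the upper bound carries the same lack of proof as described above:

import math

def nearest_multiples(a, b):
    # assumes a >= 0 and b > 0, as in the problem statement
    r = math.fmod(a, b)        # exact, per the reasoning above
    lower = a - r              # a single correctly-rounded subtraction
    # upper bound: sum [a, b, -r] by increasing magnitude, left to right
    x, y, z = sorted([a, b, -r], key=abs)
    return lower, (x + y) + z

print(nearest_multiples(10.0, 3.0))      # (9.0, 12.0)
print(nearest_multiples(1e300, 1e-300))  # fine; floor(a/b)*b would overflow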

So, there you have it. A dodgy way of computing both the lower and upper bounds that seems always to be correct, does not require multiprecision arithmetic, does not require extended precision or weird rounding modes, and is always fast. And if there should be a case where the upper one is rounded incorrectly, I'm at least pretty certain that the lower one is right.

02 April, 2023 09:06AM


David Bremner

Installing Debian using the OVH rescue environment

Problem description(s)

For some of its cheaper dedicated servers, OVH does not provide a KVM (in the virtual console sense) interface. Sometimes when a virtual console is provided, it requires a horrible java applet that won't run on modern systems without a lot of song and dance. Although OVH provides a few web based ways of installing,

  • I prefer to use the debian installer image I'm used to and trust, and
  • I needed some way to debug a broken install.

I have only tested this in the OVH rescue environment, but the general approach should work anywhere the rescue environment can install and run QEMU.

QEMU to the rescue

Initially I was horrified by the ovh forums post but eventually I realized it not only gave a way to install from a custom ISO, but provided a way to debug quite a few (but not all, as I discovered) boot problems by using the rescue env (which is an in-memory Debian Buster, with an updated kernel). The original solution uses VNC but that seemed superfluous to me, so I modified the procedure to use a "serial" console.

Preliminaries

  • Set up a default ssh key in the OVH web console
  • (re)boot into rescue mode
  • ssh into root@yourhost (you might need to ignore changing host keys)
  • cd /tmp
  • You will need qemu (and may as well use kvm). ovmf is needed for a UEFI bios.
    apt install qemu-kvm ovmf
  • Download the netinstaller iso
  • Download vmlinuz and initrd.gz that match your iso. In my case:

    https://deb.debian.org/debian/dists/testing/main/installer-amd64/current/images/cdrom/

Doing the install

  • Boot the installer in qemu. Here the system has two hard drives visible as /dev/sda and /dev/sdb.
qemu-system-x86_64               \
   -enable-kvm                    \
   -nographic \
   -m 2048                         \
   -bios /usr/share/ovmf/OVMF.fd  \
   -drive index=0,media=disk,if=virtio,file=/dev/sda,format=raw  \
   -drive index=1,media=disk,if=virtio,file=/dev/sdb,format=raw  \
   -cdrom debian-bookworm-DI-alpha2-amd64-netinst.iso \
   -kernel ./vmlinuz \
   -initrd ./initrd.gz \
   -append console=ttyS0,9600,n8
  • Optionally follow Debian wiki to configure root on software raid.
  • Make sure your disk(s) have an ESP partition.
  • qemu and d-i are both using Ctrl-a as a prefix, so you need to press C-a C-a 1 (for example) to switch terminals
  • make sure you install an ssh server and create a user account

Before leaving the rescue environment

  • You may have forgotten something important; no problem, you can boot the disks you just installed in qemu (I leave the apt install here for convenient copy-pasta in future rescue environments).
apt install qemu-kvm ovmf && \
qemu-system-x86_64               \
   -enable-kvm                    \
   -nographic \
   -m 2048                         \
   -bios /usr/share/ovmf/OVMF.fd  \
   -drive index=0,media=disk,if=virtio,file=/dev/sda,format=raw  \
   -drive index=1,media=disk,if=virtio,file=/dev/sdb,format=raw  \
   -nic user,hostfwd=tcp:127.0.0.1:2222-:22 \
   -boot c
  • One important gotcha is that the installer guesses interface names based on the "hardware" it sees during the install. I wanted the network to work both in QEMU and when booting on bare hardware, so I added a couple of link files. If you copy this, you most likely need to double-check the PCI paths. You can get this information, e.g. from udevadm, but note that for the second case you want to query in the rescue env, not in QEMU.

  • /etc/systemd/network/50-qemu-default.link

[Match]
Path=pci-0000:00:03.0
Virtualization=kvm

[Link]
Name=lan0
  • /etc/systemd/network/50-hardware-default.link
[Match]
Path=pci-0000:03:00.0
Virtualization=no

[Link]
Name=lan0
  • Then edit /etc/network/interfaces to refer to lan0
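
For example, to find the PCI path to put in the Path= line, you can query udevadm from the rescue environment (a minimal sketch; the interface name eth0 is an assumption, substitute whatever name the rescue env gives your NIC):

# query udev properties for the physical NIC; ID_PATH holds the PCI path
udevadm info /sys/class/net/eth0 | grep ID_PATH
# E: ID_PATH=pci-0000:03:00.0

The ID_PATH value, including the pci- prefix, is exactly what the Path= match in the corresponding .link file expects.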

02 April, 2023 12:34AM

April 01, 2023

hackergotchi for Junichi Uekawa

Junichi Uekawa

April.

April. Cherry blossoms are in full bloom. Beautiful weather.

01 April, 2023 10:49PM by Junichi Uekawa

hackergotchi for Debian Brasil

Debian Brasil

MiniDebConf Brasília 2023 - May 25 to 27

This year MiniDebConf Brasil is back! The Brazilian community of Debian users and developers invites everyone to take part in MiniDebConf Brasília 2023, which will take place over 3 days in the federal capital.

On May 25th and 26th we will be at the Câmara dos Deputados' Complexo Avançado - LabHacker/CEFOR, holding talks, workshops and other activities. And on May 27th (Saturday) we will be at a coworking space (venue to be defined) to get our hands dirty hacking software and contributing to the Debian Project.

MiniDebConf Brasília 2023 is an event open to everyone, regardless of your level of knowledge about Debian. What matters most is bringing the community together to celebrate the largest Free Software project in the world, so we want to welcome everyone from inexperienced users who are having their first contact with Debian to official Developers of the project.

The programme of MiniDebConf Brasília 2023 will consist of beginner and intermediate level talks for participants who are having their first contact with Debian or want to learn more about certain topics, and intermediate and advanced level workshops for Debian users who want to get hands-on during the event. On the last day of the event, we will hold a Hacking Day at a coworking space so that everyone can interact and actually make contributions to the project.

Registration

Registration for MiniDebConf Brasília 2023 is completely free and can be done through the form available on the event's website. Registering in advance is mandatory so that everyone can access the Câmara dos Deputados, due to the house's security requirements; it also helps the organizers to size the event.

The event is organized on a volunteer basis, and all help is welcome. So if you would like to contribute to making the event happen, please fill out the volunteer registration form. The registration forms (for the MiniDebConf and for helping with the organization) and the call for activities open on April 1st (no pranks here, it really does start on that very day :)

Contact

To get in touch with the local team, you can use the debian-br-eventos mailing list, the #debian-bsb IRC channel on OFTC, and the DebianBrasília group on Telegram, as well as the email address: brasil.mini@debconf.org.

01 April, 2023 11:00AM


First 2023 translation workshop from the pt_BR team

The Brazilian translation team debian-l10n-portuguese had their first workshop of 2023 in February, with great results:

  • The workshop was aimed at beginners, working in DDTP/DDTSS.
  • Two days of a hands-on workshop via Jitsi.
  • In the following days, translation work continued independently, with team support.

  • Subscribers: 29
  • New contributors to DDTP/DDTSS: 22
  • Translations from new participants: 175
  • Revisions from new participants: 261

Our focus was to complete the descriptions of the 500 most popular packages (popcon). Although we were unable to reach 100% of the translation cycle, much of these descriptions are in progress and with a little more work will be available to the community.

We thank the participants for their contributions. The workshops were very lively and, beyond the translations themselves, we talked about many facets of the Debian community. We hope to have helped the newcomers to contribute to the project on a regular basis.

Special thanks to Charles (charles), who ran one of the workshop days, to Paulo (phls), who is always around lending a hand, and to Fred Maranhão for his tireless work on the DDTSS.

The Brazilian Portuguese translation team

01 April, 2023 10:00AM

Mike Hommey

Announcing git-cinnabar 0.6.0

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.
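
For illustration, cloning a Mercurial repository through git-cinnabar uses an hg:: prefix on the URL (the repository below is just an example):

git clone hg::https://hg.mozilla.org/mozilla-unified

After that, the usual git pull and git push talk to the Mercurial remote through the helper.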

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.11?

  • Full rewrite of the Python parts of git-cinnabar in Rust.
  • Push performance is between twice and 10 times faster than 0.5.x, depending on scenarios.
  • Based on git 2.38.0.
  • git cinnabar fetch now accepts a --tags flag to fetch tags.
  • git cinnabar bundle now accepts a -t flag to give a specific bundlespec.
  • git cinnabar rollback now accepts a --candidates flag to list the metadata sha1 that can be used as target of the rollback.
  • git cinnabar rollback now also accepts a --force flag to allow any commit sha1 as metadata.
  • git cinnabar now has a self-update subcommand that upgrades it when a new version is available. The subcommand is only available when building with the self-update feature (enabled on prebuilt versions of git-cinnabar).
  • Disabled inexact copy/rename detection, that was enabled by accident.

What’s new since 0.6.0rc2?

  • Fixed use-after-free in metadata initialization.
  • Look for the new location of the CA bundle in Git for Windows 2.40.

01 April, 2023 02:17AM by glandium

Paul Wise

FLOSS Activities March 2023

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian QA services: disabled updating jessie as it was removed
  • Debian IRC: rescued #debian-s390x from inactive person
  • Debian servers: repair a /etc git repo
  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The gensim, sptag, purple-discord, harmony work was sponsored. All other work was done on a volunteer basis.

01 April, 2023 01:31AM

March 31, 2023

Enrico Zini

Things I learnt in March 2023

  • str.endswith() can take a tuple of possible endings instead of a single string
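
A quick illustration of that (not from the original post; values made up):

>>> "backup.tar.gz".endswith((".tgz", ".tar.gz"))
True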

About JACK and Debian

  • There are 3 JACK implementations: jackd1, jackd2, pipewire-jack.
  • jackd1 is mostly superseded by jackd2 and, as far as I understand, can be ignored
  • pipewire-jack integrates well with pipewire and the rest of the Linux audio world
  • jackd2 is the native JACK server. When started it handles the sound card directly, and will steal it from pipewire. Non-JACK audio applications will likely cease to see the sound card until JACK is stopped and wireplumber is restarted. Pipewire should be able to keep working as a JACK client but I haven't gone down that route yet
  • pipewire-jack mostly works. At some point I experienced glitches in complex JACK apps like giada or ardour that went away after switching to jackd2. I have not investigated further into the glitches
  • So: try things with pw-jack (example below). If you see odd glitches, try without pw-jack to use the native jackd2. Keep in mind, if you do so, that you will lose standard pipewire until you stop jackd2 and restart wireplumber.
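
For example (a sketch; ardour here stands in for whatever JACK application you want to run):

# run the application against pipewire's JACK replacement
pw-jack ardour

Starting the same application without the wrapper, while jackd2 is running, uses the native JACK server instead.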

31 March, 2023 10:00PM

Scarlett Gately Moore

KDE Snaps! Many new releases, more to come.

I have been extremely busy the last 2 weeks churning out KDE snaps! All of them have been tested and released on AMD64 and Arm64 architectures. If you run into any problems please file bugs @ http://bugs.kde.org and feel free to assign me. Thanks!

KDE Krita snap

Krita Version 5.1.5

https://apps.kde.org/krita/

KDE Parley snap

Parley Version 22.12.3

https://apps.kde.org/parley/

KDE Kate snap

Kate Version 22.12.3

https://apps.kde.org/kate/

KDE Okular snap

Okular Version 22.12.3

https://apps.kde.org/okular/

KDE Haruna snap

Haruna Version 0.10.3

https://apps.kde.org/haruna/

KDE Granatier snap

Granatier Version 22.12.3

https://apps.kde.org/granatier/

KDE Gwenview snap

Gwenview Version 22.12.3

https://apps.kde.org/gwenview/

KDE GCompris snap

GCompris-qt Version 3.2

https://apps.kde.org/gcompris/

KDE Bomber snap

Bomber Version 22.12.3

https://apps.kde.org/bomber/

KDE Falkon snap

Falkon Version 22.12.3

https://apps.kde.org/falkon/

KDE Ark snap

Ark Version 22.12.3

https://apps.kde.org/ark/

KDE Blinken snap

Blinken Version 22.12.3

https://apps.kde.org/blinken/

KDE Bovo snap

Bovo Version 22.12.3

https://apps.kde.org/bovo/

KDE Artikulate snap

Artikulate Version 22.12.3

https://apps.kde.org/artikulate/

There are many more snaps coming your way!

As usual, I hate to ask, but if you can spare anything we appreciate it, even if it is just kind words. Thank you!

https://gofund.me/d73342c3

31 March, 2023 05:47PM by sgmoore

Reproducible Builds (diffoscope)

diffoscope 240 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 240. This version includes the following changes:

[ Holger Levsen ]
* Update Lintian override info format in debian/source/lintian-overrides.
* Add Lintian overrides for some "very long lines" in test cases.
* Update Lintian overrides for tests being tagged source-is-missing or
  prebuilt.
* Add Lintian override for very long lines for debian/tests/control.
* Re-add two Lintian overrides about (well-known) source-is-missing
  instances.

[ Mattia Rizzolo ]
* Drop the use of include_package_data=True in setup.py.

You can find out more by visiting the project homepage.

31 March, 2023 12:00AM

March 30, 2023

hackergotchi for Jonathan McDowell

Jonathan McDowell

Buttering up my storage

(TL;DR: I’ve been trying out btrfs in some places instead of ext4, I’ve hit absolutely zero issues and there are a few features that make me plan to use it more.)

Despite (or perhaps because of) working on storage products for a reasonable chunk of my career I have tended towards a conservative approach to my filesystems. By the time I came to Linux ext2 was well established, the move to ext3 was a logical one (the joys of added journalling for faster recovery after unclean shutdowns) and for a long time my default stack has been MD raid with LVM2 on top and then ext4 as the filesystem.

I’ve dabbled with other filesystems; I ran XFS for a while on my VDR machine, and also when I had a large tradspool with INN, but never really had a hard requirement for it. I’ve ended up adminning a machine that had JFS in the past, largely for historical reasons, but don’t really remember any issues (vague recollections of NFS problems but that might just have been NFS being NFS).

However. ZFS has gathered itself a significant fan base and that makes me wonder about what it can offer and whether I want that. Firstly, let’s be clear that I’m never going to run a primary filesystem that isn’t part of the mainline kernel. So ZFS itself is out, because I run Linux. So what do I want that I can’t get with ext4? Firstly, I’d like data checksumming. As storage gets larger there’s a bigger chance of silent data corruption and while I have backups of the important stuff that doesn’t help if you don’t know you need to use them. Secondly, these days I have machines running containers, VMs, or with lots of source checkouts with a reasonable amount of overlap in their data. Disk space has got cheaper, but I’d still like to be able to do some sort of deduplication of common blocks.

So, I’ve been trying out btrfs. When I installed my desktop I went with btrfs for / and /home (I kept /boot as ext4). The thought process was that this was a local machine (so easy access if it all went wrong) and I take regular backups (so if it all went wrong I could recover). That was a year and a half ago and it’s been pretty dull; I mostly forget I’m running btrfs instead of ext4. This is on a machine that tracks Debian testing, so currently on kernel 6.1 but originally installed with 5.10. So it seems modern btrfs is reasonably stable for a machine that isn’t driven especially hard. Good start.

The fact I forget what filesystem I’m running points to the fact that I’m not actually doing anything special here. I get the advantage of data checksumming, but not much else. Two things spring to mind. Firstly, I don’t do snapshots. Given I run testing it might be wiser if I did take a snapshot before every apt-get upgrade, and I have a friend who does just that, but even when I’ve run unstable I’ve never had a machine get itself into a state that I couldn’t recover, so I haven’t spent time investigating. I note Ubuntu has apt-btrfs-snapshot, but it doesn’t seem to have had any updates for years.

The other thing I didn’t do when I installed my desktop is take advantage of subvolumes. I’m still trying to get my head around exactly what I want them for, but they provide a partial replacement for LVM when it comes to carving up disk space. Instead of the separate / and /home LVs I created I could have created a single LV that would have a single btrfs filesystem on it. / and /home would then be separate subvolumes, allowing me to snapshot each individually. Quotas can also be applied separately so there’s still the potential to prevent one subvolume taking all available space.
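
As a sketch of what that could have looked like (the device name is an assumption, and @/@home is just a common subvolume naming convention, not what I actually used):

# one big filesystem, carved into subvolumes instead of separate LVs
mkfs.btrfs /dev/vg0/combined
mount /dev/vg0/combined /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
# each subvolume then gets mounted in its place:
#   mount -o subvol=@ /dev/vg0/combined /target
#   mount -o subvol=@home /dev/vg0/combined /target/home
# and can be snapshotted individually:
btrfs subvolume snapshot /mnt/@ /mnt/@-pre-upgrade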

Encouraged by the lack of hassle with my desktop I decided to try moving my sbuild machine over to use btrfs for its build chroots. For Reasons this is a VM kindly hosted by a friend, rather than something local. To be honest these days I would probably go for local hosting, but it works and there’s no strong reason to move. The point is it’s remote, and so if migrating went wrong and I had to ask for assistance I’d be bothering someone who’s doing me a favour as it is.

The build VM is, of course, running LVM, and there was luckily some free space available. I’m reasonably sure the underlying storage involves spinning rust, so I did a laborious set of pvmove commands to make sure all the available space was at the start of the PV, and created a new btrfs volume there. I was advised that while btrfs-convert would do the job it was better to create a fresh filesystem where possible. This time I did create an initial root subvolume.

Configuring up sbuild was then much simpler than I’d expected. My setup originally started out as a set of tarballs for the chroots that would get untarred + used for the builds, which is pretty slow. Once overlayfs was mature enough I switched to that. I’d had a conversation with Enrico about his nspawn/btrfs setup, but it turned out Russ Allbery had written an excellent set of instructions on sbuild with btrfs. I tweaked my existing setup based on his details, and I was in business. Each chroot is a separate subvolume - I don’t actually end up having to mount them individually, but it means that only the chroot in use gets snapshotted. For example during a build the following can be observed:

# btrfs subvolume list /
ID 257 gen 111534 top level 5 path root
ID 271 gen 111525 top level 257 path srv/chroot/unstable-amd64-sbuild
ID 275 gen 27873 top level 257 path srv/chroot/bullseye-amd64-sbuild
ID 276 gen 27873 top level 257 path srv/chroot/buster-amd64-sbuild
ID 343 gen 111533 top level 257 path srv/chroot/snapshots/unstable-amd64-sbuild-328059a0-e74b-4d9f-be70-24b59ccba121

I was a little confused about whether I’d got something wrong because the snapshot top level is listed as 257 rather than 271, but digging further with btrfs subvolume show on the 2 mounted directories correctly showed the snapshot had a parent equal to the chroot, not /.

As a final step I ran jdupes via jdupes -1Br / to deduplicate things across the filesystem. It didn’t end up providing a significant saving unfortunately - I guess there’s a reasonable amount of change between Debian releases - but I then tried it on my desktop, which tends to have a large number of similar source trees checked out. There I managed to save about 5% on /home, which didn’t seem too shabby.

The sbuild setup has been in place for a couple of months now, and I’ve run quite a few builds on it while preparing for the freeze. So I’m fairly confident in the stability of the setup and my next move is to transition my local house server over to btrfs for its containers (which all run under systemd-nspawn). Those are generally running a Debian stable base so there should be a decent amount of commonality for deduping.

I’m not saying I’m yet at the point where I’ll default to btrfs on new installs, but I’m definitely looking at it for situations where I think I can get benefits from deduplication, or being able to divide up disk space without hard partitioning space.

(And, just to answer the worry I had when I started, I’ve got nowhere near ENOSPC problems, but I believe they’re handled much more gracefully these days. And my experience of ZFS when it got above 90% utilization was far from ideal too.)

30 March, 2023 06:50PM

March 28, 2023

hackergotchi for Shirish Agarwal

Shirish Agarwal

Three Ancient Civilizations, Tesla, IMU and other things.

Three Ancient Civilizations

I have been reading books for a long time, but somehow, I don’t know how or when, I realized that there are three ancient civilizations that time and again seem to fascinate authors, whether American or European. The three civilizations that get mentioned every now and then are the Egyptian, Greek and Roman ones, and I have no clue as to why. Another two civilizations closely follow them: the Mesopotamian and Sumerian civilizations. Why most authors, irrespective of genre, are mystified by these 5 civilizations is beyond me, but they do conjure up a lot of imagination. Now if I were on the right side of 40, maybe 22-25 onwards, and had the means or the opportunity or both, I would have gone and tried to learn as much as I could about these various civilizations. There is still a lot of enigma attached to them, and the official explanations of how these various civilizations ended seem too good to be true. One of the more interesting points has been how Greek mythology was subverted and flipped to make Roman mythology. Apparently, many Greek gods have a Roman ‘aspect’ whose qualities are opposite to the Greek aspect. I haven’t learned the reason why it is so, yet.

Apart from the Iziko Natural Museum in South Africa, which did have its share of wonders (the whole South African experience was surreal), I have not witnessed artifacts from the above anywhere other than in Africa. I could go on, but I don’t know what is real and what is mythology when it comes to various cultures, including whatever little I experienced in Africa.

Recent History

Having said the above, I also find that many Indians either do not know or are not interested in understanding recent history. I am talking of the time between the 1800s and the 1950s. British apologist Mr. William Dalrymple, in his book ‘Anarchy‘, has shared how the East India Company looted India and gave less than half back to the British. So where did the rest of the money go? It basically went to the tax-free havens around the UK. That tale is very similar to how Axis gold was used to build an entity called Switzerland from scratch. There have been books, as well as a miniseries or two, that document that fact. But for me this is not the real story; this is more of a side dish per se.

The real story perhaps is how the EU peace project was born. Because of Hitler’s rise and the way World War 2 happened, the Allies knew that they couldn’t subjugate Germany through another humiliation as they had after World War 1; who knows, another Hitler might have come. So while they did fine the Germans, they also helped them via the Marshall plan. Germany also realized that it had been warring with other European countries for over a thousand years, as well as with quite a few other countries. The first sort of treaty in that regard was the Western Union alliance. On the face of it, it was a defense treaty between sovereign powers, and interestingly, the UK at that point in time wanted more countries to be part of the Treaty of Dunkirk. This was quickly followed 2 years later by the Treaty of London, which made way for the Council of Europe to be formed. The reason I am sharing this is that a lot of Indians whom I meet on social media do not know that the UK was instrumental, front and center, in the formation of the European Union (EU). They also perhaps don’t realize that after World War 2, the UK was greatly diminished both financially and militarily. That is the reason it gave back its erstwhile colonies, including India and other countries. If the UK hadn’t become a part of the EU, it would have been called the poor man of Europe, as it had been a few hundred years ago.

In fact, after Brexit, the UK is the one nation that has fallen on dire times as much as it has. They have practically no food, supermarkets are running out of veggies and whatnot, and electricity is sky-high because they don’t want to curtail the profits of the gas companies, which in turn donate money to Tory coffers. Sadly, most of my brethren do not know that, hence I have had to share it. The EU became a power in its own right due to the Gravity model of trade. The U.S. is attempting the same thing and calls it near off-shoring.

Tesla, Investor Day Presentation, Toyota and Free Software

The above brings us nicely to the Tesla Investor Day that happened a couple of weeks back. The biggest news, though, was broken just 24 hours earlier, when the governor of Monterrey shared how Giga Mexico would be happening and shared some site photographs. There have been bits of news on that off and on since then. Toyota, meanwhile, has been putting up a lot of anti-EV posters and whatnot. In fact, India seems to be mirroring Japan in a lot of ways; the only difference is that Japan is still economically superior to India. I do sincerely hope that at least Japan can get out of its lost decades. I have asked privately if any of the Japanese translators would be willing to translate the anti-EV poster so we may all know.

Anti-EV poster by Toyota

Urinal outside UK Embassy

About a week back, a few people chanting Khalistan slogans entered the Indian Embassy in the UK and showed their flags. Now, in a retaliatory move against a friendly nation, we want to build a urinal outside the UK Embassy, after making a security downgrade for the Embassy. Such is the pettiness being shown by the Indian Govt., especially when it has very few friends. There are also plans to do the same for the U.S. and other embassies as well. I do not know when we will get common sense.

Holi

Just a few weeks back, Holi happened. It used to be a sweet and innocent festival, but over the last few years I have been hearing about and seeing sexual harassment on the rise. In fact, I saw quite a few lewd posts written to women on or around Holi, and also videos of the same. One of the most shameful incidents occurred with a Japanese tourist. She was not only sexually molested but also terrorized with the threat that if she were to report it, she could be raped and murdered. She promptly left Delhi. Once it was reported in the mass media, Delhi Police tried to show it was doing something. FWIW, Delhi’s crime-against-women stats have been at a record high and convictions at a record low.

Ecommerce Rules being changed, logistics gonna be tough.

Just today there have been changes in the Ecommerce rules (again), and this is gonna be a pain for almost all companies, big and small, with the exception of Adani and Ambani. Almost all players, including the Tatas, have called them out. Of course, all such laws have been passed without debate. In such a scenario, small startups cannot hope to grow their business. 😦

Inertial Measurement Unit or Full Body Tracking with cheap hardware.

Apparently, a whole host of companies are looking at 3-D tracking using cheap hardware known as IMU sensors. Sooner rather than later, cheap 3-D glasses and IMU sensors should make the 3-D market explode worldwide. There is a huge potential and upside to it, and it will probably overtake smartphones as well. But then the danger will be of our thoughts, ideas, nightmares etc. being shared without our consent. The more we tap into a virtual world, what stops anybody from tapping into our brain and practically stealing our identity in more than one way? I dunno if there are any Debian people or FSF projects working on the above. Even laws can only do so much; until and unless there are alternative places and ways, it would be difficult, to say the least 😦

28 March, 2023 10:29PM by shirishag75

hackergotchi for Jonathan Dowland

Jonathan Dowland

daily log

I was asked on the Fediverse1 to describe my $dayjob daily workflow. Blogging about it seemed like a good opportunity to take stock of my current setup.

The core of my approach is to maintain a daily log, rather like an engineering Lab Book2. My number one tip for anyone starting a career in Software Engineering (or for that matter systems administration) is to keep a daily log of what you intended to do, what you actually did, what you changed, what changed as a consequence, etcetera. I've already written about the tools I use for that: vimwiki.

The key drawback of that system is managing TODO lists. In Vimwiki, tasks look like this:

  * [ ] buy milk (still todo)
  * [X] clean the car (done)

These get dispersed or duplicated around different days/wiki pages. It's hard to get a current view of open (or otherwise relevant) tasks, and impractical to update many copies of the same task when its state changes (such as when it's done).

The solution I've adopted for now is another Vim plugin, taskwiki, which synchronises tasks with Taskwarrior3, an external task-management tool.

If I mark a task as "done", Taskwiki updates all references to that task to reflect the new state. I can also construct queries to list all tasks matching some criteria. I have a special Vimwiki page named "Backlog" which runs the query "all tasks tagged 'redhat' in state 'pending'" (Linked from the boilerplate at the top of every page I write, for quick access):

= Backlog | +redhat status:pending =
* [ ] buy milk (still todo)
...

Much like the base Vimwiki plugin, Taskwiki is very opinionated, and I've had to tame it by disabling several of its features. I've also hit a couple of mildly frustrating bugs (#368, #425). I might one day have a go at writing an alternative, simpler plugin in Lua (Neovim's native scripting language), but for now it works well enough and I don't have the time.

There's very little in this current workflow for managing scheduling tasks, and that's probably where the focus should be for my next iterative improvement efforts. I think Taskwarrior, the underlying tool, has some good support for that. I'd particularly like some more visual approaches for managing the backlog, such as something Kanban-style.
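
As a small sketch of what that might look like directly in Taskwarrior (the tag, dates and description here are made up, and this is not part of my current workflow):

# create a scheduled task with a due date, tagged for the backlog query
task add +redhat scheduled:monday due:friday 'draft the design doc'
# list everything still pending under that tag
task +redhat status:pending list

Tasks created this way would then surface in the Backlog query above the next time Taskwiki syncs.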


  1. sorry I can't find who/the toot any more
  2. Analogous to an engineering lab book, but the analogy is not exact. I recently read a little bit about what Engineers are actually taught to do in a lab book, and it's quite a bit more narrowly-scoped than I realised. This slide-deck is interesting: https://www.training.nih.gov/assets/Lab_Notebook_508_(new).pdf
  3. I rarely use Taskwarrior directly yet, and so I haven't written about it independently of Taskwiki.

28 March, 2023 02:34PM

hackergotchi for Mike Gabriel

Mike Gabriel

UbuntuTouch Focal OTA-1 has been released

Yesterday, the UBports core developer team released Ubuntu Touch Focal OTA-1.

(In fact, Raoul, Marius and I were in a conference call when Marius froze and said: the PR team already posted the release blog post; the post is out, but we haven't released yet... ahhhh... panic... Shall I?, Marius said, and we said: GO!!! This is why the release occurred in public five hours ahead of schedule. OMG.)

For all the details, please study:
https://ubports.com/blog/ubports-news-1/post/ubuntu-touch-ota-1-focal-re...

Credits

Thanks to all the developers, other contributors and funding providers that helped to reach this massive milestone. I dare to drop some names here at the risk of forgetting others (I put them in alphanumerical order): Alan, Alfred, Brian, Christoffer, Daniel, Eline, Florian, Guido, Jami, Jonathan, Kugi, Lionel, Maciek, Mardy, Marius, Mike, Nigel, Nikita, Raoul, Ratchanan, Robert, Sergey.

I have been involved in the development and release process over the past four years and I feel honoured to work with so many fine and genuine people on such a unique project.

It is a pleasure to work with you guys!!!

Also a big thanks to the UBports Foundation and its BoD for being the umbrella organisation of all Ubuntu Touch related initiatives.

Consumer-Ready

Ubuntu Touch is one of the very few Open Source projects that brings forth a 100% FLOSS phone operating system.

After using Ubuntu Touch myself for several months now, I can confirm that it is a consumer grade OS that can be used by non-tech people as a daily driver for mobile communications and connectivity.

Go for it and try it out.

28 March, 2023 08:46AM by sunweaver

hackergotchi for Matt Brown

Matt Brown

Ventilation Monitoring: Ensuring every space has clean, fresh air

The importance of clean, fresh indoor air is one of the most tangible takeaways of the Covid-19 pandemic. In addition to being an effective risk mitigation strategy for reducing the spread of respiratory illnesses, clean, fresh air is necessary to enable effective cognitive performance.

Monitoring indoor air quality is relatively easy to do, but traditionally has not been a key focus. I believe air quality monitoring should be accessible for any indoor space, and for highly occupied indoor spaces should be provided on a continuous basis. This post explores the need and an opportunity for a business that can accelerate the adoption of ventilation monitoring, through the sections below.

The importance of indoor air quality

Clean, fresh air is fundamental to life and health. That might sound obvious, but unfortunately being obvious is not enough to ensure the air we breathe is in fact always clean and healthy. Repeated studies have revealed that in many cases the air you’re breathing at school, in the bus or at work and probably also at home falls well below the ideal of what clean, fresh air should be.

Unclean air has potential long-term health impacts and has also been shown to lower cognitive performance impacting the ability to learn and work as well as increasing the risk of transmission of respiratory illnesses like Covid-19 and the flu.

Ventilation (replacing old stale air with clean fresh air) is the most effective and economical method of improving and maintaining high indoor air quality. Most New Zealand buildings (including schools and houses) are designed to rely on manual ventilation (opening windows), while newer buildings, often including larger or commercial buildings may use mechanical ventilation involving fans and ducts. Mechanical ventilation including filtration may also be required in situations where the outdoor air is not clean and fresh such as in a city or next to a busy intersection.

Observing the invisible

Overall air quality is a complex topic involving many contributing factors, many of which are invisible and not perceptible to us until well after adverse effects or irritation occur. This complexity and lack of visible signal is a large contributing factor to the ignorance and lack of attention towards indoor air quality that is prevalent in most buildings and indoor spaces today. Our attention is biased towards the risks that we can see, and this default bias has not been helped by hesitation and resistance to the idea that aerosol transmission and air quality is an important factor in preventing disease transmission that has only recently started to change. Zeynep Tufekci has a great overview that provides fascinating context for how an overreaction to the early incorrect theories of bad air and miasma causing disease contributed to aerosol transmission and air quality being incorrectly neglected for so long.

Correcting this history of inattention to indoor air quality is going to take time and effort, but one significant step that we can take to help start the journey to ensuring all indoor spaces have clean, healthy air is to make the invisible part visible. The concentration of carbon dioxide (CO2) in a space is an incredibly effective and easy to measure proxy for the ventilation of a space. The atmospheric background level of CO2 is around 420 parts per million (ppm), while our exhaled breath has concentrations as high as 40,000 ppm. Without effective ventilation, one or more people breathing in an enclosed space will rapidly lead to an observable increase in CO2 concentration, which in turn provides a signal that the ventilation is insufficient and needs to be improved.
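
As a rough worked example with those figures: if the CO2 reading in a meeting room has climbed to 1,420 ppm, then about (1420 - 420) / 40000 = 2.5% of every breath you take there is air that somebody in the room has already exhaled, which is a clear signal that the space needs more fresh air.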

Monitoring CO2 and improving ventilation is not a panacea for all possible air quality issues, but for the majority of buildings and indoor spaces, using CO2 as a proxy for ventilation and increasing ventilation when CO2 levels rise above recommended levels is a simple, effective and achievable approach that will deliver improvements in cognitive performance and reduction in the risk of disease transmission with few, if any, downsides or risks. See this Public Health Communication Centre briefing for a more detailed explanation.

Adding clean air to our hygiene practices

We have well established expectations of hygiene for the food we eat and the water we drink and these expectations are codified in regulations that ensure those providing these services do so in a way that gives us confidence that we’re not going to be at risk of illness. You may recall seeing food grade ratings prominently displayed on the walls of restaurants and cafes that you visit as an example of this.

Why should the air we breathe be treated any differently?

I think there is a strong argument that indoor air quality deserves regulation, both of the absolute quality of the air and ensuring that the practices and achieved air quality are clearly advertised and available. Ventilation monitoring via measurement of CO2 concentration provides an effective and achievable method that can be used to achieve this, and countries like Belgium and Japan are already starting to regulate indoor air quality. In the UK, the independent SAGE group of scientists has published “Scores on the Doors”, a proposal which demonstrates how CO2 monitoring can be helpful in providing information about the air quality of indoor spaces.

Unfortunately there is no movement in any of these directions in New Zealand yet, and no sign that regulation or even a basic campaign to raise awareness of ventilation and air quality is even being planned. This is disappointing, but even if such work was planned, it would still require appropriate ventilation monitoring products and services to enable it, and while there are some options available, it is not a fully solved space yet.

Existing ventilation monitoring options

Until recently the available offerings for ventilation monitoring have sat at two distinct ends of the price and quality spectrum:

  • Handheld “air quality meters” advertised as measuring CO2, but in reality reporting only an approximation. These meters do not contain actual CO2 sensors, and only approximate CO2 levels based on measurements of other components of the air. While cheap (often less than $100), these meters are not useful for providing reliable data that can be systematically used to assess and improve ventilation and should be avoided.

  • High-end building management systems (BMS), and industrial measurement products targeted at large buildings such as offices or commercial applications such as food production. These systems require specialist installation, often integrated with large whole-building air conditioning systems. These systems, if appropriately configured, can be a great solution for the types of buildings and spaces that can afford them, but by their nature and cost, they do not offer a solution for the majority of smaller buildings and indoor spaces where we tend to spend a lot of our time.

Over the last few years a growing number of companies have developed products that fit in between the unreliable “air quality meters” and the expensive BMS/industrial measurement products. Promising NZ-based options in this space include Air Suite, Tether and Monkeytronics. These products are wall mountable, resemble a smoke alarm and utilise a WiFi network to report their measurements to a supporting web service. Pricing varies between $200 and $300 ex GST per unit.

Aranet, while not NZ based, provides a handheld monitor – the Aranet4 Home, which is well regarded for quality and accuracy. Aranet4 Home devices are the most expensive in this space, retailing at $386 ex GST and offer a clunkier and less convenient set of connectivity options via a Bluetooth connection to an associated phone. To obtain similar reporting functionality to the other products requires upgrading to their Pro model and purchasing a separate base station at a combined cost of $1255 ex GST.

Outside the commercial product offerings are a number of open source DIY options, which can be built by anyone with basic electronics knowledge. AirGradient is a leading example based in Thailand, and within New Zealand Oliver Seiler’s CO2 Monitor provides similar functionality. These open source options have a parts cost in the $100-$150 range, depending on volume built and provide high-quality measurements via trusted CO2 sensors while also offering huge flexibility in terms of how they operate, interact with users and potential supporting web services.

An opportunity: Small businesses and organisations

While a growing number of high-quality CO2 monitors has the potential to help drive increased adoption of ventilation monitoring, the plethora of small businesses and organisations that own, operate and manage many of the indoor spaces we visit on a day-to-day basis do not appear to be well served by these existing products.

To deploy ventilation monitoring a small business or organisation needs to first become aware of the need or demand for it, and then have a simple and easy path to acquire and install the monitor and access the data. Little to no marketing or demand generation appears to be targeted towards this market from the existing businesses and tellingly, several of the products are not directly available for sale, requiring interaction with a salesperson to purchase. This indicates a focus on selling to larger customers who have a campus or portfolio of buildings and will purchase in larger quantities than the typical small business or organisation will.

Small businesses and organisations are likely to occupy smaller buildings and spaces where manual ventilation is the prevalent method of improving and maintaining air quality. Maintaining clean, fresh air via manual ventilation requires the occupants of the space to receive an obvious and straightforward signal when action (opening windows, etc) is required. While the products above all tend to provide some form of local feedback and display in the room, the indication provided and notification of when to take action is less obvious and prominent than would be ideal in a situation where manual ventilation is being relied upon.

Informally testing this opportunity with family and friends running small businesses over the last few months has resulted in promising feedback. One particular success story was the discovery of a fresh air duct on the air conditioning unit in a small office that had never been connected to the outside air and was simply recirculating air from the ceiling space back into an office! The resulting stuffiness and poor air quality had been noticed, but without the clear indication from the CO2 monitor that the air conditioning was actually making things worse, rather than better, the underlying issue had not been understood. With the issue fixed and the duct now connected, that business is now enjoying much more productive and healthy working conditions.

Next steps

Many small businesses and organisations are likely to have poor air quality and opportunities for improvement similar to the example above that are waiting to be found and fixed, and the existing products available are neither focused or ideal for the needs of this market.

I have spent some time over the past six months building a basic CO2 monitoring service that I have used to deploy ventilation monitoring to our local school, and a few other local businesses. There are a number of challenges that still need to be addressed in order to scale the business up, but I think there is a reasonable chance that I can build a viable business that offers an attractive and useful solution that would accelerate the deployment of ventilation monitoring for small businesses and organisations.

In an upcoming post, I will explain the foundations of the service that I have built to date, the challenges that need to be overcome and how I plan to evolve the service from the current prototype into a sustainable, bootstrapped business.

28 March, 2023 12:44AM

kpcyrd

Writing a Linux executable from scratch with x86_64-unknown-none and Rust

I recently mentioned on the internet I did work in this direction and a friend of mine asked me to write a blogpost on this. I didn’t blog for a long time (keeping all the goodness for myself hehe), so here we go. 🦝 To set the scene, let’s assume we want to make an executable binary for x86_64 Linux that’s supposed to be extremely portable. It should work on both Debian and Arch Linux. It should work on systems without glibc like Alpine Linux. It should even work in a FROM scratch Docker container. In a more serious setting you would statically link musl-libc with your Rust program, but today we’re in a silly-goofy mood so we’re going to try to make this work without a libc. And we’re also going to use Rust for this, more specifically the stable release channel of Rust, so this blog post won’t use any nightly-only features that might still change/break. If you’re using a Rust version that was recent at the time of writing or later (>= 1.68.0 according to my computer), you should be able to try this at home just fine™.

This tutorial assumes you have no prior programming experience in any programming language, but it’s going to involve some x86_64 assembly. If you already know what a syscall is, you’ll be just fine. If this is your first exposure to programming you might still be able to follow along, but it might be a wild ride.

If you haven’t already, install rustup (possibly also available in your package manager, who knows?)

# when asked, press enter to confirm default settings
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

This is going to install everything you need to use Rust on Linux (this tutorial assumes you’re following along on Linux btw). Usually it’s still using a system linker (by calling the cc binary, erroring out if none is present), but instead we’re going to use rustup to install an additional target:

rustup target add x86_64-unknown-none

I don’t know if/how this is made available by Linux distributions, so I recommend following along with rust installed from rustup.

Anyway, we’re creating a new project with cargo, this creates a new directory that we can then change into (you might’ve done this before):

cargo new hack-the-planet
cd hack-the-planet

There’s going to be a file named Cargo.toml, we don’t need to make any changes there, but the one that was auto-generated for me at the time of writing looks like this:

[package]
name = "hack-the-planet"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]

There’s a second file named src/main.rs, it’s going to contain some pre-generated hello world, but we’re going to delete it and create a new, empty file:

rm src/main.rs
touch src/main.rs

Alrighty, leaving this file empty is not valid, but we’re going to walk through the individual steps, so we’re going to try to build with an empty file first. At this point I would like to credit this chapter of a fasterthanli.me series and a blogpost by Philipp Oppermann; this tutorial is merely a 2023 update that makes it work with stable Rust. Let’s run the build:

$ cargo build --release --target x86_64-unknown-none
   Compiling hack-the-planet v0.1.0 (/hack-the-planet)
error[E0463]: can't find crate for `std`
  |
  = note: the `x86_64-unknown-none` target may not support the standard library
  = note: `std` is required by `hack_the_planet` because it does not declare `#![no_std]`

error[E0601]: `main` function not found in crate `hack_the_planet`
  |
  = note: consider adding a `main` function to `src/main.rs`

Some errors have detailed explanations: E0463, E0601.
For more information about an error, try `rustc --explain E0463`.
error: could not compile `hack-the-planet` due to 2 previous errors

Since this doesn’t use a libc (oh right, I forgot to mention this up to this point actually), this also means there’s no std standard library. Usually the standard library of Rust still uses the system libc to do syscalls, but since we specify our libc as none this means std won’t be available (use std::fs::rename won’t work). There are still other functions we can use and import, for example there’s core that’s effectively a second standard library, but much smaller.

To opt-out of the std standard library, we can put #![no_std] into src/main.rs:

#![no_std]

Running the build again:

$ cargo build --release --target x86_64-unknown-none
   Compiling hack-the-planet v0.1.0 (/hack-the-planet)
error[E0601]: `main` function not found in crate `hack_the_planet`
 --> src/main.rs:1:11
  |
1 | #![no_std]
  |           ^ consider adding a `main` function to `src/main.rs`

For more information about this error, try `rustc --explain E0601`.
error: could not compile `hack-the-planet` due to previous error

Rust noticed we didn’t define a main function and suggest we add one. This isn’t what we want though so we’ll politely decline and inform Rust we don’t have a main and it shouldn’t attempt to call it. We’re adding #![no_main] to our file and src/main.rs now looks like this:

#![no_std]
#![no_main]

Running the build again:

$ cargo build
   Compiling hack-the-planet v0.1.0 (/hack-the-planet)
error: `#[panic_handler]` function required, but not found

error: language item required, but not found: `eh_personality`
  |
  = note: this can occur when a binary crate with `#![no_std]` is compiled for a target where `eh_personality` is defined in the standard library
  = help: you may be able to compile for a target that doesn't need `eh_personality`, specify a target with `--target` or in `.cargo/config`

error: could not compile `hack-the-planet` due to 2 previous errors

Rust is asking us for a panic handler, basically “I’m going to jump to this address if something goes terribly wrong and execute whatever you put there”. Eventually we would put some code there to just exit the program, but for now an infinite loop will do. This is likely going to get stripped away by the compiler anyway if it notices our program has no code-branches leading to a panic and the code is unused. Our src/main.rs now looks like this:

#![no_std]
#![no_main]

use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

Running the build again:

$ cargo build --release --target x86_64-unknown-none
   Compiling hack-the-planet v0.1.0 (/hack-the-planet)
    Finished release [optimized] target(s) in 0.16s

Neat, it worked! What happens if we run it? 👀

$ target/x86_64-unknown-none/release/hack-the-planet
Segmentation fault (core dumped)

Oops. Let’s try to disassemble it:

$ objdump -d target/x86_64-unknown-none/release/hack-the-planet

target/x86_64-unknown-none/release/hack-the-planet:     file format elf64-x86-64

Ok, that looks pretty “from scratch” to me. The file contains no cpu instructions. Also note how our infinite loop is not present (as predicted).

Making a basic program and executing it

Ok, let’s try to make a valid program that basically just cleanly exits. First let’s try to add some cpu instructions and verify they’re indeed getting executed. Lemme introduce the ✨ INT 3 ✨ instruction of x86_64 assembly. In binary it’s also known as the 0xCC opcode. It crashes our program in a slightly different way, so if the error message changes, we know it worked. The other tutorials use a #[naked] function for the entry point, but since this feature isn’t stabilized at the time of writing we’re going to use the global_asm! macro. Also don’t worry, I’m not going to introduce every assembly instruction individually. Our program now looks like this:

#![no_std]
#![no_main]

use core::arch::global_asm;
use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

global_asm! {
    ".global _start",
    "_start:",
    "int 3"
}

Running the build again (ok basically from now on the build is always going to be expected to work unless I say otherwise):

$ cargo build --release --target x86_64-unknown-none
   Compiling hack-the-planet v0.1.0 (/hack-the-planet)
    Finished release [optimized] target(s) in 0.11s

Let’s try to disassemble the binary again:

$ objdump -d target/x86_64-unknown-none/release/hack-the-planet

target/x86_64-unknown-none/release/hack-the-planet:     file format elf64-x86-64


Disassembly of section .text:

0000000000001210 <_start>:
    1210:	cc                   	int3

And sure enough, there’s a cc instruction that was identified as int3. Let’s try to run this:

$ target/x86_64-unknown-none/release/hack-the-planet
Trace/breakpoint trap (core dumped)

The error message of the crash is now slightly different because it’s hitting our breakpoint cpu instruction. Fun fact btw: if you run this in strace you can see this isn’t making any system calls (aka not talking to the kernel at all, it just crashes):

$ strace -f ./hack-the-planet
execve("./hack-the-planet", ["./hack-the-planet"], 0x74f12430d1d8 /* 39 vars */) = 0
--- SIGTRAP {si_signo=SIGTRAP, si_code=SI_KERNEL, si_addr=NULL} ---
+++ killed by SIGTRAP (core dumped) +++
[1]    2796457 trace trap (core dumped)  strace -f ./hack-the-planet

Let’s try to make a program that does a clean shutdown. To do this we inform the kernel with a system call that we would like to exit. We can get more info on this with man 2 exit; it defines exit like this:

[[noreturn]] void _exit(int status);

On Linux this syscall is actually called _exit and exit is implemented as a libc function, but we don’t care about any of that today, it’s going to do the job just fine. Also note how it takes a single argument of type int. In C-speak this means “signed 32 bit”, i32 in Rust.

Next we need to figure out the “syscall number” of this syscall. These numbers are cpu architecture specific for some reason (idk, idc). We’re looking these numbers up with ripgrep in /usr/include/asm/:

$ rg __NR_exit /usr/include/asm
/usr/include/asm/unistd_64.h
64:#define __NR_exit 60
235:#define __NR_exit_group 231

/usr/include/asm/unistd_x32.h
53:#define __NR_exit (__X32_SYSCALL_BIT + 60)
206:#define __NR_exit_group (__X32_SYSCALL_BIT + 231)

/usr/include/asm/unistd_32.h
5:#define __NR_exit 1
253:#define __NR_exit_group 252

Since we’re on x86_64 the correct value is the one in unistd_64.h, 60. Also, on x86_64 the syscall number goes into the rax cpu register, the status argument goes in the rdi register. The return value of the syscall is going to be placed in the rax register after the syscall is done, but for exit the execution is never given back to us. Let’s try to write 60 into the rax register and 69 into the rdi register. To copy into registers we’re going to use the mov destination, source instruction to copy from source to destination. With these registers setup we can use the syscall cpu instruction to hand execution over to the kernel. Don’t worry, there’s only one more assembly instruction coming and for everything else we’re going to use Rust.

Our code now looks like this:

#![no_std]
#![no_main]

use core::arch::global_asm;
use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

global_asm! {
    ".global _start",
    "_start:",
    "mov rax, 60",
    "mov rdi, 69",
    "syscall"
}

Build the binary, run it and print the exit code:

$ cargo build --release --target x86_64-unknown-none
$ target/x86_64-unknown-none/release/hack-the-planet; echo $?
69

Nice. Rust is quite literally putting these cpu instructions into the binary for us, nothing else.

$ objdump -d target/x86_64-unknown-none/release/hack-the-planet

target/x86_64-unknown-none/release/hack-the-planet:     file format elf64-x86-64


Disassembly of section .text:

0000000000001210 <_start>:
    1210:	48 c7 c0 3c 00 00 00 	mov    $0x3c,%rax
    1217:	48 c7 c7 45 00 00 00 	mov    $0x45,%rdi
    121e:	0f 05                	syscall

Running this with strace shows the program does exactly one thing.

$ strace -f ./hack-the-planet
execve("./hack-the-planet", ["./hack-the-planet"], 0x70699fe8c908 /* 39 vars */) = 0
exit(69)                                = ?
+++ exited with 69 +++

Writing Rust

Ok, but even though cpu instructions can be fun at times, I’d rather not deal with them most of the time (this might strike you as odd, considering this blog post). Instead let’s try to define a function in Rust and call into that instead. We’re going to define this function as unsafe (btw, none of this is taking advantage of the safety guarantees by Rust, in case it wasn’t obvious. This tutorial is mostly going to stick to unsafe Rust, but for bigger projects you can attempt to reduce your usage of unsafe to opt back into “normal” safe Rust). It also declares the function with #[no_mangle] so the function name is preserved as main and we can call it from our global_asm entry point. Lastly, when our program is started it’s going to get the stack address passed in one of the cpu registers; this value is expected to be passed to our function as an argument. Our function declares ! as return type, which means it never returns:

#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    // TODO: this is missing
}

This won’t compile yet, we need to add our assembly for the exit syscall back in.

#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    asm!(
        "syscall",
        in("rax") 60,
        in("rdi") 0,
        options(noreturn)
    );
}

This time we’re using the asm! macro, this is a slightly more declarative approach. We want to run the syscall cpu instruction with 60 in the rax register, and this time we want the rdi register to be zero, to indicate a successful exit. We also use options(noreturn) so Rust knows it should assume execution does not resume after this assembly is executed (the Linux kernel guarantees this). We modify our global_asm! entrypoint to call our new main function, and to copy the stack address from rsp into the register for the first argument rdi because it would otherwise get lost forever:

global_asm! {
    ".global _start",
    "_start:",
    "mov rdi, rsp",
    "call main"
}

Our full program now looks like this:

#![no_std]
#![no_main]

use core::arch::asm;
use core::arch::global_asm;
use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

global_asm! {
    ".global _start",
    "_start:",
    "mov rdi, rsp",
    "call main"
}

#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    asm!(
        "syscall",
        in("rax") 60,
        in("rdi") 0,
        options(noreturn)
    );
}

After building and disassembling this the Rust compiler is slowly starting to do work for us:

$ cargo build --release --target x86_64-unknown-none
$ objdump -d target/x86_64-unknown-none/release/hack-the-planet

target/x86_64-unknown-none/release/hack-the-planet:     file format elf64-x86-64


Disassembly of section .text:

0000000000001210 <_start>:
    1210:	48 89 e7             	mov    %rsp,%rdi
    1213:	e8 08 00 00 00       	call   1220 <main>
    1218:	cc                   	int3
    1219:	cc                   	int3
    121a:	cc                   	int3
    121b:	cc                   	int3
    121c:	cc                   	int3
    121d:	cc                   	int3
    121e:	cc                   	int3
    121f:	cc                   	int3

0000000000001220 <main>:
    1220:	50                   	push   %rax
    1221:	b8 3c 00 00 00       	mov    $0x3c,%eax
    1226:	31 ff                	xor    %edi,%edi
    1228:	0f 05                	syscall
    122a:	0f 0b                	ud2

The mov and syscall instructions are still the same, but it noticed it can XOR the rdi register with itself to set it to zero (writing to the lower 32-bit half, edi, automatically clears the upper half of rdi). It’s using x86 assembly language (the 32-bit predecessor of x86_64, whose instructions also happen to work on x86_64) to do so, which is why the register is referred to as edi in the disassembly. You can also see it’s inserting a bunch of 0xCC instructions (for alignment), and Rust puts the opcodes 0x0F 0x0B at the end of the function to force an “invalid opcode exception” so the program is guaranteed to crash in case the exit syscall doesn’t do it.

This code still executes as expected:

$ strace -f ./hack-the-planet
execve("./hack-the-planet", ["./hack-the-planet"], 0x72dae7e5dc08 /* 39 vars */) = 0
exit(0)                                 = ?
+++ exited with 0 +++

Adding functions

Ok we’re getting closer but we aren’t quite there yet. Let’s try to write an exit function for our assembly that we can then call like a normal function. Remember that it takes a signed 32 bit integer that’s supposed to go into rdi.

unsafe fn exit(status: i32) -> ! {
    asm!(
        "syscall",
        in("rax") 60,
        in("rdi") status,
        options(noreturn)
    );
}

Actually, since this function doesn’t take any raw pointers, and any i32 is valid for this syscall, we’re going to remove the unsafe marker from this function. When doing this we still need to use unsafe { } within the function for our inline assembly.

fn exit(status: i32) -> ! {
    unsafe {
        asm!(
            "syscall",
            in("rax") 60,
            in("rdi") status,
            options(noreturn)
        );
    }
}

Let’s call this function from our main, and also replace the infinite loop in the panic handler with a call to exit(1):

#![no_std]
#![no_main]

use core::arch::asm;
use core::arch::global_asm;
use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    exit(1);
}

global_asm! {
    ".global _start",
    "_start:",
    "mov rdi, rsp",
    "call main"
}

fn exit(status: i32) -> ! {
    unsafe {
        asm!(
            "syscall",
            in("rax") 60,
            in("rdi") status,
            options(noreturn)
        );
    }
}

#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    exit(0);
}

Running this still works, but interestingly the generated assembly didn’t change at all:

$ cargo build --release --target x86_64-unknown-none
$ objdump -d target/x86_64-unknown-none/release/hack-the-planet

target/x86_64-unknown-none/release/hack-the-planet:     file format elf64-x86-64


Disassembly of section .text:

0000000000001210 <_start>:
    1210:	48 89 e7             	mov    %rsp,%rdi
    1213:	e8 08 00 00 00       	call   1220 <main>
    1218:	cc                   	int3
    1219:	cc                   	int3
    121a:	cc                   	int3
    121b:	cc                   	int3
    121c:	cc                   	int3
    121d:	cc                   	int3
    121e:	cc                   	int3
    121f:	cc                   	int3

0000000000001220 <main>:
    1220:	50                   	push   %rax
    1221:	b8 3c 00 00 00       	mov    $0x3c,%eax
    1226:	31 ff                	xor    %edi,%edi
    1228:	0f 05                	syscall
    122a:	0f 0b                	ud2

Rust noticed there’s no need to make it a separate function at runtime and instead merged the instructions of the exit function directly into our main. It also noticed the 0 argument in exit(0) means “rdi is supposed to be zero” and uses the XOR optimization mentioned before.
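
If you wanted exit to remain visible as a separate function in the disassembly, one purely illustrative option is the #[inline(never)] attribute (we’re not doing that here; the inlined version is exactly what we want):

// Hypothetical variant: hint the compiler to keep exit out-of-line,
// purely so it shows up as its own symbol in the disassembly.
#[inline(never)]
fn exit(status: i32) -> ! {
    unsafe {
        asm!(
            "syscall",
            in("rax") 60,
            in("rdi") status,
            options(noreturn)
        );
    }
}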

Since main is not calling any unsafe functions anymore we could mark it as safe too, but in the next few functions we’re going to deal with file descriptors and raw pointers, so main would likely be the only safe function in this tutorial; let’s just keep the unsafe marker.
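
For reference, this is what the safe variant would look like (a sketch; we’re keeping the unsafe version going forward):

// Taking a raw pointer as a parameter is safe; only dereferencing it
// would require unsafe, and we never do that here.
#[no_mangle]
fn main(_stack_top: *const u8) -> ! {
    exit(0);
}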

Printing text

Ok, let’s try to do a quick hello world; to do this we’re going to call the write syscall. Looking it up with man 2 write:

ssize_t write(int fd, const void buf[.count], size_t count);

The write syscall takes 3 arguments and returns a signed size_t; in Rust this is called isize. In C, size_t is an unsigned integer type that can hold any value of sizeof(...) for the given platform. ssize_t can only store half of that range because it uses one of the bits to indicate that an error has occurred (the first s means signed; write returns -1 in case of an error).
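
As an aside: at the raw syscall level Linux doesn’t use a separate errno variable; failures are reported by returning the negated errno value in rax. A small hypothetical helper (not used in the rest of this tutorial) could decode such a return value:

fn check(ret: isize) -> Result<usize, i32> {
    if ret < 0 {
        // raw syscalls return -errno on failure, e.g. -9 for EBADF
        Err((-ret) as i32)
    } else {
        Ok(ret as usize)
    }
}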

The arguments for write are:

  • the file descriptor to write to. stdout is located on file descriptor 1.
  • a pointer/address to some memory.
  • the number of bytes that should be written, starting at the given address.

Let’s also look up the syscall number of write:

% rg __NR_write /usr/include/asm
/usr/include/asm/unistd_64.h
5:#define __NR_write 1
24:#define __NR_writev 20

/usr/include/asm/unistd_32.h
8:#define __NR_write 4
150:#define __NR_writev 146

/usr/include/asm/unistd_x32.h
5:#define __NR_write (__X32_SYSCALL_BIT + 1)
323:#define __NR_writev (__X32_SYSCALL_BIT + 516)

The value we’re looking for is 1. Let’s write our write function (heh).

unsafe fn write(fd: i32, buf: *const u8, count: usize) -> isize {
    let r0;
    asm!(
        "syscall",
        inlateout("rax") 1 => r0,
        in("rdi") fd,
        in("rsi") buf,
        in("rdx") count,
        lateout("rcx") _,
        lateout("r11") _,
        options(nostack, preserves_flags)
    );
    r0
}

Now that’s a lot of stuff at once. Since this syscall is actually going to hand execution back to our program, we need to let Rust know which cpu registers the syscall is writing to, so Rust doesn’t attempt to use them to store data (data that would be silently overwritten by the syscall). inlateout("rax") 1 => r0 means we’re writing a value into the register and want the result back in the variable r0. in("rdi") fd means we want to write the value of fd into the rdi register. lateout("rcx") _ means the Linux kernel may write to that register (so the previous value may get lost), but we don’t want to store the value anywhere (the underscore acts as a dummy variable name).

This doesn’t compile just yet, though:

$ cargo build --release --target x86_64-unknown-none
   Compiling hack-the-planet v0.1.0 (/hack-the-planet)
error: incompatible types for asm inout argument
  --> src/main.rs:35:26
   |
35 |         inlateout("rax") 1 => r0,
   |                          ^    ^^ type `isize`
   |                          |
   |                          type `i32`
   |
   = note: asm inout arguments must have the same type, unless they are both pointers or integers of the same size

error: could not compile `hack-the-planet` due to previous error

Rust has inferred the type of r0 is isize since that’s what our function returns, but the type of the input value for the register was inferred to be i32. We’re going to select a specific number type to fix this.

unsafe fn write(fd: i32, buf: *const u8, count: usize) -> isize {
    let r0;
    asm!(
        "syscall",
        inlateout("rax") 1isize => r0,
        in("rdi") fd,
        in("rsi") buf,
        in("rdx") count,
        lateout("rcx") _,
        lateout("r11") _,
        options(nostack, preserves_flags)
    );
    r0
}

We can now call our new write function like this:

write(1, b"Hello world\n".as_ptr(), 12);

We need to set the number of bytes we want to write explicitly because there’s no concept of null-byte termination in the write system call; it’s quite literally “write the next X bytes, starting from this address”.
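
Since a Rust slice already knows its length, a small hypothetical wrapper could derive the count for us (the program below keeps the explicit call to stay close to the syscall):

unsafe fn print(buf: &[u8]) -> isize {
    // derive the byte count from the slice instead of hard-coding it
    write(1, buf.as_ptr(), buf.len())
}

Our program now looks like this: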

#![no_std]
#![no_main]

use core::arch::asm;
use core::arch::global_asm;
use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    exit(1);
}

global_asm! {
    ".global _start",
    "_start:",
    "mov rdi, rsp",
    "call main"
}

fn exit(status: i32) -> ! {
    unsafe {
        asm!(
            "syscall",
            in("rax") 60,
            in("rdi") status,
            options(noreturn)
        );
    }
}

unsafe fn write(fd: i32, buf: *const u8, count: usize) -> isize {
    let r0;
    asm!(
        "syscall",
        inlateout("rax") 1isize => r0,
        in("rdi") fd,
        in("rsi") buf,
        in("rdx") count,
        lateout("rcx") _,
        lateout("r11") _,
        options(nostack, preserves_flags)
    );
    r0
}

#[no_mangle]
unsafe fn main(_stack_top: *const u8) -> ! {
    write(1, b"Hello world\n".as_ptr(), 12);
    exit(0);
}

Let’s try to build and disassemble it:

$ cargo build --release --target x86_64-unknown-none
$ objdump -d target/x86_64-unknown-none/release/hack-the-planet

target/x86_64-unknown-none/release/hack-the-planet:     file format elf64-x86-64


Disassembly of section .text:

0000000000001220 <_start>:
    1220:	48 89 e7             	mov    %rsp,%rdi
    1223:	e8 08 00 00 00       	call   1230 <main>
    1228:	cc                   	int3
    1229:	cc                   	int3
    122a:	cc                   	int3
    122b:	cc                   	int3
    122c:	cc                   	int3
    122d:	cc                   	int3
    122e:	cc                   	int3
    122f:	cc                   	int3

0000000000001230 <main>:
    1230:	50                   	push   %rax
    1231:	48 8d 35 d5 ef ff ff 	lea    -0x102b(%rip),%rsi        # 20d <_start-0x1013>
    1238:	b8 01 00 00 00       	mov    $0x1,%eax
    123d:	ba 0c 00 00 00       	mov    $0xc,%edx
    1242:	bf 01 00 00 00       	mov    $0x1,%edi
    1247:	0f 05                	syscall
    1249:	b8 3c 00 00 00       	mov    $0x3c,%eax
    124e:	31 ff                	xor    %edi,%edi
    1250:	0f 05                	syscall
    1252:	0f 0b                	ud2

This time there are 2 syscalls, first write, then exit. For write it’s setting up the 3 arguments in our cpu registers (rdi, rsi, rdx). The lea instruction subtracts 0x102b from the rip register (the instruction pointer) and places the result in the rsi register. This is effectively saying “an address relative to wherever this code was loaded into memory”. The instruction pointer is going to point directly behind the opcodes of the lea instruction, so 0x1238 - 0x102b = 0x20d. This address is also pointed out in the disassembly as a comment.

We don’t see the string in our disassembly but we can convert our 0x20d hex to 525 in decimal and use dd to read 12 bytes from that offset, and sure enough:

$ dd bs=1 skip=525 count=12 if=target/x86_64-unknown-none/release/hack-the-planet
Hello world
12+0 records in
12+0 records out

Executing our binary with strace also shows the new write syscall (with the bytes that are being written mixed into the output).

$ strace -f ./hack-the-planet
execve("./hack-the-planet", ["./hack-the-planet"], 0x74493abe64a8 /* 39 vars */) = 0
write(1, "Hello world\n", 12Hello world
)           = 12
exit(0)                                 = ?
+++ exited with 0 +++
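
Stripping the symbols is a single command (a sketch, assuming GNU binutils is installed):

$ strip target/x86_64-unknown-none/release/hack-the-planet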

After running strip to remove some symbols, the binary is so small that if you open it in a text editor it fits on a screenshot.

28 March, 2023 12:00AM

March 27, 2023

Vincent Fourmond

QSoas version 3.2 is out

Version 3.2 of QSoas is out! It is mostly a bug-fix release, fixing the computation mistake found in the eecr-relay wave shape fit; see the correction to our initial article in JACS. We strongly encourage all users of the eecr-relay wave shape fit to upgrade and, unfortunately, to refit previously fitted data, as the results might change. The other wave shape fits are not affected by the issue.

New features

In addition to this important bug fix, new possibilities have been added, including a way to make fits with partially global parameters using the new define-indexed-fit command, a way to pick the best parameters dataset-by-dataset within fit trajectories, a parameter space explorer that tries all possible permutations of one or more sets of parameters, and the possibility to save the results of a command to a global Ruby variable.

There are a lot of other new features and improvements; look for the full list there.

About QSoas


QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. The current version is 3.2. You can download its source code or precompiled versions for MacOS and Windows for free there. Alternatively, you can clone from the GitHub repository.

27 March, 2023 04:06PM by Vincent Fourmond (noreply@blogger.com)

Simon Josefsson

OpenPGP master key on Nitrokey Start

I’ve used hardware-backed OpenPGP keys since 2006 when I imported newly generated rsa1024 subkeys to a FSFE Fellowship card. This worked well for several years, and I recall buying more ZeitControl cards for multi-machine usage and backup purposes. As a side note, I recall being unsatisfied with the weak 1024-bit RSA subkeys at the time – my primary key was a somewhat stronger 1280-bit RSA key created back in 2002 — but OpenPGP cards at the time didn’t support more than 1024 bit RSA, and were (and still often are) also limited to power-of-two RSA key sizes which I dislike.

I had my master key on disk with a strong password for a while, mostly to refresh the expiration time of the subkeys and to sign others’ OpenPGP keys. At some point I stopped carrying around encrypted copies of my master key. That was my main setup when I migrated to a new stronger RSA 3744 bit key with rsa2048 subkeys on a YubiKey NEO back in 2014. At that point, signing others’ OpenPGP keys was a rare enough occurrence that I settled for bringing out my offline machine to perform this operation, transferring the public key to sign on USB sticks. In 2019 I re-evaluated my OpenPGP setup and ended up creating an offline Ed25519 key with subkeys on a FST-01G running Gnuk. My approach for signing others’ OpenPGP keys was still to bring out my offline machine and sign things using the master secret, using USB sticks for storage and transport. Which meant I almost never did that, because it took too much effort. So my 2019-era Ed25519 key still only has a handful of signatures on it, since I had essentially stopped signing others’ keys, which is the traditional way of getting signatures in return.

None of this caused any critical problem for me because I continued to use my old 2014-era RSA3744 key in parallel with my new 2019-era Ed25519 key, since too many systems didn’t handle Ed25519. However, during 2022 this changed, and the only remaining environment that I still used my RSA3744 key for was in Debian — and they require OpenPGP signatures on the new key to allow it to replace an older key. I was in denial about this sub-optimal solution during 2022 and endured its practical consequences, having to use the YubiKey NEO (which I had replaced with a permanently inserted YubiKey Nano at some point) for Debian-related purposes alone.

In December 2022 I bought a new laptop and set up an FST-01SZ with my Ed25519 key, and while I have taken a vacation from Debian, I continue to extend the expiration period of the old RSA3744 key in case I ever have to use it again, so the overall OpenPGP setup was still sub-optimal. Having two valid OpenPGP keys at the same time causes people to use both for email encryption (leading me to have to use both devices), and the WKD Key Discovery protocol doesn’t like two valid keys either. At FOSDEM’23 I ran into Andre Heinecke at GnuPG and I couldn’t help complaining about how complex and unsatisfying all OpenPGP-related matters were; he mildly ignored my rant and asked why I didn’t put the master key on another smartcard. The comment sunk in when I came home, and recently I connected all the dots; this post is a summary of what I did to move my offline OpenPGP master key to a Nitrokey Start.

First, a word about device choice: I still prefer to use hardware devices that are as compatible with free software as possible, but neither the FST-01G nor the FST-01SZ is easily available for purchase anymore. I got a comment about the Nitrokey Start in my last post, and had two of them available to experiment with. There are things to dislike about the Nitrokey Start compared to the YubiKey (e.g., the relatively insecure chip architecture, the bulkier form factor and the lack of FIDO/U2F/OATH support) but – as far as I know – there is no more widely available owner-controlled device that is manufactured with the intended purpose of implementing an OpenPGP card. Thus it hits the sweet spot for me.

Nitrokey Start

The first step is to run the latest firmware on the Nitrokey Start – for bug fixes and important OpenSSH 9.0 compatibility – and there is reproducibly-built firmware published that you can install using pynitrokey. I run Trisquel 11 aramo on my laptop, which does not include the Python Pip package (likely because it promotes installing non-free software), so that was a slight complication. Building the firmware locally may have worked, and I would like to do that eventually to confirm the published firmware; however, to save time I settled for installing the Ubuntu 22.04 packages on my machine:

$ sha256sum python3-pip*
ded6b3867a4a4cbaff0940cab366975d6aeecc76b9f2d2efa3deceb062668b1c  python3-pip_22.0.2+dfsg-1ubuntu0.2_all.deb
e1561575130c41dc3309023a345de337e84b4b04c21c74db57f599e267114325  python3-pip-whl_22.0.2+dfsg-1ubuntu0.2_all.deb
$ doas dpkg -i python3-pip*
...
$ doas apt install -f
...
$

Installing pynitrokey downloaded a bunch of dependencies, and it would be nice to audit the license and security vulnerabilities for each of them. (Verbose output below slightly redacted.)

jas@kaka:~$ pip3 install --user pynitrokey
Collecting pynitrokey
  Downloading pynitrokey-0.4.34-py3-none-any.whl (572 kB)
Collecting frozendict~=2.3.4
  Downloading frozendict-2.3.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (113 kB)
Requirement already satisfied: click<9,>=8.0.0 in /usr/lib/python3/dist-packages (from pynitrokey) (8.0.3)
Collecting ecdsa
  Downloading ecdsa-0.18.0-py2.py3-none-any.whl (142 kB)
Collecting python-dateutil~=2.7.0
  Downloading python_dateutil-2.7.5-py2.py3-none-any.whl (225 kB)
Collecting fido2<2,>=1.1.0
  Downloading fido2-1.1.0-py3-none-any.whl (201 kB)
Collecting tlv8
  Downloading tlv8-0.10.0.tar.gz (16 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: certifi>=14.5.14 in /usr/lib/python3/dist-packages (from pynitrokey) (2020.6.20)
Requirement already satisfied: pyusb in /usr/lib/python3/dist-packages (from pynitrokey) (1.2.1.post1)
Collecting urllib3~=1.26.7
  Downloading urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
Collecting spsdk<1.8.0,>=1.7.0
  Downloading spsdk-1.7.1-py3-none-any.whl (684 kB)
Collecting typing_extensions~=4.3.0
  Downloading typing_extensions-4.3.0-py3-none-any.whl (25 kB)
Requirement already satisfied: cryptography<37,>=3.4.4 in /usr/lib/python3/dist-packages (from pynitrokey) (3.4.8)
Collecting intelhex
  Downloading intelhex-2.3.0-py2.py3-none-any.whl (50 kB)
Collecting nkdfu
  Downloading nkdfu-0.2-py3-none-any.whl (16 kB)
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from pynitrokey) (2.25.1)
Collecting tqdm
  Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
Collecting nrfutil<7,>=6.1.4
  Downloading nrfutil-6.1.7.tar.gz (845 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: cffi in /usr/lib/python3/dist-packages (from pynitrokey) (1.15.0)
Collecting crcmod
  Downloading crcmod-1.7.tar.gz (89 kB)
  Preparing metadata (setup.py) ... done
Collecting libusb1==1.9.3
  Downloading libusb1-1.9.3-py3-none-any.whl (60 kB)
Collecting pc_ble_driver_py>=0.16.4
  Downloading pc_ble_driver_py-0.17.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.9 MB)
Collecting piccata
  Downloading piccata-2.0.3-py3-none-any.whl (21 kB)
Collecting protobuf<4.0.0,>=3.17.3
  Downloading protobuf-3.20.3-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.1 MB)
Collecting pyserial
  Downloading pyserial-3.5-py2.py3-none-any.whl (90 kB)
Collecting pyspinel>=1.0.0a3
  Downloading pyspinel-1.0.3.tar.gz (58 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: pyyaml in /usr/lib/python3/dist-packages (from nrfutil<7,>=6.1.4->pynitrokey) (5.4.1)
Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil~=2.7.0->pynitrokey) (1.16.0)
Collecting pylink-square<0.11.9,>=0.8.2
  Downloading pylink_square-0.11.1-py2.py3-none-any.whl (78 kB)
Collecting jinja2<3.1,>=2.11
  Downloading Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting bincopy<17.11,>=17.10.2
  Downloading bincopy-17.10.3-py3-none-any.whl (17 kB)
Collecting fastjsonschema>=2.15.1
  Downloading fastjsonschema-2.16.3-py3-none-any.whl (23 kB)
Collecting astunparse<2,>=1.6
  Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting oscrypto~=1.2
  Downloading oscrypto-1.3.0-py2.py3-none-any.whl (194 kB)
Collecting deepmerge==0.3.0
  Downloading deepmerge-0.3.0-py2.py3-none-any.whl (7.6 kB)
Collecting pyocd<=0.31.0,>=0.28.3
  Downloading pyocd-0.31.0-py3-none-any.whl (12.5 MB)
Collecting click-option-group<0.6,>=0.3.0
  Downloading click_option_group-0.5.5-py3-none-any.whl (12 kB)
Collecting pycryptodome<4,>=3.9.3
  Downloading pycryptodome-3.17-cp35-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)
Collecting pyocd-pemicro<1.2.0,>=1.1.1
  Downloading pyocd_pemicro-1.1.5-py3-none-any.whl (9.0 kB)
Requirement already satisfied: colorama<1,>=0.4.4 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (0.4.4)
Collecting commentjson<1,>=0.9
  Downloading commentjson-0.9.0.tar.gz (8.7 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: asn1crypto<2,>=1.2 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (1.4.0)
Collecting pypemicro<0.2.0,>=0.1.9
  Downloading pypemicro-0.1.11-py3-none-any.whl (5.7 MB)
Collecting libusbsio>=2.1.11
  Downloading libusbsio-2.1.11-py3-none-any.whl (247 kB)
Collecting sly==0.4
  Downloading sly-0.4.tar.gz (60 kB)
  Preparing metadata (setup.py) ... done
Collecting ruamel.yaml<0.18.0,>=0.17
  Downloading ruamel.yaml-0.17.21-py3-none-any.whl (109 kB)
Collecting cmsis-pack-manager<0.3.0
  Downloading cmsis_pack_manager-0.2.10-py2.py3-none-manylinux1_x86_64.whl (25.1 MB)
Collecting click-command-tree==1.1.0
  Downloading click_command_tree-1.1.0-py3-none-any.whl (3.6 kB)
Requirement already satisfied: bitstring<3.2,>=3.1 in /usr/lib/python3/dist-packages (from spsdk<1.8.0,>=1.7.0->pynitrokey) (3.1.7)
Collecting hexdump~=3.3
  Downloading hexdump-3.3.zip (12 kB)
  Preparing metadata (setup.py) ... done
Collecting fire
  Downloading fire-0.5.0.tar.gz (88 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: wheel<1.0,>=0.23.0 in /usr/lib/python3/dist-packages (from astunparse<2,>=1.6->spsdk<1.8.0,>=1.7.0->pynitrokey) (0.37.1)
Collecting humanfriendly
  Downloading humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
Collecting argparse-addons>=0.4.0
  Downloading argparse_addons-0.12.0-py3-none-any.whl (3.3 kB)
Collecting pyelftools
  Downloading pyelftools-0.29-py2.py3-none-any.whl (174 kB)
Collecting milksnake>=0.1.2
  Downloading milksnake-0.1.5-py2.py3-none-any.whl (9.6 kB)
Requirement already satisfied: appdirs>=1.4 in /usr/lib/python3/dist-packages (from cmsis-pack-manager<0.3.0->spsdk<1.8.0,>=1.7.0->pynitrokey) (1.4.4)
Collecting lark-parser<0.8.0,>=0.7.1
  Downloading lark-parser-0.7.8.tar.gz (276 kB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: MarkupSafe>=2.0 in /usr/lib/python3/dist-packages (from jinja2<3.1,>=2.11->spsdk<1.8.0,>=1.7.0->pynitrokey) (2.0.1)
Collecting asn1crypto<2,>=1.2
  Downloading asn1crypto-1.5.1-py2.py3-none-any.whl (105 kB)
Collecting wrapt
  Downloading wrapt-1.15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (78 kB)
Collecting future
  Downloading future-0.18.3.tar.gz (840 kB)
  Preparing metadata (setup.py) ... done
Collecting psutil>=5.2.2
  Downloading psutil-5.9.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (280 kB)
Collecting capstone<5.0,>=4.0
  Downloading capstone-4.0.2-py2.py3-none-manylinux1_x86_64.whl (2.1 MB)
Collecting naturalsort<2.0,>=1.5
  Downloading naturalsort-1.5.1.tar.gz (7.4 kB)
  Preparing metadata (setup.py) ... done
Collecting prettytable<3.0,>=2.0
  Downloading prettytable-2.5.0-py3-none-any.whl (24 kB)
Collecting intervaltree<4.0,>=3.0.2
  Downloading intervaltree-3.1.0.tar.gz (32 kB)
  Preparing metadata (setup.py) ... done
Collecting ruamel.yaml.clib>=0.2.6
  Downloading ruamel.yaml.clib-0.2.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (485 kB)
Collecting termcolor
  Downloading termcolor-2.2.0-py3-none-any.whl (6.6 kB)
Collecting sortedcontainers<3.0,>=2.0
  Downloading sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Requirement already satisfied: wcwidth in /usr/lib/python3/dist-packages (from prettytable<3.0,>=2.0->pyocd<=0.31.0,>=0.28.3->spsdk<1.8.0,>=1.7.0->pynitrokey) (0.2.5)
Building wheels for collected packages: nrfutil, crcmod, sly, tlv8, commentjson, hexdump, pyspinel, fire, intervaltree, lark-parser, naturalsort, future
  Building wheel for nrfutil (setup.py) ... done
  Created wheel for nrfutil: filename=nrfutil-6.1.7-py3-none-any.whl size=898520 sha256=de6f8803f51d6c26d24dc7df6292064a468ff3f389d73370433fde5582b84a10
  Stored in directory: /home/jas/.cache/pip/wheels/39/2b/9b/98ab2dd716da746290e6728bdb557b14c1c9a54cb9ed86e13b
  Building wheel for crcmod (setup.py) ... done
  Created wheel for crcmod: filename=crcmod-1.7-cp310-cp310-linux_x86_64.whl size=31422 sha256=5149ac56fcbfa0606760eef5220fcedc66be560adf68cf38c604af3ad0e4a8b0
  Stored in directory: /home/jas/.cache/pip/wheels/85/4c/07/72215c529bd59d67e3dac29711d7aba1b692f543c808ba9e86
  Building wheel for sly (setup.py) ... done
  Created wheel for sly: filename=sly-0.4-py3-none-any.whl size=27352 sha256=f614e413918de45c73d1e9a8dca61ca07dc760d9740553400efc234c891f7fde
  Stored in directory: /home/jas/.cache/pip/wheels/a2/23/4a/6a84282a0d2c29f003012dc565b3126e427972e8b8157ea51f
  Building wheel for tlv8 (setup.py) ... done
  Created wheel for tlv8: filename=tlv8-0.10.0-py3-none-any.whl size=11266 sha256=3ec8b3c45977a3addbc66b7b99e1d81b146607c3a269502b9b5651900a0e2d08
  Stored in directory: /home/jas/.cache/pip/wheels/e9/35/86/66a473cc2abb0c7f21ed39c30a3b2219b16bd2cdb4b33cfc2c
  Building wheel for commentjson (setup.py) ... done
  Created wheel for commentjson: filename=commentjson-0.9.0-py3-none-any.whl size=12092 sha256=28b6413132d6d7798a18cf8c76885dc69f676ea763ffcb08775a3c2c43444f4a
  Stored in directory: /home/jas/.cache/pip/wheels/7d/90/23/6358a234ca5b4ec0866d447079b97fedf9883387d1d7d074e5
  Building wheel for hexdump (setup.py) ... done
  Created wheel for hexdump: filename=hexdump-3.3-py3-none-any.whl size=8913 sha256=79dfadd42edbc9acaeac1987464f2df4053784fff18b96408c1309b74fd09f50
  Stored in directory: /home/jas/.cache/pip/wheels/26/28/f7/f47d7ecd9ae44c4457e72c8bb617ef18ab332ee2b2a1047e87
  Building wheel for pyspinel (setup.py) ... done
  Created wheel for pyspinel: filename=pyspinel-1.0.3-py3-none-any.whl size=65033 sha256=01dc27f81f28b4830a0cf2336dc737ef309a1287fcf33f57a8a4c5bed3b5f0a6
  Stored in directory: /home/jas/.cache/pip/wheels/95/ec/4b/6e3e2ee18e7292d26a65659f75d07411a6e69158bb05507590
  Building wheel for fire (setup.py) ... done
  Created wheel for fire: filename=fire-0.5.0-py2.py3-none-any.whl size=116951 sha256=3d288585478c91a6914629eb739ea789828eb2d0267febc7c5390cb24ba153e8
  Stored in directory: /home/jas/.cache/pip/wheels/90/d4/f7/9404e5db0116bd4d43e5666eaa3e70ab53723e1e3ea40c9a95
  Building wheel for intervaltree (setup.py) ... done
  Created wheel for intervaltree: filename=intervaltree-3.1.0-py2.py3-none-any.whl size=26119 sha256=5ff1def22ba883af25c90d90ef7c6518496fcd47dd2cbc53a57ec04cd60dc21d
  Stored in directory: /home/jas/.cache/pip/wheels/fa/80/8c/43488a924a046b733b64de3fac99252674c892a4c3801c0a61
  Building wheel for lark-parser (setup.py) ... done
  Created wheel for lark-parser: filename=lark_parser-0.7.8-py2.py3-none-any.whl size=62527 sha256=3d2ec1d0f926fc2688d40777f7ef93c9986f874169132b1af590b6afc038f4be
  Stored in directory: /home/jas/.cache/pip/wheels/29/30/94/33e8b58318aa05cb1842b365843036e0280af5983abb966b83
  Building wheel for naturalsort (setup.py) ... done
  Created wheel for naturalsort: filename=naturalsort-1.5.1-py3-none-any.whl size=7526 sha256=bdecac4a49f2416924548cae6c124c85d5333e9e61c563232678ed182969d453
  Stored in directory: /home/jas/.cache/pip/wheels/a6/8e/c9/98cfa614fff2979b457fa2d9ad45ec85fa417e7e3e2e43be51
  Building wheel for future (setup.py) ... done
  Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492037 sha256=57a01e68feca2b5563f5f624141267f399082d2f05f55886f71b5d6e6cf2b02c
  Stored in directory: /home/jas/.cache/pip/wheels/5e/a9/47/f118e66afd12240e4662752cc22cefae5d97275623aa8ef57d
Successfully built nrfutil crcmod sly tlv8 commentjson hexdump pyspinel fire intervaltree lark-parser naturalsort future
Installing collected packages: tlv8, sortedcontainers, sly, pyserial, pyelftools, piccata, naturalsort, libusb1, lark-parser, intelhex, hexdump, fastjsonschema, crcmod, asn1crypto, wrapt, urllib3, typing_extensions, tqdm, termcolor, ruamel.yaml.clib, python-dateutil, pyspinel, pypemicro, pycryptodome, psutil, protobuf, prettytable, oscrypto, milksnake, libusbsio, jinja2, intervaltree, humanfriendly, future, frozendict, fido2, ecdsa, deepmerge, commentjson, click-option-group, click-command-tree, capstone, astunparse, argparse-addons, ruamel.yaml, pyocd-pemicro, pylink-square, pc_ble_driver_py, fire, cmsis-pack-manager, bincopy, pyocd, nrfutil, nkdfu, spsdk, pynitrokey
  WARNING: The script nitropy is installed in '/home/jas/.local/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed argparse-addons-0.12.0 asn1crypto-1.5.1 astunparse-1.6.3 bincopy-17.10.3 capstone-4.0.2 click-command-tree-1.1.0 click-option-group-0.5.5 cmsis-pack-manager-0.2.10 commentjson-0.9.0 crcmod-1.7 deepmerge-0.3.0 ecdsa-0.18.0 fastjsonschema-2.16.3 fido2-1.1.0 fire-0.5.0 frozendict-2.3.5 future-0.18.3 hexdump-3.3 humanfriendly-10.0 intelhex-2.3.0 intervaltree-3.1.0 jinja2-3.0.3 lark-parser-0.7.8 libusb1-1.9.3 libusbsio-2.1.11 milksnake-0.1.5 naturalsort-1.5.1 nkdfu-0.2 nrfutil-6.1.7 oscrypto-1.3.0 pc_ble_driver_py-0.17.0 piccata-2.0.3 prettytable-2.5.0 protobuf-3.20.3 psutil-5.9.4 pycryptodome-3.17 pyelftools-0.29 pylink-square-0.11.1 pynitrokey-0.4.34 pyocd-0.31.0 pyocd-pemicro-1.1.5 pypemicro-0.1.11 pyserial-3.5 pyspinel-1.0.3 python-dateutil-2.7.5 ruamel.yaml-0.17.21 ruamel.yaml.clib-0.2.7 sly-0.4 sortedcontainers-2.4.0 spsdk-1.7.1 termcolor-2.2.0 tlv8-0.10.0 tqdm-4.65.0 typing_extensions-4.3.0 urllib3-1.26.15 wrapt-1.15.0
jas@kaka:~$

Then upgrading the device worked remarkably well, although I wish the tool had printed URLs and checksums for the firmware files to allow easy confirmation.

jas@kaka:~$ PATH=$PATH:/home/jas/.local/bin
jas@kaka:~$ nitropy start list
Command line tool to interact with Nitrokey devices 0.4.34
:: 'Nitrokey Start' keys:
FSIJ-1.2.15-5D271572: Nitrokey Nitrokey Start (RTM.12.1-RC2-modified)
jas@kaka:~$ nitropy start update
Command line tool to interact with Nitrokey devices 0.4.34
Nitrokey Start firmware update tool
Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
System: Linux, is_linux: True
Python: 3.10.6
Saving run log to: /tmp/nitropy.log.gc5753a8
Admin PIN: 
Firmware data to be used:
- FirmwareType.REGNUAL: 4408, hash: ...b'72a30389' valid (from ...built/RTM.13/regnual.bin)
- FirmwareType.GNUK: 129024, hash: ...b'25a4289b' valid (from ...prebuilt/RTM.13/gnuk.bin)
Currently connected device strings:
Device: 
    Vendor: Nitrokey
   Product: Nitrokey Start
    Serial: FSIJ-1.2.15-5D271572
  Revision: RTM.12.1-RC2-modified
    Config: *:*:8e82
       Sys: 3.0
     Board: NITROKEY-START-G
initial device strings: [{'name': '', 'Vendor': 'Nitrokey', 'Product': 'Nitrokey Start', 'Serial': 'FSIJ-1.2.15-5D271572', 'Revision': 'RTM.12.1-RC2-modified', 'Config': '*:*:8e82', 'Sys': '3.0', 'Board': 'NITROKEY-START-G'}]
Please note:
- Latest firmware available is: 
  RTM.13 (published: 2022-12-08T10:59:11Z)
- provided firmware: None
- all data will be removed from the device!
- do not interrupt update process - the device may not run properly!
- the process should not take more than 1 minute
Do you want to continue? [yes/no]: yes
...
Starting bootloader upload procedure
Device: Nitrokey Start FSIJ-1.2.15-5D271572
Connected to the device
Running update!
Do NOT remove the device from the USB slot, until further notice
Downloading flash upgrade program...
Executing flash upgrade...
Waiting for device to appear:
  Wait 20 seconds.....

Downloading the program
Protecting device
Finish flashing
Resetting device
Update procedure finished. Device could be removed from USB slot.

Currently connected device strings (after upgrade):
Device: 
    Vendor: Nitrokey
   Product: Nitrokey Start
    Serial: FSIJ-1.2.19-5D271572
  Revision: RTM.13
    Config: *:*:8e82
       Sys: 3.0
     Board: NITROKEY-START-G
device can now be safely removed from the USB slot
final device strings: [{'name': '', 'Vendor': 'Nitrokey', 'Product': 'Nitrokey Start', 'Serial': 'FSIJ-1.2.19-5D271572', 'Revision': 'RTM.13', 'Config': '*:*:8e82', 'Sys': '3.0', 'Board': 'NITROKEY-START-G'}]
finishing session 2023-03-16 21:49:07.371291
Log saved to: /tmp/nitropy.log.gc5753a8
jas@kaka:~$ 

jas@kaka:~$ nitropy start list
Command line tool to interact with Nitrokey devices 0.4.34
:: 'Nitrokey Start' keys:
FSIJ-1.2.19-5D271572: Nitrokey Nitrokey Start (RTM.13)
jas@kaka:~$ 

Before importing the master key to this device, it should be configured. Note the commands in the beginning to make sure scdaemon/pcscd is not running, because they may have cached state from earlier cards. Change the PIN codes as you like after this; my experience with Gnuk was that the Admin PIN had to be changed first, then you import the key, and then you change the PIN.

jas@kaka:~$ gpg-connect-agent "SCD KILLSCD" "SCD BYE" /bye
OK
ERR 67125247 Slut på fil <GPG Agent>
jas@kaka:~$ ps auxww|grep -e pcsc -e scd
jas        11651  0.0  0.0   3468  1672 pts/0    R+   21:54   0:00 grep --color=auto -e pcsc -e scd
jas@kaka:~$ gpg --card-edit

Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: [not set]
Language prefs ...: [not set]
Salutation .......: 
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
KDF setting ......: off
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

gpg/card> admin
Admin commands are allowed

gpg/card> kdf-setup

gpg/card> passwd
gpg: OpenPGP card no. D276000124010200FFFE5D2715720000 detected

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? 3
PIN changed.

1 - change PIN
2 - unblock PIN
3 - change Admin PIN
4 - set the Reset Code
Q - quit

Your selection? q

gpg/card> name
Cardholder's surname: Josefsson
Cardholder's given name: Simon

gpg/card> lang
Language preferences: sv

gpg/card> sex
Salutation (M = Mr., F = Ms., or space): m

gpg/card> login
Login data (account name): jas

gpg/card> url
URL to retrieve public key: https://josefsson.org/key-20190320.txt

gpg/card> forcesig

gpg/card> key-attr
Changing card key attribute for: Signature key
Please select what kind of key you want:
   (1) RSA
   (2) ECC
Your selection? 2
Please select which elliptic curve you want:
   (1) Curve 25519
   (4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: ed25519
Note: There is no guarantee that the card supports the requested size.
      If the key generation does not succeed, please check the
      documentation of your card to see what sizes are allowed.
Changing card key attribute for: Encryption key
Please select what kind of key you want:
   (1) RSA
   (2) ECC
Your selection? 2
Please select which elliptic curve you want:
   (1) Curve 25519
   (4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: cv25519
Changing card key attribute for: Authentication key
Please select what kind of key you want:
   (1) RSA
   (2) ECC
Your selection? 2
Please select which elliptic curve you want:
   (1) Curve 25519
   (4) NIST P-384
Your selection? 1
The card will now be re-configured to generate a key of type: ed25519

gpg/card> 
jas@kaka:~$ gpg --card-edit

Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Salutation .......: Mr.
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
KDF setting ......: on
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

jas@kaka:~$ 

Once set up, bring out your offline machine, boot it, and mount your USB stick with the offline key. The paths below will be different, and this is using the somewhat unorthodox approach of working with fresh GnuPG configuration paths that I chose for the USB stick.

jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ cp -a gnupghome-backup-masterkey gnupghome-import-nitrokey-5D271572
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ gpg --homedir $PWD/gnupghome-import-nitrokey-5D271572 --edit-key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

sec  ed25519/D73CF638C53C06BE
     created: 2019-03-20  expired: 2019-10-22  usage: SC  
     trust: ultimate      validity: expired
[ expired] (1). Simon Josefsson <simon@josefsson.org>

gpg> keytocard
Really move the primary key? (y/N) y
Please select where to store the key:
   (1) Signature key
   (3) Authentication key
Your selection? 1

sec  ed25519/D73CF638C53C06BE
     created: 2019-03-20  expired: 2019-10-22  usage: SC  
     trust: ultimate      validity: expired
[ expired] (1). Simon Josefsson <simon@josefsson.org>

gpg> 
Save changes? (y/N) y
jas@kaka:/media/jas/2c699cbd-b77e-4434-a0d6-0c4965864296$ 

At this point it is useful to confirm that the Nitrokey has the master key available and that it is possible to sign statements with it, back on your regular machine:

jas@kaka:~$ gpg --card-status
Reader ...........: 20A0:4211:FSIJ-1.2.19-5D271572:0
Application ID ...: D276000124010200FFFE5D2715720000
Application type .: OpenPGP
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 5D271572
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Salutation .......: Mr.
URL of public key : https://josefsson.org/key-20190320.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: ed25519 cv25519 ed25519
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 1
KDF setting ......: on
Signature key ....: B1D2 BD13 75BE CB78 4CF4  F8C4 D73C F638 C53C 06BE
      created ....: 2019-03-20 23:37:24
Encryption key....: [none]
Authentication key: [none]
General key info..: pub  ed25519/D73CF638C53C06BE 2019-03-20 Simon Josefsson <simon@josefsson.org>
sec>  ed25519/D73CF638C53C06BE  created: 2019-03-20  expires: 2023-09-19
                                card-no: FFFE 5D271572
ssb>  ed25519/80260EE8A9B92B2B  created: 2019-03-20  expires: 2023-09-19
                                card-no: FFFE 42315277
ssb>  ed25519/51722B08FE4745A2  created: 2019-03-20  expires: 2023-09-19
                                card-no: FFFE 42315277
ssb>  cv25519/02923D7EE76EBD60  created: 2019-03-20  expires: 2023-09-19
                                card-no: FFFE 42315277
jas@kaka:~$ echo foo|gpg -a --sign|gpg --verify
gpg: Signature made Thu Mar 16 22:11:02 2023 CET
gpg:                using EDDSA key B1D2BD1375BECB784CF4F8C4D73CF638C53C06BE
gpg: Good signature from "Simon Josefsson <simon@josefsson.org>" [ultimate]
jas@kaka:~$ 

Finally, let’s retrieve and sign a key: for example Andre Heinecke’s, whose OpenPGP key identifier I could confirm from his business card.

jas@kaka:~$ gpg --locate-external-keys aheinecke@gnupg.com
gpg: key 1FDF723CF462B6B1: public key "Andre Heinecke <aheinecke@gnupg.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   2  signed:   7  trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: depth: 1  valid:   7  signed:  64  trust: 7-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2023-05-26
pub   rsa3072 2015-12-08 [SC] [expires: 2025-12-05]
      94A5C9A03C2FE5CA3B095D8E1FDF723CF462B6B1
uid           [ unknown] Andre Heinecke <aheinecke@gnupg.com>
sub   ed25519 2017-02-13 [S]
sub   ed25519 2017-02-13 [A]
sub   rsa3072 2015-12-08 [E] [expires: 2025-12-05]
sub   rsa3072 2015-12-08 [A] [expires: 2025-12-05]

jas@kaka:~$ gpg --edit-key "94A5C9A03C2FE5CA3B095D8E1FDF723CF462B6B1"
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.


pub  rsa3072/1FDF723CF462B6B1
     created: 2015-12-08  expires: 2025-12-05  usage: SC  
     trust: unknown       validity: unknown
sub  ed25519/2978E9D40CBABA5C
     created: 2017-02-13  expires: never       usage: S   
sub  ed25519/DC74D901C8E2DD47
     created: 2017-02-13  expires: never       usage: A   
The following key was revoked on 2017-02-23 by RSA key 1FDF723CF462B6B1 Andre Heinecke <aheinecke@gnupg.com>
sub  cv25519/1FFE3151683260AB
     created: 2017-02-13  revoked: 2017-02-23  usage: E   
sub  rsa3072/8CC999BDAA45C71F
     created: 2015-12-08  expires: 2025-12-05  usage: E   
sub  rsa3072/6304A4B539CE444A
     created: 2015-12-08  expires: 2025-12-05  usage: A   
[ unknown] (1). Andre Heinecke <aheinecke@gnupg.com>

gpg> sign

pub  rsa3072/1FDF723CF462B6B1
     created: 2015-12-08  expires: 2025-12-05  usage: SC  
     trust: unknown       validity: unknown
 Primary key fingerprint: 94A5 C9A0 3C2F E5CA 3B09  5D8E 1FDF 723C F462 B6B1

     Andre Heinecke <aheinecke@gnupg.com>

This key is due to expire on 2025-12-05.
Are you sure that you want to sign this key with your
key "Simon Josefsson <simon@josefsson.org>" (D73CF638C53C06BE)

Really sign? (y/N) y

gpg> quit
Save changes? (y/N) y
jas@kaka:~$ 

This is on my day-to-day machine, using the Nitrokey Start with the offline key. No need to boot the old offline machine just to sign keys or extend expiry anymore! At FOSDEM’23 I managed to get at least one DD signature on my new key, and the Debian keyring maintainers accepted my Ed25519 key. Hopefully I can now finally let my 2014-era RSA3744 key expire on 2023-09-19 and not extend it any further. This should finish my transition to a simpler OpenPGP key setup, yay!

27 March, 2023 02:57PM by simon

Russell Coker

Strange X11 Grabbing

A couple of days ago I upgraded my home server from Debian/Bullseye to Debian/Testing (soon to be Bookworm). Since then KDE sessions on that system have had problems with the input queue locking up: the mouse can move and mouse-over events work, but clicking the mouse or pressing the keyboard does nothing. Various web pages suggested that the xdotool program (in the xdotool package in Debian) can address this. The problem is apparently programs “grabbing” the input and not letting it go.

The command “xdotool key XF86LogGrabInfo” causes the xorg server to dump information on its “grabs”. After running that command I looked in /var/log/Xorg.0.log and found that active grabs were only held by /usr/bin/kwin_x11 and /usr/bin/kglobalaccel5. So it seems like a KDE issue. Other systems running X11 with Debian/Testing (such as the laptop I’m using to write this blog post) don’t have the problem, so it could be something related to the KDE configuration of the account used on that system.
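
Since the local input is unresponsive, one way to run this is from an ssh session; a sketch, assuming the stuck session is on display :0 and the log location mentioned above:

$ DISPLAY=:0 xdotool key XF86LogGrabInfo
$ grep -i grab /var/log/Xorg.0.log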

The command “xdotool key XF86Ungrab” is supposed to break out of such a grab, but for me didn’t do so.

On the same system running KDE with Wayland works fine in this regard. Does Wayland do things differently and not allow this “grabbing” to block everything? Does KDE have an X11 specific bug? Is there a race condition that just gets triggered by the speed of Xorg on that system but not by the slightly different timings of Wayland? I might never find out.

I previously wrote about problems with Wayland/KDE on laptops [1]. Fortunately this bug happened to occur on a server, so the inability to reconfigure monitors isn’t necessarily a deal breaker, although being unable to use some of the high-DPI settings for the 4K monitor it has may be an issue. It will be really annoying if some of the laptop configurations I support get this grabbing problem. But since that time I have learned of the kscreen-doctor command, which is included in Debian/Testing and can do some of the necessary things; it doesn’t have a man page, so you have to run “kscreen-doctor -h” for documentation.

27 March, 2023 12:22PM by etbe

hackergotchi for Jonathan Dowland

Jonathan Dowland

Imaging Optical Media, Part 3: Figuring out disc contents

Too many years ago, I started (but did not finish) a series of blog posts on the topic of Imaging Optical Media. I was writing it as I was figuring out the process to use whilst importing my own piles of home-made CD-Rs and DVD-Rs to a more suitable storage.

Back in 2018 Antoine Beaupré blogged about being inspired by my article series to sort out his optical media collection, and wrote up his notes so far.

This inspired me (in 2018!) to change the approach for writing up what I'd been doing. Instead of a series of blog posts, I've dumped all the material I have written in one place: imaging discs. I've also reduced the scope somewhat, deciding that "Organising the extracted files" is a large enough (and orthogonal) topic to deserve its own page (if I ever write anything about it).

The "new" material is really two unfinished blog posts: figuring out disc contents and Retrying damaged/degraded discs. I post them now in the hope that they are useful despite being unfinished.

27 March, 2023 10:30AM

March 26, 2023

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

littler 0.3.18 on CRAN: New and Updated Scripts

The nineteenth release of littler as a CRAN package landed this morning, following the now seventeen-year history (!!) of a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also always loaded the methods package, which Rscript only began to do in recent years.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release is somewhat unique in not having any maintenance of the small core but rather focussing exclusively on changes to the (many!!) scripts collected in examples/scripts/. We added new ones and extended several existing ones, including install2.r, thanks to patches by @eitsupi. Of the new scripts, I already use tttl.r quite a bit for tests on co-maintained packages that use testthat (but, as I have made known before, my heart belongs firmly to tinytest as nobody should need thirty-plus recursive dependencies to run a few test predicates) as well as the very new installRub.r to directly add r-universe Ubuntu binaries onto an r2u system, as demonstrated in a first and second tweet (and corresponding toot) this week.

The full change description follows.

Changes in littler version 0.3.18 (2023-03-25)

  • Changes in examples scripts

    • roxy.r can now set an additional --libpath

    • getRStudioDesktop.r and getRStudioServer.r have updated default download file

    • install2.r and installGithub.r can set --type

    • r2u.r now has a --suffix option

    • tt.r removes a redundant library call

    • tttl.r has been added for testthat::test_local()

    • installRub.r has been added to install r-universe binaries on Ubuntu

    • install2.r has updated error capture messages (Tatsuya Shima and Dirk in #104)

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian (currently in ‘experimental’ due to a release freeze) as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter — and of course via r2u following its next refresh.

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 March, 2023 01:17PM

Emanuele Rocca

EFI and Secure Boot Notes

To create a bootable EFI drive to use with QEMU, first make a disk image and create a vfat filesystem on it.

$ dd if=/dev/zero of=boot.img bs=1M count=512
$ sudo mkfs.vfat boot.img

By default, EFI firmware boots a specific file under /efi/boot/. The name of that file depends on the architecture: for example, on 64 bit x86 systems it is bootx64.efi, while on 64 bit ARM it is bootaa64.efi.

Copy /usr/lib/grub/x86_64-efi/monolithic/grubx64.efi from package grub-efi-amd64-bin to /efi/boot/bootx64.efi on the boot image, and that should be enough to start GRUB.

# mount boot.img /mnt/
# mkdir -p /mnt/efi/boot/
# cp /usr/lib/grub/x86_64-efi/monolithic/grubx64.efi /mnt/efi/boot/bootx64.efi
# umount /mnt/

Now get the x86 firmware from package ovmf and start qemu:

$ cp /usr/share/OVMF/OVMF_CODE.fd /tmp/code.fd
$ qemu-system-x86_64 -drive file=/tmp/code.fd,format=raw,if=pflash -cdrom boot.img

GRUB looks fine, but it would be good to have a kernel to boot. Let’s add one to boot.img.

# mount boot.img /mnt
# cp vmlinuz-6.1.0-7-amd64 /mnt/vmlinuz
# umount /mnt/

Boot with qemu again, but this time pass -m 1G. The default amount of memory is not enough to boot.

$ qemu-system-x86_64 -drive file=/tmp/code.fd,format=raw,if=pflash -cdrom boot.img -m 1G

At the grub prompt, type the following to boot:

grub> linux /vmlinuz
grub> boot

The kernel will start and reach the point of trying to mount the root fs. This is great but it would now be useful to have some sort of shell access in order to look around. Let’s add an initrd!

# mount boot.img /mnt
# cp initrd.img-6.1.0-7-amd64 /mnt/initrd
# umount /mnt/

There’s the option of starting qemu in the console; let’s try that out. Start qemu with -nographic, and append console=ttyS0 to the kernel command line arguments.

$ qemu-system-x86_64 -drive file=/tmp/code.fd,format=raw,if=pflash -cdrom boot.img -m 1G -nographic
grub> linux /vmlinuz console=ttyS0
grub> initrd /initrd
grub> boot

If all went well we are now in the initramfs shell. We can now run commands! At this point we can see that the system has Secure boot disabled:

(initramfs) dmesg | grep secureboot
[    0.000000] secureboot: Secure boot disabled

In order to boot with Secure boot, we need:

  • a signed shim, grub, and kernel

  • the right EFI variables for Secure boot

The package shim-signed provides a shim signed with Microsoft’s key, while grub-efi-amd64-signed has GRUB signed with Debian’s key.

The signatures can be shown with sbverify --list:

$ sbverify --list /usr/lib/shim/shimx64.efi.signed
warning: data remaining[823184 vs 948768]: gaps between PE/COFF sections?
signature 1
image signature issuers:
 - /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Corporation UEFI CA 2011
image signature certificates:
 - subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Windows UEFI Driver Publisher
   issuer:  /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Corporation UEFI CA 2011
 - subject: /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Corporation UEFI CA 2011
   issuer:  /C=US/ST=Washington/L=Redmond/O=Microsoft Corporation/CN=Microsoft Corporation Third Party Marketplace Root

Similarly for GRUB and the kernel:

$ sbverify --list /usr/lib/grub/x86_64-efi-signed/grubx64.efi.signed
signature 1
image signature issuers:
 - /CN=Debian Secure Boot CA
image signature certificates:
 - subject: /CN=Debian Secure Boot Signer 2022 - grub2
   issuer:  /CN=Debian Secure Boot CA
$ sbverify --list /mnt/vmlinuz
signature 1
image signature issuers:
 - /CN=Debian Secure Boot CA
image signature certificates:
 - subject: /CN=Debian Secure Boot Signer 2022 - linux
   issuer:  /CN=Debian Secure Boot CA

Let’s use the signed shim and grub in the boot image:

# mount boot.img /mnt
# cp /usr/lib/shim/shimx64.efi.signed /mnt/efi/boot/bootx64.efi
# cp /usr/lib/grub/x86_64-efi-signed/grubx64.efi.signed /mnt/efi/boot/grubx64.efi
# umount /mnt

And start QEMU with the appropriate EFI variables for Secure boot:

$ cp /usr/share/OVMF/OVMF_VARS.ms.fd /tmp/vars.fd
$ qemu-system-x86_64 -drive file=/tmp/code.fd,format=raw,if=pflash -drive file=/tmp/vars.fd,format=raw,if=pflash -cdrom boot.img -m 1G -nographic

We can double-check in the firmware settings if Secure boot is indeed enabled. At the GRUB prompt, type fwsetup:

grub> fwsetup

Check under "Device Manager" → "Secure Boot Configuration" that "Attempt Secure Boot" is selected, then boot from GRUB as before. If all went well, the kernel should confirm that we have booted with Secure boot:

(initramfs) dmesg | grep secureboot
[    0.000000] secureboot: Secure boot enabled

26 March, 2023 05:18AM by Emanuele Rocca (ema@linux.it)

March 25, 2023

hackergotchi for Gunnar Wolf

Gunnar Wolf

Four days to fix a simple configuration bug

Phew!

Today, after four days of combing through code I am unfamiliar with, I was finally able to change my expression.

I’m finally at the part of my PhD work where I am tasked with implementing the protocol I claim improves on the current situation. I wrote a script to deploy the infrastructure I need for the experiment, and was not expecting any issues — I am not (yet) familiar with the Go language (in which the Hockeypuck key server is developed), but I have managed to install it several times, and it holds no terrible surprises anymore for me. Or so I think.

So, how can it be that the five servers in my laboratory network don’t gossip with each other? The logs don’t show anything clear… Only a succession of this:

hockeypuck[5295]: time="2023-03-24T00:00:28-06:00" level=error msg="recon with :0 failed" error="[{/srv/hockeypuck/packaging/src/gopkg.in/hockeypuck/conflux.v2/recon/gossip.go:109: } {dial tcp :0: connect: connection refused}]" label="gossip :11370"
hockeypuck[5295]: time="2023-03-24T00:00:28-06:00" level=info msg="waiting 27s for next gossip attempt" label="gossip :11370"

And while tcp :0: connect: connection refused sounds fishy… it took me too long to find the reason.

But, at least, along the way I decided to find my errors by debugging the code, rather than by rebuilding the laboratory and random-stabbing at the configuration.

And yes, finally… I came to my senses, and found out my silly mistake was to have my configuration read:

[hockeypuck.conflux.recon.partner.10.0.3.13]
httpAddr="10.0.3.13:11371"
reconAddr="10.0.3.13:11370"

where it should have read:

[hockeypuck.conflux.recon.partner.10-0-3-13]
httpAddr="10.0.3.13:11371"
reconAddr="10.0.3.13:11370"

…Because, of course, every period in a TOML table header opens another level of nesting: instead of declaring a single partner named 10.0.3.13, I had declared a table 13 nested inside 3, inside 0, inside 10, inside partner, which is an entirely different thing from what I thought I had specified.
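
A quick way to convince yourself of what TOML does with such a header is to parse it interactively; here is a minimal sketch, assuming a Python 3.11 or newer (which ships the tomllib module) is at hand:

$ python3 -c 'import tomllib
doc = """
[hockeypuck.conflux.recon.partner.10.0.3.13]
reconAddr="10.0.3.13:11370"
"""
print(tomllib.loads(doc)["hockeypuck"]["conflux"]["recon"]["partner"])'
{'10': {'0': {'3': {'13': {'reconAddr': '10.0.3.13:11370'}}}}}

The partner I meant to declare is nowhere to be found: there are only four nested tables, one per dot.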

Anyway, this made me at least:

  • Lose my fear of trying to understand the logic of a Go program (and of the Go language)
  • Understand the logging framework for Go
  • Learn how to use the (great!) format strings infrastructure it has
  • Do some minor logic modifications, trying to find the issue
  • Dig into how TOML parsing is done, idiomatically

Now… Why am I posting this? Not only because I feel very happy and wanted to share my a-ha moment, but also because I’m sure the time that seems mindlessly spent poking at Go without knowing the basics will somehow be rewarded: I have to learn bits of the language anyway, so it’s time well spent.

Or so I hope.

(oh, and the funny spectacles? I am not sure, but I believe them to have been the property of my grandfather or great-grandfather, who came from Europe in 1947 and 1928 respectively. One lens is sadly lost, but other than that, I love them!)

25 March, 2023 10:31PM

Now that we are talking about kernel building... What about firebuild?

After my last post, Bálint (who prompted it with his last post) suggested I should do a hybrid test of his tests and my extremes. He suggested I should build the Linux kernel using my Raspberry Pi 4 (8GB model), but using the Firebuild build accelerator.

Before going any further: I must make clear that while Firebuild is freely redistributable, it is not made available under a free license. It is free for personal use or commercial trial, but otherwise requires licensing.

Bálint managed to build a Linux kernel in just over 8 seconds. So, how did my test go? My previous experiment, using -j 4, built Linux in ~100 minutes; this was about a year ago, and I’m now building linux 6.1, so I timed this again. To get a baseline, I built my kernel from a just-unpacked tree, just as usual:

# cd /usr/src/linux-source-6.1
# make clean
# make defconfig
# time make -j4
(...)
real    117m30.588s
user    392m41.434s
sys     52m2.556s

Of course, having all of the object files already built makes the rebuild process much faster (this is still done without firebuild). I understand that calling make defconfig without cleaning does not change much, but I saw it often referenced in firebuild’s docs, so I’m leaving it in:

# time make -j 4
(...)
real    0m43.822s
user    1m36.577s
sys     0m40.805s

Then, I did a first run using firebuild. Firebuild is a caching build optimizer, so the first run will naturally be somewhat slower (but if you often rebuild your kernel, it should be seen as an investment). On the Raspberry Pi, which uses a slow SD card interface for its storage, it is a heavy investment: the first time I built with firebuild, the build took over 80% longer:

# cd /usr/src/linux-source-6.1
# make clean
# make defconfig
# time firebuild make -j 4
(...)
real    212m58.647s
user    391m49.080s
sys     81m10.758s

Not only that; I am using a fairly decent and big 32GB card, but this is quite a big price to pay in such a limited system!

# du -sh .cache/firebuild/
4.2G    .cache/firebuild/

I then did a second clean build with firebuild, this time with its cache already warm, and it does help, although not by as much as on higher-performance systems:

# cd /usr/src/linux-source-6.1
# make clean
# make defconfig
# time firebuild make -j 4
(...)
real    68m6.621s
user    98m32.514s
sys     31m41.643s

So, it built in roughly 58% of the time a regular clean build takes (68 minutes against 117). And what about rebuilding without cleaning?

# make defconfig
# time firebuild make -j 4
(...)
real    1m11.872s
user    2m5.807s
sys     1m46.178s

In this case, rebuilding under firebuild took roughly 65% longer than without it (72 seconds against 44). I guess the high number of file operations inside .cache/firebuild is to blame, as those are quite expensive on the media I’m using; make basically goes through checking date stamps between *.c and *.o files (yes, very roughly), and while running under firebuild, I suppose each of these checks meant an extra lookup inside the cache.

So… Experiment requested, experiment performed! 😉

25 March, 2023 06:31PM

March 24, 2023

hackergotchi for Matthew Garrett

Matthew Garrett

We need better support for SSH host certificates

Github accidentally committed their SSH RSA private key to a repository, and now a bunch of people's infrastructure is broken because it needs to be updated to trust the new key. This is obviously bad, but what's frustrating is that there's no inherent need for it to be - almost all the technological components needed to both reduce the initial risk and to make the transition seamless already exist.

But first, let's talk about what actually happened here. You're probably used to the idea of TLS certificates from using browsers. Every website that supports TLS has an asymmetric pair of keys divided into a public key and a private key. When you contact the website, it gives you a certificate that contains the public key, and your browser then performs a series of cryptographic operations against it to (a) verify that the remote site possesses the private key (which prevents someone just copying the certificate to another system and pretending to be the legitimate site), and (b) generate an ephemeral encryption key that's used to actually encrypt the traffic between your browser and the site. But what stops an attacker from simply giving you a fake certificate that contains their public key? The certificate is itself signed by a certificate authority (CA), and your browser is configured to trust a preconfigured set of CAs. CAs will not give someone a signed certificate unless they prove they have legitimate ownership of the site in question, so (in theory) an attacker will never be able to obtain a fake certificate for a legitimate site.

This infrastructure is used for pretty much every protocol that can use TLS, including things like SMTP and IMAP. But SSH doesn't use TLS, and doesn't participate in any of this infrastructure. Instead, SSH tends to take a "Trust on First Use" (TOFU) model - the first time you ssh into a server, you receive a prompt asking you whether you trust its public key, and then you probably hit the "Yes" button and get on with your life. This works fine up until the point where the key changes, and SSH suddenly starts complaining that there's a mismatch and something awful could be happening (like someone intercepting your traffic and directing it to their own server with their own keys). Users are then supposed to verify whether this change is legitimate, and if so remove the old keys and add the new ones. This is tedious and risks users just saying "Yes" again, and if it happens too often an attacker can simply redirect target users to their own server and through sheer fatigue at dealing with this crap the user will probably trust the malicious server.

Why not certificates? OpenSSH actually does support certificates, but not in the way you might expect. There's a custom format that's significantly less complicated than the X509 certificate format used in TLS. Basically, an SSH certificate just contains a public key, a list of hostnames it's good for, and a signature from a CA. There's no pre-existing set of trusted CAs, so anyone could generate a certificate that claims it's valid for, say, github.com. This isn't really a problem, though, because right now nothing pays attention to SSH host certificates unless there's some manual configuration.

(It's actually possible to glue the general PKI infrastructure into SSH certificates. Please do not do this)
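
If you have one of these certificates at hand, ssh-keygen can dump its contents (the public key, the list of principals, the validity period and the signing CA); the file name here is just a placeholder:

$ ssh-keygen -L -f ssh_host_rsa_key-cert.pub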

So let's look at what happened in the Github case. The first question is "How could the private key have been somewhere that could be committed to a repository in the first place?". I have no unique insight into what happened at Github, so this is conjecture, but I'm reasonably confident in it. Github deals with a large number of transactions per second. Github.com is not a single computer - it's a large number of machines. All of those need to have access to the same private key, because otherwise git would complain that the private key had changed whenever it connected to a machine with a different private key (the alternative would be to use a different IP address for every frontend server, but that would instead force users to repeatedly accept additional keys every time they connect to a new IP address). Something needs to be responsible for deploying that private key to new systems as they're brought up, which means there's ample opportunity for it to accidentally end up in the wrong place.

Now, best practices suggest that this should be avoided by simply placing the private key in a hardware module that performs the cryptographic operations, ensuring that nobody can ever get at the private key. The problem faced here is that HSMs typically aren't going to be fast enough to handle the number of requests per second that Github deals with. This can be avoided by using something like a Nitro Enclave, but you're still going to need a bunch of these in different geographic locales because otherwise your front ends are still going to be limited by the need to talk to an enclave on the other side of the planet, and now you're still having to deal with distributing the private key to a bunch of systems.

What if we could have the best of both worlds - the performance of private keys that just happily live on the servers, and the security of private keys that live in HSMs? Unsurprisingly, we can! The SSH private key could be deployed to every front end server, but every minute it could call out to an HSM-backed service and request a new SSH host certificate signed by a private key in the HSM. If clients are configured to trust the key that's signing the certificates, then it doesn't matter what the private key on the servers is - the client will see that there's a valid certificate and will trust the key, even if it changes. Restricting the validity of the certificate to a small window of time means that if a key is compromised an attacker can't do much with it - the moment you become aware of that you stop signing new certificates, and once all the existing ones expire the old private key becomes useless. You roll out a new private key with new certificates signed by the same CA and clients just carry on trusting it without any manual involvement.
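
As a rough sketch of the signing step (the file names and the validity window are made up here, and this is not a description of Github's actual setup), the CA service would periodically run something like:

$ ssh-keygen -s ca -I github-frontends -h -n github.com -V +60m ssh_host_rsa_key.pub

producing ssh_host_rsa_key-cert.pub, which each frontend then advertises via sshd_config:

HostKey /etc/ssh/ssh_host_rsa_key
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub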

Why don't we have this already? The main problem is that client tooling just doesn't handle this well. OpenSSH has no way to do TOFU for CAs, just the keys themselves. This means there's no way to do a git clone ssh://git@github.com/whatever and get a prompt asking you to trust Github's CA. Instead, you need to add a @cert-authority github.com (key) line to your known_hosts file by hand, and since approximately nobody's going to do that there's only marginal benefit in going to the effort to implement this infrastructure. The most important thing we can do to improve the security of the SSH ecosystem is to make it easier to use certificates, and that means improving the behaviour of the clients.
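
For reference, the line in question looks something like this (CA public key truncated), which indeed nobody is going to add by hand:

@cert-authority github.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5...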

It should be noted that certificates aren't the only approach to handling key migration. OpenSSH supports a protocol for key rotation, basically by allowing the server to provide a set of multiple trusted keys that the client can cache, and then invalidating old ones. Unfortunately this still requires that the "new" private keys be deployed in the same way as the old ones, so any screwup that results in one private key being leaked may well also result in the additional keys being leaked. I prefer the certificate approach.
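
For completeness, the client-side opt-in for that rotation mechanism is a single ssh_config knob:

Host *
    UpdateHostKeys yes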

Finally, I've seen a couple of people imply that the blame here should be attached to whoever or whatever caused the private key to be committed to a repository in the first place. This is a terrible take. Humans will make mistakes, and your systems should be resilient against that. There's no individual at fault here - there's a series of design decisions that made it possible for a bad outcome to occur, and in a better universe they wouldn't have been necessary. Let's work on building that better universe.

24 March, 2023 05:12PM

March 23, 2023

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

My New Gear...UHK

Recently I have come to feel a strain in my shoulder, and it has been annoying me a lot.

Because of that, I have been considering replacing my hardware (the keyboard).

I searched for a new gear candidate to resolve the above issue, with the following conditions:

  • MUST: a trackpoint (I have used the ThinkPad Keyboard with TrackPoint for so long that I am afraid to work without one)
  • MUST: a separate (split) keyboard, to be ergonomic and reduce the strain on my shoulder
  • SHOULD: upstream publishes its products in source code form (I prefer more "free" products)

As a result, the Ultimate Hacking Keyboard [1] seemed to be the best solution for me: the UHK has a trackpoint, it is a split keyboard, and upstream publishes the firmware under an MIT-like license. [2]

It seemed the best solution, but it cost more than I had budgeted for (the yen being weak against the US dollar made the situation worse), so I asked about reimbursement [3] from the Debian project.

Fortunately, the reimbursement request was approved and processed by SPI [4], and even though there was some trouble with the paperwork (my fault) [5], I was able to get the new gear: the Ultimate Hacking Keyboard. (Note that the reimbursement does not cover all of the costs, such as import taxes and fees, but it helps a lot.)

As I mentioned above, I have used the ThinkPad keyboard with TrackPoint for so long that I realized I had become optimized for that particular keyboard.

Here are my first impressions:

  • Even with the silent red switches it can be a bit noisy, but that is acceptable.
  • The trackpoint module may not be as useful as I expected; it may be better to use a mod layer, or a mini trackball with the key cluster module. This is because the position does not suit me: the trackpoint module must be attached to the left of the N and Space keys (to the left of the H key would be a better position for me).
  • The remapping configuration tool is awesome, but I still need to find my "best" assignment.
  • It is hard to replace the keycaps because of their special width (a slightly shorter keycap design exists).
  • It will take a few weeks to get used to the new gear.
  • The coiled cord is a bit shorter than I expected, so it should be replaced with a longer one.

Anyway, the UHK is an awesome product and it is worth recommending.

23 March, 2023 11:43AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSMC 0.2.7 on CRAN: Extensions and Update

A new release 0.2.7 of our RcppSMC package arrived at CRAN earlier today. It contains several extensions added by team member (and former GSoC student) Ilya Zarubin since the last release. We were a little slow to release those—but “one of those CRAN emails” forced our hand for a release now. The updated ‘uninitialized variable’ messages in clang++-16 have found a fan in Brian Ripley, and so he sent us a note. And as the issue was trivially reproducible with clang++-15 here too I had it fixed in no time. And both changes taken together form the incremental 0.2.7 release.

RcppSMC provides Rcpp-based bindings to R for the Sequential Monte Carlo Template Classes (SMCTC) by Adam Johansen described in his JSS article. Sequential Monte Carlo is also referred to as Particle Filter in some contexts. The package now also features the Google Summer of Code work by Leah South in 2017, and by Ilya Zarubin in 2021.

The release is summarized below.

Changes in RcppSMC version 0.2.7 (2023-03-22)

  • Extensive extensions for conditional SMC and resample, updated hello_world example, added skeleton function for easier package creation (Ilya in #67,#72)

  • Small package updates (Dirk in #75 fixing #74)

Courtesy of my CRANberries, there is a diffstat report for this release.

More information is on the RcppSMC page. Issues and bug reports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 March, 2023 01:40AM

March 22, 2023

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (January and February 2023)

The following contributors got their Debian Developer accounts in the last two months:

  • Sahil Dhiman (sahil)
  • Jakub Ružička (jru)

The following contributors were added as Debian Maintainers in the last two months:

  • Josenilson Ferreira da Silva
  • Ileana Dumitrescu
  • Douglas Kosovic
  • Israel Galadima
  • Timothy Pearson
  • Blake Lee
  • Vasyl Gello
  • Joachim Zobel
  • Amin Bandali

Congratulations!

22 March, 2023 03:00PM by Jean-Pierre Giraud