March 27, 2022

Russ Allbery

Review: A Song for a New Day

Review: A Song for a New Day, by Sarah Pinsker

Publisher: Berkley
Copyright: September 2019
ISBN: 1-9848-0259-3
Format: Kindle
Pages: 372

Luce Cannon was touring with a session band when the shutdown began. First came the hotel evacuation in the middle of the night due to bomb threats against every hotel in the state. Then came the stadium bombing just before they were ready to go on stage. Luce and most of the band performed anyway, with a volunteer crew and a shaken crowd. It was, people later decided, the last large stage show in the United States before the congregation laws shut down public gatherings. That was the end of Luce's expected career, and could have been the end of music, or at least public music. But Luce was stubborn and needed the music.

Rosemary grew up in the aftermath: living at home with her parents well away from other people, attending school virtually, and then moving seamlessly into a virtual job for Superwally, the corporation that ran essentially everything. A good fix for some last-minute technical problems with StageHoloLive's ticketing system got her an upgraded VR hoodie and complimentary tickets to the first virtual concert she'd ever attended. She found the experience astonishing, prompting her to browse StageHoloLive job openings and then apply for a technical job and, on a whim, an artist recruiter role. That's how Rosemary found herself, quite nerve-wrackingly, traveling out into the unsafe world to look for underground musicians who could become StageHoloLive acts.

A Song for a New Day was published in 2019 and had a moment of fame at the beginning of 2020, culminating in the Nebula Award for best novel, because it's about lockdowns, isolation, and the suppression of public performances. There's even a pandemic, although it's not a respiratory disease (it's some variety of smallpox or chicken pox) and is only a minor contributing factor to the lockdowns in this book. The primary impetus is random violence.

Unfortunately, the subsequent two years have not been kind to this novel. Reading it in 2022, with the experience of the past two years fresh in my mind, was a frustrating and exasperating experience because the world setting is completely unbelievable. This is not entirely Pinsker's fault; this book was published in 2019, was not intended to be about our pandemic, and therefore could not reasonably predict its consequences. Still, it required significant effort to extract the premise of the book from the contradictory evidence of current affairs and salvage the pieces of it I still enjoyed.

First, Pinsker's characters are the most astonishingly incurious and docile group of people I've seen in a recent political SF novel. This extends beyond the protagonists, where it could arguably be part of their characterization, to encompass the entire world (or at least the United States; the rest of the world does not appear in this book at all so far as I can recall). You may be wondering why someone bombs a stadium at the start of the book. If so, you are alone; this is not something anyone else sees any reason to be curious about. Why is random violence spiraling out of control? Is there some coordinated terrorist activity? Is there some social condition that has gotten markedly worse? Race riots? Climate crises? Wars? The only answer this book offers is a completely apathetic shrug. There is a hint at one point that the government may have theories that they're not communicating, but no one cares about that either.

That leads to the second bizarre gap: for a book that hinges on political action, formal political structures are weirdly absent. Near the end of the book, one random person says that they have been inspired to run for office, which so far as I can tell is the first mention of elections in the entire book. The "government" passes congregation laws shutting down public gatherings and there are no protests, no arguments, no debate, but also no suppression, no laws against the press or free speech, no attempt to stop that debate. There's no attempt to build consensus for or against the laws, and no noticeable political campaigning. That's because there's no need. So far as one can tell from this story, literally everyone just shrugs and feels sad and vaguely compliant. Police officers exist and enforce laws, but changing those laws or defying them in other than tiny covert ways simply never occurs to anyone. This makes the book read a bit like a fatuous libertarian parody of a docile populace, but this is so obviously not the author's intent that it wouldn't be satisfying to read even as that.

To be clear, this is not something that lasts only a few months in an emergency when everyone is still scared. This complete political docility and total incuriosity persists for enough years that Rosemary grows up within that mindset.

The triggering event was a stadium bombing followed by an escalating series of random shootings and bombings. (The pandemic in the book only happens after everything is locked down and, apart from adding to Rosemary's agoraphobia and making people inconsistently obsessed with surface cleanliness, plays little role in the novel.) I lived through 9/11 and the Oklahoma City bombing in the US, other countries have been through more protracted and personally dangerous periods of violence (the Troubles come to mind), and never in human history has any country reacted to a shock of violence (or, for that matter, disease) like the US does in this book. At points it felt like one of those SF novels where the author is telling an apparently normal story and all the characters turn out to be aliens based on spiders or bats.

I finally made sense of this by deciding that the author wasn't using sudden shocks like terrorism or pandemics as a model, even though that's what the book postulates. Instead, the model seems to be something implicitly tolerated and worked around: US school shootings, for instance, or the (incorrect but widespread) US belief in a rise of child kidnappings by strangers. The societal reaction here looks less like a public health or counter-terrorism response and more like suburban attitudes towards child-raising, where no child is ever left unattended for safety reasons but we routinely have school shootings no other country has at the same scale. We have been willing to radically (and ineffectually) alter the experience of childhood due to fears of external threat, and that's vaguely and superficially similar to the premise of this novel.

What I think Pinsker still misses (and which the pandemic has made glaringly obvious) is the immense momentum of normality and the inability of adults to accept limitations on their own activities for very long. Even with school shootings, kids go to school in person. We now know that parts of society essentially collapse if they don't, and political pressure becomes intolerable. But by using school shootings as the model, I managed to view Pinsker's setup as an unrealistic but still potentially interesting SF extrapolation: a thought experiment that ignores countervailing pressures in order to exaggerate one aspect of society to an extreme.

This is half of Pinsker's setup. The other half, which made less of a splash because it didn't have the same accident of timing, is the company Superwally: essentially "what if Amazon bought Walmart, Google, Facebook, Netflix, Disney, and Live Nation." This is a more typical SF extrapolation that left me with a few grumbles about realism, but that I'll accept as a plot device to talk about commercialization, monopolies, and surveillance capitalism. But here again, the complete absence of formal political structures in this book is not credible. Superwally achieves an all-pervasiveness that in other SF novels results in corporations taking over the role of national governments, but it still lobbies the government in much the same way and with about the same effectiveness as Amazon does in our world. I thought this directly undermined some parts of the end of the book. I simply did not believe that Superwally would be as benign and ineffectual as it is shown here.

Those are a lot of complaints. I found reading the first half of this book to be an utterly miserable experience and only continued reading out of pure stubbornness and completionism. But the combination of the above-mentioned perspective shift and Pinsker's character focus did partly salvage the book for me.

This is not a book about practical political change, even though it makes gestures in that direction. It's primarily a book about people, music, and personal connection, and Pinsker's portrayal of individual and community trust in all its complexity is the one thing the book gets right. Rosemary's character combines a sort of naive arrogance with self-justification in a way that I found very off-putting, but the pivot point of the book is the way in which Luce and her community extend trust to her anyway, as part of staying true to what they believe.

The problem that I think Pinsker was trying to write about is atomization, which leads to social fragmentation into small trust networks with vast gulfs between them. Luce and Rosemary are both characters who are willing to bridge those gulfs in their own ways. Pinsker does an excellent job describing the benefits, the hurt, the misunderstandings, the risk, and the awkward process of building those bridges between communities that fundamentally do not understand each other. There's something deep here about the nature of solidarity, and how you need both people like Luce and people like Rosemary to build strong and effective communities. I've kept thinking about that part.

It's also helpful for a community to have people who are curious about cause and effect, and who know how a bill becomes a law.

It's hard to sum up this book, other than to say that I understand why it won a Nebula but it has deep world-building flaws that have become far more obvious over the past two years. Pinsker tries hard to capture the feeling of live music for both the listener and the performer and partly succeeded even for me, which probably means others will enjoy that part of the book immensely. The portrayal of the difficult dynamics of personal trust was the best part of the book for me, but you may have to build scaffolding and bracing for your world-building disbelief in order to get there.

On the whole, I think A Song for a New Day is worth reading, but maybe not right now. If you do read it now, tell yourself at the start that this is absolutely not about the pandemic and that everything political in this book is a hugely simplified straw-man extrapolation, and hopefully you'll find the experience less frustrating than I found it.

Rating: 6 out of 10

27 March, 2022 03:58AM

Andrew Cater

Imminent release for the media images for Debian 10.12 and 11.3 20220327 0010

 OK - so it wasn't quite all done in one day - and since today is TZ change day in the UK, it might actually run into the TZ bump but I suspect that it will all be done very soon now. Very few glitches - everybody cheerful with what's been done.

I did spot someone in IRC who had been reading the release notes - which is always much appreciated. Lots of security fixes overall in the last couple of months but just a fairly normal time, I think.

Thanks to the team behind all of this: the ftpmasters, the press team and everyone else involved in making Debian more secure. This is the last blog for this one - there will be another point release along in about three months or so.

27 March, 2022 12:10AM by Andrew Cater (noreply@blogger.com)

March 26, 2022

Part way through testing Debian media images 20220326 1555UTC - Found a new useful utility

For various obscure reasons, I have a mirror of Debian in one room and the main laptop and so on that I use in another. The mirror machine is connected to a fast Internet line and has a 1Gb Ethernet cable into the back directly from the router; the laptop and everything else, not so much: everything is wired, but depends on a WiFi link across the property. One end is fast; one end runs like a snail.

Steve suggested I use a different tool to make images directly on the mirror machine - jigit. Slightly less polished than jigdo but - if you're on the same machine - blazingly fast. I just used it to make the Blu-Ray sized .iso and was very pleasantly surprised. 

jigit-mkimage -j [jigdo file] -t [template file] -m Debian=[path to mirror of Debian] -o [output filename]

Another nice surprise for me - I have a horrible old Lenovo Ideapad. It's one of the Bay Trail Intel machines with a 32 bit UEFI and a 64 bit processor. I rescued it from the junk heap. Reinstalling it with an image today fixed an issue I had with slow boot and has turned it into an adequate machine for web browsing.

All in all, I've done relatively few tests so far - but it's been a good day, as ever.

More later.



26 March, 2022 10:15PM by Andrew Cater (noreply@blogger.com)

Debian media team - testing and releasing Debian 11.3 - 20220326 1243UTC

And back to relative normality: the usual suspects are in Cambridge. It's a glorious day across the UK and we're spending it indoors with laptops :)

We'll also be releasing a point release of Buster as a wrap up of recent changes.

Debian 10 should move from full support to LTS around August 14th - one year after the release of Debian 11 - and there will be a final point release of Buster somewhere around that point.

All seems to be behaving itself well.

Thanks to all for the hard work that goes into preparing each release and especially the security fixes of which there seem to be loads lately.



26 March, 2022 08:31PM by Andrew Cater (noreply@blogger.com)

Still testing Debian media images 20220326 2026UTC - almost finished 11.3 - Buster starting soon

 And we're working through quite nicely.


It's been a long, long day so far and we're about 1/2 way through :)


Shout out to Isy, Sledge and RattusRattus in Cambridge and also smcv.

Two releases in a day is a whole bunch :)

26 March, 2022 08:27PM by Andrew Cater (noreply@blogger.com)

March 25, 2022

Russell Coker

Wayland

The Wayland protocol [1] is designed to be more secure than X, when X was designed there wasn’t much thought given to the possibility of programs with different access levels displaying on the same desktop. The Xephyr nested X server [2] is good for running an entire session from a remote untrusted host on a local display but isn’t suitable for multiple applications in the same session.

GNOME has used Wayland by default in Debian since the Bullseye release, and for KDE support you can install the plasma-workspace-wayland package, which gives you a "KDE Plasma Wayland" option for the session type when you log in. For systems which don't use the KDE Plasma workspace but which have some KDE apps you should install the package qtwayland5 to allow the KDE apps to use the Wayland protocol. See the KDE page of the Debian Wiki [3] for more information.

The Debian Wiki page on Wayland has more useful information [4]. Apparently you have to use gdm instead of sddm to get Wayland for the login prompt.

To get screen sharing working on Wayland (and also to get a system that doesn’t give out error messages) you need to install the pipewire package (see the Pipewire project page for more information [6]).

Daniel Stone gave a great LCA talk about Wayland in 2013 [5].

I have just converted two of my systems to Wayland. It's pretty uneventful; things seem to work the same way as before. It might be theoretically faster, but in practice Xorg was fast enough that there's little room to appear faster. My aim is to work on Linux desktop security to try to get process isolation similar to what Android does on the PC desktop and on Debian-based phones such as the Librem 5. Allowing some protection against graphics-based attacks is only the first step towards that goal, but it's an important step. More blog posts on related topics will follow.

Update: One thing I forgot to mention is that MAC (mandatory access control) systems need policy changes for Wayland. There are direct changes (allowing background daemons for GPU access to talk to a Wayland server running in a user context instead of an X server in a system context) and indirect changes (having the display server and window manager merged).

25 March, 2022 07:52AM by etbe

Reproducible Builds (diffoscope)

diffoscope 208 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 208. This version includes the following changes:

[ Brent Spillner ]
* Add graceful handling for UNIX sockets and named pipes.
  (Closes: reproducible-builds/diffoscope#292)
* Remove a superfluous log message and reformat comment lines.

[ Chris Lamb ]
* Reformat various files to satisfy current version of Black.

You can find out more by visiting the project homepage.

25 March, 2022 12:00AM

March 24, 2022

Ingo Juergensmann

New Server – NVMe Issues

My current server is somewhat aged. I bought it new in July 2014 with a 6-core Xeon E5-2630L, 32 GB RAM and 4x 3.5″ hot-swappable drives. Luckily I had the opportunity to extend the memory to 128 GB RAM at no additional cost, using memory from my ex-employer. It also has 4x 2 TB WD Red HDDs with 5400 rpm hooked up to the SATA backplane, but unfortunately only two of them are on SATA-3 with 6 Gbit/s.

The new server is a used/refurbished Supermicro server with 2x 14-core Xeon E5-2683, 256 GB RAM and 4x 3.5″ hot-swappable drives. It also came with a hardware RAID SAS/SATA 8-port controller with BBU. I also ordered two slim drive kits (MCP-220-81504-0N & MCP-220-81506-0N) to be able to use the 2x 3.5″ slots for rotational HDDs as cheap storage. Right now I have added 2x 128 GB Supermicro SATA DOMs, 4x 4 TB WD Red SSDs, and a Sonnet Fusion 4×4 Silent with 4x 1 TB Seagate FireCuda 520 NVMe disks.

And here the issue starts:

The NVMes should be capable of 4-5 GB/s each, but they are connected to a PCIe 3.0 x16 port via the Sonnet Fusion 4×4, which itself features a PCIe bridge, so bifurcation is not necessary.

When doing some tests with bonnie++ I get around 1 GB/s transfer rates out of a RAID10 setup with all 4 NVMes. In fact, regardless of the RAID level there are only transfer rates of about 1 – 1.2 GB/s with bonnie++. (All software RAIDs with mdadm.)

But also when constructing a RAID, each NVMe gives around 300-600 MB/s in sync speed – with one exception: RAID1.

Regardless of how many NVMe disks are in a RAID1 setup, the sync speed is up to 2.5 GB/s for each of the NVMe disks. So the lower transfer rates with bonnie++ or other RAID levels shouldn't be limited by bus speed or by CPU speed. Alas, atop shows up to 100% CPU usage for all tests. I even tested

In my understanding, RAID10 should perform similarly to RAID1 in terms of syncing, and better in bonnie++ tests (up to 2x write and 4x read speed compared to a single disk).

For bonnie++ I made a number of test runs, which are available here. You can find the test parameters listed in the hostname column: Baldur is the hostname, followed by the layout (near-2, far-2, offset-2), the chunk size, and the concurrency of bonnie++. In the end the chunk size of the RAID had no big impact.

So, now I’m wondering what the reason for the “slow” performance of those 4x NVMe disks is? Bus speed of the PCIe 3.0 x16 shouldn’t be the cause, because I assume that the software RAID will need to transfer the blocks in RAID1 as well as in RAID10 over the bus. Same goes for the CPU: the amount of CPU work should be roughly the same for RAID1 and for RAID10. RAID10 should even have an advantage because the blocks only need to be synced to 2 disks in a stripe set.
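The expectation in the preceding paragraphs can be put into a toy model (these are just the textbook best-case multipliers for md RAID levels, ignoring bus, CPU and scheduler overhead; they are not measurements from this system):

```python
def ideal_raid_throughput(level: str, n_disks: int, per_disk_mb_s: int):
    """Rule-of-thumb best-case (read, write) throughput in MB/s.

    RAID1: reads can be spread over all mirrors, but every write goes
    to every disk, so aggregate write speed is that of a single disk.
    RAID10 (n/2 mirrored pairs, striped): reads can use all disks,
    writes hit two disks per stripe, so writes scale with n/2.
    """
    if level == "raid1":
        return (n_disks * per_disk_mb_s, per_disk_mb_s)
    if level == "raid10":
        return (n_disks * per_disk_mb_s, n_disks // 2 * per_disk_mb_s)
    raise ValueError(f"unhandled level: {level}")

# 4 NVMe disks at ~2500 MB/s each:
print(ideal_raid_throughput("raid10", 4, 2500))  # (10000, 5000)
```

With 4 disks at ~2.5 GB/s each this predicts roughly 10 GB/s reads and 5 GB/s writes for RAID10, which is why the observed 1-1.2 GB/s looks so suspicious.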

Bonnie++ tests are a different topic for sure. But when testing reading with dd from the md-devices I “only” get around 1-1.5 GB/s as well. Even when using LVM RAID instead of LVM on top of md RAID.

All NVMe disks are already set to 4k and IO scheduler is set to mq-deadline.

Is there anything I could do to improve the performance of the NVMe disks? On the other hand, pure transfer rates are not that important for a server that runs a dozen VMs; here the improved IOPS performance over rotational disks is a clear gain. But I'm still curious whether I could get maybe 2 GB/s out of a RAID10 setup with the NVMe disks. Then again, having two independent RAID1 setups for the MariaDB and PostgreSQL databases might be a better choice than a single RAID10 setup?

24 March, 2022 09:49AM by ij

March 23, 2022


Matthew Garrett

AMD's Pluton implementation seems to be controllable

I've been digging through the firmware for an AMD laptop with a Ryzen 6000 that incorporates Pluton for the past couple of weeks, and I've got some rough conclusions. Note that these are extremely preliminary and may not be accurate, but I'm going to try to encourage others to look into this in more detail. For those of you at home, I'm using an image from here, specifically version 309. The installer is happy to run under Wine, and if you tell it to "Extract" rather than "Install" it'll leave a file sitting in C:\DRIVERS\ASUS_GA402RK_309_BIOS_Update_20220322235241 which seems to have an additional 2K of header on it. Strip that and you should have something approximating a flash image.

Looking for UTF16 strings in this reveals something interesting:

Pluton (HSP) X86 Firmware Support
Enable/Disable X86 firmware HSP related code path, including AGESA HSP module, SBIOS HSP related drivers.
Auto - Depends on PcdAmdHspCoreEnable build value
NOTE: PSP directory entry 0xB BIT36 have the highest priority.
NOTE: This option will NOT put HSP hardware in disable state, to disable HSP hardware, you need setup PSP directory entry 0xB, BIT36 to 1.
// EntryValue[36] = 0: Enable, HSP core is enabled.
// EntryValue[36] = 1: Disable, HSP core is disabled then PSP will gate the HSP clock, no further PSP to HSP commands. System will boot without HSP.

"HSP" here means "Hardware Security Processor" - a generic term that refers to Pluton in this case. This is a configuration setting that determines whether Pluton is "enabled" or not - my interpretation of this is that it doesn't directly influence Pluton, but disables all mechanisms that would allow the OS to communicate with it. In this scenario, Pluton has its firmware loaded and could conceivably be functional if the OS knew how to speak to it directly, but the firmware will never speak to it itself. I took a quick look at the Windows drivers for Pluton and it looks like they won't do anything unless the firmware wants to expose Pluton, so this should mean that Windows will do nothing.

So what about the reference to "PSP directory entry 0xB BIT36 have the highest priority"? The PSP is the AMD Platform Security Processor - it's an ARM core on the CPU package that boots before the x86. The PSP firmware lives in the same flash image as the x86 firmware, so the PSP looks for a header that points it towards the firmware it should execute. This gives a pointer to a "directory" - a list of different object types and where they're located in flash (there's a description of this for slightly older AMDs here). Type 0xb is treated slightly specially. Where most types contain the address of where the actual object is, type 0xb contains a 64-bit value that's interpreted as enabling or disabling various features - something AMD calls "soft fusing" (Intel have something similar that involves setting bits in the Firmware Interface Table). The PSP looks at the bits that are set here and alters its behaviour. If bit 36 is set, the PSP tells Pluton to turn itself off and will no longer send any commands to it.
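As a small illustration of the soft-fuse encoding described above, checking the disable bit can be sketched like this (the constant simply encodes what the quoted firmware comment says about bit 36; the function name and everything around it are hypothetical scaffolding, not AMD's actual code):

```python
# Bit 36 of the PSP directory type-0xb "soft fuse" value: per the
# firmware comment quoted above, 1 = HSP (Pluton) core disabled and
# clock-gated by the PSP, 0 = enabled.
HSP_DISABLE_BIT = 36

def pluton_disabled(soft_fuses: int) -> bool:
    """Return True if the 64-bit soft-fuse value has the HSP-disable bit set."""
    return bool((soft_fuses >> HSP_DISABLE_BIT) & 1)

print(pluton_disabled(1 << 36))  # True
print(pluton_disabled(0))        # False
```

On the system examined here, the type-0xb entry has this bit set, which is why the PSP shuts Pluton down before any x86 code runs.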

So, we have two mechanisms to disable Pluton - the PSP can tell it to turn itself off, or the x86 firmware can simply never speak to it or admit that it exists. Both of these imply that Pluton has started executing before it's shut down, so it's reasonable to wonder whether it can still do stuff. In the image I'm looking at, there's a blob starting at 0x0069b610 that appears to be firmware for Pluton - it contains chunks that appear to be the reference TPM2 implementation, and it broadly decompiles as valid ARM code. It should be viable to figure out whether it can do anything in the face of being "disabled" via either of the above mechanisms.

Unfortunately for me, the system I'm looking at does set bit 36 in the 0xb entry - as a result, Pluton is disabled before x86 code starts running and I can't investigate further in any straightforward way. The implication that the user-controllable mechanism for disabling Pluton merely disables x86 communication with it rather than turning it off entirely is a little concerning, although (assuming Pluton is behaving as a TPM rather than having an enhanced set of capabilities) skipping any firmware communication means the OS has no way to know what happened before it started running even if it has a mechanism to communicate with Pluton without firmware assistance. In that scenario it'd be viable to write a bootloader shim that just faked up the firmware measurements before handing control to the OS.

The bit 36 disabling mechanism seems more solid? Again, it should be possible to analyse the Pluton firmware to determine whether it actually pays attention to a disable command being sent. But even if it chooses to ignore that, if the PSP is in a position to just cut the clock to Pluton, it's not going to be able to do a lot. At that point we're trusting AMD rather than trusting Microsoft, but given that you're also trusting AMD to execute the code you're giving them to execute, it's hard to avoid placing trust in them.

Overall: I'm reasonably confident that systems that ship with Pluton disabled via setting bit 36 in the soft fuses are going to disable it sufficiently hard that the OS can't do anything about it. Systems that give the user an option to enable or disable it are a little less clear in that respect, and it's possible (but not yet demonstrated) that an OS could communicate with Pluton anyway. However, if that's true, and if the firmware never communicates with Pluton itself, the user could install a stub loader in UEFI that mimics the firmware behaviour and leaves the OS thinking everything was good when it absolutely is not.

So, assuming that Pluton in its current form on AMD has no capabilities outside those we know about, the disabling mechanisms are probably good enough. It's tough to make a firm statement on this before I have access to a system that doesn't just disable it immediately, so stay tuned for updates.


23 March, 2022 08:42AM

March 22, 2022


Ulrike Uhlig

Workshops about anger, saying NO, and mapping one’s capacities and desires

For the second year in a row, I proposed some workshops at the feminist hackers assembly at the remote C3. I’m sharing them here because I believe they might be useful to others.

Anger workshop

Based on my readings about the subject and a mediation training, I created a first workshop about dealing with one’s own anger for the feminist hackers assembly in 2020. Many women who attended said they recognized themselves in what I was talking about. I created the exercises in the workshop with the goal of getting participants to share and self-reflect in small groups. I’m not giving out solutions, instead proposals on how to deal with anger come from the participants themselves. (I added the last two content pages to the file after the workshop.) This is why this workshop is always different, depending on the group and what they want to share. The first time I did this workshop was a huge success and so I created an improved version for the assembly of 2021.

Angry womxn* workshop

The act of saying NO

We often say yes, despite wanting to say no, out of a sense of duty, or because we learned that we should always be nice and helpful, and that our own needs are best served last. Many people don't really know how to say no. Sarah Cooper, a former Google employee, makes fun of this in her fabulous book "How to Be Successful Without Hurting Men's Feelings" (highly recommended read!):

A drawing of a woman who says: How I say yes: I'd love to. How I say no: sure.

That’s why a discussion space about saying NO did not seem out of place at the feminist hackers assembly :) I based my workshop on the original, created by the Institute of War and Peace Reporting and distributed through their holistic security training manual.

I like this workshop because sharing happens in small groups and has an immediately felt effect. Several people reported that the exercises allowed them to identify the exact moment when they had said yes to something despite really having wanted to say no. The exercises from the workshop can easily be done with a friend or trusted person, and they can even be done alone by writing them down, although the effect in writing might be less pronounced.

The act of saying NO workshop

Mapping capacities and desires

Based on discussions with a friend, whose company uses SWOT analysis (strengths—weaknesses—opportunities—threats) to regularly check in with their employees, and to allow employees to check in with themselves, I created a similar tool for myself which I thought would be nice to share with others. It’s a very simple self-reflection that can help map out what works well, what doesn’t work so well and where one wants to go in the future. I find it important to not use this tool narrow-mindedly only regarding work skills and expertise. Instead, I think it’s useful to also include soft skills, hobbies, non-work capacities and whatever else comes to mind in order to create a truer map.

Fun fact: During the assembly, a bunch of participants reported that they found it hard to distinguish between things they don’t like doing and things they don’t know how to do.

Mapping capacities and desires

Known issues

One important feedback point I got is that people felt the time for the exercises in all three workshops could have been longer. In case you want to try out these workshops, you might want to take this into account.

22 March, 2022 11:00PM by Ulrike Uhlig


Tollef Fog Heen

DNSSEC, ssh and VerifyHostKeyDNS

OpenSSH has this very nice setting, VerifyHostKeyDNS, which when enabled, will pull SSH host keys from DNS, and you no longer need to either trust on first use, or copy host keys around out of band. Naturally, trusting unsecured DNS is a bit scary, so this requires the record to be signed using DNSSEC. This has worked for a long time, but then broke, seemingly out of the blue. Running ssh -vvv gave output similar to:

debug1: found 4 insecure fingerprints in DNS
debug3: verify_host_key_dns: checking SSHFP type 1 fptype 2
debug3: verify_host_key_dns: checking SSHFP type 4 fptype 2
debug1: verify_host_key_dns: matched SSHFP type 4 fptype 2
debug3: verify_host_key_dns: checking SSHFP type 4 fptype 1
debug1: verify_host_key_dns: matched SSHFP type 4 fptype 1
debug3: verify_host_key_dns: checking SSHFP type 1 fptype 1
debug1: matching host key fingerprint found in DNS

This happened even though the zone was signed, the resolver was checking the signature, and I had even checked that the DNS response had the AD bit set. The fix was to add options trust-ad to /etc/resolv.conf. Without this, glibc will discard the AD bit from any upstream DNS servers. Note that you should only add this if you actually have a trusted DNS resolver. I run unbound on localhost, so if somebody can do a man-in-the-middle attack on that traffic, I have other problems.
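For reference, a minimal /etc/resolv.conf matching the setup described above (a validating resolver such as unbound on localhost; treat this as a sketch rather than the author's exact file) looks like:

```
nameserver 127.0.0.1
options trust-ad
```

With trust-ad set, glibc passes the AD bit from the listed servers through to applications such as ssh; without it, the bit is stripped and VerifyHostKeyDNS treats the SSHFP records as insecure.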

22 March, 2022 07:30PM

March 21, 2022


Gunnar Wolf

Long, long, long live Emacs after 39 years

Reading Planet Debian (see, Sam, we are still having a conversation over there? 😉), I read Anarcat's 20+ years of Emacs. And... well, should I brag, er, contribute to the discussion? Of course, why not?

Emacs is the first computer program I can name that I ever learnt to use to do something minimally useful. 39 years ago.


From the Space Cadet keyboard that (obviously…) influenced Emacs’ early design

The Emacs editor was born, according to Wikipedia, in 1976, the same year as myself. I am clearly not among its first users. It was already a well-established citizen when I first learnt it; I am fortunate to be the son of a Physics researcher at UNAM. My father used to take me to his institute after he noticed how attracted I was to computers; we would usually spend some hours there between 7 and 11PM on Friday nights. His institute had a computer room with very sweet gear: some 10 Heathkit terminals quite similar to this one:

The terminals were connected (via individual switches) to both a PDP-11 and a Foonly F2 computer. The room also had a beautiful thermal printer, a beautiful Tektronix vectorial graphics output terminal, and some other stuff. The main use my father made of it was to typeset some books; he had recently (1979) published Integral Transforms in Science and Engineering (that must be my first mention in the scientific literature), and I remember he was working on the proceedings of a conference he held in Oaxtepec (the account he used in the system was oax, not his usual kbw, which he lent me). He was also working on Manual de Lenguaje y Tipografía Científica en Castellano, where you can see some examples of TeX; due to a hardware crash, the book has the rare privilege of being a direct copy of the output of the thermal printer: it was not possible to produce a higher resolution copy for several years… But it is fun and interesting to see what we were able to produce with in-house tools back in 1985!

So, what could he teach me so I could use the computers while he worked? TeX, of course. No, not LaTeX (that was published in 1984). LaTeX is a set of macros initially developed by Leslie Lamport to make TeX easier to use; TeX itself was developed by Donald Knuth, and if I have this information correct, it was Knuth himself who installed and demonstrated TeX on the Foonly computer during a visit to UNAM.

Now, after 39 years of hammering at Emacs buffers… Have I grown extra fingers? Nope. I cannot even write decent elisp code, and can barely read it. I do use org-mode (a lot!) and love it; I have written basically five books, many articles, and lots of presentations and minor documents with it. But I don’t read my mail or handle my git from Emacs. You could say I’m still a relative newbie after almost four decades.

Four decades

When we got a PC in 1986, my father got the people at the Institute to get him memacs (micro-emacs). There was probably a ten-year period where I barely used any Emacs, but I always recognized it. My fingers have memorized a dozen or so movement commands, and a similar number of file management commands.

And yes, Emacs and TeX are still the main tools I use day to day.

21 March, 2022 05:45PM

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (January and February 2022)

The following contributor got his Debian Developer account in the last two months:

  • Francisco Vilmar Cardoso Ruviaro (vilmar)

The following contributors were added as Debian Maintainers in the last two months:

  • Lu YaNing
  • Mathias Gibbens
  • Markus Blatt
  • Peter Blackman
  • David da Silva Polverari

Congratulations!

21 March, 2022 04:00PM by Jean-Pierre Giraud

Antoine Beaupré

20+ years of Emacs

I enjoyed reading this article named "22 years of Emacs" recently. It's kind of fascinating, because I realised I don't exactly know for how long I've been using Emacs. It's lost in the mists of history. If I had to venture a guess, it was back in the "early days", which in that history maps to around 1996-1997, when I installed my very own "PC" with FreeBSD 2.2.x and painstakingly managed to make XFree86 run on it.

Modelines. Those were the days... But I digress.

I am old...

The only formal timestamp I can put on it is that my rebuilt .emacs.d git repository has its first commit in 2002. Some people reading this may have been born after that time. This means I'm at least significantly older than those people, to put things gently.

Clever history nerds will notice that the commit is obviously fake: Git itself did not exist until 2005. But ah-ah! I was already managing my home directory with CVS in 2001! I converted that repository into git some time in 2009, and therefore you can see all my embarrassing history, including changes from two decades ago.

That includes my first known .emacs file which is just bizarre to read right now: 200 lines, most of which are "customize" stuff. Compare with the current, 1000+ lines init.el which is also still kind of a mess, but actually shares very little with the original, thankfully.

All this to say that in those years (decades, really) of using Emacs, I have had a very different experience than credmp, who wrote packages, sent patches, and got name-dropped by other developers. My experience is just struggling to keep up with everything, in general, but also in Emacs.

... and Emacs is too fast for me

It might sound odd to say, but Emacs is actually moving pretty fast right now. A lot of new packages are coming out, and I can hardly keep up.

  • I am not using org mode, but did use it for time (and task) tracking for a while (and for invoicing too, funky stuff).

  • I am not using mu4e, but maybe I'm using something better (notmuch) and yes, I am reading my mail in Emacs, which I find questionable from a security perspective. (Sandboxing untrusted inputs? Anyone?)

  • I am using magit, but only when coding, so I do end up using git on the command line quite a bit anyways.

  • I do have which-key enabled, and reading about it reminded me I wanted to turn it off, because it's kind of noisy and I never remember I can actually use it for anything. Or, in other words, I don't even remember the prefix key or, when I do, there are too many possible commands afterwards for it to be useful.

  • I haven't set up lsp-mode, let alone Eglot, which I just learned about reading the article. I thought I would be super shiny and cool by setting up LSP instead of the (dying?) elpy package, but I never got around to it. And now it seems lsp-mode is uncool and I should really do eglot instead, and that doesn't help.

    UPDATE: I finally got tired and switched to lsp-mode. The main reason for choosing it over eglot is that it's in Debian (and eglot is not). (Apparently, eglot has more chance of being upstreamed, "when it's done", but I guess I'll cross that bridge when I get there.) lsp-mode feels slower than elpy but I haven't done any of the performance tuning and this will improve even more with native compilation (see below).

    I already had lsp-mode partially setup in Emacs so I only had to do this small tweak to switch and change the prefix key (because s-l or mod is used by my window manager). I also had to pin LSP packages to bookworm here and here.

  • I am not using projectile. It's on some of my numerous todo lists somewhere, surely. I suspect it's important to getting my projects organised, but I still live halfway between the terminal and Emacs, so it's not quite clear what I would gain.

  • I had to ask what native compilation was or why it mattered the first time I heard of it. And when I saw it again in the article, I had to click through to remember.

Overall, I feel there's a lot of cool stuff in Emacs out there, but I can't quite tell which of it is best. I can barely remember which completion mechanism I use (company, maybe?) or what makes my mini-buffer completion work the way it does. Everything is lost in piles of customize and .emacs hacks that are constantly changing. Because a lot is in third-party packages, there are often many different options and it's hard to tell which one we should be using.

... or at least fast enough

And really, Emacs feels fast enough for me. When I started, I was running Emacs on a Pentium I, 166MHz, with 8MB of RAM (eventually upgraded to 32MB, whoohoo!). Back in those days, the joke was that EMACS was an acronym for "Eight Megs, Always Scratching" and now that I write this down, I realize it's actually "Eight Megs, and Constantly Swapping", which doesn't sound as nice, because you could actually hear Emacs running on those old hard drives back in the day. It would make a "scratching" noise as the hard drive heads scrambled maniacally to page memory in and out of swap to make room for the memory-hungry editor.

Now Emacs is pretty far down the list of processes in top(1) regardless of how you look at it. It's using 97MB of resident memory and close to 400MB of virtual memory, which does sound like an awful lot compared to my first computer... But it's absolutely nothing compared to things like Signal-desktop, which somehow manages to map a whopping 20.5GB virtual memory. (That's twenty Gigabytes of memory for old timers or time travelers from the past, and yes, that is now a thing.) I'm not exactly sure how much resident memory it uses (because it forks multiple processes), probably somewhere around 300MB of resident memory. Firefox also uses gigabytes of that good stuff, also spread around the multiple processes, per tab.

Emacs "feels" super fast. Typing latency is noticeably better in Emacs than my web browser, and even beats most terminal emulators. It gets a little worse when font-locking is enabled, unfortunately, but it's still feels much better.

And all my old stuff still works in Emacs, amazingly. (Good luck with your old Netscape or ICQ configuration from 2000.)

I feel like an oldie, using Emacs, but I'm really happy to see younger people using it, and learning it, and especially improving it. If anything, one direction I would like to see it go is closer to what web browsers are doing (yes, I know how bad that sounds) and get better isolation between tasks.

An attack on my email client shouldn't be able to edit my Puppet code, and/or all files on my system, for example. And I know, fundamentally, that's a really hard challenge in Emacs. But if you're going to treat your editor as your operating system (or vice versa, I lost track of where we are now that there's an Emacs Window Manager, which I do not use), at least we should get that kind of security.

Otherwise I'll have to find a new mail client, and that's really something I try to limit to once a decade or so.

21 March, 2022 03:08AM

March 20, 2022

Joerg Jaspert

Another shell script moved to rust

Shell? Rust!

Not the first shell script I have taken and made a Rust version of, but probably my largest yet. This time I took my little tm (tmux helper) tool, which is (well, was) a bit more than 600 lines of shell, and converted it to Rust.

I have most of the functionality done now; only one major part is missing.

What’s tm?

tm started as a tiny shell script to make handling tmux easier. The first commit in git was in July 2013, but I started writing and using it in 2011. It started out as a kind-of wrapper around ssh, opening tmux windows with an ssh session on some other hosts. It quickly gained support to open multiple ssh sessions in one window, telling tmux to synchronize input (send input to all targets at once), which is great when you have a set of machines that ought to get the same commands.

tm vs clusterssh / mussh

In spirit it is similar to clusterssh or mussh, allowing you to run the same command on many hosts at the same time. clusterssh sets out to open a new terminal (xterm) per host and gives you an input line that it sends everywhere. mussh appears to take your command and then send it to all the hosts. Both have disadvantages in my opinion: clusterssh opens lots of xterm windows and you can not easily switch between multiple sessions; mussh just seems to send things over ssh and be done.

tm instead “just” creates a tmux session, telling it to ssh to the targets, possibly setting the tmux option to send input to all panes. And leaves all the rest of the handling to tmux. So you can

  • detach a session and reattach later easily,
  • use tmux's great built-in support for copy/paste,
  • see all output, modify things even for one machine only,
  • “zoom” in to one machine that needs just ONE bit different (cssh can do this too),
  • let colleagues also connect to your tmux session, when needed,
  • easily add more machines to the mix, if needed,
  • and all the other extra features tmux brings.
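Under the hood, this boils down to a handful of tmux calls. A rough sketch of the kind of commands such a session setup amounts to (the session and host names are made up for illustration, not taken from tm):

```
tmux new-session -d -s cluster 'ssh host1.example.com'
tmux split-window -t cluster 'ssh host2.example.com'
tmux split-window -t cluster 'ssh host3.example.com'
tmux select-layout -t cluster tiled
tmux set-window-option -t cluster synchronize-panes on
tmux attach-session -t cluster
```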

More tm

tm also supports just attaching to existing sessions as well as killing sessions, mostly out of laziness (less to type than using tmux directly).

At some point tm gained support for setting up sessions according to a “session file”. It knows two formats now. One is simple and mostly a list of hostnames to open synchronized sessions for. This may contain LIST commands, which let tm execute that command, the expected output being a list of hostnames (or more LIST commands) for the session. That, combined with the replacement part, lets us have one config file that opens a set of VMs based on the tags our Ganeti instances carry: it is simply a LIST command asking for VMs tagged with the replacement arg and up. Very handy. Or also “all VMs on host X”.

The second format is basically “free-form tmux commands”: mostly a collection of command-line tmux calls, just with the leading tmux dropped.

Both of them support a crude variable replacement.

Conversion to Rust

A while ago I started playing with Rust and it somehow ‘clicked’; I do like it. My local git tells me that I tried starting off with Go in 2017, but that apparently did not work out. Funny, everything I read says that Rust ought to be the harder one to learn.

So by now I have most of the functionality implemented in the Rust version, even if I am sure that the code isn’t a good Rust example. I’m learning, after all, and already have adjusted big parts of it, multiple times, whenever I learn (and understand) something more - and am also sure that this will happen again…

Compatibility with old tm

It turns out that my goal of staying compatible with the behaviour of the old shell script does make some things rather complicated. For example, the LIST commands in session config files: in shell I just execute the commands, and the shell deals with variable/parameter expansion; I just set IFS to newline only and read in what I get back. Simple, because the shell is doing a lot of things for me.

Now, in Rust, it is a different thing at all:

  • Properly splitting the line into shell words, taking care of quoting (one can't simply split on whitespace) (there is shlex).
  • Expanding specials like ~ and $HOME (there is home_dir).
  • Supporting environment variables in general; tm has some that adjust its behaviour, which the shell can use globally. I used lazy_static for a similar effect, as they aren’t going to change at runtime anyway.

Properly supporting the commandline arguments also turned out to be a bit more work. Rust apparently has multiple crates supporting this; I settled on clap, but as tm supports “getopts”-style as well as free-form arguments (subcommands in clap), it takes a bit to get that interpreted right.

Speed

Speed is most of the time entirely unimportant in a tool like tm (opening a tmux with one to some ssh connections to some places is not exactly hard or time consuming), but there are situations where one can notice that the shell version calls out to tmux over and over again, for every single bit to do, and that just takes time: configurations that open sessions to 20 and more hosts at the same time especially lag in setup time. (My largest setup goes to 443 panes in one window.) The compiled Rust version is so much faster there, it’s just great. Nice side effect, that is. And yes, in the end it is also “only” driving tmux; still, it takes less than half the time to do so.

Code, Fun parts

As this is still me learning to write Rust, I am sure the code has lots to improve. Some of that I will surely find on my own, but if you have time, I love PRs (or just mails with hints).

Github

This was also the first time I used GitHub Actions, to see how it goes. I let it build, test, run clippy and also run a code coverage tool (yay, more than 50% covered…) on it. I'm unsure my tests are any good, as I am not used to writing tests for code, but hey, coverage!

Up next

I do have to implement the last missing feature, which is reading the other config file format. I'm a little scared, as that means somehow translating those lines into correct calls within the tmux_interface I am using, and I'm not sure that is easy. I could be bad and just shell out to tmux for it all the time, but somehow I don’t like the thought of doing that. Maybe by (ab)using the control mode; but then, why would I use tmux_interface at all? So I'll try to handle it with that first.

Afterwards I want to gain a new command, to save existing sessions and be able to recreate them easily. Shouldn’t be too hard, tmux has a way to get at that info, somewhere.

20 March, 2022 12:23PM

March 19, 2022

Russell Coker

More About the Librem 5

I concluded my previous post about the Purism Librem 5 [1] with the phone working as a Debian/GNOME system with SSH access over the LAN. Before I published that post I managed to render it unbootable; making a new computer unbootable on the first day of owning it isn’t uncommon for me. In this case I tried to get SE Linux running on it, and changing the kernel commandline parameter “security=apparmor” to “security=selinux” caused it to fail the checksum on kernel parameters and halt the boot. That seems to require a fresh install. It seems possible that I could set up my Librem 5 to boot a recovery image from a SD card in such situations, but that doesn’t seem to be well documented, and I didn’t have any important data to lose. If I do figure out how to recover data by booting from a micro SD card I’ll document it.

Here’s the documentation for reflashing the phone [2]. You have to use the “--variant luks” option for the flashing tool to get an encrypted root filesystem (it should default to on to match the default shipping configuration). There is an option --skip-cleanup to allow you to use the same image multiple times, but that probably isn’t useful. The image that is available for download today has the latest kernel update that I installed yesterday, so it seems that they quickly update the image, which makes it convenient to get the latest (dpkg is slow on low power ARM systems). Overall the flash tool is nicely written: it does the download and install and instructs you how to get the phone into flashing mode. It is a minor annoyance that the battery has to be removed as part of the flashing process; I will probably end up flashing my phone more often than I want to take the back off the case. A mitigating factor is that the back is well designed and doesn’t appear prone to having its plastic tabs break off when removed (as has happened to several other phones I’ve owned).

The camera doesn’t seem to work well at this time, all photos have an unusually low brightness. The audio recording also doesn’t work well, speaking clearly into the phone results in quiet recordings.

I updated the Debian Wiki page on Mobile devices [3] to include a link to a page about the Librem 5 [4] and also a section about applications known to work well on mobile devices. Hopefully other people will make some additions to it: most programs in Debian don’t work well on mobile devices, so we need a list of known good applications as well as applications that can be easily changed to work well.

One thing I’ve started looking at is the code for the Geary MUA (the default MUA for the Librem 5 and the only one in Debian I know of that is suitable for a phone). It needs Thunderbird-style autoconfig, and it needs the ability to select which IMAP folders to scan, as a common practice is to have some large IMAP folders that aren’t used on mobile devices.

I believe that Android runs each app under a separate UID to prevent them from messing with each other. The configuration on a standard Linux system and on PureOS is to have all apps running with the same permissions. I think this needs to be improved, both for phones and for regular Linux systems, which will probably benefit even more than phones do. I’ll write another blog post about this.

19 March, 2022 12:31AM by etbe

March 18, 2022

hackergotchi for Bits from Debian

Bits from Debian

DebConf22 registration and call for proposals are open!

DebConf22 banner open registration

Registration for DebConf22 is now open. The 23rd edition of DebConf will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th.

Along with the registration, the DebConf content team announced the call for proposals. The deadline to submit a proposal to be considered in the main schedule is Friday, April 15th, 2022 23:59:59 UTC.

DebConf is an event open to everyone, no matter how you identify yourself or how others perceive you. We want to increase the visibility of our diversity and work towards inclusion in the Debian Project, drawing our attendees from people just starting their Debian journey to seasoned Debian Developers and active contributors in different areas like packaging, translation, documentation, artwork, testing, specialized derivatives, user support and many others. In other words, all are welcome.

To register for the event, log into the registration system and fill out the form. You will be able to edit and update your registration at any point. However, in order to help the organisers have a better estimate of how many people will attend the event, we would appreciate it if you could access the system and confirm (or cancel) your participation in the conference as soon as you know whether you will be able to come. The last day to confirm or cancel is July 1st, 2022 23:59:59 UTC. If you don't confirm, or you register after this date, you can still come to DebConf22, but we cannot guarantee availability of accommodation, food and swag (t-shirt, bag, and so on).

For more information about registration, please visit registration information.

Submitting an event

You can now submit an event proposal. Events are not limited to traditional presentations or informal sessions (BoFs): we welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be of interest to the Debian community.

Regular sessions may either be 20 or 45 minutes long (including time for questions), other kinds of sessions (workshops, demos, lightning talks, and so on) could have different durations. Please choose the most suitable duration for your event and explain any special requests.

In order to submit a talk, you will need to create an account on the website. We suggest that Debian Salsa account holders (including DDs and DMs) use their Salsa login when creating an account. However, this isn't required, as you can sign up with an e-mail address and password.

Bursaries for travel, accommodation and meals

In an effort to widen the diversity of DebConf attendees, the Debian Project allocates a part of the financial resources obtained through sponsorships to pay for bursaries (travel, accommodation, and/or meals) for participants who request this support when they register.

As resources are limited, we will examine the requests and decide who will receive the bursaries. They will be allocated:

  • To active Debian contributors.
  • To promote diversity: newcomers to Debian and/or DebConf, especially from under-represented communities.

Giving a talk, organizing an event or helping during DebConf22 is taken into account when deciding upon your bursary, so please mention any such plans in your bursary application.

For more information about bursaries, please visit applying for a bursary to DebConf.

Attention: the registration for DebConf22 will be open until the conference starts, but the deadline to apply for bursaries using the registration form is May 1st, 2022 23:59:59 UTC. This deadline is necessary to give the organisers time to analyze the requests, and for successful applicants to prepare for the conference.

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsors Lenovo and Infomaniak.

DebConf22 is accepting sponsors; if you are interested, or think you know of others who would be willing to help, please get in touch!

18 March, 2022 08:10PM by The Debian Publicity Team

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Debian Clojure Team Sprint --- May 13-14th 2022

I'm happy to announce the Debian Clojure Team will hold a remote sprint from May 13th to May 14th 2022.

The goal of this sprint is to improve various aspects of the Clojure ecosystem in Debian. As such, everyone is welcome to participate!

Here are a few items we are planning to work on, in no particular order:

  • Update leiningen to the latest upstream version, to let some libraries in experimental migrate to unstable.
  • Work towards replacing our custom Clojure script with upstream's and package clj | clojure-cli.
  • Update clojure to the latest upstream version.
  • Work on debugging autopkgtest failures on a bunch of puppetlabs-* libraries.
  • Work on lintian tags for the Clojure Team.

You can register for the sprint on the Debian Wiki. We are planning to ask the DPL for a food budget. If you plan on joining and want your food to be sponsored, please register before April 2nd.

18 March, 2022 06:45PM by Louis-Philippe Véronneau

Enrico Zini

Context-dependent logger in Python

This is a common logging pattern in Python, to have loggers related to module names:

import logging

log = logging.getLogger(__name__)


class Bill:
    def load_bill(self, filename: str):
        log.info("%s: loading file", filename)

I often find myself, however, wanting loggers related to something context-dependent, like the kind of file that is being processed. For example, I'd like to log bill loading when done by the expenses module, and not when done by the printing module.

I came up with a little hack that keeps the same API as before, and allows propagating a context-dependent logger to the code being called:

# Call this file log.py
from __future__ import annotations
import contextlib
import contextvars
import logging

_log: contextvars.ContextVar[logging.Logger] = contextvars.ContextVar('log', default=logging.getLogger())


@contextlib.contextmanager
def logger(name: str):
    """
    Set a default logger for the duration of this context manager
    """
    old = _log.set(logging.getLogger(name))
    try:
        yield
    finally:
        _log.reset(old)


def debug(*args, **kw):
    _log.get().debug(*args, **kw)


def info(*args, **kw):
    _log.get().info(*args, **kw)


def warning(*args, **kw):
    _log.get().warning(*args, **kw)


def error(*args, **kw):
    _log.get().error(*args, **kw)

And now I can do this:

from . import log

# …
    with log.logger("expenses"):
        bill = load_bill(filename)


# This code did not change!
class Bill:
    def load_bill(self, filename: str):
        log.info("%s: loading file", filename)

18 March, 2022 10:53AM

March 17, 2022

hackergotchi for Gunnar Wolf

Gunnar Wolf

Speaking about the OpenPGP WoT on LibrePlanet this Saturday

So, LibrePlanet, the FSF’s conference, is coming!

I much enjoyed attending this conference in person in March 2018. This year I submitted a talk again, and it got accepted — of course, given the conference is still 100% online, I doubt I will be able to go 100% conference-mode (I hope to catch a couple of other talks, but… well, we are all eager to go back to how things were before 2020!)

Anyway, what is my talk about?

My talk is titled “Current challenges for the OpenPGP keyserver network. Is there a way forward?” The abstract I submitted follows:

Many free projects use OpenPGP encryption or signatures for various important tasks, like defining membership, authenticating participation, asserting identity over a vote, etc. The Web-of-Trust upon which its operation is based is a model many of us hold dear, allowing for a decentralized way to assign trust to the identity of a given person.

But both the Web-of-Trust model and the software that serves as a basis for the above mentioned uses are at risk due to attacks on the key distribution protocol (not on the software itself!)

With this talk, I will try to bring awareness to this situation, to some possible mitigations, and present some proposals to allow for the decentralized model to continue to thrive towards the future.

I am in the third semester of my PhD, trying to somehow keep a decentralized infrastructure for the OpenPGP Web of Trust viable and usable for the future. While this is still an early stage of my PhD work (and I still don’t have a solution to present), I will talk about what the main problems are… and will sketch out the path I intend to develop.

What is the relevance? Mainly, I think, that many free software projects use the OpenPGP Web of Trust for their identity definitions… Are we anachronistic? Are we using tools unfit for this century? I don’t think so. I think we are in time to fix the main sore spots for this great example of a decentralized infrastructure.

When is my talk scheduled?

This Saturday, 2022.03.19, at

GMT / UTC time: 19:25–20:10
Conference schedule time (EDT/GMT-4): 15:25–16:10
Mexico City time (GMT-6): 13:25–14:10

How to watch it?

The streams are open online. I will be talking in the Saturn room; feel free to just appear there and watch! The FSF asks people to register for the conference beforehand (https://my.fsf.org/civicrm/event/info?reset=1&id=99) in order to be able to participate actively (i.e. ask questions and the like). Of course, you might be interested in other talks – take a look at the schedule!

LibrePlanet keeps a video archive of their past conferences, and this talk will be linked from there. Of course, I will link to the recording once it is online.

17 March, 2022 04:55PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, February 2022

A Debian LTS logo

Every month we review the work funded by Freexian’s Debian LTS offering. Please find the report for February below.

Debian project funding

  • In February Raphaël and the LTS team worked on a survey of Debian developers meant to solicit ideas for improvements in the Debian project at large. You can see the results of the initial discussion in the list of ideas, of which there are already over 30.
  • The full survey is due to be emailed to Debian Developers shortly.
  • In February € 2250 was put aside to fund Debian projects.

Debian LTS contributors

In February, 12 contributors were paid to work on Debian LTS; their reports are available below. If you’re interested in participating in the LTS or ELTS teams, we welcome participation from the Debian community. Simply get in touch with Jeremiah or Raphaël if you are interested in participating.

Evolution of the situation

In February we released 24 DLAs.

The security tracker currently lists 61 packages with a known CVE and the dla-needed.txt file has 26 packages needing an update.

You can find out more about the Debian LTS project via the following video:

Thanks to our sponsors

Sponsors that joined recently are in bold.

17 March, 2022 11:32AM by Raphaël Hertzog

March 16, 2022

Michael Ablassmeier

python logging messages and exit codes

Everyone knows that an application exit code should change based on the success, error or maybe warnings that happened during execution.

Lately I came across some Python code that was structured the following way:

#!/usr/bin/python3
import sys
import logging

def warnme():
    # something bad happens
    logging.warning("warning")
    sys.exit(2)

def evil():
    # something evil happens
    logging.error("error")
    sys.exit(1)

def main():
    logging.basicConfig(
        level=logging.DEBUG,
    )   

    [..]

The real situation was a little more complicated: some functions in other modules also exited the application, so sys.exit() calls were scattered across lots of modules and files.

Exiting the application from some random function in another module is not something I consider nice coding style, because it makes it hard to track down errors.

I expect:

  • exit code 0 on success
  • exit code 1 on errors
  • exit code 2 on warnings
  • warnings or errors shall be logged in the function where they actually happen: the logging module will show the function name with a better format option: nice for debugging.
  • one function that exits accordingly, preferably main()

How to do better?

As the application is using the logging module, we have a single point to collect warnings and errors that might happen across all modules. This works by passing a custom handler to the logging module which tracks emitted messages.

Here’s a small example:

#!/usr/bin/python3
import sys
import logging

class logCount(logging.Handler):
    class LogType:
        def __init__(self):
            self.warnings = 0
            self.errors = 0

    def __init__(self):
        super().__init__()
        self.count = self.LogType()

    def emit(self, record):
        if record.levelname == "WARNING":
            self.count.warnings += 1
        if record.levelname == "ERROR":
            self.count.errors += 1
            
def infome():
    logging.info("hello world")

def warnme():
    logging.warning("help, an warning")

def evil():
    logging.error("yikes")

def main():
    EXIT_WARNING = 2
    EXIT_ERROR = 1
    counter = logCount()
    logging.basicConfig(
        level=logging.DEBUG,
        handlers=[counter, logging.StreamHandler(sys.stderr)],
    )
    infome()
    warnme()
    evil()
    if counter.count.errors != 0:
        raise SystemExit(EXIT_ERROR)
    if counter.count.warnings != 0:
        raise SystemExit(EXIT_WARNING)

if __name__ == "__main__":
    main()
$ python3 count.py ; echo $?
INFO:root:hello world
WARNING:root:help, an warning
ERROR:root:yikes
1

This also makes it easy to implement policies like:

  • got 2 warnings? change the exit code to an error
  • got 3 warnings, but no --strict passed? ignore them and exit with success
  • etc.
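For instance, the counter-based main() above could delegate that decision to a small helper. A sketch (the --strict promotion and the helper name are my own, not from the original code):

```python
EXIT_OK = 0
EXIT_ERROR = 1
EXIT_WARNING = 2

def decide_exit_code(errors, warnings, strict=False):
    """Map the collected warning/error counts to a process exit code."""
    if errors:
        return EXIT_ERROR
    if warnings:
        # with --strict, warnings are promoted to an error exit
        return EXIT_ERROR if strict else EXIT_WARNING
    return EXIT_OK
```

main() would then end with `raise SystemExit(decide_exit_code(counter.count.errors, counter.count.warnings, args.strict))`, keeping the policy in one place.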

16 March, 2022 12:00AM

March 15, 2022

Russell Coker

Librem 5 First Impression

I just received the Purism Librem 5 that I paid for years ago (I think it was 2018). The basic setup process was typical (choosing keyboard language, connecting to wifi, etc). Then I tried doing things. One thing I did was update to the latest PureOS release, which gave me a list of the latest Debian packages being installed, which is nice.

The first problem I found was the lack of notification when the phone is trying to do something. I’d select an app to launch, nothing would happen, and then a few seconds later it would appear. When I go to the PureOS app store and request a list of apps in a category, nothing happens for ages (it shows a blank list), and then it might show actual apps, or it might not. I don’t know what it’s doing. Maybe it’s downloading a list of apps; if so, it should display how many apps have had their summary data downloaded or how many KB of data have been downloaded, so I know whether it’s doing something and how long it might take.

Running any of the productivity applications requires a GNOME keyring. I selected a keyring password of a few characters and it gave a warning about having no password (does this mean 3 characters is treated the same as 0 characters?). Then I couldn’t unlock it later. I tried deleting the key file and creating a new one with a single-character password and got the same result. I think that such keyring apps have little benefit: all processes in the session have the same UID and can presumably use ptrace to get data from each other’s memory space. If the keyring program was SETGID and the directory used to store the keyring files was a system directory with execute access only for that group, then it might provide some benefit (SETGID means that ptrace is denied). Ptrace is denied for the keyring, but relying on a user-space prompt for the passphrase to a file that the user can read seems of minimal benefit, as a hostile process could read the file and prompt for the passphrase itself. This is probably more of a Debian issue; I reproduced the keyring issue on my Debian workstation.

The Librem 5 is a large phone (unusually thick by modern phone standards) and is rumoured to be energy hungry. When I tried charging it from the USB port on my PC (HP ML110 Gen9) the charge level went down. I used the same USB port and USB cable that I use to charge my Huawei Mate 10 Pro every day, so other phones can draw power from that USB port and cable faster than they use it.

The on-screen keyboard for the Librem 5 is annoying: it doesn’t have a TAB key and the cursor control keys are unreasonably small. The keyboard used by ConnectBot (the most popular SSH client for Android) is much better; it has its own keys for CTRL, ESC, TAB, arrows, HOME, and END in addition to the regular on-screen keyboard. The Librem 5 comes with a terminal app by default which is much more difficult to use than it should be due to the lack of TAB filename completion etc.

The phone has a specified temperature range of 0°C to 35°C, which is not great for Australia, where even the cooler cities have summer temperatures higher than that. When charging on a fast charger (one that can provide energy faster than the phone uses it) the phone gets quite warm; it feels like more than 10°C hotter than the ambient temperature, so I guess I can’t charge it on Monday afternoon when the forecast is 31°C! Maybe I should put a USB charger by my fridge with a long enough cable that I can charge a phone that’s inside the fridge. Seriously.

Having switches to disable networking is a good security feature and designing the phone with separate components that can’t interfere with each other is good too. There are reports that software fixes will reduce the electricity use which will alleviate the problems with charging and temperature. Most of my problems are clearly software related and therefore things that I can fix (in theory at least – I don’t have unlimited coding time).

Overall this wasn’t the experience I had hoped for after spending something like $700 and waiting about 4 years (so long that I can’t easily find the records of how long and how much money).

Getting It Working

It seems that the PureOS app store app doesn’t work properly. I can visit the app site and select an app to install, which then launches the app store app to do the install; this failed for every app I tried.

Then I tried going to the terminal and running the following:

sudo bash
apt update
apt install openssh-server netcat

So I should be able to use APT to install everything I want and use the PureOS web site as a guide to what is expected to work on the phone.

As an aside, the PureOS apt repository appears to be a mirror or rebuild of the Debian/Bullseye arm64 archive without non-free components, which they call Byzantium.

Then I could ssh to my phone via “ssh purism@purism” (after adding an entry to /etc/hosts with the name purism and a static entry in my DHCP configuration to match) and run “sudo bash” to get root. To be able to log in as root directly I had to install an SSH key (root is set up to log in without a password) and run “usermod --expiredate= root” (an empty string for the expiry date) to allow direct root logins.

I put the following in /etc/ssh/sshd_config.d/local.conf to restrict who can login (I added the users I want to the sshusers group). It also uses the ClientAlive checks because having sessions break due to IP address changes is really common with mobile devices and we don’t want disconnected sessions taking up memory forever.

AllowGroups sshusers
PasswordAuthentication no

UseDNS no
ClientAliveInterval 60
ClientAliveCountMax 6

Notifications

The GNOME notification system is used for notifications in the phone UI. So if you install the package libnotify-bin you get a utility notify-send that allows sending notifications from shell scripts.
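For example, a script could pop up a notification roughly like this (a sketch; the wrapper names are my own, and it assumes notify-send from libnotify-bin is on the PATH):

```python
import subprocess

def notify_cmd(summary, body="", urgency="normal"):
    # build the notify-send invocation; --urgency is a standard flag
    # (low, normal, critical)
    return ["notify-send", "--urgency", urgency, summary, body]

def notify(summary, body="", urgency="normal"):
    """Send a desktop notification from a script via notify-send."""
    subprocess.run(notify_cmd(summary, body, urgency), check=True)
```

e.g. `notify("Backup finished")` at the end of a long-running job.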

Final Result

Now it basically works as a Debian workstation with a single-button mouse. So I just need to configure it as I would a Debian system and fix bugs along the way. Presumably any bug fixes I get into Debian will appear in PureOS shortly after the next Debian release.

15 March, 2022 10:24AM by etbe

Kunal Mehta

How to mirror the Russian Wikipedia with Debian and Kiwix

It has been reported that the Russian government has threatened to block access to Wikipedia for documenting narratives that do not agree with the official position of the Russian government.

One of the anti-censorship strategies I've been working on is Kiwix, an offline Wikipedia reader (and plenty of other content too). Kiwix is free and open source software developed by a great community of people that I really enjoy working with.

With threats of censorship, traffic to Kiwix has increased fifty-fold, with users from Russia accounting for 40% of new downloads!

You can download copies of every language of Wikipedia for offline reading and distribution, as well as hosting your own read-only mirror, which I'm going to explain today.

Disclaimer: depending on where you live it may be illegal or get you in trouble with the authorities to rehost Wikipedia content, please be aware of your digital and physical safety before proceeding.

With that out of the way, let's get started. You'll need a Debian (or Ubuntu) server with at least 30GB of free disk space. You'll also want to have a webserver like Apache or nginx installed (I'll share the Apache config here).

First, we need to download the latest copy of the Russian Wikipedia.

$ wget 'https://download.kiwix.org/zim/wikipedia/wikipedia_ru_all_maxi_2022-03.zim'

If the download is interrupted or fails, you can use wget -c $url to resume it.

Next let's install kiwix-serve and try it out. If you're using Ubuntu, I strongly recommend enabling our Kiwix PPA first.

$ sudo apt update
$ sudo apt install kiwix-tools
$ kiwix-serve -p 3004 wikipedia_ru_all_maxi_2022-03.zim

At this point you should be able to visit http://yourserver.com:3004/ and see the Russian Wikipedia. Awesome! You can use any available port, I just picked 3004.

Now let's use systemd to daemonize it so it runs in the background. Create /etc/systemd/system/kiwix-ru-wp.service with the following:

[Unit]
Description=Kiwix Russian Wikipedia

[Service]
Type=simple
User=www-data
ExecStart=/usr/bin/kiwix-serve -p 3004 /path/to/wikipedia_ru_all_maxi_2022-03.zim
Restart=always

[Install]
WantedBy=multi-user.target

Now let's start it and enable it at boot:

$ sudo systemctl start kiwix-ru-wp
$ sudo systemctl enable kiwix-ru-wp

Since we want to expose this on the public internet, we should put it behind a more established webserver and configure HTTPS.

Here's the Apache httpd configuration I used:

<VirtualHost *:80>
        ServerName ru-wp.yourserver.com

        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/html

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        <Proxy *>
                Require all granted
        </Proxy>

        ProxyPass / http://127.0.0.1:3004/
        ProxyPassReverse / http://127.0.0.1:3004/
</VirtualHost>

Put that in /etc/apache2/sites-available/kiwix-ru-wp.conf and run:

$ sudo a2ensite kiwix-ru-wp
$ sudo systemctl reload apache2

Finally, I used certbot to enable HTTPS on that subdomain and redirect all HTTP traffic over to HTTPS. This is an interactive process that is well documented so I'm not going to go into it in detail.

You can see my mirror of the Russian Wikipedia, following these instructions, at https://ru-wp.legoktm.com/. Anyone is welcome to use it or distribute the link, though I am not committing to running it long-term.

This is certainly not a perfect anti-censorship solution; the copy of Wikipedia that Kiwix provides became out of date the moment it was created, and the setup described here will require you to manually update the service when the new copy is available next month.

Finally, if you have some extra bandwidth, you can also help seed this as a torrent.

15 March, 2022 01:02AM by legoktm

March 14, 2022

Sam Hartman

Nostalgia for Blogging

Recently, I migrated this blog from Livejournal over to Dreamwidth. As part of the process, I was looking back at my blog entries from around 2007 or so.

I miss those days. I miss the days when blogging was more of an interactive community. Comments got exchanged, and at least among my circle of friends people wrote thoughtful, well-considered entries. There was introspection into what was going on in people's lives, as well as technical stuff, as well as just keeping up with people who were important in my life.

Today, we have some of the same thought going into things like Planet Debian, but it's a lot less interactive. Then we have things like Facebook, Twitter, and the more free alternatives. There's interactivity, but it feels like everything has to fit into the length of a single tweet. So it is a lot faster paced and a lot less considered. I find I don't belong to that fast-paced social media as much as I did to the blogs of old.




14 March, 2022 12:51AM

March 12, 2022

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

kitty rxvt-like config

kitty is a terminal with some nice features (I particularly like the focus on low latency, and the best-in-class support for emoji) but with a rather unusual default configuration. Since everybody's opinions are bad, I will offer my own configuration so far to get a bit closer to classic terminals' defaults:

# If you're running GNOME with Wayland, you may or may not want to uncomment
# this to get your normal window decorations back (this may or may not be
# better in the future; see https://github.com/kovidgoyal/kitty/issues/3284)
# linux_display_server x11

# This is pretty much the only non-xterm choice I make; fixed just isn't
# suitable for high-DPI screens. I also install Noto Color Emoji, which will
# be used for fallback for flags etc.
font_family      Noto Mono
font_size        12
italic_font      auto
bold_italic_font auto

# kitty doesn't support bold-as-bright (which is on purpose, but I'm not
# really a fan); see https://github.com/kovidgoyal/kitty/issues/197.
# In any case, that means we'll need a bold font. 
bold_font        Noto Sans Mono Bold

# Typical terminals don't blink (and it causes wakeups).
cursor_blink_interval 0

# No bell. Be silent.
enable_audio_bell no

# Standard scrolling with pageup/pagedown, and a reasonable scrolling speed
# (this also holds for mouse wheel).
map shift+page_up scroll_page_up
map shift+page_down scroll_page_down
touch_scroll_multiplier 5.0

# I don't want kitty to boot up maximized just because there's some other
# maximized terminal somewhere. 80x24 for life, yo.
remember_window_size no
initial_window_width 80c
initial_window_height 24c

# It's really jarring having spare room at the _top_ if the terminal isn't
# a perfect multiple of the font cell size.
placement_strategy top-left

# Now for the default xterm/rxvt-like colors.
foreground #ffffff
background #000000
# black
color0 #000000
color8 #404040
# red
color1 #CD0000
color9 #FF0000
# green
color2 #00CD00
color10 #00FF00
# yellow
color3 #CDCD00
color11 #FFFF00
# blue
color4 #0000CD
color12 #0000FF
# magenta
color5 #CD00CD
color13 #FF00FF
# cyan
color6 #00CDCD
color14 #00FFFF
# white
color7 #FFFFFF
color15 #FFFFFF

I'm still not 100% sold on default URL behavior (somehow, it doesn't seem to always react on my left-click), and I'd really like those window decorations to be fixed, but apart from that, this is pretty good stuff so far.

12 March, 2022 06:03PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppGSL 0.3.11: Small Maintenance

A new release 0.3.11 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL by relying on the Rcpp package.

This release updates src/Makefile.ucrt to use the RTools42 libraries. Details follow from the NEWS file.

Changes in version 0.3.11 (2022-03-12)

  • The UCRT Makefile was updated

  • Minor edits to README.md were made

Courtesy of CRANberries, a summary of changes in the most recent release is also available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 March, 2022 03:30PM

hackergotchi for Thomas Koch

Thomas Koch

lsp-java coming to debian

Posted on March 12, 2022
Tags: debian

The Language Server Protocol (LSP) standardizes communication between editors and so-called language servers for different programming languages. This solves the old problem that every editor had to implement plugins for many different programming languages. With LSP an editor just needs to speak LSP and can immediately provide typical IDE features.

I already packaged the Emacs packages lsp-mode and lsp-haskell for Debian bullseye. Now lsp-java is waiting in the NEW queue.

I’m always worried about downloading and executing binaries from random places on the internet. It should be a matter of hygiene to only run binaries from official Debian repositories. Unfortunately this is not feasible when programming, and many people don’t see a problem with running multiple curl | sh pipes to set up their programming environment.

I prefer to do such stuff only in virtual machines. With Emacs and LSP I can finally have a lightweight textmode programming environment even for Java.

Unfortunately the lsp-java mode does not yet work over tramp. Once this is solved, I could run emacs on my host and only isolate the code and language server inside the VM.

The next step would be to also keep the code on the host and mount it with Virtio FS in the VM. But so far the necessary daemon is not yet in Debian (RFP: #1007152).

In detail, I uploaded these packages:

12 March, 2022 02:55PM

Waiting for a STATE folder in the XDG basedir spec

Posted on February 18, 2014

The XDG Base Directory specification proposes default home-directory folders for the categories DATA (~/.local/share), CONFIG (~/.config) and CACHE (~/.cache). One category, however, is missing: STATE. This category has been requested several times, but nothing has happened.

Examples for state data are:

  • history files of shells, repls, anything that uses libreadline
  • logfiles
  • state of application windows on exit
  • recently opened files
  • last time application was run
  • emacs: bookmarks, ido last directories, backups, auto-save files, auto-save-list

The missing STATE category is especially annoying if you’re managing your dotfiles with a VCS (e.g. via VCSH) and you care to keep your homedir tidy.

If you’re as annoyed as me about the missing STATE category, please voice your opinion on the XDG mailing list.

Of course it’s a very long way until applications really use such a STATE directory. But without a common standard it will never happen.

12 March, 2022 02:55PM

shared infrastructure coop

Posted on February 5, 2014

I’m working in a very small web agency with 4 employees, one of them part-time, plus our boss, who doesn’t do programming. It shouldn’t come as a surprise that our development infrastructure is not perfect. We have many ideas and dreams about how we could improve it, but not the time. Now we have two obvious choices: either we do nothing, or we buy services from specialized vendors like GitHub, Atlassian, Travis CI, Heroku, Google and others.

Doing nothing does not work for me. But just buying all this stuff doesn’t please me either: we’d depend on proprietary software, lock-in effects and one-size-fits-all offerings. Another option would be to find other small web shops like us, form a cooperative and share essential services. There are thousands of web shops in the same situation as us, and we all need the same things:

  • public and private Git hosting
  • continuous integration (Jenkins)
  • code review (Gerrit)
  • file sharing (e.g. git-annex + webdav)
  • wiki
  • issue tracking
  • virtual windows systems for Internet Explorer testing
  • MySQL / Postgres databases
  • PaaS for PHP, Python, Ruby, Java
  • staging environment
  • Mails, Mailing Lists
  • simple calendar, CRM
  • monitoring

As I said, all of the above is available as commercial offerings. But I’d prefer the following to be satisfied:

  • The infrastructure itself should be open (but not free of charge), like the OpenStack Project Infrastructure as presented at LCA. I especially like how they review their puppet config with Gerrit.

  • The process to become an admin for the infrastructure should work much the same like the process to become a Debian Developer. I’d also like the same attitude towards quality as present in Debian.

Does something like that already exist? There is already the German cooperative hostsharing, which is kind of similar but provides mainly hosting, not services. But I’ll ask them next, after writing this blog post.

Is your company interested in joining such an effort? Does it sound silly?

Comments:

Sounds promising. I already answered by mail. Dirk Deimeke (Homepage) am 16.02.2014 08:16 Homepage: http://d5e.org

I’m sorry for accidentily removing a comment that linked to https://mayfirst.org while moderating comments. I’m really looking forward to another blogging engine… Thomas Koch am 16.02.2014 12:20

Why? What are you missing? I am using s9y for 9 years now. Dirk Deimeke (Homepage) am 16.02.2014 12:57

12 March, 2022 02:55PM

Petter Reinholdtsen

Publish Hargassner wood chip boiler state to MQTT

Recently I had a look at a Hargassner wood chip boiler and at what kind of free software can be used to monitor and control it. The boiler can be connected to a cloud service via what the producer calls an Internet Gateway, which seems to be a computer connecting to the boiler and passing the gathered information to the cloud. I discovered that the boiler controller gets an IP address on the local network and listens on TCP port 23, providing status information as a text line of numbers. It also provides an HTTP server listening on port 80, but I have not yet figured out what it can do besides returning an error code.

If I am to believe the various free software implementations talking to such boilers, the interpretation of the line of numbers differs between boiler types and software versions on the boiler. By comparing the list of numbers on the front panel of the boiler with the numbers returned via TCP, I have been able to figure out several of them, but there is a lot left to understand. I've located several temperature measurements and hours-running values, as well as oxygen measurements and counters.

I decided to write a simple parser in Python for the values figured out so far, and a simple MQTT injector publishing both the interpreted and the unknown values on an MQTT bus, to make collecting and graphing simpler. The end result is available from the hargassner2mqtt project page on GitLab. I very much welcome patches extending the parser to understand more values, boiler types and software versions. I do not expect that many free software developers have their hands on such a unit to experiment with, but it would be fun if others find this project useful too.
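For illustration, parsing such a status line might be sketched like this. The field indices below are invented for the example; the real mapping depends on boiler type and firmware version, which is exactly what the project needs patches for:

```python
# Hypothetical field positions; real boilers differ by type and firmware.
KNOWN_FIELDS = {0: "boiler_temp", 1: "flue_gas_temp", 5: "o2_percent"}

def parse_status_line(line):
    """Turn the space-separated line of numbers from TCP port 23 into a
    dict, keeping unidentified positions under generic names for later
    study and publication as 'unknown' MQTT topics."""
    parsed = {}
    for idx, raw in enumerate(line.split()):
        name = KNOWN_FIELDS.get(idx, f"unknown_{idx}")
        parsed[name] = float(raw)
    return parsed
```

Each resulting key/value pair could then be published as its own topic on the MQTT bus.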

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

12 March, 2022 05:30AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.8.2: Hotfix release per CRAN request

rcpp logo

A new hot-fix release 1.0.8.2 of Rcpp just got to CRAN. It will also be uploaded to Debian shortly, and Windows and macOS binaries will appear at CRAN in the next few days. This release breaks with the six-months cycle started with release 1.0.5 in July 2020 as CRAN desired an update to silence nags from the newest clang version which turned a little loud over a feature deprecated in C++11 (namely std::unary_function() and std::binary_function()). This was easy to replace with std::function() which we did. The release also contains a minor bugfix relative to 1.0.8 and C++98 builds, and minor correction to one pdf vignette. The release was fully tested by us and CRAN as usual against all reverse dependencies.

Rcpp has become the most popular way of enhancing R with C or C++ code. Right now, around 2519 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 239 in BioConductor.

The full list of details for this interim release follows.

Changes in Rcpp hotfix release version 1.0.8.2 (2022-03-10)

  • Changes in Rcpp API:

    • Accomodate C++98 compilation by adjusting attributes.cpp (Dirk in #1193 fixing #1192)

    • Accomodate newest compilers replacing deprecated std::unary_function and std::binary_function with std::function (Dirk in #1202 fixing #1201 and CRAN request)

  • Changes in Rcpp Documentation:

    • Adjust one overflowing column (Bill Denney in #1196 fixing #1195)
  • Changes in Rcpp Deployment:

    • Accomodate four digit version numbers in unit test (Dirk)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bugs reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under rcpp tag at StackOverflow which also allows searching among the (currently) 2843 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 March, 2022 12:00AM

March 11, 2022

hackergotchi for Santiago García Mantiñán

Santiago García Mantiñán

tcpping-nmap a substitute for tcpping based on nmap

I was about to set up tcpping-based monitoring in smokeping, but then I discovered this was based on tcptraceroute, which on Debian comes setuid root; the alternative is to use sudo, so, any way you put it... this runs with root privileges.

I didn't like what I saw, so, I said... couldn't we do this with nmap without needing root?

And so I started to write a little script that could mimic what tcpping and tcptraceroute were outputting, but using nmap.

The result is tcpping-nmap, which does exactly that. The only small caveat is that nmap only outputs milliseconds, while tcpping gets down to microseconds.
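The millisecond limitation comes from nmap's own output. A sketch of the kind of extraction such a script has to do, assuming nmap's usual "Host is up (0.0042s latency)." line (this is not the actual tcpping-nmap code):

```python
import re

# matches nmap's "Host is up (0.0042s latency)." summary line
LATENCY_RE = re.compile(r"Host is up \(([0-9.]+)s latency\)")

def parse_latency_ms(nmap_output):
    """Pull the round-trip latency out of nmap's output, in milliseconds."""
    match = LATENCY_RE.search(nmap_output)
    return float(match.group(1)) * 1000 if match else None
```

Since nmap reports seconds with at most millisecond precision, anything below that is simply not available to the script.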

Hope you enjoy it :-)

11 March, 2022 11:20PM by Santiago García Mantiñán (noreply@blogger.com)

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

dtts 0.1.0 on CRAN: New Package

Leonardo and I are thrilled to announce the first CRAN release of dtts. The dtts package builds on top of both our nanotime package and the well-loved and widely-used data.table package by Matt, Arun, Jan, and numerous collaborators.

In a very rough nutshell, you can think of dtts as combining both these potent ingredients to produce something not-entirely-unlike the venerable xts package by our friends Jeff and Josh—but using highest-precision nanosecond increments rather than not-quite-microseconds or dates.

The package is still somewhat rare and bare: it is mostly “just” alignment operators. But because of the power and genius of data.table not all that much more is needed because data.table gets us fifteen years of highly refined, tuned and tested code for data slicing, dicing, and aggregation. To which we now humbly add nanosecond-resolution indexing and alignment.

The package had been simmering for some time, and does of course take advantage of (a lot of) earlier work by Leonardo on his ztsdb project. We look forward to user feedback and suggestions at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 March, 2022 02:14AM

March 10, 2022

hackergotchi for Holger Levsen

Holger Levsen

20220310-Debian-Reunion-Hamburg-2022

Debian Reunion Hamburg 2022 from May 23 to 30

As in previous years, there will be a Debian Reunion Hamburg 2022 event taking place at the same location, from May 23rd until the 30th.

This is just a preliminary announcement to get the word out, that this event will happen, so you can ponder attending. The wiki page has more information and some fine folks have even already registered!

A few things still need to be sorted out, e.g. a call for papers and a call for sponsors. If you want to help with that or have questions about the event, please reach out via #debconf-hamburg on irc.oftc.net or via the debconf-hamburg mailing list.

I'm very much looking forward to meeting some of you again soon and getting to know others for the first time! Yay. It's been a long time...

10 March, 2022 12:04PM

Michael Ablassmeier

fscom switch shell

fs.com S5850 and S8050 series switches have a secret mode which lets you enter a regular shell from the switch CLI, like so:

hostname # start shell
Password:

The command and password are not documented by the manufacturer, so I wondered whether it is possible to extract that password from the firmware. After all: it's my device, and I want to have access to all the features!

Download the latest firmware image for those switch types and let binwalk do its magic:

$ wget https://img-en.fs.com/file/user_manual/s5850-and-s8050-series-switches-fsos-v7-2-5r-software.zip
$ binwalk FSOS-S5850-v7.2.5.r.bin -e

This will extract a regular cpio archive, including the switch's root FS:

$ file 2344D4 
2344D4: ASCII cpio archive (SVR4 with no CRC)
$ cpio --no-absolute-filenames -idv < 2344D4

The extracted files include the passwd file with hashes:

$ cat etc/passwd
root:$1$ZrdxfwMZ$1xAj.S6emtA7gWD7iwmmm/:0:0:root:/root:/bin/sh
nms:$1$nUbsGtA7$5omXOHPNK.ZzNd5KeekUq/:0:0:root:/ftp:/bin/sh

Let john do its job:

$ wget https://github.com/brannondorsey/naive-hashcat/releases/download/data/rockyou.txt
$ sudo john etc/passwd  --wordlist=rockyou.txt
<the_password>   (nms)
<the_password>   (root)
2g 0:00:04:03 100% 0.008220g/s 58931p/s 58935c/s 58935C/s nancy..!*!hahaha!*!

That's it. (I won't reveal the password here, but well: it's an easy one ;))

Now have fun poking around in your switch's firmware:

hostname # start shell
Password: <the_password>
[root@hostname /mnt/flash]$ ps axw
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:29 init
    2 ?        S      0:06 [kthreadd]
 [..]
[root@hostname /mnt/flash]$ uname -a
Linux hostname 2.6.35-svn37723 #1 Thu Aug 22 20:43:19 CST 2019 ppc unknow

Even though the good things won't work, I guess it's time to update the firmware anyway:

[root@hostname /mnt/flash]$ tcpdump -pni vlan250
tcpdump: can't get TPACKET_V3 header len on packet socket: Invalid argument

10 March, 2022 12:00AM

March 09, 2022

hackergotchi for Jonathan Dowland

Jonathan Dowland

Broken webcam aspect ratio

picture of my Sony RX100-III camera

Sony RX100-III, relegated to a webcam

Sometimes I have remote meetings with Google Meet. Unlike the other video-conferencing services that I use (Bluejeans, Zoom), my video was stretched out of proportion under Google Meet with Firefox. I haven't found out why this was happening, but I did figure out a work-around.

Thanks to Daniel Silverstone, Rob Kendrick, Gregor Herrmann and Ben Allen for pointing me in the right direction!

Hardware

The lovely Sony RX-100 mk3 that I bought in 2015 has spent most of its life languishing unused. During the Pandemic, once I was working from home all the time, I decided to press-gang it into service as a better-quality webcam. Newer models of this camera — the mark 4 onwards — have support for a USB mode called "PC Remote", which effectively makes them into webcams. Unfortunately my mark 3 does not support this, but it does have HDMI out, so I picked up a cheap "HDMI to USB Video Capture Card" from eBay.

Video modes

Before: wrong aspect ratio


This device offers a selection of different video modes over a webcam interface. I used qv4l2 to explore the different modes. It became clear that the camera was outputting a signal at 16:9, but the modes on offer from the dongle were for a range of different aspect ratios. The picture for these other ratios was not letterboxed or pillarboxed, but stretched to fit.

I also noticed that the modes with the correct aspect ratio were at very low framerates: 1920x1080@5fps, 1360x768@8fps, 1280x720@10fps. It felt to me that I would look unnatural at such a low framerate. The most promising mode was close to the right ratio: 720x480 at 30 fps.
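The arithmetic behind that choice: 720x480 is 3:2 rather than the camera's 16:9, so a 720-pixel-wide frame has to be rescaled to 405 pixels high to restore the ratio, which is where the 720x405 size used later comes from:

```shell
# 720x480 reduces to 3:2, but the camera emits 16:9.
# The height that makes a 720-pixel-wide frame 16:9 again:
echo $(( 720 * 9 / 16 ))   # prints 405
```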

Software

After: corrected aspect ratio


My initial solution is to use the v4l2loopback kernel module, which provides a virtual loop-back webcam interface. I can write video data to it from one process, and read it back from another. Loading it as follows:

modprobe v4l2loopback exclusive_caps=1

The option exclusive_caps configures the module into a mode where it initially presents a write-only interface, but once a process has opened a file handle, it then switches to read-only for subsequent processes. Assuming there are no other camera devices connected at the time of loading the module, it will create /dev/video0. [1]

I experimented briefly with OBS Studio, the very versatile and feature-full streaming tool, which confirmed that I could use filters on the source video to fix the aspect ratio, and emit the result to the virtual device. I don't otherwise use OBS, though, so I achieve the same result using ffmpeg:

ffmpeg -s 720x480 -i /dev/video1 -r 30 -f v4l2 -vcodec rawvideo \
    -pix_fmt yuyv422 -s 720x405 /dev/video0

The source options are to select the source video mode I want. The codec and pixel formats are to match what is being emitted (I determined that using ffprobe on the camera device). The resizing is triggered by supplying a different size to the -s parameter. I think that is equivalent to explicitly selecting a "scale" filter, and there might be other filters that could be used instead (to add pillar boxes for example).

This worked just as well. In Google Meet, I select the Virtual Camera, and Google Meet is presented with only one video mode, in the correct aspect ratio, and no configurable options for it, so it can't misbehave.

Future

I'm planning to automate the loading (and unloading) of the module and starting the ffmpeg process in response to the real camera device being plugged or unplugged, using systemd events and services. (I don't leave the camera plugged in all the time due to some bad USB behaviour I've experienced if I do so.) If I get that working, I will write a follow-up.
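One possible shape for that automation (an untested sketch; the udev vendor/product IDs are placeholders, and a real setup may also want the service bound to the device unit so it stops on unplug):

```
# /etc/udev/rules.d/99-capture-webcam.rules  (idVendor/idProduct are placeholders)
SUBSYSTEM=="video4linux", ACTION=="add", ATTRS{idVendor}=="xxxx", \
    ATTRS{idProduct}=="yyyy", TAG+="systemd", ENV{SYSTEMD_WANTS}+="capture-webcam.service"

# /etc/systemd/system/capture-webcam.service
[Service]
ExecStartPre=/sbin/modprobe v4l2loopback exclusive_caps=1
ExecStart=/usr/bin/ffmpeg -s 720x480 -i /dev/video1 -r 30 -f v4l2 -vcodec rawvideo -pix_fmt yuyv422 -s 720x405 /dev/video0
ExecStopPost=/sbin/rmmod v4l2loopback
```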


  [1] You can request a specific device name/number with another module option.

09 March, 2022 02:21PM


Dirk Eddelbuettel

RcppRedis 0.2.0: Major Updates

A new major release of RcppRedis arrived on CRAN today. RcppRedis is one of several packages connecting R to the fabulous Redis in-memory datastructure store (and much more). RcppRedis does not pretend to be feature complete, but it may do some things faster than the other interfaces, and also offers an optional coupling with MessagePack binary (de)serialization via RcppMsgPack. The package has carried production loads for several years now.

This release integrates support for pub/sub, a popular messaging pattern in which one or more clients subscribe to one or more ‘channels’. Whenever a client instance publishes, the Redis server immediately updates all clients listening on the given channel. This pattern is fairly common for transmitting data to listeners. As there is a bit more to explain about this, I also added a brand-new vignette describing pub/sub with RcppRedis, along with another introductory vignette about Redis itself. We blogged about this exciting new feature and its particular use for market monitoring in R4 #36 recently too.

The pub/sub feature was available in package rredis by Bryan Lewis and has now been ported over by Bryan in a truly elegant yet compact implementation. We placed the code for the pub/sub examples, both for a single symbol (SP 500) as well as for a set of (futures) symbols, into a new examples/ subdirectory.

Other changes in this release are the removal of the build-dependency on Boost (or, rather, my BH package), an update to the included hiredis library (used if no system-wide version is found), and an update to the UCRT build for R. That last one is a bit of a sore spot: nobody at CRAN deemed it necessary to tell me they were waiting for me to make this change; communication with the CRAN team can still be “challenging” (and I am being generous here). Anyway, the package is now on CRAN so all is well, at long last.

The detailed changes list follows.

Changes in version 0.2.0 (2022-03-08)

  • Two simple C++11 features remove needs for BH and lexical_cast() (Dirk in #45 addressing #44).

  • Redis pub/sub is now supported (Dirk in #43, Bryan in #46).

  • Bryan Lewis is now a coauthor.

  • Added pub/sub examples for single and multiple Globex symbols.

  • The included hiredis sources have been updated to release 1.0.2.

  • Two vignettes have been added to introduce Redis and to describe a live market-monitoring application included in directory pubsub/.

  • The UCRT build was updated per a suggestion by Tomas.

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 March, 2022 02:09AM

François Marier

Using a Streamzap remote control with MythTV on Debian Bullseye

After upgrading my MythTV machine to Debian Bullseye and MythTV 31, my Streamzap remote control stopped working correctly: the up and down buttons were working, but the OK button wasn't.

Here's the complete solution that made it work with the built-in kernel support (i.e. without LIRC).

Button re-mapping

Since some of the buttons were working, but not others, I figured that the buttons were probably not mapped to the right keys.

Inspired by these old v4l-utils-based instructions, I made my own custom keymap by copying the original keymap:

cp /lib/udev/rc_keymaps/streamzap.toml /etc/rc_keymaps/

and then modifying it to adapt it to what MythTV needs. This is what I ended up with:

[[protocols]]
name = "streamzap"
protocol = "rc-5-sz"
[protocols.scancodes]
0x28c0 = "KEY_0"
0x28c1 = "KEY_1"
0x28c2 = "KEY_2"
0x28c3 = "KEY_3"
0x28c4 = "KEY_4"
0x28c5 = "KEY_5"
0x28c6 = "KEY_6"
0x28c7 = "KEY_7"
0x28c8 = "KEY_8"
0x28c9 = "KEY_9"
0x28ca = "KEY_ESC"
0x28cb = "KEY_MUTE"
0x28cc = "KEY_UP"
0x28cd = "KEY_RIGHTBRACE"
0x28ce = "KEY_DOWN"
0x28cf = "KEY_LEFTBRACE"
0x28d0 = "KEY_UP"
0x28d1 = "KEY_LEFT"
0x28d2 = "KEY_ENTER"
0x28d3 = "KEY_RIGHT"
0x28d4 = "KEY_DOWN"
0x28d5 = "KEY_M"
0x28d6 = "KEY_ESC"
0x28d7 = "KEY_L"
0x28d8 = "KEY_P"
0x28d9 = "KEY_ESC"
0x28da = "KEY_BACK"
0x28db = "KEY_FORWARD"
0x28dc = "KEY_R"
0x28dd = "KEY_PAGEUP"
0x28de = "KEY_PAGEDOWN"
0x28e0 = "KEY_D"
0x28e1 = "KEY_I"
0x28e2 = "KEY_END"
0x28e3 = "KEY_A"

Note that the keycodes can be found in the kernel source code.

With my own keymap in place at /etc/rc_keymaps/streamzap.toml, I changed /etc/rc_maps.cfg to have the kernel driver automatically use it:

--- a/rc_maps.cfg
+++ b/rc_maps.cfg
@@ -126,7 +126,7 @@
 *      rc-real-audio-220-32-keys real_audio_220_32_keys.toml
 *      rc-reddo                 reddo.toml
 *      rc-snapstream-firefly    snapstream_firefly.toml
-*      rc-streamzap             streamzap.toml
+*      rc-streamzap             /etc/rc_keymaps/streamzap.toml
 *      rc-su3000                su3000.toml
 *      rc-tango                 tango.toml
 *      rc-tanix-tx3mini         tanix_tx3mini.toml

Button repeat delay

To adjust the delay before button presses are repeated, I followed these old out-of-date instructions on the MythTV wiki and put the following in /etc/udev/rules.d/streamzap.rules:

ACTION=="add", ATTRS{idVendor}=="0e9c", ATTRS{idProduct}=="0000", RUN+="/usr/bin/ir-keytable -s rc0 -D 1000 -P 250"

Note that the -d option has been replaced with -s in the latest version of ir-keytable.

To check that the Streamzap is indeed detected as rc0 on your system, use this command:

$ ir-keytable 
Found /sys/class/rc/rc0/ with:
    Name: Streamzap PC Remote Infrared Receiver (0e9c:0000)
    Driver: streamzap
    Default keymap: rc-streamzap
...

Make sure you don't pass the -c option to ir-keytable or else it will clear the keymap set via /etc/rc_maps.cfg, removing all of the button mappings.

09 March, 2022 12:00AM

March 07, 2022

Ayoyimika Ajibade

Progress Report!! Modifying Expectations... 📝

Wait! It feels like just yesterday that I was accepted as an Outreachy intern, and now the first half of the internship is already finished😲. How time flies when you are having a good time🎃

As part of the requirements for the final application during the Outreachy contribution period, I needed to provide a timeline for achieving the goal of my Outreachy task: transitioning dependencies to node16 and webpack5. My mentors pointed out that the packages depending on webpack and nodejs combined are so numerous that it is impossible to finish them all within the space of three months, but we have steps to guide us through the entire process and achieve most of our goals, which are ➡

  • Find the list of packages to fix (reverse dependencies of webpack and nodejs that fail to rebuild or fail their autopkgtest tests).
  • See if new upstream versions are available that support nodejs16 and webpack5 respectively.
  • See if the new upstream version works and doesn't fail while rebuilding or testing with autopkgtest.
  • Report bug 🐞 in Debian if any fails to rebuild or test with autopkgtest.
  • Forward bugs upstream if needed.
  • Fix packages and forward patches.

As of this writing (though a little late🕔), we have successfully rebuilt all reverse dependencies of webpack5, a total of 44 packages (Ruby💎 packages also depend on webpack), and split them equally between me and my co-intern. We filed bug reports on the Debian bug tracking system for the failing packages; the original maintainer or uploader of the package to the Debian archive (mostly Debian developers) also gets a mail referencing the package's bug 🐞 report, and sometimes the uploader decides to help fix the package and forward the patch upstream if need be. We have also filed issues on upstream repositories, mostly via GitHub👆; some upstreams respond and create a PR to solve those errors, while others are plainly averse to the whole idea. A PR from the upstream developer is cherry-picked, and we create a patch to incorporate the code into our own working repository. When a package's upstream maintainer rejects such issues or doesn't respond, we take it upon ourselves to fix the package. The total number of packages successfully updated and ready to be merged is 10, while 12 packages remain on my end to be updated.

One of the most challenging packages to update so far was prop-types, as it runs its large test suite with jest at a much older version (19.0.2) than the one packaged in Debian (27.5.1). Updating and migrating its APIs and methods to the Debian version was hard; after lots of googling, testing solutions from StackOverflow, trial and error, and reading documentation, we eventually made progress with the help of my mentor, my co-intern, and the whole community. It's so funny that when I got it working I said to myself: phew😅😌, it's not rocket science, why couldn't I figure it out sooner than expected🤷‍♀️

I initially proposed that I would be halfway done with the project by now. I guess the reason I have not been able to achieve some of our goals (finishing up the webpack packages and moving on to transitioning some of the nodejs packages) is DEBUGGING. Yes, DEBUGGING! You can never predict what the solution is. Is the problem coming from Debian? From dependencies of the package you are working on? An upstream bug? Or dependencies of dependencies of the package you are working on? There are so many questions to answer. You can't easily find a solution to a bug, as it takes time to try out so many (educated) guesses, or even to try all the solutions from StackOverflow and still see no visible progress. Obviously, you cannot really plan for something until you get right into it.

If I had to start again, one thing I would do differently is to truly understand how JavaScript packages work under the hood: how they handle interactions between packages, and some of the dos and don'ts of transpiling, bundling, testing, etc.

I guess my unrealistic goals need to be modified, because some drawbacks that were not envisaged popped up and I underestimated the complexity of the tasks. That means reducing the number of packages to update in the nodejs transition from what I originally planned😢

My major focus for the second half of the internship is to fix the bugs and errors I discover, file bug reports for new bugs to seek help from co-maintainers or developers, file issues upstream and close those whose bugs are already resolved for the remaining 12 packages, and ultimately upload all the reverse dependencies successfully. I will also dive into the transition to nodejs16.

Thanks for stopping by🙏

07 March, 2022 01:09PM by Ayoyimika Ajibade

March 06, 2022


Dirk Eddelbuettel

nanotime 0.3.6 on CRAN: Updates

Leonardo and I are pleased to announce another update to our nanotime package, bringing it to version 0.3.6, which landed on CRAN earlier today.

nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and the bit64 package for the actual integer64 arithmetic, and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo, who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release corrects subsetting with the %in% operator to better fit the mixed S3/S4 setup, fixes a negative period parse, and updates class comparisons to rely on inherits(). The NEWS snippet has the full details.

Changes in version 0.3.6 (2022-03-06)

  • Fix incorrect subsetting with operator %in% (Leonardo in #100 fixing #99).

  • Fix incorrect parsing for negative nanoperiod (Leonardo in #100 fixing #96).

  • Test for class via inherits() (Dirk).

Thanks to my CRANberries there is also a diff to the previous version. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

06 March, 2022 08:20PM

March 05, 2022

Thorsten Alteholz

My Debian Activities in February 2022

FTP master

This month I accepted 484 and rejected 73 packages. The overall number of packages that got accepted was 495.

The overall number of rejected packages was 76, which is about 15% of the uploads to NEW. While most of the maintainers do a great job when creating their debian/copyright, others are a bit lax. Unfortunately those people seem to be more enthusiastic when fighting for changes in NEW processing or even removing NEW.

One argument in discussions about NEW is that the copyright verification of packages can be done by the community after accepting the packages in the archive.
Last month I did not get any hint that such checks had been done by anybody. As the past has already shown several times, these community-based checks simply do not exist.

So in the end poorly maintained copyright information will rot in the archive and I am not sure that this really corresponds with the Debian Social Contract.

Debian LTS

This was my ninety-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 40h. During that time I did LTS and normal security uploads of:

  • [DLA 2928-1] htmldoc security update for three CVEs
  • [#1004049] buster-pu: zziplib debdiff was approved and package uploaded
  • [#1004050] bullseye-pu: zziplib debdiff was approved and package uploaded
  • [#1004055] buster-pu: debdiff was approved and package uploaded
  • [#1006493] bullseye-pu: htmldoc/1.9.11-4+deb11u2
  • [#1006494] buster-pu: htmldoc/1.9.3-1+deb10u3
  • [#1006550] buster-pu: tiff/4.1.0+git191117-2~deb10u4
  • [#1006551] bullseye-pu: tiff/4.2.0-1+deb11u1

Unfortunately salsa went down at the end of the month, so several planned uploads did not happen and have to be delayed to March.

I also continued to work on security support for golang packages. Further I worked on packages in NEW on security-master and injected missing sources. Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the forty-fourth ELTS month.

During my allocated time I uploaded:

  • ELA-567-1 for apache2
  • ELA-567-2 for apache2
  • ELA-568-1 for ksh
  • ELA-569-1 for tiff
  • ELA-570-1 for htmldoc

Further I worked on cyrus-sasl but did not do an upload yet.

Last but not least I did some days of frontdesk duties.

Debian Printing

As announced last month I uploaded a new version of cups.

Altogether I uploaded new upstream versions or improved packaging of:

Debian Astro

This month I uploaded new upstream versions or improved packaging of:

Other stuff

This month I uploaded new upstream versions or improved packaging of:

05 March, 2022 12:43PM by alteholz

Reproducible Builds

Reproducible Builds in February 2022

Welcome to the February 2022 report from the Reproducible Builds project. In these reports, we try to round-up the important things we and others have been up to over the past month. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.


Jiawen Xiong, Yong Shi, Boyuan Chen, Filipe R. Cogo and Zhen Ming Jiang have published a new paper titled Towards Build Verifiability for Java-based Systems (PDF). The abstract of the paper contains the following:

Various efforts towards build verifiability have been made to C/C++-based systems, yet the techniques for Java-based systems are not systematic and are often specific to a particular build tool (eg. Maven). In this study, we present a systematic approach towards build verifiability on Java-based systems.


GitBOM is a flexible scheme to track the source code used to generate build artifacts via Git-like unique identifiers. Although the project has been active for a while, the community around GitBOM has now started running weekly community meetings.


The paper by Chris Lamb and Stefano Zacchiroli is now available in the March/April 2022 issue of IEEE Software. Titled Reproducible Builds: Increasing the Integrity of Software Supply Chains (PDF), the abstract of the paper contains the following:

We first define the problem, and then provide insight into the challenges of making real-world software build in a “reproducible” manner, that is, when every build generates bit-for-bit identical results. Through the experience of the Reproducible Builds project making the Debian Linux distribution reproducible, we also describe the affinity between reproducibility and quality assurance (QA).


In openSUSE, Bernhard M. Wiedemann posted his monthly reproducible builds status report.


On our mailing list this month, Thomas Schmitt started a thread around the SOURCE_DATE_EPOCH specification, related to formats that cannot avoid embedding potentially timezone-specific timestamps. (Full thread index.)
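For readers unfamiliar with the specification: SOURCE_DATE_EPOCH is simply a UTC Unix timestamp exported into the build environment, which cooperating tools use in place of the current time (a minimal illustration with GNU date; the timestamp value here is arbitrary):

```shell
# Tools honouring SOURCE_DATE_EPOCH substitute it for "now", making any
# embedded timestamps deterministic. Example value: 2022-03-01 00:00:00 UTC.
export SOURCE_DATE_EPOCH=1646092800
date -u -d "@$SOURCE_DATE_EPOCH" +%Y-%m-%dT%H:%M:%SZ   # prints 2022-03-01T00:00:00Z
```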


The Yocto Project is pleased to report that its core metadata (OpenEmbedded-Core) is now reproducible for all recipes (100% coverage) after issues with newer languages such as Golang were resolved. This was announced in their recent Year in Review publication. It is of particular interest for security updates, allowing systems to have specific components updated while reducing the risk of other unintended changes and making the changed sections of the system very clear for audit.

The project is now also making heavy use of “equivalence” of build output to determine whether further items in builds need to be rebuilt or whether cached previously built items can be used. As mentioned in the article above, there are now public servers sharing this equivalence information. Reproducibility is key in making this possible and effective to reduce build times/costs/resource usage.


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 203, 204, 205 and 206 to Debian unstable, as well as made the following changes to the code itself:

  • Bug fixes:

    • Fix a file(1)-related regression where Debian .changes files that contained non-ASCII text were not identified as such, therefore resulting in seemingly arbitrary packages not actually comparing the nested files themselves. The non-ASCII parts were typically in the Maintainer or in the changelog text. [][]
    • Fix a regression when comparing directories against non-directories. [][]
    • If we fail to scan using binwalk, return False from BinwalkFile.recognizes. []
    • If we fail to import binwalk, don’t report that we are missing the Python rpm module! []
  • Testsuite improvements:

    • Add a test for recent file(1) issue regarding .changes files. []
    • Use our assert_diff utility where we can within the test_directory.py set of tests. []
    • Don’t run our binwalk-related tests as root or fakeroot. The latest version of binwalk has some new security protection against this. []
  • Codebase improvements:

    • Drop the _PATH suffix from module-level globals that are not paths. []
    • Tidy some control flow in Difference._reverse_self. []
    • Don’t print a warning to the console regarding NT_GNU_BUILD_ID changes. []

In addition, Mattia Rizzolo updated the Debian packaging to ensure that diffoscope and diffoscope-minimal packages have the same version. []


Vagrant Cascadian wrote to the debian-devel mailing list after noticing that the binutils source package contained unreproducible logs in one of its binary packages. Vagrant expanded the discussion to one about all kinds of build metadata in packages and outlined a number of potential solutions that support reproducible builds and arbitrary metadata.

Vagrant also started a discussion on debian-devel after identifying a large number of packages that embed build paths via RPATH when building with CMake, including a list of packages (grouped by Debian maintainer) affected by this issue. Maintainers were requested to check whether their package still builds correctly when passing the -DCMAKE_BUILD_RPATH_USE_ORIGIN=ON directive.

On our mailing list this month, kpcyrd announced the release of rebuilderd-debian-buildinfo-crawler, a tool that parses the Packages.xz Debian package index file, attempts to discover the right .buildinfo file from buildinfos.debian.net, and outputs it in a format that can be understood by rebuilderd. The tool, which is available on GitHub, solves a problem regarding correlating Debian version numbers with their builds.

bauen1 provided two patches for debian-cd, the software used to make Debian installer images. This involved passing --invariant and -i deb00001 to mkfs.msdos(8) and avoided embedding timestamps into the gzipped Packages and Translations files. After some discussion, the patches in question were merged and will be included in debian-cd version 3.1.36.
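The gzip part of this is easy to demonstrate: by default gzip records the input file's mtime in the archive header, so identical content can still produce differing output; the -n flag omits the name and timestamp (a small standalone demonstration, not the actual debian-cd change):

```shell
printf 'hello\n' > f
touch -d '2020-01-01 UTC' f && gzip -c  f > a.gz
touch -d '2021-01-01 UTC' f && gzip -c  f > b.gz
cmp -s a.gz b.gz || echo 'default output differs: the header embeds the mtime'
touch -d '2022-01-01 UTC' f && gzip -cn f > c.gz
touch -d '2023-01-01 UTC' f && gzip -cn f > d.gz
cmp -s c.gz d.gz && echo 'gzip -n output is bit-for-bit identical'
```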

Roland Clobus wrote another in-depth status update about status of ‘live’ Debian images, summarising the current situation that “all major desktops build reproducibly with bullseye, bookworm and sid”.

The python3.10 package was uploaded to Debian by doko, fixing an issue where .pyc files were not reproducible because the elements in frozenset data structures were not ordered reproducibly. This meant that a bit-for-bit reproducible Debian chroot which included .pyc files could not be created. As of writing, the only remaining unreproducible part of a standard chroot is man-db, but Guillem Jover has a patch for update-alternatives which will likely be part of the next release of dpkg.

Elsewhere in Debian, 139 reviews of Debian packages were added, 29 were updated and 17 were removed this month adding to our knowledge about identified issues. A large number of issue types have been updated too, including the addition of captures_kernel_variant, erlang_escript_file, captures_build_path_in_r_rdb_rds_databases, captures_build_path_in_vo_files_generated_by_coq and build_path_in_vo_files_generated_by_coq.


Website updates

There were quite a few changes to the Reproducible Builds website and documentation this month as well, including:

  • Chris Lamb:

  • Daniel Shahaf:

    • Try a different Markdown footnote content syntax to work around a rendering issue. [][][]
  • Holger Levsen:

    • Make a huge number of changes to the Who is involved? page, including pre-populating a large number of contributors who cannot be identified from the metadata of the website itself. [][][][][]
    • Improve linking to sponsors in sidebar navigation. []
    • Drop the sponsors paragraph as the navigation is clearer now. []
    • Add Mullvad VPN as a bronze-level sponsor. [][]
  • Vagrant Cascadian:


Upstream patches

The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. February’s patches included the following:


Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Daniel Golle:

    • Update the OpenWrt configuration to not depend on the host LLVM, adding lines to the .config seed to build LLVM for eBPF from source. []
    • Preserve more OpenWrt-related build artifacts. []
  • Holger Levsen:

    • Temporarily use a different Git tree when building OpenWrt as our tests had been broken since September 2020. This was reverted after the patch in question was accepted by Paul Spooren into the canonical openwrt.git repository the next day.
    • Various improvements to debugging OpenWrt reproducibility. [][][][][]
    • Ignore useradd warnings when building packages. []
    • Update the script to powercycle armhf architecture nodes to add a hint about nodes named virt-*. []
    • Update the node health check to also fix failed logrotate and man-db services. []
  • Mattia Rizzolo:

    • Update the website job after contributors.sh script was rewritten in Python. []
    • Make sure to set the DIFFOSCOPE environment variable when available. []
  • Vagrant Cascadian:

    • Various updates to the diffoscope timeouts. [][][]

Node maintenance was also performed by Holger Levsen [] and Vagrant Cascadian [].


Finally…

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

05 March, 2022 11:17AM

March 04, 2022

Reproducible Builds (diffoscope)

diffoscope 207 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 207. This version includes the following changes:

* Fix a gnarly regression when comparing directories against non-directories.
  (Closes: reproducible-builds/diffoscope#292)
* Use our assert_diff utility where we can within test_directory.py

You can find out more by visiting the project homepage.

04 March, 2022 12:00AM

Abiola Ajadi

Outreachy-And it’s a wrap!

Outreachy Wrap-up

Project Improve Debian Continuous Integration UX
Project Link: https://www.outreachy.org/outreachy-december-2021-internship-round/communities/debian/#improve-debian-continuous-integration-ux
Code Repository: https://salsa.debian.org/ci-team/debci
Mentors: Antonio Terceiro, Paul Gevers and Pavit Kaur

About the project

Debci exists to make sure packages still work after an update. It does this by running the test suites of all packages that have tests written for them, to make sure everything works and nothing is broken. This project entails making improvements to the platform to make it easier to use and maintain.

Deliverables of the project:

  • Package landing page displaying pending jobs
  • web frontend: centralize job listings in a single template
  • self-service: request test form forgets values when validation fails
  • Improvement to status

Work done

Package landing page displaying pending jobs

Previously, pending jobs were not displayed on the package page. I added a feature to display pending jobs on the package landing page. Working on this task revealed that the same block of code was repeated in different files, which led to the next task. (Screenshot of the package page.)

Merge request

web frontend: centralize job listings in a single template

Jobs are listed on various pages such as status packages, status alerts, status failing, history, and so on. The same code was repeated on these pages to list the jobs, so I refactored it and created a single template for job listing that can be used anywhere it's needed. I also wrote a test for the feature I added.
Merge request

self service: request test form forgets values when validation fails

When one requested a test and it failed with an error, the form originally did not remember the values that had been typed into the package name, suite, and other fields. This fix ensures the form remembers the inputted values even when it throws an error. (Image of the request test page.) N.B.: the form checks all architectures on page load.
merge request

Improvement to status

Originally the status pages were rendered as static HTML pages, but I converted them to be generated dynamically, writing endpoints for each page. Since most of the status pages contain a list of jobs, I modified them to use the template I created for job listing. Previously, the status pages had filters such as "All" and "Latest 50" which weren't paginated; I removed this mechanism, added filters by architecture and suite to these pages, and also added pagination. Last but not least, I wrote tests for these implementations on the status pages. (Image of the status failing page.)

Merge requests:
first task
second task

Major take-aways

I learnt a lot during my internship, but most importantly I learnt how to:

  • write tests in Ruby, and why testing is an important aspect of software development
  • maintain good coding practices; paying attention to commit messages, indentation, etc. are areas I improved in while writing code
  • make contributions in the Ruby programming language

Acknowledgement

I cannot end this without saying thank you to my mentors Antonio Terceiro, Paul Gevers, and Pavit Kaur for their constant support and guidance throughout the entire duration of this internship. It has been a pleasure interacting with and learning from everyone.

Review

Outreachy has helped me feel more confident about open source, especially during the application phase, when I had to reach out to the community I was interested in and ask questions on how to get started. The informal chats week was awesome: I was able to build my network and have interesting conversations with amazing individuals in open source. To round up: always ask questions and do not be afraid of making a mistake. As one of the Outreachy blog post topics says, everyone struggles, but never give up!

04 March, 2022 12:00AM by Abiola Ajadi (briannaajadi03@gmail.com)

March 03, 2022

Joerg Jaspert

Scan for SSH private keys without passphrase

SSH private key scanner (keys without passphrase)

For policy reasons, a customer wanted to ensure that every SSH private key in use by a human on their systems has a passphrase set, and asked us to make sure this is the case.

There is no way in SSH to check this during the connection, so the client side needs to be looked at, which means looking at actual files on the system.

It turns out there are multiple formats for private keys - and I really do not want to implement something able to deal with all of them on my own.

OpenSSH to the rescue: it ships a little tool, ssh-keygen, most commonly known for its ability to generate SSH keys. But it can do much more with keys. One action is interesting for our case: the ability to print out the public key corresponding to a given private key. For a key that is unprotected, this just works. A key with a passphrase instead leads to it asking you for one.

So we have our way to check whether a key is protected by a passphrase. Now we only need to find all possible keys (note, the requirement is not “keys in .ssh/” but all possible keys, so we need to scan for them).

But we do not want to run ssh-keygen on just any file; we would like to do it only when we are halfway sure that it is actually a key. Well, it turns out that even though SSH has multiple formats, they all appear to have the string PRIVATE KEY somewhere very early (usually in the first line). And they are tiny: even a 16384-bit RSA key is just over 12000 bytes long.

Let's find every file that is less than 13000 bytes and has the magic string in it, and throw it at ssh-keygen: if we get a public key back, flag it. Also, we supply a random (oh well, hardcoded) passphrase, to avoid it prompting for one.
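The approach above can be sketched in a few lines of Python. This is a hedged illustration, not the author's Rust tool: the 13000-byte limit, the PRIVATE KEY magic string, and the hardcoded dummy passphrase come from the description above, while the function names and scan root are my own choices.

```python
#!/usr/bin/python3
# Illustrative sketch of the scanning approach described above.
import os
import subprocess
import sys

MAX_SIZE = 13000          # even a 16384-bit RSA key is just over 12000 bytes
MAGIC = b"PRIVATE KEY"    # appears early in all common private key formats

def looks_like_private_key(path):
    """Cheap pre-filter: a small file containing the magic string."""
    try:
        if os.path.getsize(path) > MAX_SIZE:
            return False
        with open(path, "rb") as f:
            return MAGIC in f.read(MAX_SIZE)
    except OSError:
        return False

def is_unprotected(path):
    """ssh-keygen -y prints the public key for an unencrypted private key
    even when a (wrong) passphrase is supplied; for an encrypted key it
    fails instead of prompting interactively."""
    try:
        result = subprocess.run(
            ["ssh-keygen", "-y", "-P", "dummy-passphrase", "-f", path],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
    except FileNotFoundError:
        # ssh-keygen is not installed; nothing we can verify.
        return False
    return result.returncode == 0

def scan(root):
    """Walk the tree and flag candidate keys that ssh-keygen accepts."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if looks_like_private_key(path) and is_unprotected(path):
                print("possibly unprotected key:", path)

if __name__ == "__main__" and len(sys.argv) > 1:
    scan(sys.argv[1])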

Scanning the whole system, one will find quite a surprising number of “unprotected” SSH keys. Well, a better description is possibly “unprotected RSA private keys”, so the output does need to be checked by a human.

This, of course, can be done quite simply in shell. So I wrote some Rust code instead, as I am still on my task to try and learn more of it. If you are interested, you can find sshprivscanner and play with it; patches/fixes/whatever welcome.

03 March, 2022 08:32PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Neat uses for a backlit keyboard

I bought myself a new keyboard last November, a Logitech G213. True keyboard fans will tell me it’s not a real mechanical keyboard, but it was a lot cheaper and met my requirements of having some backlighting and a few media keys (really all I use are the volume control keys). Oh, and being a proper UK layout.

While the G213 isn’t fully independent RGB per key it does have a set of zones that can be controlled. Also this has been reverse engineered, so there are tools to do this under Linux. All I really wanted was some basic backlighting to make things a bit nicer in the evenings, but with the ability to control colour I felt I should put it to good use.

As previously mentioned, I have a personal desktop / work laptop setup combined with a UGREEN USB 3.0 Sharing Switch Box, so the keyboard is shared between both machines. I configured both machines to set the keyboard colour when the USB device is plugged in, and told them to use different colours. Instant visual indication of which machine I’m currently typing on!

Running the script on USB detection is easy: a file in /etc/udev/rules.d/. I called it 99-keyboard-colour.rules:

# Change the keyboard colour when we see it
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="046d", ATTR{idProduct}=="c336", \
        RUN+="/usr/local/sbin/g213-set"

g213-set is a simple bit of Python:

#!/usr/bin/python3

import sys

# Walk the hidraw devices in /sys until we find the G213 keyboard.
found = False
devnum = 0
while not found:
    try:
        with open("/sys/class/hidraw/hidraw" + str(devnum) + "/device/uevent") as f:
            for line in f:
                line = line.rstrip()
                if line == 'HID_NAME=Logitech Gaming Keyboard G213':
                    found = True
    except OSError:
        # Ran out of hidraw devices without finding the keyboard.
        break

    if not found:
        devnum += 1

if not found:
    print("Could not find keyboard device")
    sys.exit(1)

# Zone colour commands are written to the raw HID device.
eventfile = "/dev/hidraw" + str(devnum)

#                                   z       r     g     b
command = [ 0x11, 0xff, 0x0c, 0x3a, 0, 1, 0xff, 0xff, 0x00, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]

with open(eventfile, "wb") as f:
    f.write(bytes(command))

I did wonder about trying to make it turn red when I’m in a root terminal, but that gets a bit more complicated (I’m guessing I need to hook into GNOME Terminal somehow?) and this simple hack gives me a significant win anyway.

03 March, 2022 06:32PM

Enrico Zini

Migrating from procmail to sieve

Anarcat's "procmail considered harmful" post convinced me to get my act together and finally migrate my venerable procmail based setup to sieve.

My setup was nontrivial, so I migrated with an intermediate step in which sieve scripts would by default pipe everything to procmail, which allowed me to slowly move rules from procmailrc to sieve until nothing remained in procmailrc.

Here's what I did.

Literature review

https://brokkr.net/2019/10/31/lets-do-dovecot-slowly-and-properly-part-3-lmtp/ has a guide quite aligned with current Debian, and could be a starting point to get an idea of the work to do.

https://wiki.dovecot.org/HowTo/PostfixDovecotLMTP is way more terse, but more aligned with my intentions. Reading the former helped me in understanding the latter.

https://datatracker.ietf.org/doc/html/rfc5228 has the full Sieve syntax.

https://doc.dovecot.org/configuration_manual/sieve/pigeonhole_sieve_interpreter/ has the list of Sieve features supported by Dovecot.

https://doc.dovecot.org/settings/pigeonhole/ has the reference on Dovecot's sieve implementation.

https://raw.githubusercontent.com/dovecot/pigeonhole/master/doc/rfc/spec-bosch-sieve-extprograms.txt is the hard to find full reference for the functions introduced by the extprograms plugin.

Debugging tools:

  • doveconf to dump dovecot's configuration to see if what it understands matches what I mean
  • sieve-test parses sieve scripts: sieve-test file.sieve /dev/null is a quick and dirty syntax check

Backup of all mails processed

One thing I did with procmail was to generate a monthly mailbox with all incoming email, with something like this:

BACKUP="/srv/backupts/test-`date +%Y-%m`.mbox"

:0c
$BACKUP

I did not find an obvious way in sieve to create monthly mailboxes, so I redesigned that system using Postfix's always_bcc feature, piping everything to an archive user.

I'll then recreate the monthly archiving using a chewmail script that I can simply run via cron.

Configure dovecot

apt install dovecot-sieve dovecot-lmtpd

I added this to the local dovecot configuration:

service lmtp {
  unix_listener /var/spool/postfix/private/dovecot-lmtp {
    user = postfix
    group = postfix
    mode = 0666
  }
}

protocol lmtp {
  mail_plugins = $mail_plugins sieve
}

plugin {
  sieve = file:~/.sieve;active=~/.dovecot.sieve
}

This makes Dovecot ready to receive mail from Postfix via a lmtp unix socket created in Postfix's private chroot.

It also activates the sieve plugin, and uses ~/.sieve as a sieve script.

The script can be a file or a directory; if it is a directory, ~/.dovecot.sieve will be a symlink pointing to the .sieve file to run.

This is a feature I'm not yet using, but if one day I want to try enabling UIs to edit sieve scripts, that part is ready.

Delegate to procmail

To make sieve scripts that delegate to procmail, I enabled the sieve_extprograms plugin:

 plugin {
   sieve = file:~/.sieve;active=~/.dovecot.sieve
+  sieve_plugins = sieve_extprograms
+  sieve_extensions = +vnd.dovecot.pipe
+  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve-pipe
+  sieve_trace_dir = ~/.sieve-trace
+  sieve_trace_level = matching
+  sieve_trace_debug = yes
 }

and then created a script for it:

mkdir -p /usr/local/lib/dovecot/sieve-pipe/
(echo '#!/bin/sh'; echo 'exec /usr/bin/procmail') > /usr/local/lib/dovecot/sieve-pipe/procmail
chmod 0755 /usr/local/lib/dovecot/sieve-pipe/procmail

And I can have a sieve script that delegates processing to procmail:

require "vnd.dovecot.pipe";

pipe "procmail";

Activate the postfix side

These changes switched local delivery over to Dovecot:

--- a/roles/mailserver/templates/dovecot.conf
+++ b/roles/mailserver/templates/dovecot.conf
@@ -25,6 +25,8 @@
+auth_username_format = %Ln

diff --git a/roles/mailserver/templates/main.cf b/roles/mailserver/templates/main.cf
index d2c515a..d35537c 100644
--- a/roles/mailserver/templates/main.cf
+++ b/roles/mailserver/templates/main.cf
@@ -64,8 +64,7 @@ virtual_alias_domains =
-mailbox_command = procmail -a "$EXTENSION"
-mailbox_size_limit = 0
+mailbox_transport = lmtp:unix:private/dovecot-lmtp

Without auth_username_format = %Ln dovecot won't be able to understand usernames sent by postfix in my specific setup.

Moving rules over to sieve

This is mostly straightforward, with the luxury of being able to do it a bit at a time.

The last tricky bit was how to call spamc from sieve, as in some situations I reduce system load by running the spamfilter only on a prefiltered selection of incoming emails.

For this I enabled the filter directive in sieve:

 plugin {
   sieve = file:~/.sieve;active=~/.dovecot.sieve
   sieve_plugins = sieve_extprograms
-  sieve_extensions = +vnd.dovecot.pipe
+  sieve_extensions = +vnd.dovecot.pipe +vnd.dovecot.filter
   sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve-pipe
+  sieve_filter_bin_dir = /usr/local/lib/dovecot/sieve-filter
   sieve_trace_dir = ~/.sieve-trace
   sieve_trace_level = matching
   sieve_trace_debug = yes
 }

Then I created a filter script:

mkdir -p /usr/local/lib/dovecot/sieve-filter/
(echo '#!/bin/sh'; echo 'exec /usr/bin/spamc') > /usr/local/lib/dovecot/sieve-filter/spamc
chmod 0755 /usr/local/lib/dovecot/sieve-filter/spamc

And now what was previously:

:0 fw
| /usr/bin/spamc

:0
* ^X-Spam-Status: Yes
.spam/

Can become:

require "vnd.dovecot.filter";
require "fileinto";

filter "spamc";

if header :contains "x-spam-level" "**************" {
    discard;
} elsif header :matches "X-Spam-Status" "Yes,*" {
    fileinto "spam";
}

Updates

Ansgar mentioned that it's possible to replicate the monthly mailbox using the variables and date extensions, with a hacky trick from the extensions' RFC:

require ["date", "variables", "fileinto", "mailbox"];

if currentdate :matches "month" "*" { set "month" "${1}"; }
if currentdate :matches "year" "*" { set "year" "${1}"; }

fileinto :create "${month}-${year}";

03 March, 2022 02:03PM

Jamie McClelland

LVM Cache Surprises

By far the biggest LVM Cache surprise is just how well it works.

Between 2010 and 2020, my single biggest and most consistent headache managing servers at May First has been disk i/o. We run a number of physical hosts with encrypted disks, each providing a dozen or so sundry KVM guests. And they consume a lot of disk i/o.

This problem kept me awake at night and made me want to put my head on the table and cry during the day as I monitored the output of vmstat 1 and watched each disk i/o death spiral unfold.

We tried everything. Turned off fscks, turned off monthly RAID checks. Switched to less intensive backup systems. Added solid state drives and tried to strategically distribute them to our database partitions and other read/write heavy services. Added tmpfs file systems where it was possible.

But, the sad truth was: we simply did not have the resources to pay for the infrastructure that could support the disk i/o our services demanded.

Then, we discovered LVM caching (cue Hallelujah). We started provisioning SSD partitions to back our busiest spinning disk logical volumes and presto. Ten years of agony gone in a puff of smoke!

I don’t know which individuals are responsible for writing the LVM caching code but if you see this: THANK YOU! Your contributions to the world are noticed, appreciated and have had an enormous impact on at least one individual.

Some surprises

Filters

For the last two years, with the exception of one little heart attack, LVM caches have gone very smoothly.

Then, last week we upgraded 13 physical servers straight through from stretch to bullseye.

It went relatively smoothly for the first half of our servers (the old ones hosting fewer resources). But, after rebooting our first server with lvm caching going on, we noticed that the cached disk wasn’t accessible.

No problem, we reasoned. We'll just uncache it. Except that didn't work either. We tried every argument we could find on the Internet, but lvm insisted that the block device from the SSD volume group (which provides the caching device) was not available. Running pvs showed an “unknown” device and vgs reported similar errors. Now I started to panic a bit. There had been a clean shutdown of the server, so surely all the data had been flushed to disk. But how could we get that data back? We started a restore-from-backup process because we really thought that data was gone forever.

Then we had a really great theory: the caching logical volume comes from the SSD volume group, which gets decrypted after the spinning disk volume group.

Maybe there’s a timing issue? When the spinning disk volume group comes online, the caching logical volume is not yet available.

So, we booted into busybox, and manually decrypted the SSD volume first, followed by the spinning disk volume. Alas, no dice.

Now that we were fully desperate, we decided to restore the lvm configuration file for the entire spinning disk volume group. This felt kinda risky since we might be damaging all the currently working logical volumes, but it seemed like the only option we had.

The main problem was that busybox didn’t seem to have the lvm config tool we needed to restore the configuration from our backup (I think it might be there but it was late and we couldn’t figure it out). And, our only readily available live install media was a Debian stretch disk via debirf.

Debian stretch is pretty old and we really would have preferred to have the most modern tools available, but we decided to go with what we had.

And, that was a good thing, because as soon as we booted into stretch and decrypted the disks, the lvm volume suddenly appeared, happy as ever. We uncached it and booted into the host system and there it was.

We went to bed confused but relieved.

The next morning my co-worker figured it out: filtering.

During the stretch days we occasionally ran into an annoying problem: the logical volumes from guests would suddenly pop up on the host. This was mostly annoying, but it also made possible some serious mistakes if you accidentally took a volume from a guest and used it on the host.

The LVM folks seemed to have noticed this problem and introduced a new default filter that tries to only show you the devices that you should be seeing.

Unfortunately for us, this new filter removed logical volumes from the list of available physical volumes. That does make sense for most people. But, not for us. It sounds a bit weird, but our setup looks like this:

  • One volume group derived from the spinning disks
  • One volume group derived from the SSD disks

Then we carve out logical volumes from each for each guest.

Once we discovered LVM caching, we carved out SSD logical volumes to be used as caches for the spinning logical volumes.

In retrospect, if we could start over, we would probably do it differently.

In any event, once we discovered the problem, we used the handy configuration options in lvm.conf to tweak the filters to include our cache disks and once again, everything is back to working.
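For illustration only (the post doesn't say exactly which option was changed, and the device paths below are made up): the devices section of lvm.conf is where this kind of filtering is controlled, along the lines of:

```
devices {
    # Newer LVM defaults to not scanning logical volumes as potential
    # physical volumes; re-enabling this makes cache LVs carved out of
    # another VG visible again:
    scan_lvs = 1
    # Alternatively, an explicit filter can accept the cache devices
    # (hypothetical paths):
    # filter = [ "a|^/dev/mapper/ssd--vg-cache.*|", "a|.*|" ]
}
```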

Saturated SSDs

The other surprise seems unrelated to the upgrade. We have a physical server that has been suffering from disk i/o problems despite our use of LVM caching.

Our answer, of course, was to add more LVM caches to the spinning logical volumes that seemed to be suffering.

But somehow this was making things even worse.

Then, we finally just removed the LVM caches from all the spinning disks and presto, disk i/o problems seemed to go away. What? Isn’t that the opposite of what’s supposed to happen?

We’re still trying to figure this one out, but it seems that our SSDs are saturated, in which case adding them as a caching volume really is going to make things worse.

We’re still not sure why they are saturated when none of the SSDs on our other hosts are saturated, but a few theories include:

  • They are doing more writing and/or it’s a different kind of writing. I’m still not sure I have quite the right tool to compare this host with other hosts. And, this host is our only MySQL network database server, hosting hundreds of GBs of databases - all writing/reading directly onto the SSDs.

  • They are broken or substandard SSDs (smartctl doesn’t uncover any problems, but maybe it’s a bad model?)

I’ll update this post as we learn more but welcome any suggestions in the comments.

Update: 2022-03-07

Two more possible causes:

  • Our use of the writeback feature: LVM cache has a nice feature that caches writes to smooth out writes to the underlying disk. Maybe our disks are simply writing more than can be handled, and not using writeback is our solution. This server supports a guest with an unusually large disk.

  • Maybe we haven’t allocated a big enough LVM cache for the given volume so the contents are constantly being ejected?

03 March, 2022 12:27PM

John Goerzen

Tools for Communicating Offline and in Difficult Circumstances

Note: this post is also available on my website, where it will be updated periodically.

When things are difficult – maybe there’s been a disaster, or an invasion (this page is being written in 2022 just after Russia invaded Ukraine), or maybe you’re just backpacking off the grid – there are tools that can help you keep in touch, or move your data around. This page aims to survey some of them, roughly in order from easiest to more complex.

Simple radios

Handheld radios shouldn’t be forgotten. They are cheap, small, and easy to operate. Their range isn’t huge – maybe a couple of miles in rural areas, much less in cities – but they can be a useful place to start. They tend to have no actual encryption features (the “privacy” features really aren’t.) In the USA, options are FRS/GMRS and CB.

Syncthing

With Syncthing, you can share files among your devices or with your friends. Syncthing essentially builds a private mesh for file sharing. Devices will auto-discover each other when on the same LAN or Wifi network, and opportunistically sync.

I wrote more about offline uses of Syncthing, and its use with NNCP, in my blog post A simple, delay-tolerant, offline-capable mesh network with Syncthing (+ optional NNCP). Yes, it is a form of a Mesh Network!

Homepage: https://syncthing.net/

Briar

Briar is an instant messaging service based around Android. It’s IM with a twist: it can use a mesh of Bluetooth devices. Or, if Internet is available, Tor. It has even been extended to support the use of SD cards and USB sticks to carry your messages.

Like some others here, it can relay messages for third parties as well.

Homepage: https://briarproject.org/

Manyverse and Scuttlebutt

Manyverse is a client for Scuttlebutt, which is a sort of asynchronous, offline-friendly social network. You can use it to keep in touch with your family and friends, and it supports syncing over Bluetooth and Wifi even in the absence of Internet.

Homepages: https://www.manyver.se/ and https://scuttlebutt.nz/

Yggdrasil

Yggdrasil is a self-healing, fully end-to-end Encrypted Mesh Network. It can work among local devices or on the global Internet. It has network services that can egress onto things like Tor, I2P, and the public Internet. Yggdrasil makes a perfect companion to ad-hoc wifi as it has auto peer discovery on the local network.

I talked about it in more detail in my blog post Make the Internet Yours Again With an Instant Mesh Network.

Homepage: https://yggdrasil-network.github.io/

Ad-Hoc Wifi

Few people know about the ad-hoc wifi mode. Ad-hoc wifi lets devices in range talk to each other without an access point. You just all set your devices to the same network name and password and there you go. However, there often isn’t DHCP, so IP configuration can be a bit of a challenge. Yggdrasil helps here.

NNCP

Moving now to more advanced tools, NNCP lets you assemble a network of peers that can use Asynchronous Communication over sneakernet, USB drives, radios, CD-Rs, Internet, Tor, NNCP over Yggdrasil, Syncthing, Dropbox, S3, you name it. NNCP supports multi-hop file transfer and remote execution. It is fully end-to-end encrypted. Think of it as the offline version of ssh.

Homepage: https://nncp.mirrors.quux.org/

Meshtastic

Meshtastic uses long-range, low-power LoRa radios to build a long-distance, encrypted, instant messaging system that is a Mesh Network. It requires specialized hardware, about $30, but will tend to get much better range than simple radios, and with very little power.

Homepages: https://meshtastic.org/ and https://meshtastic.letstalkthis.com/

Portable Satellite Communicators

You can get portable satellite communicators that can send SMS from anywhere on earth with a clear view of the sky. The Garmin InReach mini and Zoleo are two credible options. Subscriptions range from about $10 to $40 per month depending on usage. They also have global SOS features.

Telephone Lines

If you have a phone line and a modem, UUCP can get through just about anything. It’s an older protocol that lacks modern security, but will deal with slow and noisy serial lines well. XBee SX radios also have a serial mode that can work well with UUCP.

Additional Suggestions

It is probably useful to have a Linux live USB stick with whatever software you want to use handy. Debian can be installed from the live environment, or you could use a security-focused distribution such as Tails or Qubes.

References

This page originated in my Mastodon thread and incorporates some suggestions I received there.

It also formed a post on my blog.

03 March, 2022 02:49AM by John Goerzen

Ian Jackson

3D printed hard case for Fairphone 4

About 4 years ago, I posted about making a 3D printed case for my then-new phone. The FP2 was already a few years old when I got one and by now, some spares are unavailable - which is a problem, because I'm terribly hard on hardware. Indeed, that's why I need a very sturdy case for my phone - a case which can be ablative when necessary.

With the arrival of my new Fairphone 4, I've updated my case design. Sadly the FP4 doesn't have a notification LED - I guess we're supposed to be glued to the screen and leaving the phone ignored in a corner unless it lights up is forbidden. But that does at least make the printing simpler, as there's no need for a window for the LED.

Source code: https://www.chiark.greenend.org.uk/ucgi/~ianmdlvl/git?p=reprap-play.git;a=blob;f=fairphone4-case.scad;h=1738612c2aafcd4ee4ea6b8d1d14feffeba3b392;hb=629359238b2938366dc6e526d30a2a7ddec5a1b0

And the diagrams (which are part of the source, although I didn't update them for the FP4 changes: https://www.chiark.greenend.org.uk/ucgi/~ianmdlvl/git?p=reprap-diagrams.git;a=tree;f=fairphone-case;h=65f423399cbcfd3cf24265ed3216e6b4c0b26c20;hb=07e1723c88a294d68637bb2ca3eac388d2a0b5d4

(big pictures)




03 March, 2022 12:11AM

March 02, 2022

Antoine Beaupré

procmail considered harmful

TL;DR: procmail is a security liability and has been abandoned upstream for the last two decades. If you are still using it, you should probably drop everything and at least remove its SUID flag. There are plenty of alternatives to choose from, and conversion is a one-time, acceptable trade-off.

Procmail is unmaintained

procmail is unmaintained. The "Final release", according to Wikipedia, dates back to September 10, 2001 (3.22). That release has shipped in Debian ever since, all the way back to Debian 3.0 "woody", twenty years ago.

Debian also ships 25 uploads on top of this, with 3.22-21 shipping the "3.23pre" release that has been rumored since at least November 2001, according to debian/changelog at least:

procmail (3.22-1) unstable; urgency=low

  * New upstream release, which uses the `standard' format for Maildir
    filenames and retries on name collision. It also contains some
    bug fixes from the 3.23pre snapshot dated 2001-09-13.
  * Removed `sendmail' from the Recommends field, since we already
    have `exim' (the default Debian MTA) and `mail-transport-agent'.
  * Removed suidmanager support. Conflicts: suidmanager (<< 0.50).
  * Added support for DEB_BUILD_OPTIONS in the source package.
  * README.Maildir: Do not use locking on the example recipe,
    since it's wrong to do so in this case.

 -- Santiago Vila <sanvila@debian.org>  Wed, 21 Nov 2001 09:40:20 +0100

All Debian suites from buster onwards ship the 3.22-26 release, although the maintainer just pushed a 3.22-27 release to fix a seven-year-old null pointer dereference, after this article was drafted.

Procmail is also shipped in all major distributions: Fedora and its derivatives, Debian derivatives, Gentoo, Arch, FreeBSD, OpenBSD. We all seem to be ignoring this problem.

The upstream website (http://procmail.org/) has been down since about 2015, according to Debian bug #805864, with no change since.

In effect, every distribution is currently maintaining its fork of this dead program.

Note that, after filing a bug to keep Debian from shipping procmail in a stable release again, I was told that the Debian maintainer is apparently in contact with the upstream. And, surprise! they still plan to release that fabled 3.23 release, which has been now in "pre-release" for all those twenty years.

In fact, it turns out that 3.23 is considered released already, and that the procmail author actually pushed a 3.24 release, codenamed "Two decades of fixes". That amounts to 25 commits since 3.23pre, some of which address serious security issues, but none of which address fundamental issues with the code base.

Procmail is insecure

By default, procmail is installed SUID root:mail in Debian. There's no debconf or pre-seed setting that can change this. There have been two bug reports against the Debian package to make this configurable (298058, 264011), but both were closed to say that, basically, you should use dpkg-statoverride to change the permissions on the binary.

So if anything, you should immediately run this command on any host that you have procmail installed on:

dpkg-statoverride --update --add root root 0755 /usr/bin/procmail

Note that this might break email delivery. It might also not work at all, thanks to usrmerge. Not sure. Yes, everything is on fire. This is fine.

In my opinion, even assuming we keep procmail in Debian, that default should be reversed. It should be up to people installing procmail to assign it those dangerous permissions, after careful consideration of the risk involved.

The last maintainer of procmail explicitly advised us (in that null pointer dereference bug) and other projects (e.g. OpenBSD, in [2]) to stop shipping it, back in 2014. Quote:

Executive summary: delete the procmail port; the code is not safe and should not be used as a basis for any further work.

I just read some of the code again this morning, after the original author claimed that procmail was active again. It's still littered with bizarre macros like:

#define bit_set(name,which,value) \
  (value?(name[bit_index(which)]|=bit_mask(which)):\
  (name[bit_index(which)]&=~bit_mask(which)))

... from regexp.c, line 66 (yes, that's a custom regex engine). Or this one:

#define jj  (aleps.au.sopc)

It uses insecure functions like strcpy extensively. malloc() is thrown around gotos like it's 1984 all over again. (To be fair, it has been feeling like 1984 a lot lately, but that's another matter entirely.)

That null pointer deref bug? It's fixed upstream now, in this commit merged a few hours ago, which I presume might be in response to my request to remove procmail from Debian.

So while that's nice, this is just the tip of the iceberg. I speculate that one could easily find an exploitable crash in procmail just by running it through a fuzzer. But I don't need to speculate: procmail has had, for years, serious security issues that could possibly lead to root privilege escalation, remotely exploitable if procmail is (as it's designed to do) exposed to the network.

Maybe I'm overreacting. Maybe the procmail author will go through the code base and do a proper rewrite. But I don't think that's what is in the cards right now. What I expect will happen next is that people will start fuzzing procmail, throw an uncountable number of bug reports at it which will get fixed in a trickle while never fixing the underlying, serious design flaws behind procmail.

Procmail has better alternatives

The reason this is so frustrating is that there are plenty of modern alternatives to procmail which do not suffer from those problems.

Alternatives to procmail(1) itself are typically part of mail servers. For example, Dovecot has its own LDA which implements the standard Sieve language (RFC 5228). (Interestingly, Sieve was published as RFC 3028 in 2001, before procmail was formally abandoned.)

Courier also has "maildrop", which has its own filtering mechanism, and there is fdm (2007), which is a fetchmail and procmail replacement. Update: there's also mailprocessing, which is not an LDA but processes an existing folder. It was, however, specifically designed to replace complex procmail rules.

But procmail, of course, doesn't just ship procmail; that would just be too easy. It ships mailstat(1) which we could probably ignore because it only parses procmail log files. But more importantly, it also ships:

  • lockfile(1) - conditional semaphore-file creator
  • formail(1) - mail (re)formatter

lockfile(1) already has a somewhat acceptable replacement in the form of flock(1), part of util-linux (which is Essential, so installed on any normal Debian system). It might not be a direct drop-in replacement, but it should be close enough.

formail(1) is similar: the courier maildrop package ships reformail(1) which is, presumably, a rewrite of formail. It's unclear if it's a drop-in replacement, but it should probably be possible to port uses of formail to it easily.

Update: the maildrop package ships a SUID root binary (two, even). So if you want only reformail(1), you might want to disable that with:

dpkg-statoverride --update --add root root 0755 /usr/bin/lockmail.maildrop 
dpkg-statoverride --update --add root root 0755 /usr/bin/maildrop

It would perhaps be better to have reformail(1) as a separate package; see bug 1006903 for that discussion.

The real challenge is, of course, migrating those old .procmailrc recipes to Sieve. I added a few examples in the appendix below. You might notice the Sieve examples are easier to read, which is a nice added bonus.

Conclusion

There is really, absolutely, no reason to keep procmail in Debian, nor should it be used anywhere at this point.

It's a great part of our computing history. May it be kept forever in our museums and historical archives, but not in Debian, and certainly not in an actual release.

It's just a bomb waiting to go off. It is irresponsible for distributions to keep shipping obsolete and insecure software like this for unsuspecting users.

Note that I am grateful to the author, I really am: I used procmail for decades and it served me well. But now it's time to move on, not to bring it back from the dead.

Appendix

Previous work

It's really weird to have to write this blog post. Back in 2016, I rebuilt my mail setup at home and, to my horror, discovered that procmail had been abandoned for 15 years at that point, thanks to that LWN article from 2010. I would have thought that I was the only weirdo still running procmail after all those years and felt kind of embarrassed to only "now" switch to the more modern (and, honestly, awesome) Sieve language.

But no. Since then, Debian shipped three major releases (stretch, buster, and bullseye), all with the same vulnerable procmail release.

Then, in early 2022, I found that, at work, we actually had procmail installed everywhere, possibly because userdir-ldap was using it for lockfile until 2019. I sent a patch to fix that and scrambled to get rid of procmail everywhere. That took about a day.

But many other sites are now in that situation, possibly without realizing they have this glaring security hole in their infrastructure.

Procmail to Sieve recipes

I'll collect a few Sieve equivalents to procmail recipes here. If you have any additions, do contact me.

All Sieve examples below assume you drop the file in ~/.dovecot.sieve.

deliver mail to "plus" extension folder

Say you want to deliver user+foo@example.com to the folder foo. You might write something like this in procmail:

MAILDIR=$HOME/Maildir/
DEFAULT=$MAILDIR
LOGFILE=$HOME/.procmail.log
VERBOSE=off
EXTENSION=$1            # Need to rename it - ?? does not like $1 nor 1

:0
* EXTENSION ?? [a-zA-Z0-9]+
        .$EXTENSION/

That, in sieve language, would be:

require ["variables", "envelope", "fileinto", "subaddress"];

########################################################################
# wildcard +extension
# https://doc.dovecot.org/configuration_manual/sieve/examples/#plus-addressed-mail-filtering
if envelope :matches :detail "to" "*" {
  # Save name in ${name} in all lowercase
  set :lower "name" "${1}";
  fileinto "${name}";
  stop;
}

Subject into folder

This would file all mails with a Subject: line having FreshPorts in it into the freshports folder, and mails from alternc.org mailing lists into the alternc folder:

:0
## mailing list freshports
* ^Subject.*FreshPorts.*
.freshports/

:0
## mailing list alternc
* ^List-Post.*mailto:.*@alternc.org.*
.alternc/

Equivalent Sieve:

if header :contains "subject" "FreshPorts" {
    fileinto "freshports";
} elsif header :contains "List-Id" "alternc.org" {
    fileinto "alternc";
}

Mail sent to root to a reports folder

This double rule:

:0
* ^Subject: Cron
* ^From: .*root@
.rapports/

Would look something like this in Sieve:

if header :comparator "i;octet" :contains "Subject" "Cron" {
  if header :regex :comparator "i;octet"  "From" ".*root@" {
        fileinto "rapports";
  }
}

Note that this is what the automated converter does (see below). It's not very readable, but it works.

Bulk email

I didn't have an equivalent of this in procmail, but that's something I did in Sieve:

if header :contains "Precedence" "bulk" {
    fileinto "bulk";
}

Any mailing list

This is another rule I didn't have in procmail but I found handy and easy to do in Sieve:

if exists "List-Id" {
    fileinto "lists";
}

This or that

I wouldn't remember how to do this in procmail either, but that's an easy one in Sieve:

if anyof (header :contains "from" "example.com",
           header :contains ["to", "cc"] "anarcat@example.com") {
    fileinto "example";
}

You can even pile up a bunch of options together to have one big rule with multiple patterns:

if anyof (exists "X-Cron-Env",
          header :contains ["subject"] ["security run output",
                                        "monthly run output",
                                        "daily run output",
                                        "weekly run output",
                                        "Debian Package Updates",
                                        "Debian package update",
                                        "daily mail stats",
                                        "Anacron job",
                                        "nagios",
                                        "changes report",
                                        "run output",
                                        "[Systraq]",
                                        "Undelivered mail",
                                        "Postfix SMTP server: errors from",
                                        "backupninja",
                                        "DenyHosts report",
                                        "Debian security status",
                                        "apt-listchanges"
                                        ],
           header :contains "Auto-Submitted" "auto-generated",
           envelope :contains "from" ["nagios@",
                                      "logcheck@",
                                      "root@"])
    {
    fileinto "rapports";
}

Automated script

There is a procmail2sieve.pl script floating around, which is mentioned in the Dovecot documentation. It didn't work very well for me: I could use it for small things, but I mostly wrote the Sieve file from scratch.

Progressive migration

Enrico Zini has progressively migrated his procmail setup to Sieve using a clever trick: he hooked procmail into Sieve so that he could deliver through the Dovecot LDA and migrate rules one by one, without having a "flag day".

See this explanatory blog post for the details, which also shows how to configure Dovecot as an LMTP server with Postfix.

Other examples

The Dovecot sieve examples are numerous and also quite useful. At the time of writing, they include virus scanning and spam filtering, vacation auto-replies, includes, archival, and flags.

Harmful considered harmful

I am aware that the "considered harmful" title has a long and controversial history, being considered harmful in itself (by some people who are obviously not afraid of contradictions).

I have nevertheless deliberately chosen that title, partly to make sure this article gets maximum visibility, but more specifically because I have no doubt that procmail is, at this moment in history, clearly a bad idea.

Developing story

I must also add that, incredibly, this story has changed while writing it. This article is derived from this bug I filed in Debian to, quite frankly, kick procmail out of Debian. But filing the bug had the interesting effect of pushing the upstream into action: as mentioned above, they have apparently made a new release and merged a bunch of patches in a new git repository.

This doesn't change much of the above, at this moment. If anything significant comes out of this effort, I will try to update this article to reflect the situation. I am actually happy to retract the claims in this article if it turns out that procmail is a stellar example of defensive programming and survives fuzzing attacks. But at this moment, I'm pretty confident that will not happen, at least not within the scope of the next Debian release cycle.

02 March, 2022 06:16PM

Petter Reinholdtsen

Run your industrial metal working machine using Debian?

After many months of hard work by the good people involved in LinuxCNC, the system was accepted into Debian on Sunday. Once it was available from Debian, I was surprised to discover from its popularity-contest numbers that people have been reporting its use since 2012. Its project site might be a good place to check out, but sadly it does not work when visited via Tor.

But what is LinuxCNC, you are probably wondering? Perhaps a Wikipedia quote is in order?

"LinuxCNC is a software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It can control up to 9 axes or joints of a CNC machine using G-code (RS-274NGC) as input. It has several GUIs suited to specific kinds of usage (touch screen, interactive development)."

It can even control 3D printers. And even though the Wikipedia page indicates that it can only work with hard real-time kernel features, it can also work with the user-space soft real-time features provided by the Debian kernel. The source code is available from GitHub. For the last few months I've been involved in the translation setup for the program and documentation. Translators are most welcome to join the effort using Weblate.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

02 March, 2022 05:40PM

Ben Hutchings

Debian LTS work, February 2022

In February I was assigned 16 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from January. I worked 16 hours, and will carry over the remaining time to March.

I spent most of my time triaging security issues for Linux, working out which of them were fixed upstream and which actually applied to the versions provided in Debian 9 "stretch". I also rebased the Linux 4.9 (linux) package on the latest stable update, but did not make an upload this month.

02 March, 2022 03:04PM

Keith Packard

picolibc-testing

Testing Picolibc with the glibc tests

Picolibc has a bunch of built-in tests, but more testing is always better, right? I decided to see how hard it would be to run some of the tests provided in the GNU C Library (glibc).

Parallel meson build files

Similar to how Picolibc uses meson build files to avoid modifying the newlib autotools infrastructure, I decided to take the glibc code and write meson build rules that would compile the tests against Picolibc header files and link against Picolibc libraries.

I decided to select a single target for this project so I could focus on getting things building and not worry too much about making it portable. I wanted to pick something that had hardware floating point so that I would have rounding modes and exception support, so I picked the ARM Cortex M7 with hard float ABI:

$ arm-none-eabi-gcc -mcpu=cortex-m7 -mfloat-abi=hard

It should be fairly easy to extend this to other targets, but for now, that's what I've got working. There's a cross-cortex-m7.txt file containing all of the cross compilation details which is used when running meson setup.

All of the Picolibc-specific files live in a new picolibc directory so they are isolated from the main glibc code.

Pre-loading a pile of hacks

Adapting Picolibc to support the glibc test code required a bunch of random hacks, from providing _unlocked versions of the stdio macros to stubbing out various unsupportable syscalls (like sleep and chdir). Instead of modifying the glibc code, I created a file called hacks.h which is full of this stuff and used the gcc -include parameter to read that into the compiler before starting compilation on all of the files.

Supporting command line parameters

The glibc tests all support a range of command line parameters, some of which turned out to be quite useful for this work. Picolibc had limited semihosting support for accessing the command line, but that required modifying applications to go fetch the command line using a special semihosting function.

To make this easier, I added a new crt0 variant for picolibc called (oddly) semihost. This extends the existing hosted variant by adding a call to the semihosting library to fetch the current command line and splitting that into words at each space. It doesn't handle any quoting, but it's sufficient for the needs here.

Avoiding glibc headers

The glibc tests use some glibc-specific extensions to the standard POSIX C library, so I needed to include those in the test builds. Headers for those extensions are mixed in with the rest of the POSIX standard headers, which conflict with the Picolibc versions. To work around this, I stuck stub #include files in the picolibc directory which directly include the appropriate headers for the glibc API extensions. This includes things like argp.h and array_length.h. For other headers which weren't actually needed for picolibc, I created empty files.

Adding more POSIX to Picolibc

At this point, code was compiling but failing to find various standard POSIX functions which aren't available in Picolibc. That included some syscalls which could be emulated using semihosting, like gettimeofday and getpagesize. It also included some generally useful additions, like replacing ecvtbuf and fcvtbuf with ecvt_r and fcvt_r. The _r variants provide a target buffer size instead of assuming that it was large enough as the Picolibc buf variants did.

Which tests are working?

So far, I've got some of the tests in malloc, math, misc and stdio-common running.

There are a lot of tests in the malloc directory which cover glibc API extensions or require POSIX syscalls not supported by semihosting. I think I've added all of the tests which should be supported.

For the math tests, I'm testing the standard POSIX math APIs in both float and double forms, except for the Bessel and Gamma functions. Picolibc's versions of those are so bad that they violate some pretty basic assumptions about error bounds built into the glibc test code. Until Picolibc gets better versions of these functions, we'll have to skip testing them this way.

In the misc directory, I've only added tests for ecvt, fcvt, gcvt, dirname and hsearch. I don't think there are any other tests there which should work.

Finally, for stdio-common, almost all of the tests expect a fully functioning file system, which semihosting really can't support. As a result, we're only able to run the printf, scanf and some of the sprintf tests.

All in all, we're running 78 of the glibc test programs, which is a small fraction of the total tests, but I think it's the bulk of the tests which cover APIs that don't depend too much on the underlying POSIX operating system.

Bugs found and fixed in Picolibc

This exercise has resulted in 17 fixes in Picolibc, which can be classified as:

  1. Math functions taking sNaN and not raising FE_INVALID and returning qNaN. Almost any operation on an sNaN value is supposed to signal an invalid operand and replace that with a qNaN so that further processing doesn't raise another exception. This was fairly easy to fix, just need to use return x + x; instead of return x;.

  2. Math functions failing to set errno. I'd recently restructured the math library to get rid of the separate IEEE version of the functions which didn't set errno and missed a couple of cases that needed to use the errno-setting helper functions.

  3. Corner cases in string/float conversion, including the need to perform round-to-even for '%a' conversions in printf and supporting negative decimal values for fcvt. This particular exercise led to replacing the ecvtbuf and fcvtbuf APIs with glibc's ecvt_r and fcvt_r versions as those pass explicit buffer lengths, making overflow prevention far more reliable.

  4. A bunch of malloc entry points were not handling failure correctly; allocation values close to the maximum possible size caused numerous numeric wrapping errors with the usual consequences (allocations "succeed", but return a far smaller buffer than requested). Also, realloc was failing to check the return value from malloc before copying the old data, which really isn't a good idea.

  5. Tinystdio's POSIX support code was returning an error instead of EOF at end of file.

  6. Error bounds for the Picolibc math library aren't great; I had to generate Picolibc-specific ulps files. Most functions are between 0 and 3 ulps, but for some reason, the float version of erfc (erfcf) has errors as large as 64 ulps. That needs investigation.

Tests added to Picolibc

With all of the fixes applied to Picolibc, I added some tests to verify them without relying on running the glibc tests: sNaN vs qNaN tests for math functions, tests for fopen and mktemp, checks for the printf %a rounding support, and ecvt/fcvt tests.

I also discovered that the github-based CI system was compiling but not testing when using clang with a riscv target, so that got added in.

Where's the source code?

The Picolibc changes are sitting on the glibc-testing branch. I'll merge them once the current CI pass finishes.

The hacked-up glibc bits are in a glibc mirror at GitHub in the picolibc project, on the picolibc-testing branch. It would be interesting to know what additional tests should be usable in this environment. And, perhaps, to find a way to use this for picolibc CI testing in the future.

Concluding thoughts

Overall, I'm pretty pleased with these results. The bugs in malloc are fairly serious and could easily result in trouble, but the remaining issues are mostly minor and shouldn't have a big impact on applications using Picolibc.

I'll get the changes merged and start thinking about doing another Picolibc release.

02 March, 2022 07:28AM

François Marier

Ways to refer to localhost in Chromium

The filter rules preventing websites from portscanning the local machine have recently been tightened in Brave. It turns out there are a surprising number of ways to refer to the local machine in Chromium.

localhost and friends

127.0.0.1 is the first address that comes to mind when thinking of the local machine. localhost is typically aliased to that address (via /etc/hosts), though that convention is not mandatory. The IPv6 equivalent is [::1].

0.0.0.0

0.0.0.0 is not a routable address, but that's what's used to tell a service to bind (listen) on all network interfaces. In Chromium, it resolves to the local machine, just like 127.0.0.1. The IPv6 equivalent is [::].

DNS-based

Of course, another way to encode these numerical URLs is to create A / AAAA records for them under a domain you control. I've done this under my personal domain:

For these to work, you'll need to:

  • Make sure you can connect to IPv6-only hosts, for example by connecting to an appropriate VPN if needed.
  • Put nameserver 8.8.8.8 in /etc/resolv.conf since you need a DNS server that will not filter these localhost domains. (For example, Unbound will do that if you use private-address: 127.0.0.0/8 in the server config.)
  • Go into chrome://settings/security and disable Always use secure connections to make sure the OS resolver is used.
  • Turn off the chrome://flags/#block-insecure-private-network-requests flag since that security feature (CORS-RFC1918) is designed to protect against these kinds of requests.

127.0.0.0/8 subnet

Technically, the entire 127.0.0.0/8 subnet can be used to refer to the local machine. However, it's not a reliable way to portscan a machine from a web browser because it only catches the services that listen on all interfaces (i.e. 0.0.0.0).

For example, on my machine, if I nmap 127.0.0.1, I get:

PORT     STATE SERVICE   VERSION
22/tcp   open  ssh       OpenSSH 8.2p1
25/tcp   open  smtp      Postfix smtpd

whereas if I nmap 127.0.1.25, I only get:

PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 8.2p1

That's because I've got the following in /etc/postfix/main.cf:

inet_interfaces = loopback-only

which I assume makes Postfix bind explicitly to 127.0.0.1.

Nevertheless, it would be good to get that fixed in Brave too.

02 March, 2022 02:45AM

March 01, 2022

Utkarsh Gupta

FOSS Activities in February 2022

Here’s my (twenty-ninth) monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 38th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

I had been sick this month, so I spent most of the time away from the computer, recovering, and also went through the huge backlog that I had, which is starting to get smaller. :D

Anyway, I did the following stuff in Debian:

Uploads and bug fixes:

  • at (3.4.4-1) - Adding a DEP8 test for the package, fixing bug #985421.

Other $things:

  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 13th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my twenty-ninth month as a Debian LTS and eighteenth month as a Debian ELTS paid contributor.
Whilst I was assigned 42.75 hours for LTS and 45.25 hours for ELTS, I could only work a little due to being sick, so I spent 15.75 hours on LTS and 9.25 hours on ELTS, working on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

Debian LTS Survey

I’ve spent 10 hours on the LTS survey on the following bits:
(and 5 hours of the last month that I’m going to invoice this month)

  • Put most of the content in the instance according to the question type.
  • Been going back and forth updating the status of the survey on the issue.
  • Trying to find a way to send to DDs - discussing with DPL, Raphael, and other people on the issue itself.
  • Completing the last bits to start the survey for the paid contributors, at least. Talking to Jeremiah about this.

Until next time.
:wq for today.

01 March, 2022 05:41AM

Paul Wise

FLOSS Activities February 2022

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: fix usertags for some users, unarchive/reopen/triage bugs for reintroduced packages: horae dh-haskell shutter logging-tree openstreetmap-carto gnome-commander
  • Debian servers: restore wiki data from backup, restarted bacula director for TLS cert update
  • Debian wiki: restore wiki account from backup, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The purple-discord, gensim, DiskANN, SPTAG, wsproto work was sponsored by my employer. All other work was done on a volunteer basis.

01 March, 2022 04:16AM

Russell Coker

SAGE (ITPA) Spam

In 2008 I joined SAGE (the System Administrators' Guild of Australia). It was a professional society for people doing sysadmin work (running computer servers). I quit when I found that the level of clue was lower than hoped and that members used the code of ethics as nothing but a way to score points in online debates. After I quit, SAGE kept emailing me and wouldn't respect my request to be removed from all lists, so I had to block their mail server.

SAGE has in recent times changed its name to ITPA (Information Technology Professionals Association) and is still sending me email. I've just sent yet another unsubscribe request.

How many years of sending unwanted email can be put down to incompetence, and when should we assume it's malice? They have been doing this for over a decade now. Even if it is incompetence, that's still damning, given that it's incompetence in the main topic of the organisation.

Here is the ITPA Code of Ethics [1], as you can see there is no reference to spam. The nearest seems to be “I will continue to enlarge my understanding of the social and legal issues that arise in computing environments, and I will communicate that understanding to others when appropriate“. So it’s great that they aren’t breaking their own code of ethics :-# but I’d still like them to stop emailing me.

01 March, 2022 04:11AM by etbe

Abiola Ajadi

Career Interest!

Hi, Welcome back. In this blog, I will be discussing my career interests and what I plan to pursue after my internship.

What comes next after the internship

After working for a while on the Debci project, I learned better software development practices and fell in love with the open-source community. I would love to build my career path around open source.

What I am looking for

Prior to starting my internship, I worked as a front-end developer with React and JavaScript. During my internship with Debci I learned Ruby and also how to write tests 😊😊. I am also passionate about community, which is one of my core values; I build and organize communities. I am looking for opportunities in the areas of front-end development, Ruby, and React.

I am also open to opportunities in developer relations/community management, where I can be part of building or managing a community, interacting with developers, and meeting new people.

I am based in Lagos, Nigeria and can work remotely, but I am also open to relocation.

You can connect with me on LinkedIn.

01 March, 2022 12:00AM by Abiola Ajadi (briannaajadi03@gmail.com)