May 01, 2022

Thomas Koch

Missing memegen

Posted on May 1, 2022

Back at $COMPANY we had an internal meme site. I had some reputation in my team for creating good memes. When I watched Episode 3 of Season 2 of Yes, Prime Minister yesterday, I really missed a place to post memes.

This is the full scene. Please watch it or even the full episode before scrolling down to the GIFs. I had a good laugh for some time.

With Debian, I could just download the episode from somewhere on the net with youtube-dl and easily create two GIFs using ffmpeg, with and without the subtitle:

ffmpeg  -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 tmp/tragic.gif

ffmpeg  -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 \
        -vf "subtitles=tmp/sub.srt:force_style='Fontsize=60'" tmp/tragic_with_subtitle.gif

And with this sub.srt file:

1
00:00:10,000 --> 00:00:12,000
Tragic.

I believe one needs to install the libavfilter-extra variant to burn the subtitle into the GIF.
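To check whether the ffmpeg build at hand can burn subtitles at all, listing the compiled-in filters is a quick sanity test. This is only a sketch; the grep pattern is an assumption about the layout of `ffmpeg -filters` output:

```shell
# Quick check: does this ffmpeg build ship the "subtitles" filter?
# (On Debian it comes with the libavfilter-extra variant.)
if ffmpeg -hide_banner -filters 2>/dev/null | grep -q ' subtitles '; then
    echo "subtitles filter available"
else
    echo "subtitles filter missing - install libavfilter-extra"
fi
```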

Some

space

to

hide

the

GIFs.

The Prime Minister has just learned that his predecessor, who was about to publish embarrassing memoirs, died of a sudden heart attack:

I can’t actually think of a meme with this GIF that the internal thought police community moderation would not immediately take down.

For a moment I thought that it would be fun to have a Meme-Site for Debian members. But it is probably not the right time for this.

Maybe somebody likes the above GIFs though and wants to use them somewhere.

01 May, 2022 06:17PM

lsp-java coming to debian

Posted on March 12, 2022
Tags: debian

The Language Server Protocol (LSP) standardizes communication between editors and so-called language servers for different programming languages. This reduces the old problem that every editor had to implement many different plugins for all the different programming languages. With LSP, an editor just needs to speak LSP and can immediately provide typical IDE features.

I already packaged the Emacs packages lsp-mode and lsp-haskell for Debian bullseye. Now lsp-java is waiting in the NEW queue.

I’m always worried about downloading and executing binaries from random places on the internet. It should be a matter of hygiene to only run binaries from official Debian repositories. Unfortunately this is not feasible when programming, and many people don’t see a problem with running multiple curl | sh pipes to set up their programming environment.

I prefer to do such stuff only in virtual machines. With Emacs and LSP I can finally have a lightweight textmode programming environment even for Java.

Unfortunately the lsp-java mode does not yet work over TRAMP. Once this is solved, I could run Emacs on my host and only isolate the code and language server inside the VM.

The next step would be to also keep the code on the host and mount it with Virtio FS in the VM. But so far the necessary daemon is not yet in Debian (RFP: #1007152).

In detail, I uploaded these packages:

01 May, 2022 06:17PM

Waiting for a STATE folder in the XDG basedir spec

Posted on February 18, 2014

The XDG Base Directory specification proposes default homedir folders for the categories DATA (~/.local/share), CONFIG (~/.config) and CACHE (~/.cache). One category, however, is missing: STATE. This category has been requested several times, but nothing has happened.

Examples for state data are:

  • history files of shells, repls, anything that uses libreadline
  • logfiles
  • state of application windows on exit
  • recently opened files
  • last time application was run
  • emacs: bookmarks, ido last directories, backups, auto-save files, auto-save-list

The missing STATE category is especially annoying if you’re managing your dotfiles with a VCS (e.g. via VCSH) and you care to keep your homedir tidy.

If you’re as annoyed as I am about the missing STATE category, please voice your opinion on the XDG mailing list.

Of course it’s a very long way until applications really use such a STATE directory. But without a common standard it will never happen.
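Until the spec gains such a category, an application could at least resolve its state directory the same way the existing categories are resolved. This is a hypothetical sketch: both the XDG_STATE_HOME variable and the ~/.local/state default are assumptions, not part of the spec as of this writing:

```shell
# Resolve a state directory the XDG way (hypothetical; mirrors how
# XDG_DATA_HOME falls back to ~/.local/share).
state_dir="${XDG_STATE_HOME:-$HOME/.local/state}"
mkdir -p "$state_dir/myapp"    # "myapp" is a placeholder app name
echo "shell history, logs etc. would go under: $state_dir/myapp"
```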

01 May, 2022 06:17PM

shared infrastructure coop

Posted on February 5, 2014

I’m working in a very small web agency with four employees, one of them part-time, plus our boss, who doesn’t do programming. It shouldn’t come as a surprise that our development infrastructure is not perfect. We have many ideas and dreams about how we could improve it, but not the time. Now we have two obvious choices: either we just do nothing, or we buy services from specialized vendors like GitHub, Atlassian, Travis CI, Heroku, Google and others.

Doing nothing does not work for me. But just buying all this stuff doesn’t please me either. We’d depend on proprietary software, lock-in effects or one-size-fits-all offerings. Another option would be to find other small web shops like us, form a cooperative and share essential services. There are thousands of web shops in the same situation as us, and we all need the same things:

  • public and private Git hosting
  • continuous integration (Jenkins)
  • code review (Gerrit)
  • file sharing (e.g. git-annex + webdav)
  • wiki
  • issue tracking
  • virtual windows systems for Internet Explorer testing
  • MySQL / Postgres databases
  • PaaS for PHP, Python, Ruby, Java
  • staging environment
  • Mails, Mailing Lists
  • simple calendar, CRM
  • monitoring

As I said, all of the above is available as commercial offerings. But I’d prefer the following to be satisfied:

  • The infrastructure itself should be open (but not free of charge), like the OpenStack Project Infrastructure as presented at LCA. I especially like how they review their puppet config with Gerrit.

  • The process to become an admin for the infrastructure should work much the same like the process to become a Debian Developer. I’d also like the same attitude towards quality as present in Debian.

Does something like that already exist? There is already the German cooperative Hostsharing, which is kind of similar but mainly provides hosting, not services. I’ll ask them next, after writing this blog post.

Is your company interested in joining such an effort? Does it sound silly?

Comments:

Sounds promising. I already answered by mail. Dirk Deimeke (Homepage) am 16.02.2014 08:16 Homepage: http://d5e.org

I’m sorry for accidentally removing a comment that linked to https://mayfirst.org while moderating comments. I’m really looking forward to another blogging engine… Thomas Koch am 16.02.2014 12:20

Why? What are you missing? I am using s9y for 9 years now. Dirk Deimeke (Homepage) am 16.02.2014 12:57

01 May, 2022 06:17PM

Paul Wise

FLOSS Activities April 2022

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Spam: reported 33 Debian mailing list posts
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:

Administration

  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors

The libpst, gensim, SPTAG work was sponsored. All other work was done on a volunteer basis.

01 May, 2022 12:26AM

April 30, 2022

Junichi Uekawa

Already May.

Already May. I've been writing some code in rust and a bit of javascript. But real life is too busy.

30 April, 2022 11:57PM by Junichi Uekawa

April 29, 2022

Jonathan Dowland

hyperlinked PDF planner

The Year page

A day page

I've been having reasonable success with time blocking, a technique I learned from Cal Newport's writings, in particular Deep Work. I'd been doing it on paper for a while, but I wanted to try and move to a digital solution.

There's a cottage industry of people making (and selling) various types of diary and planner as PDF files for use on tablets such as the Remarkable. Some of these use PDF hyperlinks to greatly improve navigating around. This one from Clou Media is particularly good, but I found that I wanted something slightly different from what I could find out there, so I decided to build my own.

I explored a couple of different approaches for how to do this. One was Latex, and here's one example of a latex-based planner, but I decided against as I spend too much time wrestling with it for my PhD work already.

Another approach might have been Pandoc, but as far as I could tell its PDF pipeline went via Latex, so I thought I might as well cut out the middleman.

Eventually I stumbled across tools to build PDFs from HTML, via "CSS Paged Media". This appealed, because I've done plenty of HTML generation. print-css.rocks is a fantastic resource to explore the print-specific CSS features. Weasyprint is a fantastic open source tool to convert appropriately-written HTML/CSS into PDF.

Finally I wanted to use a templating system to take shortcuts on writing HTML. I settled for embedded Ruby, which is something I haven't touched in over a decade. This was a relatively simple project and I found it surprisingly fun.

The results are available on GitHub: https://github.com/jmtd/planner. Right now, you get exactly what I have described. But my next plan is to add support for re-generating a planner, incorporating new information: pulling diary info from iCal, and any annotations made (such as with the Remarkable tablet) on top of the last generation and preserving them on the next.
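The pipeline described above can be sketched in two commands; the file names are placeholders, and a here-doc stands in for the embedded-Ruby templating:

```shell
# Generate a page of HTML with print-specific CSS ("CSS Paged Media"),
# then render it to PDF with WeasyPrint.
cat > day.html <<'EOF'
<style>@page { size: A5; margin: 1cm }</style>
<h1><a href="#year">Back to year view</a></h1>
<p>A day page.</p>
EOF
weasyprint day.html day.pdf    # needs the weasyprint package installed
```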

29 April, 2022 09:29PM

Steinar H. Gunderson

Should we stop teaching the normal distribution?

I guess Betteridge's law of headlines gives you the answer, but bear with me. :-)

Like most engineers, I am a layperson in statistics; I had some in high school, then an intro course in university and then used it in a couple of random courses (like speech recognition). (I also took a multivariate statistics course on my own after I had graduated.) But pretty much every practical tool I ever learned was, eventually, centered around the normal distribution; we learned about Student's t-test in various scenarios, made confidence intervals, learned about the central limit theorem that showed its special place in statistics, how the binomial distribution converges to the normal distribution under reasonable circumstances (not the least due to the CLT), and so on.

But then I got out in the wild and started trying to make sense out of the troves of data coming my way (including some stemming from experiments I designed on my own). And it turns out… a lot of things really are not normal. I'd see distributions with heavy tails, with skew, or that were bimodal. And here's the thing—people, who had the same kind of non-statistics-specialized education as me, continued to treat these as Gaussian. And it still appears to work. You get the beautiful confidence intervals and low p-values that seem to make sense… it's just so odd that you get “p<0.05 significant” tests way too often from random noise. You just assume that's how it is, without really realizing that you're doing junk statistics. And even if you do, you don't have the tools to do anything about it, because everything else is hidden away in obscure R libraries or somewhere on Math Stack Exchange.

So I ask: If we're really going to teach people one thing, is the normal distribution really the best tool? (Yes, sure, we learned about the Poisson and Weibull and many others, but we never really did hypothesis testing on them, and we never really learned what to do when things didn't follow a tidy mathematical formula. Or even how to identify that.) It's beautiful and simple (“simple”) and mathematical and you only need a huge table and then you can almost do calculations by hand, but perhaps that's not really what we want? I understand we want to teach fundamental understanding and not just “use this computer tool”, but again, we're sending people out with a really limited tool set to make sense of the world.

I don't know what we should do instead—again, I am a layperson, and my understanding of this is limited. But it feels like we should be able to come up with fairly simple techniques that don't break down fatally if the data doesn't follow one given distribution, no matter how important. Bootstrap? Wilcoxon signed-rank test? I know, of course, that if the data really is normal, you will need a lot less data for the same-quality result (and some natural processes, like, I guess, radioactive decay, surely follow normal distributions), but perhaps we should leave the Gaussians and other parametric tools for the advanced courses? I don't know. But it's worth a thought. And I need to learn more statistics.

29 April, 2022 06:30PM

Holger Levsen

20220429-Debian-Reunion-Hamburg-2022

Debian Reunion Hamburg 2022 from May 23 to 30

This is just a quick reminder for the Debian Reunion Hamburg 2022 happening in a bit more than 3 weeks.

So far 43 people have registered, and thus there's still some on-site accommodation available. There's no real deadline for registration; however, if you register after May 1st you might not get a t-shirt in your preferred size.

Also: if you intend to give a presentation but haven't replied to the CfP, please do so.

The wiki page linked above has all the details.

29 April, 2022 12:34PM

Russ Allbery

Review: Interesting Times

Review: Interesting Times, by Terry Pratchett

Series: Discworld #17
Publisher: Harper
Copyright: 1994
Printing: February 2014
ISBN: 0-06-227629-8
Format: Mass market
Pages: 399

Interesting Times is the seventeenth Discworld novel and certainly not the place to start. At the least, you will probably want to read The Colour of Magic and The Light Fantastic before this book, since it's a sequel to those (although Rincewind has had some intervening adventures).

Lord Vetinari has received a message from the Counterweight Continent, the first in ten years, cryptically demanding the Great Wizzard be sent immediately.

The Agatean Empire is one of the most powerful states on the Disc. Thankfully for everyone else, it normally suits its rulers to believe that the lands outside their walls are inhabited only by ghosts. No one is inclined to try to change their minds or otherwise draw their attention. Accordingly, the Great Wizard must be sent, a task that Vetinari efficiently delegates to the Archchancellor. There is only the small matter of determining who the Great Wizzard is, and why it was spelled with two z's.

Discworld readers with a better memory than I will recall Rincewind's hat. Why the Counterweight Continent would demand a wizard notorious for his near-total inability to perform magic is a puzzle for other people. Rincewind is promptly located by a magical computer, and nearly as promptly transported across the Disc, swapping him for an unnecessarily exciting object of roughly equivalent mass and hurling him into an unexpected rescue of Cohen the Barbarian. Rincewind predictably reacts by running away, although not fast or far enough to keep him from being entangled in a glorious popular uprising. Or, well, something that has aspirations of being glorious, and popular, and an uprising.

I hate to say this, because Pratchett is an ethically thoughtful writer to whom I am willing to give the benefit of many doubts, but this book was kind of racist.

The Agatean Empire is modeled after China, and the Rincewind books tend to be the broadest and most obvious parodies, so that was already a recipe for some trouble. Some of the social parody is not too objectionable, albeit not my thing. I find ethnic stereotypes and making fun of funny-sounding names in other languages (like a city named Hunghung) to be in poor taste, but Pratchett makes fun of everyone's names and cultures rather equally. (Also, I admit that some of the water buffalo jokes, despite the stereotypes, were pretty good.) If it had stopped there, it would have prompted some eye-rolling but not much comment.

Unfortunately, a significant portion of the plot depends on the idea that the population of the Agatean Empire has been so brainwashed into obedience that they have a hard time even imagining resistance, and even their revolutionaries are so polite that the best they can manage for slogans are things like "Timely Demise to All Enemies!" What they need are a bunch of outsiders, such as Rincewind or Cohen and his gang. More details would be spoilers, but there are several deliberate uses of Ankh-Morpork as a revolutionary inspiration and a great deal of narrative hand-wringing over how awful it is to so completely convince people they are slaves that you don't need chains.

There is a depressingly tedious tendency of western writers, even otherwise thoughtful and well-meaning ones like Pratchett, to adopt a simplistic ranking of political systems on a crude measure of freedom. That analysis immediately encounters the problem that lots of people who live within systems that rate poorly on this one-dimensional scale seem inadequately upset about circumstances that are "obviously" horrific oppression. This should raise questions about the validity of the assumptions, but those assumptions are so unquestionable that the writer instead decides the people who are insufficiently upset about their lack of freedom must be defective. The more racist writers attribute that defectiveness to racial characteristics. The less racist writers, like Pratchett, attribute that defectiveness to brainwashing and systemic evil, which is not quite as bad as overt racism but still rests on a foundation of smug cultural superiority.

Krister Stendahl, a bishop of the Church of Sweden, coined three famous rules for understanding other religions:

  1. When you are trying to understand another religion, you should ask the adherents of that religion and not its enemies.
  2. Don't compare your best to their worst.
  3. Leave room for "holy envy."

This is excellent advice that should also be applied to politics. Most systems exist for some reason. The differences from your preferred system are easy to see, particularly those that strike you as horrible. But often there are countervailing advantages that are less obvious, and those are more psychologically difficult to understand and objectively analyze. You might find they have something that you wish your system had, which causes discomfort if you're convinced you have the best political system in the world, or are making yourself feel better about the abuses of your local politics by assuring yourself that at least you're better than those people.

I was particularly irritated to see this sort of simplistic stereotyping in Discworld given that Ankh-Morpork, the setting of most of the Discworld novels, is an authoritarian dictatorship. Vetinari quite capably maintains his hold on power, and yet this is not taken as a sign that the city's inhabitants have been brainwashed into considering themselves slaves. Instead, he's shown as adept at maintaining the stability of a precarious system with a lot of competing forces and a high potential for destructive chaos. Vetinari is an awful person, but he may be better than anyone who would replace him. Hmm.

This sort of complexity is permitted in the "local" city, but as soon as we end up in an analog of China, the rulers are evil, the system lacks any justification, and the peasants only don't revolt because they've been trained to believe they can't. Gah.

I was muttering about this all the way through Interesting Times, which is a shame because, outside of the ham-handed political plot, it has some great Pratchett moments. Rincewind's approach to any and all danger is a running (sorry) gag that keeps working, and Cohen and his gang of absurdly competent decrepit barbarians are both funnier here than they have been in any previous book and the rare highly-positive portrayal of old people in fantasy adventures who are not wizards or crones. Pretty Butterfly is a great character who deserved to be in a better plot. And I loved the trouble that Rincewind had with the Agatean tonal language, which is an excuse for Pratchett to write dialog full of frustrated non-sequiturs when Rincewind mispronounces a word.

I do have to grumble about the Luggage, though. From a world-building perspective its subplot makes sense, but the Luggage was always the best character in the Rincewind stories, and the way it lost all of its specialness here was oddly sad and depressing. Pratchett also failed to convince me of the drastic retcon of The Colour of Magic and The Light Fantastic that he does here (and which I can't talk about in detail due to spoilers), in part because it's entangled in the orientalism of the plot.

I'm not sure Pratchett could write a bad book, and I still enjoyed reading Interesting Times, but I don't think he gave the politics his normal care, attention, and thoughtful humanism. I hope later books in this part of the Disc add more nuance, and are less confident and judgmental. I can't really recommend this one, even though it has some merits.

Also, just for the record, "may you live in interesting times" is not a Chinese curse. It's an English saying that likely was attributed to China to make it sound exotic, which is the sort of landmine that good-natured parody of other people's cultures needs to be wary of.

Followed in publication order by Maskerade, and in Rincewind's personal timeline by The Last Continent.

Rating: 6 out of 10

29 April, 2022 02:50AM

April 28, 2022

Jonathan McDowell

Resizing consoles automatically

I have 2 very useful shell scripts related to resizing consoles. The first is imaginatively called resize and just configures the terminal to be the requested size, neatly resizing an xterm or gnome-terminal:

#!/bin/sh

# resize <rows> <columns>
/bin/echo -e '\033[8;'$1';'$2't'

The other is a bit more complicated and useful when connecting to a host via a serial console, or when driving a qemu VM with -display none -nographic and all output coming over a “serial console” on stdio. It figures out the size of the terminal it’s running in and correctly sets the local settings to match so you can take full advantage of a larger terminal than the default 80x24:

#!/bin/bash

# Save the cursor position, then try to move to row 5000, column 5000;
# the terminal clamps this to its actual bottom-right corner.
echo -ne '\e[s\e[5000;5000H'
# Ask the terminal where the cursor ended up (cursor position report,
# ESC[6n); it answers with "ESC[<row>;<col>R", parsed here into pos[].
IFS='[;' read -p $'\e[6n' -d R -a pos -rs
# Restore the saved cursor position.
echo -ne '\e[u'

# cols / rows
echo "Size: ${pos[2]} x ${pos[1]}"

stty cols "${pos[2]}" rows "${pos[1]}"

export TERM=xterm-256color

Generally I source this with . fix-term or the TERM export doesn’t get applied. Both of these exist in various places around the ‘net (and there’s a resize binary shipped along with xterm) but I always forget the exact terms to find it again when I need it. So this post is mostly intended to serve as future reference next time I don’t have them handy.

28 April, 2022 07:03PM

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, March 2022

A Debian LTS logo

Every month we review the work funded by Freexian’s Debian LTS offering. Please find the report for March below.

Debian project funding

  • There was no new activity in Debian project funding in the two existing projects. However, there was a survey run with hundreds of Debian Developers and Debian contributors. The survey results are being collated and we will use the anonymized data to further develop the Freexian project funding initiative.
  • We are preparing to more broadly announce additional support for Debian 8 Jessie and Debian 9 Stretch. Now, Debian 8 can be supported until June 2025 and Debian 9 until June 2027. More information on ELTS support is available.
  • In March € 2250 was put aside to fund Debian projects.

Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In March, 11 contributors were paid to work on Debian LTS; their reports are available below. We welcome participation from the Debian community in the LTS or ELTS teams. Simply get in touch with Jeremiah or Raphaël if you are interested in participating.

Evolution of the situation

In March we released 42 DLAs.

The security tracker currently lists 81 packages with a known CVE and the dla-needed.txt file has 52 packages needing an update.

We’re glad to welcome a few new sponsors such as Électricité de France (Gold sponsor), Telecats BV and Soliton Systems.

Thanks to our sponsors

Sponsors that joined recently are in bold.

28 April, 2022 10:47AM by Raphaël Hertzog

Bits from Debian

DebConf22 bursary applications and call for papers are closing in less than 72 hours!

If you intend to apply for a DebConf22 bursary and/or submit an event proposal and have not yet done so, please proceed as soon as possible!

Bursary applications for DebConf22 will be accepted until May 1st at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk or organising an event is considered towards your bursary; if you have a submission to make, submit it even if it is only sketched-out. You will be able to detail it later. DebCamp plans can be entered in the usual Sprints page at the Debian wiki.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the accommodation page.

Event proposals will be accepted until May 1st at 23:59 UTC too.

Events are not limited to traditional presentations or informal sessions (BoFs): we welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be of interest to the Debian community.

Regular sessions may either be 20 or 45 minutes long (including time for questions), other kinds of sessions (workshops, demos, lightning talks, and so on) could have different durations. Please choose the most suitable duration for your event and explain any special requests. You can submit it here.

The 23rd edition of DebConf will take place from July 17th to 24th, 2022, at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th.

See you in Prizren!

28 April, 2022 07:30AM by The Debian Publicity Team

Reproducible Builds (diffoscope)

diffoscope 211 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 211. This version includes the following changes:

[ Mattia Rizzolo ]
* Drop mplayer from the Build-Depends, it was add likely by accident and it's
  not needed.
* Disable gnumeric tests in Debian because it's not currently available.

You can find out more by visiting the project homepage.

28 April, 2022 12:00AM

April 27, 2022

Louis-Philippe Véronneau

Montreal's Debian & Stuff - April 2022

After two long years of COVID hiatus, local Debian events in Montreal are back! Last Sunday, nine of us met at Koumbit to work on Debian (and other stuff!), chat and socialise.

Even though these events aren't always the most productive, it was super fun and definitely helps keep me motivated to work on Debian in my spare time.

Many thanks to Debian for providing us a budget to rent the venue for the day and for the pizzas! Here are a few pictures I took during the event:

Pizza boxes on a wooden bench

Whiteboard listing TODO items for some of the participants

A table with a bunch of laptops, and LeLutin :)

If everything goes according to plan, our next meeting should be sometime in June. If you are interested, the best way to stay in touch is either to subscribe to our mailing list or to join our IRC channel (#debian-quebec on OFTC). Events are also posted on Quebec's Agenda du libre.

27 April, 2022 10:00PM by Louis-Philippe Véronneau

Antoine Beaupré

Using LSP in Emacs and Debian

The Language Server Protocol (LSP) is a neat mechanism that provides a common interface to what used to be language-specific lookup mechanisms (like, say, running a Python interpreter in the background to find function definitions).

There is also ctags shipped with UNIX since forever, but that doesn't support looking backwards ("who uses this function"), linting, or refactoring. In short, LSP rocks, and how do I use it right now in my editor of choice (Emacs, in my case) and OS (Debian) please?

Editor (emacs) setup

First, you need to setup your editor. The Emacs LSP mode has pretty good installation instructions which, for me, currently mean:

apt install elpa-lsp-mode

and this .emacs snippet:

(use-package lsp-mode
  :commands (lsp lsp-deferred)
  :hook ((python-mode go-mode) . lsp-deferred)
  :demand t
  :init
  (setq lsp-keymap-prefix "C-c l")
  ;; TODO: https://emacs-lsp.github.io/lsp-mode/page/performance/
  ;; also note re "native compilation": <+varemara> it's the
  ;; difference between lsp-mode being usable or not, for me
  :config
  (setq lsp-auto-configure t))

(use-package lsp-ui
  :config
  (setq lsp-ui-flycheck-enable t)
  (add-to-list 'lsp-ui-doc-frame-parameters '(no-accept-focus . t))
  (define-key lsp-ui-mode-map [remap xref-find-definitions] #'lsp-ui-peek-find-definitions)
  (define-key lsp-ui-mode-map [remap xref-find-references] #'lsp-ui-peek-find-references))

Note: this configuration might have changed since I wrote this, see my init.el configuration for the most recent config.

The main reason for choosing lsp-mode over eglot is that it's in Debian (and eglot is not). (Apparently, eglot has more chance of being upstreamed, "when it's done", but I guess I'll cross that bridge when I get there.)

I already had lsp-mode partially setup in Emacs so I only had to do this small tweak to switch and change the prefix key (because s-l or mod is used by my window manager). I also had to pin LSP packages to bookworm here so that it properly detects pylsp (the older version in Debian bullseye only supports pyls, not packaged in Debian).
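The pinning can be done with a small apt preferences file; this is a sketch (it assumes bookworm is already in your sources.list), and the exact package list is an assumption:

```
# /etc/apt/preferences.d/lsp -- pin the Emacs LSP packages to bookworm
Package: elpa-lsp-mode elpa-lsp-ui python3-pylsp
Pin: release n=bookworm
Pin-Priority: 990
```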

This won't do anything by itself: Emacs will need something to talk with to provide the magic. Those are called "servers" and are basically different programs, for each programming language, that provide the magic.

Servers setup

The Emacs package provides a way (M-x lsp-install-server) to install some of them, but I prefer to manage those tools through Debian packages if possible, just like lsp-mode itself. Those are the servers I currently know of in Debian:

package                  languages
ccls                     C, C++, ObjectiveC
clangd                   C, C++, ObjectiveC
elpa-lsp-haskell         Haskell
fortran-language-server  Fortran
gopls                    Golang
python3-pyls             Python

There might be more such packages, but those are surprisingly hard to find. I found a few with apt search "Language Server Protocol", but that didn't find ccls, for example, because that just said "Language Server" in the description (which also found a few more pyls plugins, e.g. black support).
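Since apt-cache search matches case-insensitively against both package names and descriptions, one broader query catches both wordings (with some noise to skim through):

```shell
# Finds packages described as "Language Server Protocol" as well as
# those that only say "Language Server" (like ccls).
apt-cache search "language server"
```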

Note that the Python packages, in particular, need to be upgraded to their bookworm releases to work properly (here). It seems like there's some interoperability problems there that I haven't quite figured out yet. See also my Puppet configuration for LSP.

Finally, note that I have now completely switched away from Elpy to pyls, and I'm quite happy with the results. lsp-mode feels slower than elpy but I haven't done any of the performance tuning and this will improve even more with native compilation. And lsp-mode is much more powerful. I particularly like the "rename symbol" functionality, which ... mostly works.

Remaining work

Puppet and Ruby

I still have to figure out how to actually use this: I mostly spend my time in Puppet these days. There is no server listed in the Emacs lsp-mode language list, but there is one listed over at the upstream language list: the puppet-editor-services server.

But it's not packaged in Debian, and seems somewhat... involved. It could still be a good productivity boost. The Voxpupuli team have vim install instructions which also suggest installing solargraph, the Ruby language server, also not packaged in Debian.

Bash

I guess I do a bit of shell scripting from time to time nowadays, even though I don't like it. So the bash-language-server may prove useful as well.

Other languages

More language servers are available upstream.

27 April, 2022 08:36PM

building Debian packages under qemu with sbuild

I've been using sbuild for a while to build my Debian packages, mainly because it's what is used by the Debian autobuilders, but also because it's pretty powerful and efficient. Configuring it just right, however, can be a challenge. In my quick Debian development guide, I had a few pointers on how to configure sbuild with the normal schroot setup, but today I finished a qemu based configuration.

Why

I want to use qemu mainly because it provides better isolation than a chroot. I sponsor packages sometimes and while I typically audit the source code before building, it still feels like the extra protection shouldn't hurt.

I also like the idea of unifying my existing virtual machine setup with my build setup. My current VM setup is kind of all over the place: libvirt, Vagrant, GNOME Boxes, and so on. I've been slowly converging on libvirt, however, and most solutions I use right now rely on qemu under the hood, certainly not chroots...

I could also have decided to go with containers like LXC, LXD, Docker (with conbuilder, whalebuilder, docker-buildpackage), systemd-nspawn (with debspawn), unshare (with schroot --chroot-mode=unshare), or whatever: I didn't feel those offer the level of isolation that is provided by qemu.

The main downside of this approach is that it is (obviously) slower than native builds. But on modern hardware, that cost should be minimal.

How

Basically, you need this:

sudo mkdir -p /srv/sbuild/qemu/
sudo apt install sbuild-qemu
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian

Then to make this used by default, add this to ~/.sbuildrc:

# run autopkgtest inside the schroot
$run_autopkgtest = 1;
# tell sbuild to use autopkgtest as a chroot
$chroot_mode = 'autopkgtest';
# tell autopkgtest to use qemu
$autopkgtest_virt_server = 'qemu';
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
# tell plain autopkgtest to use qemu, and the right image
$autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
# no need to cleanup the chroot after build, we run in a completely clean VM
$purge_build_deps = 'never';
# no need for sudo
$autopkgtest_root_args = '';

Note that the above will use the default autopkgtest (1GB, one core) and qemu (128MB, one core) configuration, which might be a little low on resources. You probably want to be explicit about it, with something like the following:

# extra parameters to pass to qemu
# --enable-kvm is not necessary, detected on the fly by autopkgtest
my @qemu_options = ('--ram-size=4096', '--cpus=2');
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ @qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
$autopkgtest_opts = [ '--', 'qemu', @qemu_options, '/srv/sbuild/qemu/%r-%a.img' ];

This configuration will:

  1. create a virtual machine image in /srv/sbuild/qemu for unstable
  2. tell sbuild to use that image to create a temporary VM to build the packages
  3. tell sbuild to run autopkgtest (which should really be the default)
  4. tell autopkgtest to use qemu for builds and for tests
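
As an aside, the %r and %a placeholders in the image paths above are expanded by sbuild to the release (suite) and architecture of the build. A quick shell sketch of the resulting lookup, with assumed values:

```shell
# %r expands to the release and %a to the architecture, so a build for
# unstable on amd64 resolves to the image path below (values assumed).
r=unstable
a=amd64
echo "/srv/sbuild/qemu/${r}-${a}.img"
# prints: /srv/sbuild/qemu/unstable-amd64.img
```

That is why the commands below reference unstable-amd64.img directly.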

Note that the VM created by sbuild-qemu-create has an unlocked root account with an empty password.

Other useful tasks

  • enter the VM to run tests; changes will be discarded (thanks Nick Brown for the sbuild-qemu-boot tip!):

     sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img
    

    That program is shipped only with bookworm and later; an equivalent command is:

     qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
    

    The key argument here is -snapshot.

  • enter the VM to make permanent changes, which will not be discarded:

     sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
    

    Equivalent command:

     sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
    
  • update the VM (thanks lavamind):

     sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
    
  • build in a specific VM regardless of the suite specified in the changelog (e.g. UNRELEASED, bookworm-backports, bookworm-security, etc):

     sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
    

    Note that you'd also need to pass --autopkgtest-opts if you want autopkgtest to run in the correct VM as well:

     sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
    

    You might also need parameters like --ram-size if you customized it above.

And yes, this is all quite complicated and could be streamlined a little, but that's what you get when you have years of legacy and just want to get stuff done. It seems to me autopkgtest-virt-qemu should have a magic flag that starts a shell for you, but it doesn't look like that's a thing. When that program starts, it just says ok and sits there.

Maybe because the authors consider the above to be simple enough (see also bug #911977 for a discussion of this problem).

Live access to a running test

When autopkgtest starts a VM, it uses this funky qemu commandline:

qemu-system-x86_64 -m 4096 -smp 2 -nographic \
    -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0,id=rng-device0 \
    -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait \
    -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait \
    -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait \
    -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest \
    -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 \
    -enable-kvm -cpu kvm64,+vmx,+lahf_lm

... which is a typical qemu commandline, I'm sorry to say. That gives us a VM with those settings (paths are relative to a temporary directory, /tmp/autopkgtest-qemu.w1mlh54b/ in the above example):

  • the shared/ directory is, well, shared with the VM
  • port 10022 is forwarded to the VM's port 22, presumably for SSH, but no SSH server is started by default
  • the ttyS0 and ttyS1 UNIX sockets are mapped to the first two serial ports (use nc -U to talk with them)
  • the monitor UNIX socket is a qemu control socket (see the QEMU monitor documentation, also nc -U)

In other words, it's possible to access the VM with:

nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS1

The nc socket interface is ... not great, but it works well enough. And you can probably fire up an SSHd to get a better shell if you feel like it.

Nitty-gritty details no one cares about

Fixing hang in sbuild cleanup

I'm having a hard time making heads or tails of this, but please bear with me.

In sbuild + schroot, there's this notion that we don't really need to clean up after ourselves inside the schroot, as the schroot will just be deleted anyway. This behavior seems to be handled by the internal "Session Purged" parameter.

At least in lib/Sbuild/Build.pm, we can see this:

my $is_cloned_session = (defined ($session->get('Session Purged')) &&
                         $session->get('Session Purged') == 1) ? 1 : 0;

[...]

if ($is_cloned_session) {
    $self->log("Not cleaning session: cloned chroot in use\n");
} else {
    if ($purge_build_deps) {
        # Removing dependencies
        $resolver->uninstall_deps();
    } else {
        $self->log("Not removing build depends: as requested\n");
    }
}

The schroot builder defines that parameter as:

    $self->set('Session Purged', $info->{'Session Purged'});

... which is ... a little confusing to me. $info is:

my $info = $self->get('Chroots')->get_info($schroot_session);

... so I presume that depends on whether the schroot was correctly cleaned up? I stopped digging there...

ChrootUnshare.pm is way more explicit:

$self->set('Session Purged', 1);

I wonder if we should do something like this with the autopkgtest backend. I guess people might technically use it with something other than qemu, but qemu is the typical use case of the autopkgtest backend, in my experience. Or at least it's certainly used with things that clean up after themselves. Right?

For some reason, before I added this line to my configuration:

$purge_build_deps = 'never';

... the "Cleanup" step would just completely hang. It was quite bizarre.

Digression on the diversity of VM-like things

There are a lot of different virtualization solutions one can use (e.g. Xen, KVM, Docker or Virtualbox). I have also found libguestfs to be useful to operate on virtual images in various ways. Libvirt and Vagrant are also useful wrappers on top of the above systems.

In particular, there are a lot of different tools which use Docker, virtual machines, or some sort of isolation stronger than a chroot to build packages. Here are some of the alternatives I am aware of.

Take, for example, Whalebuilder, which uses Docker to build packages instead of pbuilder or sbuild. Docker provides more isolation than a simple chroot: in whalebuilder, packages are built without network access and inside a virtualized environment. Keep in mind that there are limitations to Docker's security, and that pbuilder and sbuild do build under a different user, which limits the security issues of building untrusted packages.

On the upside, some of those things are being fixed: whalebuilder is now an official Debian package (whalebuilder) and has gained support for passing custom arguments to dpkg-buildpackage.

None of those solutions (except the autopkgtest/qemu backend) are implemented as a sbuild plugin, which would greatly reduce their complexity.

I was previously using Qemu directly to run virtual machines, and had to create VMs by hand with various tools. This didn't work so well so I switched to using Vagrant as a de-facto standard to build development environment machines, but I'm returning to Qemu because it uses a similar backend as KVM and can be used to host longer-running virtual machines through libvirt.

The great thing now is that autopkgtest has good support for qemu and sbuild has bridged the gap and can use it as a build backend. I originally had found those bugs in that setup, but all of them are now fixed:

  • #911977: sbuild: how do we correctly guess the VM name in autopkgtest?
  • #911979: sbuild: fails on chown in autopkgtest-qemu backend
  • #911963: autopkgtest qemu build fails with proxy_cmd: parameter not set
  • #911981: autopkgtest: qemu server warns about missing CPU features

So we have unification! It's possible to run your virtual machines and Debian builds using a single VM image backend storage, which is no small feat, in my humble opinion. See the sbuild-qemu blog post for the announcement.

Now I just need to figure out how to merge Vagrant, GNOME Boxes, and libvirt together, which should be a matter of placing images in the right place... right? See also hosting.

pbuilder vs sbuild

I was previously using pbuilder and switched in 2017 to sbuild. AskUbuntu.com has a good comparative between pbuilder and sbuild that shows they are pretty similar. The big advantage of sbuild is that it is the tool in use on the buildds and it's written in Perl instead of shell.

My concerns about switching were POLA (I'm used to pbuilder), the fact that pbuilder runs as a separate user (works with sbuild as well now, if the _apt user is present), and setting up COW semantics in sbuild (can't just plug cowbuilder there, need to configure overlayfs or aufs, which was non-trivial in Debian jessie).

Ubuntu folks, again, have more documentation there. Debian also has extensive documentation, especially about how to configure overlays.

I was ultimately convinced by stapelberg's post on the topic which shows how much simpler sbuild really is...

Who

Thanks lavamind for the introduction to the sbuild-qemu package.

27 April, 2022 08:29PM

Russ Allbery

Review: Sorceress of Darshiva

Review: Sorceress of Darshiva, by David Eddings

Series: The Malloreon #4
Publisher: Del Rey
Copyright: December 1989
Printing: November 1990
ISBN: 0-345-36935-1
Format: Mass market
Pages: 371

This is the fourth book of the Malloreon, the sequel series to the Belgariad. Eddings as usual helpfully summarizes the plot of previous books (the one thing about his writing that I wish more authors would copy), this time by having various important people around the world briefed on current events. That said, you don't want to start reading here (although you might wish you could).

This is such a weird book.

One could argue that not much happens in the Belgariad other than map exploration and collecting a party, but the party collection involves meddling in local problems to extract each new party member. It's a bit of a random sequence of events, but things clearly happen. The Malloreon starts off with a similar structure, including an explicit task to create a new party of adventurers to save the world, but most of the party is already gathered at the start of the series since they carry over from the previous series. There is a ton of map exploration, but it's within the territory of the bad guys from the previous series. Rather than local meddling and acquiring new characters, the story is therefore chasing Zandramas (the big bad of the series) and books of prophecy.

This could still be an effective plot trigger but for another decision of Eddings that becomes obvious in Demon Lord of Karanda (the third book): the second continent of this world, unlike the Kingdoms of Hats world-building of the Belgariad, is mostly uniform. There are large cities, tons of commercial activity, and a fairly effective and well-run empire, with only a little local variation. In some ways it's a welcome break from Eddings's previous characterization by stereotype, but there isn't much in the way of local happenings for the characters to get entangled in.

Even more oddly, this continental empire, which the previous series set up as the mysterious and evil adversaries of the west akin to Sauron's domain in Lord of the Rings, is not mysterious to some of the party at all. Silk, the Drasnian spy who is a major character in both series, has apparently been running a vast trading empire in Mallorea. Not only has he been there before, he has houses and factors and local employees in every major city and keeps being distracted from the plot by his cutthroat capitalist business shenanigans. It's as if the characters ventured into the heart of the evil empire and found themselves in the entirely normal city next door, complete with restaurant recommendations from one of their traveling companions.

I think this is an intentional subversion of the normal fantasy plot by Eddings, and I kind of like it. We have met the evil empire, and they're more normal than most of the good guys, and both unaware and entirely uninterested in being the evil empire. But in terms of dramatic plot structure, it is such an odd choice. Combined with the heroes being so absurdly powerful that they have no reason to take most dangers seriously (and indeed do not), it makes this book remarkably anticlimactic and weirdly lacking in drama.

And yet I kind of enjoyed reading it? It's oddly quiet and comfortable reading. Nothing bad happens, nor seems very likely to happen. The primary plot tension is Belgarath trying to figure out the plot of the series by tracking down prophecies in which the plot is written down with all of the dramatic tension of an irritated rare book collector. In the middle of the plot, the characters take a detour to investigate an alchemist who is apparently immortal, featuring a university on Melcena that could have come straight out of a Discworld novel, because investigating people who spontaneously discover magic is of arguably equal importance to saving the world. Given how much the plot is both on rails and clearly willing to wait for the protagonists to catch up, it's hard to argue with them. It felt like a side quest in a video game.

I continue to find the way that Eddings uses prophecy in this series to be highly amusing, although there aren't nearly enough moments of the prophecy giving Garion stage direction. The basic concept of two competing prophecies that are active characters in the world attempting to create their own sequence of events is one that would support a better series than this one. It's a shame that Zandramas, the main villain, is rather uninteresting despite being female in a highly sexist society, highly competent, a different type of shapeshifter (I won't say more to avoid spoilers for earlier books), and the anchor of the other prophecy. It's good material, but Eddings uses it very poorly, on top of making the weird decision to have her talk like an extra in a Shakespeare play.

This book was astonishingly pointless. I think the only significant plot advancement besides map movement is picking up a new party member (who was rather predictable), and the plot is so completely on rails that the characters are commenting about the brand of railroad ties that Eddings used. Ce'Nedra continues to be spectacularly irritating. It's not, by any stretch of the imagination, a good book, and yet for some reason I enjoyed it more than the other books of the series so far. Chalk one up for brain candy when one is in the right mood, I guess.

Followed by The Seeress of Kell, the epic (?) conclusion.

Rating: 6 out of 10

27 April, 2022 04:30AM

Russell Coker

PIN for Login

Windows 10 added a new “PIN” login method, an optional alternative to an Internet-based password through Microsoft or a Domain password through Active Directory. Here is a web page explaining some of the technology (don’t watch the YouTube video) [1]. There are three issues here: whether a PIN is any good in concept, whether the specifics of how it works are any good, and whether we can copy any useful ideas for Linux.

Is a PIN Any Good?

A PIN in concept is a shorter password. I think that less secure methods of screen unlocking (fingerprint, face unlock, and a PIN) can be reasonably used in less hostile environments. For example if you go to the bathroom or to get a drink in a relatively secure environment like a typical home or office you don’t need to enter a long password afterwards. Having a short password that works for short time periods of screen locking and a long password for longer times could be a viable option.

It could also be an option to allow short passwords when the device is in a certain area (determined by GPS or Wifi connection). Android devices have in the past had options to disable passwords when at home.

Is the Windows 10 PIN Any Good?

The Windows 10 PIN is based on TPM security, which can provide real benefits, but this is more of a failure of Windows local passwords in not using the TPM than a benefit of the PIN. When you log in to a Windows 10 system you will be given a choice of PIN or the configured password (local password or AD password).

As a general rule providing a user a choice of ways to login is bad for security as an attacker can use whichever option is least secure.

The configuration options for Windows 10 allow either group policy in AD or the registry to determine whether PIN login is allowed, but offer no control over when the PIN can be used, which seems like a major limitation to me.

The claim that the PIN is more secure than a password would only make sense if it was a viable option to disable the local password or AD domain password and only use the PIN. That’s unreasonably difficult for home users and usually impossible for people on machines with corporate management.

Ideas For Linux

I think it would be good to have separate options for short term and long term screen locks. This could be implemented by having a screen locking program use two different PAM configurations for unlocking after short term and long term lock periods.
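As a sketch of that idea, the locker could select between two PAM service files depending on how long the screen has been locked. The service names and the PIN module below are invented for illustration (no pam_pin.so ships with Linux-PAM); a real implementation would need an actual module for the short-term method:

```
# /etc/pam.d/screenlock-short (hypothetical): used after a short lock,
# accepts a quick unlock method such as a PIN
auth    required    pam_pin.so

# /etc/pam.d/screenlock-long (hypothetical): used after a long lock,
# falls back to the full system password stack
auth    include     common-auth
```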

Having local passwords based on the TPM might be useful. But if you have the root filesystem encrypted via the TPM using systemd-cryptoenroll it probably doesn’t gain you a lot. One benefit of the TPM is limiting the number of incorrect attempts at guessing the password in hardware, the default is allowing 32 wrong attempts and then one every 10 minutes. Trying to do that in software would allow 32 guesses and then a hardware reset which could average at something like 32 guesses per minute instead of 32 guesses per 320 minutes. Maybe something like fail2ban could help with this (a similar algorithm but for password authentication guesses instead of network access).
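The arithmetic behind that comparison, with my assumption of roughly one minute per hardware reset spelled out:

```shell
# TPM throttling: after the first 32 attempts, one guess per 10 minutes,
# so a batch of 32 further guesses takes 32 * 10 = 320 minutes.
echo "$(( 32 * 10 )) minutes per 32 TPM-throttled guesses"
# Software-only throttling bypassed by a ~1 minute hardware reset
# (assumed): 32 guesses per reset, i.e. roughly 320 times faster.
echo "$(( 320 / 1 ))x speedup for the reset attack"
```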

Having a local login method to use when there is no Internet access and network authentication can’t work could be useful. But if the local login method is easier, an attacker could disrupt Internet access to force the less secure login method.

Is there a good federated authentication system for Linux? Something to provide comparable functionality to AD but with distributed operation as a possibility?

27 April, 2022 03:18AM by etbe

April 26, 2022

Tim Retout

Exploring StackRox

At the end of March, the source code to StackRox was released, following the 2021 acquisition by Red Hat. StackRox is a Kubernetes security tool which is now badged as Red Hat Advanced Cluster Security (RHACS), offering features such as vulnerability management, validating cluster configurations against CIS benchmarks, and some runtime behaviour analysis. In fact, it’s such a diverse range of features that I have trouble getting my head round it from the product page or even the documentation.

Source code is available via the StackRox organisation on GitHub, and the most obviously interesting repositories seem to be:

  • stackrox/stackrox, containing the main application, written in Go
  • stackrox/scanner, the vulnerability scanner, also in Go. From a first glance at the go.mod file, it does not seem to share much code with Clair, which is interesting.
  • stackrox/collector, the runtime analysis component, in C++ but also with hooks into the kernel.

My initial curiosity has been around the ‘collector’, to better understand what runtime behaviour the tool can actually pick up. I was intrigued to find that the actual kernel component is a patched version of Falco’s kernel module/eBPF probes; a few features are disabled compared to Falco, e.g. page faults and signal events.

There’s a list of supported syscalls in driver/syscall_table.c, which seems to have drifted slightly or be slightly behind the upstream Falco version? In particular I note the absence of io_uring, but given RHACS is mainly deployed on Linux 4.18 at the moment (RHEL 8) this is probably a non-issue. (But relevant if anyone were to run it on newer kernels.)

That’s as far as I’ve got for now. Red Hat are making great efforts to reach out to the community; there’s a Slack channel, and office hours recordings, and a community hub to explore further. It’s great to see new free software projects created through acquisition in this way - I’m not sure I remember seeing a comparable example.

26 April, 2022 08:07PM

hackergotchi for Steve Kemp

Steve Kemp

Porting a game from CP/M to the ZX Spectrum 48k

Back in April 2021 I introduced a simple text-based adventure game, The Lighthouse of Doom, which I'd written in Z80 assembly language for CP/M systems.

As it was recently the 40th Anniversary of the ZX Spectrum 48k, the first computer I had, and the reason I got into programming in the first place, it crossed my mind that it might be possible to port my game from CP/M to the ZX Spectrum.

To recap, my game is a simple text-based adventure game which you can complete in fifteen minutes or less, with a bunch of Paw Patrol easter eggs.

  • You enter simple commands such as "up", "down", "take rug", and so on.
  • You receive text-based replies "You can't see a telephone to use here!".

My code is largely table-based, having structures that cover objects, locations, and similar state-things. Most of the code involves working with those objects, with only a few small platform-specific routines being necessary:

  • Clearing the screen.
  • Pausing for "a short while".
  • Reading a line of input from the user.
  • Sending a $-terminated string to the console.
  • etc.

My feeling was that I could replace the use of those CP/M functions with something custom, and I'd have done 99% of the work. Of course, the devil is always in the details.

Let's start. To begin with, I'm lucky in that I'm using the pasmo assembler, which is capable of outputting .TAP files that can be loaded into ZX Spectrum emulators.

I'm not going to walk through all the code here, because that is available within the project repository, but here's a very brief getting-started guide which demonstrates writing some code on a Linux host and generating a TAP file which can be loaded into your favourite emulator. As I needed similar routines, I started by working out how to read keyboard input, clear the screen, and output messages, which is what the following sample demonstrates.

First of all you'll need to install the dependencies, specifically the assembler and an emulator to run the thing:

# apt install pasmo spectemu-x11

Now we'll create a simple assembly-language file, to test things out - save the following as hello.z80:

    ; Code starts here
    org 32768

    ; clear the screen
    call cls

    ; output some text
    ld   de, instructions                  ; DE points to the text string
    ld   bc, instructions_end-instructions ; BC contains the length
    call 8252

    ; wait for a key
    ld hl,0x5c08        ; LASTK
    ld a,255
    ld (hl),a
wkey:
    cp (hl)             ; wait for the value to change
    jr z, wkey

    ; get the key and save it
    ld a,(HL)
    push af

    ; clear the screen
    call cls

    ; show a second message
    ld de, you_pressed
    ld bc, you_pressed_end-you_pressed
    call 8252

    ;; Output the ASCII character in A
    ld a,2
    call 0x1601
    pop af
    call 0x0010

    ; loop forever.  simple demo is simple
endless:
    jr endless

cls:
    ld a,2
    call 0x1601  ; ROM_OPEN_CHANNEL
    call 0x0DAF  ; ROM_CLS
    ret

instructions:
    defb 'Please press a key to continue!'
instructions_end:

you_pressed:
    defb 'You pressed:'
you_pressed_end:

end 32768

Now you can assemble that into a TAP file like so:

$ pasmo --tapbas hello.z80 hello.tap

The final step is to load it in the emulator:

$ xspect -quick-load -load-immed -tap hello.tap

The reason I specifically chose that emulator was because it allows easily loading of a TAP file, without waiting for the tape to play, and without the use of any menus. (If you can tell me how to make FUSE auto-start like that, I'd love to hear!)

I wrote a small number of "CP/M emulation functions" allowing me to clear the screen, pause, prompt for input, and output text, all implemented via the primitives available within the standard ZX Spectrum ROM. Then I reworked the game a little to cope with the different screen resolution (though only minimally; some of the text still breaks lines in unfortunate spots).

The end result is reasonably playable, even if it isn't quite as nice as the CP/M version (largely because of the unfortunate word-wrapping, and smaller console-area). So now my repository contains a .TAP file which can be loaded into your emulator of choice, available from the releases list.

Here's a brief teaser of what you can expect:

Outstanding bugs? Well, the line input is a bit horrid. And unfortunately the game was written for CP/M accessed over a terminal, so I'd assumed a "standard" 80x25 resolution, which means that line/word-wrapping is broken in places.
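The Spectrum's text area is only 32 columns wide, which is the root of the wrapping trouble. As a host-side sketch, the message strings could be pre-wrapped with standard tools before being assembled into the game's tables (the sample message is one of the replies quoted earlier):

```shell
# Pre-wrap a game message to the Spectrum's 32-column screen so no
# table entry ever breaks a line mid-word.
echo "You can't see a telephone to use here!" | fold -s -w 32
```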

That said it didn't take me too long to make the port, and it was kinda fun.

26 April, 2022 04:00PM

Reproducible Builds

Supporter spotlight: Google Open Source Security Team (GOSST)

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do.

This is the fourth instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org.

We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation as well as a recent one about ARDC. However, today, we’ll be talking with Meder Kydyraliev of the Google Open Source Security Team (GOSST).


Chris Lamb: Hi Meder, thanks for taking the time to talk to us today. So, for someone who has not heard of the Google Open Source Security Team (GOSST) before, could you tell us what your team is about?

Meder: Of course. The Google Open Source Security Team (or ‘GOSST’) was created in 2020 to work with the open source community at large, with the goal of making the open source software that everyone relies on more secure.


Chris: What kinds of activities is the GOSST involved in?

Meder: The range of initiatives that the team is involved in recognizes the diversity of the ecosystem and unique challenges that projects face on their security journey. For example, our sponsorship of sos.dev ensures that developers are rewarded for security improvements to open source projects, whilst the long term work on improving Linux kernel security tackles specific kernel-related vulnerability classes.

Many of the projects GOSST is involved with aim to make it easier for developers to improve security through automated assessment (Scorecards and Allstar) and vulnerability discovery tools (OSS-Fuzz, ClusterFuzzLite, FuzzIntrospector), in addition to contributing to infrastructure to make adopting certain ‘best practices’ easier. Two great examples of best practice efforts are Sigstore for artifact signing and OSV for automated vulnerability management.


Chris: The usage of open source software has exploded in the last decade, but supply-chain hygiene and best practices has seemingly not kept up. How does GOSST see this issue and what approaches is it taking to ensure that past and future systems are resilient?

Meder: Every open source ecosystem is a complex environment and that awareness informs our approaches in this space. There are, of course, no ‘silver bullets’, and long-lasting, material supply-chain improvements require infrastructure and tools that will make the lives of open source developers easier, all whilst improving the state of the wider software supply chain.

As part of a broader effort, we created the Supply-chain Levels for Software Artifacts framework that has been used internally at Google to protect production workloads. This framework describes the best practices for source code and binary artifact integrity, and we are engaging with the community on its refinement and adoption. Here, package managers (such as PyPI, Maven Central, Debian, etc.) are an essential link in the software supply chain due to their near-universal adoption; users do not download and compile their own software anymore. GOSST is starting to work with package managers to explore ways to collaborate together on improving the state of the supply chain and helping package maintainers and application developers do better… all with the understanding that many open source projects are developed in spare time as a hobby! Solutions like this, which are the result of collaboration between GOSST and GitHub, are very encouraging as they demonstrate a way to materially strengthen software supply chain security with readily available tools, while also improving development workflows.

For GOSST, the problem of supply chain security also covers vulnerability management and solutions to make it easier for everyone to discover known vulnerabilities in open source packages in a scalable and automated way. This has been difficult in the past due to lack of consistently high-quality data in an easily-consumable format. To address this, we’re working on infrastructure (OSV.dev) to make vulnerability data more easily accessible as well as a widely adopted and automation friendly data format.


Chris: How does the Reproducible Builds effort help GOSST achieve its goals?

Meder: Build reproducibility has a lot of attributes that are attractive as part of generally good ‘build hygiene’. As an example, hermeticity, one of the requirements to meet SLSA level 4, makes it much easier to reason about the dependencies of a piece of software. This is an enormous benefit during vulnerability or supply chain incident response.

On a higher level, however, we always think about reproducibility from the viewpoint of a user and the threats that reproducibility protects them from. Here, a lot of progress has been made, of course, but a lot of work remains to make reproducibility part of everyone’s software consumption practices.


Chris: So if someone wanted to know more about GOSST or follow the team’s work, where might they go to look?

Meder: We post regular updates on Google’s Security Blog and on the Linux hardening mailing list. We also welcome community participation in the projects we work on! See any of the projects linked above or OpenSSF’s GitHub projects page for a list of efforts you can contribute to directly if you want to get involved in strengthening the open source ecosystem.


Chris: Thanks for taking the time to talk to us today.

Meder: No problem. :)




For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

26 April, 2022 10:00AM

April 23, 2022

hackergotchi for Bálint Réczey

Bálint Réczey

Firefox on Ubuntu 22.04 from .deb (not from snap)

It is now widely known that Ubuntu 22.04 LTS (Jammy Jellyfish) ships Firefox as a snap, but some people (like me) may prefer installing it from .deb packages to retain control over upgrades or to keep extensions working.

Luckily there is still a PPA serving firefox (and thunderbird) debs at https://launchpad.net/~mozillateam/+archive/ubuntu/ppa maintained by the Mozilla Team. (Thank you!)

You can block the Ubuntu archive’s version that just pulls in the snap by pinning it:

$ cat /etc/apt/preferences.d/firefox-no-snap 
Package: firefox*
Pin: release o=Ubuntu*
Pin-Priority: -1
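Before dropping the stanza into /etc/apt/preferences.d/, it can be sanity-checked; this is just an illustrative sketch that writes it to a temporary file and greps for the three fields an apt_preferences stanza needs:

```shell
# Write the pin stanza to a temporary file first, then confirm the
# three required fields are present before copying it to
# /etc/apt/preferences.d/firefox-no-snap.
pin_file=$(mktemp)
cat > "$pin_file" <<'EOF'
Package: firefox*
Pin: release o=Ubuntu*
Pin-Priority: -1
EOF
grep -q '^Package: firefox\*$' "$pin_file" \
  && grep -q '^Pin: release o=Ubuntu\*$' "$pin_file" \
  && grep -q '^Pin-Priority: -1$' "$pin_file" \
  && echo "pin stanza looks well-formed"
```

Once the file is in place, apt-cache policy firefox should show the Ubuntu archive's snap-transition package held at priority -1.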

Now you can remove the transitional package and the Firefox snap itself:

sudo apt purge firefox
sudo snap remove firefox
sudo add-apt-repository ppa:mozillateam/ppa
sudo apt update
sudo apt install firefox

Since the package comes from a PPA, unattended-upgrades will not upgrade it automatically unless you enable this origin:

echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox

Happy browsing!

Update: I have found a few other, similar guides at https://fostips.com/ubuntu-21-10-two-firefox-remove-snap and https://ubuntuhandbook.org/index.php/2022/04/install-firefox-deb-ubuntu-22-04 and I’ve updated the pinning configuration based on them.

23 April, 2022 02:38PM by Réczey Bálint

Russell Coker

Got Covid

I’ve currently got Covid; I believe I caught it on the 11th of April (my first flight since the pandemic started), with a runny nose on the 13th and a positive RAT on the evening of the 14th. I got an official PCR test on the 16th with a positive result returned on the 17th. I think I didn’t infect anyone else (yay)! Now I seem mostly OK but still lack energy; sometimes I suddenly feel tired after 20 minutes of computer work.

The progression of the disease was very different to previous cold/flu diseases that I have had. What I expect is to start with a cough or runny nose, escalate with more of that, have a day or two of utter misery with congestion, joint pain, headache, etc, then have it suddenly decrease overnight. For Covid I had a runny nose for a couple of days which went away then I got congestion in my throat with serious coughing such that I became unable to speak. Then the coughing went away and I had a really bad headache for a day with almost no other symptoms. Then the headache went away and I was coughing a bit the next day. The symptoms seemed to be moving around my body.

I got a new job and they wanted me to fly to the head office to meet the team, I apparently got it on the plane a day before starting work. I’ve discussed this with a manager and stated my plan to drive instead of fly in future. It’s only a 7 hour drive and it’s not worth risking the disease to save 3-4 hours travel time, or even the 5 hours travel I’d have saved if the airports were working normally (apparently a lot of airport staff are off sick so there’s delays). Given the flight delays and the fact that I was advised to arrive extra early at the airport I ended up taking almost 7 hours for the entire trip!

7 hours driving is a bit of effort, but sitting in an airport waiting for a delayed flight while surrounded by diseased people isn’t fun either.

23 April, 2022 09:52AM by etbe

Andrej Shadura

To England by train (part 2)

My attempt to travel to the UK by train last year didn’t go quite as well as I expected. As I mentioned in that blog post, the NightJet to Brussels was cancelled, forcing me to fly instead. This disappointed me so much that I actually unpublished the blog post minutes after it was originally put online. The timing was nearly perfect: I type make publish and I get an email from ÖBB saying they don’t know if my train is going to run. Of course it didn’t, as Deutsche Bahn workers went ahead with their strike. The blog post sat in the drafts for more than half a year until yesterday, when I finally updated and published it.

The reason I have finally published it is that I’m going to the UK by train once again. Now, unless railways decide to hold a strike again, fully by train both ways. Very expensive, especially compared to the price of Ryanair flights to my destination. Unfortunately, even though Eurostar added more daily services, they’re still not cheap, especially on a shorter notice. This seems to apply to the ÖBB’s NightJet even more: I tried many routes between Vienna and London, and the cheapest still seemed to be the connection through Brussels.

While researching the prices of the tickets, it seems all booking systems decided to stop co-operating. The Trainline refused to let me look up any trains at all, even with all tracking and advertisement allowed, SNCF kept showing me overly generic errors (Sorry, an error has occurred.), while the GWR booking system kept crashing with a 500 Internal Server Error for about two hours.

Error messages at the websites of Trainline, GWR and SNCF

Eventually, having spent a lot of time and money, I’ve booked my trains to, within and back from England. This time, Cambridge is among the destinations.

Map: the complete train route from Bratislava to Cambridge through Brussels and London
Date  Station            Arrival  Departure  Train
26.4  Bratislava hl.st.     -     18:37      REX 2529
      Wien Hbf           19:44    20:13      NJ 40490
27.4  Bruxelles-Midi      9:54    15:56      EST 9145
      London St Pancras  17:01    17:43      ThamesLink
      Cambridge          18:43

I’m not sure about it yet, but I may end up taking an earlier train from Bratislava just to ensure there’s enough time for the change in Vienna; similarly, I’m not sure how much time I will be spending at St Pancras, so I may take one of the later trains.

P.S. The maps in this and other posts were created using uMap; the map data come from OpenStreetMap. The train route visualisation was generated with help of signal.eu.org.

23 April, 2022 06:45AM by Andrej Shadura

April 22, 2022

hackergotchi for Jonathan Dowland

Jonathan Dowland

3D-printed replacement battery cover

Print next to the original

new cover in the light

My first self-designed functional 3D print is a replacement battery cover for a LED fake-candle that my daughter uses as a night-light.

I measured the original cover (we have three of the candles) using a newly-purchased micrometer and tried to re-create it in OpenSCAD. I skipped the screw-hole that is for securing the cover as we don't use that.

I sliced it using Cura and printed it using PETG on our office 3D printer, an Ender 3. Print time was about an hour.

To my amazement, my first take fits snugly!

I've uploaded the OpenSCAD source here: batterycover.scad. It's covered under the terms of Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).

22 April, 2022 02:57PM

Andrej Shadura

To England by train (part 1)

This post was written in August 2021. Just as I was going to publish it, I received an email from ÖBB stating that due to a railway strike in Germany my night train would be cancelled. Since the rest of the trip has already been booked well in advance, I had to take a plane to Charleroi and a bus to Brussels to catch my Eurostar. Ultimately, I ended up publishing it in April 2022, just as I’m about to leave for a fully train-powered trip to the UK once again.

Before the pandemic started, I planned to use the last months of my then-expiring UK visa and go to England by train. I had already completed two long train journeys by that time, to Brussels and to Belarus and Ukraine, but this would be something quite different, as I wanted to have multiple stops on my way, use night trains where it made sense, and go through the Channel Tunnel.

The Channel Tunnel has fascinated me since my childhood; I first read about it in the Soviet Science and Life magazine (Наука и жизнь) when I was seven. I’ve never had the chance to use it though, since to board any train going through it I’d first need to get to France, Belgium or the Netherlands, making it significantly more expensive than the cheap €30 Ryanair flights to Stansted.

As the coronavirus spread across the world, all of my travel plans along with plans for a sabbatical had to be cancelled. During 2020, I only managed to go on two weekend trips to Prague and Budapest, followed by a two-week holiday on Crete (we returned just a couple of weeks before the infection numbers rose and lockdowns started). I do realise that a lot of people couldn’t even have this much because the situation in their countries was much worse — we were lucky to have had at least some travel.

Fast forward to August 2021, I’m fully vaccinated, I — once again — have a UK visa for five years, and the UK finally recognises the EU vaccination passports — yay! I can finally go to Devon to see my mother and sister again. By train, of course.

Compared to my original plan, this journey will be different: about the same or even more expensive than I originally planned, but shorter and with fewer stops on the way. My original plan had been to take multiple trains from Bratislava to France or Belgium and complete this segment of the trip in about three days, enjoying my stay in a couple of cities on the way. Instead, I’m taking a direct NightJet from Vienna to Brussels, not stopping anywhere on the way.

Map: train route from Bratislava to Brussels

Since I was booking my trip just two weeks ahead, the price of the ticket is not what I hoped for, but much higher: €109 for the ticket itself and €60 for the berth (advance bookings could be about twice as cheap).

Next, to London! Eurostar is still on a very much reduced schedule, running one train only from Amsterdam through Brussels and Lille to London each day. This means, of course, higher ticket prices (I paid about €100 for the ticket) and longer waiting time in Brussels — my sleeper arrives about 10 am, but the Eurostar train is scheduled to depart at 3 pm.

Map: train route from Brussels to London

The train makes a stop in Lille, which I initially suspected to be risky, as at the time I booked my tickets France was on the amber plus list for the UK, requiring a quarantine upon arrival. However, Eurostar announced that they would assign travellers from Lille to a different carriage to avoid other passengers having to go into quarantine, and recently France was taken off the amber plus list anyway.

The train fare system in the UK is something I don’t quite understand: sometimes split tickets are cheaper, sometimes more expensive; prices for the same service at different times can be vastly different; and off-peak tickets don’t say what exactly off-peak means (very few people I asked in the UK were able to tell me when exactly off-peak hours are). Curiously, transfers between train stations using London Underground services can be included in railway tickets, but some last-mile connections like Exeter to Honiton cannot (this used to be possible). Both GWR.com and TrainLine refused to sell me a single ticket from London to Honiton through Exeter, insisting I split the ticket at Exeter St Davids or take the slower South Western train to Honiton via Salisbury and Yeovil. I ended up buying a £57 ticket from Paddington to Exeter St Davids with the first segment being the London Underground from St Pancras, and a separate £7.70 ticket to Honiton.

Map: change of trains in London (arrival at St Pancras, the Underground, departure from Paddington)
Map: change of trains in Exeter (arrival at Exeter St Davids, change to a South Western train, arrival at Honiton)

Date  Station               Arrival  Departure  Train
12.8  Bratislava-Petržalka     -     18:15      REX 7756
      Wien Hbf              19:15    19:53      NJ 50490
13.8  Bruxelles-Midi         9:55    15:06      EST 9145
      London St Pancras     16:03    16:34      TfL
      London Paddington     16:49    17:04      GWR 59231
      Exeter St Davids      19:19    19:25      SWR 52706
      Honiton               19:54

Unfortunately, due to the price of the tickets, I’m taking a £15 Ryanair flight back 🙂

Update after the journey

Since I flew to Charleroi instead of comfortably sleeping in a night train, I had to put up with the inconveniences of airports, including cumbersome connections to nearby cities. The only reasonable way of getting from Charleroi to Brussels is an overcrowded bus which takes almost an hour. I used to take this bus when I tried to save money on my way to FOSDEM, and I must admit it’s not something I missed.

Boarding the Eurostar train went fine; my vaccination passport and Covid test weren’t really checked, just glanced at. The waiting room was a bit of a disappointment, with bars closed and vending machines broken. Since it was underground, I couldn’t even see the trains until the very last moment when we were finally allowed on the platform. The train itself, while comfortable, disappointed me with the bistro carriage: standing only, instant coffee, poor selection of food and drinks. I’m glad I bought some food at Carrefour at the Midi station!

When I arrived in Exeter, I soon found out why the system refused to sell me a through ticket: 6 minutes is not enough to change trains at Exeter St Davids! Or it might have been, had I taken the right footbridge – but I took the one which led into a very talkative (and slow!) lift. I ended up running to the train just as it closed its doors and departed, leaving me in Exeter for an hour until the next one. I used the chance and walked to Exeter Central, and had a pint in a conveniently located pub around the corner.

P.S. The maps in this and other posts were created using uMap; the map data come from OpenStreetMap. The train route visualisation was generated with help of the Raildar.fr OSRM instance.

22 April, 2022 01:25PM by Andrej Shadura

Russell Coker

Joplin Notes

In response to my post about Android phones without Google Play [1] I received an email recommending Joplin for notes on Android [2].

Joplin supports storing notes via a number of protocols including Nextcloud and WebDAV. I set up WebDAV because it’s easiest; here are Digital Ocean’s instructions for WebDAV on Apache [3]. That basically works. One problem for my use case is that the Joplin client doesn’t support accounts on multiple servers, and the only released way of sharing notes between accounts is using the paid Joplin Cloud service.
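For reference, the Apache side of such a setup boils down to loading mod_dav/mod_dav_fs and marking a directory as a DAV share. This is only a hedged sketch: the alias, directory path, realm name and password file are illustrative choices, not anything Joplin itself requires.

```apache
# Enable the modules first: a2enmod dav dav_fs && systemctl restart apache2
Alias /webdav /var/www/webdav
<Directory /var/www/webdav>
    Dav On
    AuthType Basic
    AuthName "webdav"
    AuthUserFile /etc/apache2/webdav.password
    Require valid-user
</Directory>
```

In Joplin, the synchronisation target is then the resulting WebDAV URL plus those credentials.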

There is a Joplin Server in beta which allows sharing notes, but it is designed to run in Docker and is written in TypeScript, so it was too much pain to set up. One mitigating factor is that there are “Notebooks”, which are collections of notes. So if multiple people who trust each other share an account they can have Notebooks for personal notes and a Notebook for shared notes.

There is also a Snap install of the client for Debian [4]. Snap isn’t my favourite way of doing things but packaging JavaScript programs will probably be painful so I’ll do it if I continue using Joplin.

22 April, 2022 09:53AM by etbe

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Systemd Service Hang

Finally, TIL what can be the reason for systemd services hanging indefinitely. The internet is flooded with numerous reports on this topic but no clear answers. So no more uselessly suggested workarounds like systemctl daemon-reload and systemctl daemon-reexec for this scenario.

The scene would be something along the lines of:

rrs         6467  0.0  0.0  23088 15852 pts/1    Ss   12:53   0:00  |   |   \_ /bin/bash
rrs        11512  0.0  0.0  14876  4608 pts/1    S+   13:18   0:00  |   |   |   \_ systemctl restart snapper-timeline.timer
rrs        11513  0.0  0.0  14984  3076 pts/1    S+   13:18   0:00  |   |   |       \_ /bin/systemd-tty-ask-password-agent --watch
rrs        11514  0.0  0.0 234756  6752 pts/1    Sl+  13:18   0:00  |   |   |       \_ /usr/bin/pkttyagent --notify-fd 5 --fallback

The snapper-timeline service is important to me and it not running for months is a complete failure. Disappointingly, commands like systemctl --failed do not report this oddity. The overall system status is reported as fine, which is completely incorrect.

Thankfully, a kind soul’s comment gave the hint. The problem is that you could have certain services stuck in Activating status, which quietly block all other services. So much for the unnecessary fun.

Looking further, in my case, it was:

rrs@priyasi:~$ systemctl list-jobs 
JOB  UNIT                           TYPE  STATE  
81   timers.target                  start waiting
85   man-db.timer                   start waiting
88   fstrim.timer                   start waiting
3832 snapper-timeline.service       start waiting
83   snapper-timeline.timer         start waiting
39   systemd-time-wait-sync.service start running
87   logrotate.timer                start waiting
84   debspawn-clear-caches.timer    start waiting
89   plocate-updatedb.timer         start waiting
91   dpkg-db-backup.timer           start waiting
93   e2scrub_all.timer              start waiting
40   time-sync.target               start waiting
86   apt-listbugs.timer             start waiting

13 jobs listed.
13:12 ♒ ॐ ♅ ♄ ⛢     ☺ 😄    

That was it. I knew the systemd-timesyncd service, in the past, had given me enough headaches. And so was it this time, just quietly doing it all again.

rrs@priyasi:~$ systemctl status systemd-time-wait-sync.service
● systemd-time-wait-sync.service - Wait Until Kernel Time Synchronized
     Loaded: loaded (/lib/systemd/system/systemd-time-wait-sync.service; enabled; vendor preset>
     Active: activating (start) since Fri 2022-04-22 13:14:25 IST; 1min 38s ago
       Docs: man:systemd-time-wait-sync.service(8)
   Main PID: 11090 (systemd-time-wa)
      Tasks: 1 (limit: 37051)
     Memory: 836.0K
        CPU: 7ms
     CGroup: /system.slice/systemd-time-wait-sync.service
             └─11090 /lib/systemd/systemd-time-wait-sync

Apr 22 13:14:25 priyasi systemd[1]: Starting Wait Until Kernel Time Synchronized...
Apr 22 13:14:25 priyasi systemd-time-wait-sync[11090]: adjtime state 5 status 40 time Fri 2022->
13:16 ♒ ॐ ♅ ♄ ⛢      ☹ 😟=> 3  

Dear LazyWeb, does anybody know why the systemd-time-wait-sync service would hang indefinitely? I have identical setups on many machines in the same network, and the others don’t exhibit this problem.

rrs@priyasi:~$ systemctl cat systemd-time-wait-sync.service

...snipped...

[Service]
Type=oneshot
ExecStart=/lib/systemd/systemd-time-wait-sync
TimeoutStartSec=infinity
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target

TimeoutStartSec=infinity is definitely not an attribute that should be shipped in any system service. There are use cases for it, but that decision should be left to local admins to make explicitly. Hanging for infinity is not desired behaviour for a system service.
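If a local admin does want to bound it, a systemd drop-in is the standard way to override the shipped value without editing the unit file. A sketch only; the five-minute cap is an arbitrary example, not an upstream recommendation:

```ini
# /etc/systemd/system/systemd-time-wait-sync.service.d/override.conf
# (systemctl edit systemd-time-wait-sync.service creates this path for you)
[Service]
TimeoutStartSec=5min
```

After a systemctl daemon-reload, the start job will fail after five minutes instead of blocking the job queue forever.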

In figuring all this out, today I learnt the handy systemctl list-jobs command, which will give the list of active running/blocked/waiting jobs.
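The same check can be scripted by filtering the list-jobs output down to jobs actually in the running state (the ones everything else is queued behind). To keep this sketch self-contained it parses lines captured from the output above; on a live system you would pipe systemctl list-jobs --no-legend into the awk instead:

```shell
# Jobs in the "running" state are the blockers; "waiting" jobs are
# merely queued behind them. Sample lines taken from the
# `systemctl list-jobs` output above.
jobs='81   timers.target                  start waiting
3832 snapper-timeline.service       start waiting
39   systemd-time-wait-sync.service start running
87   logrotate.timer                start waiting'
printf '%s\n' "$jobs" | awk '$4 == "running" { print $2 }'
# -> systemd-time-wait-sync.service
```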

22 April, 2022 07:47AM by Ritesh Raj Sarraf (rrs@researchut.com)

April 21, 2022

hackergotchi for Andy Simpkins

Andy Simpkins

Firmware and Debian

There has been a flurry of activity on the Debian mailing lists ever since Steve McIntyre raised the issue of including non-free firmware as part of official Debian installation images.

Firstly I should point out that I am in complete agreement with Steve’s proposal to include non-free firmware as part of an installation image. Likewise I think that we should have a separate archive section for firmware, because without doing so it will soon become almost impossible to install onto any new hardware. However, as always, the issue is more nuanced than a first glance would suggest.

Let’s start by defining: what is firmware?

Firmware is any software that runs outside the orchestration of the operating system. Typically firmware will be executed on processor(s) separate from the processor(s) running the OS, but this does not need to be the case.

As “Debian” we are content that our systems can operate using fully free and open source software and firmware. We can install our OS without needing any non-free firmware.

This is an illusion!

Each and every PC platform contains non-free firmware…

It may be possible to run free firmware on some graphics controllers, Wi-Fi chipsets, or Ethernet cards and we can (and perhaps should) choose to spend our money on systems where this is the case. When installing a new system we might still be forced to ‘hold our nose’ and install with non-free firmware on the peripheral before we are able to upgrade it to FLOSS firmware later – if such firmware exists and flashing it is even possible. However after the installation we are running a full FLOSS system in terms of software and firmware.

We all (almost without exception) are running proprietary firmware whether we like it or not.

Even after carefully selecting graphics and network hardware with FLOSS firmware options we still haven’t escaped from non-free firmware. Other peripherals contain firmware too: every keyboard and every disk (SSDs and spinning rust alike). Even the USB memory stick that you use to hold the Debian installation image contains a microcontroller, and hence firmware that runs on it.

  1. Much of this firmware can not even be updated.
  2. Some can be updated, but the firmware is stored in FLASH ROM and the hardware vendor has defeated all programming methods (possibly circumventable with a hardware mod).
  3. Some of it can be updated but requires external device programmers (and often the programming connections are a series of test points dotted around the board and not on a ‘header’ in order to make programming as difficult as possible).
  4. Sometimes the firmware can be updated from within the host operating system (i.e. Debian)
  5. Sometimes, as Steve pointed out in his post, the hardware vendor has enough firmware on a peripheral to perform basic functions – perhaps enough to install the OS, but requires additional firmware to enable specific features (e.g. higher screen resolutions, hardware accelerated functions etc.)
  6. Finally some vendors don’t even bother with any non-volatile storage beyond a basic boot loader and firmware must be loaded before the device can be used in any mode.

What about the motherboard? If we are lucky we might be able to run a FLOSS implementation of the UEFI subsystem (edk2/tianocore for example); indeed the non-AMD64/i386 platforms based around ARM and MIPS architectures are often the most ‘free’ when it comes to firmware.

What about the microcode on the processor? Personally I wasn’t aware that this was updatable ‘firmware’ until the Spectre and Meltdown classes of vulnerabilities arose a few years back.

So back to Debian images including non-free firmware.

This is specifically to address the last two use cases mentioned above, i.e. where firmware needs to be loaded to achieve a minimum functioning of a device, although it could also include motherboard support and microcode as well.

As far as I can tell the proposal exists for several reasons:

#1 Because some ‘freely distributable’ firmware is required for more and more devices in order to install Debian, or because whilst Debian can be installed, a desktop environment cannot be started or does not fully function

#2 Because frankly it is less work to produce, test and maintain fewer installation images – as someone who performs tests on our images, this clearly gets my vote :-)

and perhaps most important of all..

#3 Because our least experienced users and new users will download an official image and give up if things don’t “just work”™

Steve’s proposal option 5 would address these issues and I fully support it.

I would love to see separate repositories for firmware and firmware-non-free. Additionally, to accompany firmware-non-free I would like to have information on what the firmware actually does. Can I run my hardware without it? What function(s) are limited without the firmware? Better yet, is there a FLOSS equivalent that I can load instead? Is this something that we can present in the Debian installer? I would love not to “require” non-free firmware, but if I can’t avoid it, I would love it if DI would enable a user to make an informed choice as to what, if any, firmware is installed.

Should we be requesting (requiring?) this information for any non-free firmware image that we carry in the archive?

Finally let’s consider firmware in the wider general case, not just the case where we need to load firmware from within Debian on each and every boot.

Personally I am annoyed whenever a hardware manufacturer has gone out of their way to prevent firmware updates. Let’s face it: software contains bugs, and we can assume that the software making up a firmware image will as well.

Critical (security) vulnerabilities found in firmware, especially if this runs on the same processor(s) as the OS can impact on the wider system, not just the device itself. This will mean that, without updatable firmware, the hardware itself should be withdrawn from use whilst it would otherwise still function. By preventing firmware updates vendors are forcing early obsolescence in the hardware they sell, perhaps good for their bottom line, but certainly no good for users or the environment.

Here I can practice what I preach. As an Electronic Engineer / Systems architect I have been beating the drum for In System Updatable firmware for ALL programmable devices in a system, be it a simple peripheral or a deeply embedded system. I can honestly say that over the last 20 years (yes I have been banging this particular drum for that long) I have had 100% success in arguing this case commercially. Having device programmers in R&D departments is one thing, but that is additional cost for production, and field service. Needing custom programming headers or even a bed of nails fixture to connect your target device to a programmer is more trouble than it is worth. Finally, the ability to update firmware in the field means that you can launch your product on schedule, make a sale and ship to a customer even if the first thing that you need to do is download an update. Offering that to any project manager will make you very popular indeed.

So what if this firmware is non-free? As long as the firmware resides in non-volatile media without needing the OS to interact with it, we as a project don’t need to carry it in our archives. And we as principled individuals can vote with our feet and wallets by choosing to purchase devices that have free firmware.

But where that isn’t an option, I’ll take updatable but non-free firmware over non-free firmware that can not be updated any day of the week.

Sure, the manufacturer can choose to no longer support the firmware, and it is shocking how soon this happens – often in the consumer market, the manufacturer has withdrawn support for a product before it even reaches the end user (in which case we should boycott that manufacturer in future until they either change their ways or go bust). But again, if firmware can be updated “in system” that would at least allow the possibility of open firmware to arise. Indeed the only commercial case I have seen argued against updatable firmware has been either for DRM, in which case good – let’s get rid of both, or for RF licence compliance, and even then it is tenuous because in this case the manufacturer wants ISP for its own use right up until a device is shipped out the door, typically achieved by blowing one-time-programmable ‘fuse links’.

21 April, 2022 10:55PM by andy

hackergotchi for Bits from Debian

Bits from Debian

Debian Project Leader election 2022, Jonathan Carter re-elected

The voting period and tally of votes for the Debian Project Leader election has just concluded, and the winner is Jonathan Carter, who has been elected for a third time. Congratulations! The new term for the project leader starts on 2022-04-21.

354 of 1,023 Developers voted using the Condorcet method.

More information about the results of the voting is available on the Debian Project Leader Elections 2022 page.

Many thanks to Felix Lechner, Jonathan Carter and Hideki Yamane for their campaigns, and to our Developers for voting.

21 April, 2022 05:30PM by Laura Arjona Reina

April 20, 2022

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Btrfs Subvol Fix

There surely is a need for better tooling on the BTRFS filesystem side.

While migrating my setup from one machine to another, this is an issue I only became aware of today, when my backup tool (btrbk) complained about it. Following the pointers, I found the below snippet in the btrfs-subvolume manual page.

       A snapshot that was created by send/receive will be read-only, with different last change generation, read-only and with set received_uuid which identifies the subvolume on the
       filesystem that produced the stream. The usecase relies on matching data on both sides. Changing the subvolume to read-write after it has been received requires to reset the
       received_uuid. As this is a notable change and could potentially break the incremental send use case, performing it by btrfs property set requires force if that is really desired by
       user.

           Note
           The safety checks have been implemented in 5.14.2, any subvolumes previously received (with a valid received_uuid) and read-write status may exist and could still lead to
           problems with send/receive. You can use btrfs subvolume show to identify them. Flipping the flags to read-only and back to read-write will reset the received_uuid manually. There
           may exist a convenience tool in the future.

Fixing the Received UUID flag meant running the commands below:

rrs@priyasi:.../spool$ sudo btrfs sub show /
WARNING: the subvolume is read-write and has received_uuid set,
         don't use it for incremental send. Please see section
         'SUBVOLUME FLAGS' in manual page btrfs-subvolume for
         further information.
ROOTVOL
        Name:                   ROOTVOL
        UUID:                   122b0de1-e6f2-6845-aba0-6bf766c16526
        Parent UUID:            -
        Received UUID:          34772967-c709-5146-bf20-898f7dbc2c1f
        Creation time:          2021-12-02 19:59:29 +0530
        Subvolume ID:           256
        Generation:             138473
        Gen at creation:        7
        Parent ID:              5
        Top level ID:           5
        Flags:                  -
        Send transid:           35245
        Send time:              2021-12-02 19:59:29 +0530
        Receive transid:        34
        Receive time:           2021-12-02 20:13:11 +0530
        Snapshot(s):
                                ROOTVOL/.snapshots/1/snapshot
                                ROOTVOL/.snapshots/2/snapshot
22:40 ♒ ॐ ♅ ♄ ⛢     ☺ 😄    


rrs@priyasi:.../spool$ sudo btrfs property set / ro true
WARNING: read-write subvolume with received_uuid, this is bad
22:40 ♒ ॐ ♅ ♄ ⛢     ☺ 😄    



rrs@priyasi:.../spool$ sudo btrfs property set -f / ro false
22:40 ♒ ॐ ♅ ♄ ⛢     ☺ 😄    



rrs@priyasi:.../spool$ sudo btrfs sub show /
ROOTVOL
        Name:                   ROOTVOL
        UUID:                   122b0de1-e6f2-6845-aba0-6bf766c16526
        Parent UUID:            -
        Received UUID:          -
        Creation time:          2021-12-02 19:59:29 +0530
        Subvolume ID:           256
        Generation:             138473
        Gen at creation:        7
        Parent ID:              5
        Top level ID:           5
        Flags:                  -
        Send transid:           0
        Send time:              2021-12-02 19:59:29 +0530
        Receive transid:        138480
        Receive time:           2022-04-20 22:40:43 +0530
        Snapshot(s):
                                ROOTVOL/.snapshots/1/snapshot
                                ROOTVOL/.snapshots/2/snapshot
22:40 ♒ ॐ ♅ ♄ ⛢     ☺ 😄    

Hoping there won’t be surprises in the coming months. 🤞

20 April, 2022 05:11PM by Ritesh Raj Sarraf (rrs@researchut.com)

Russell Coker

Android Without Play

A while ago I was given a few reasonably high-end Android phones to give away. I gave two very nice phones to someone who looks after refugees so a couple of refugee families could make video calls to relatives. The third phone is a Huawei Nova 7i [1] which doesn’t have the Google Play Store. The Nova 7i is a ridiculously powerful computer (8G of RAM in a phone!!!) but without the Google Play Store it’s not much use to the average phone user.

It has the “HuaWei App Gallery”, which isn’t as bad as most of the proprietary app stores of small players in the Android world: it has SnapChat, TikTok, Telegram, Alibaba, WeChat, and Grays auction (an app I didn’t even know existed) along with many others. It also links to ApkPure (apparently a 3rd-party app installer that “obtains” APK files for major commercial apps) for Facebook among others. The ApkPure thing might be Huawei outsourcing the violation of Facebook’s terms of service.

For the moment I’ve decided to only use free software on this phone and use my old phone for non-free stuff (Facebook, LinkedIn, etc). The eventual aim is that I can carry only a phone with free software for normal use and carry a second phone if I’m active on LinkedIn or something. My recollection is that when I first got the phone (almost 2 years ago) it didn’t have such a range of apps.

The first thing to install was f-droid [2] as the app repository. F-droid has a repository of thousands of free software Android apps as well as some apps that are slightly less free which are tagged appropriately. You can install the F-Droid app from the web site. As an aside I had to go to settings and enable “force old index format” to get the list of packages, I don’t know why as other phones had worked without it.

Here are the F-Droid apps I installed:

  • Kdeconnect to transfer files to PC. This has some neat features, including using the PC keyboard on Android. One downside is that there’s no convenient way to kill it; I don’t want it hanging around, I want to transfer a file and then close it down to minimise exposure.
  • K9 is an Android app for email that I’ve used for over a decade now. Previously I’ve used it from the Play Store but it’s available in F-droid. I used Kdeconnect to transfer the exported configuration from my old phone to my PC and then from my PC to my new phone.
  • I’m now using SchildiChat for Matrix as a replacement for Google Hangouts (I previously wrote about how Google is killing Hangouts [3]). One advantage of SchildiChat is that it keeps a notification running 24*7 to reduce the incidence of Android killing it. The process of sending private messages with Matrix seems noticeably slower than Hangouts; while Google will inevitably be faster than a federated system (if only because they buy better hardware than I rent), the difference shouldn’t be enough to notice (my Matrix servers might need some work).
  • I used ffupdater to install Firefox. It can also install other browsers that don’t publish APK files. One of the options is “Ungoogled Chromium” which I’m not going to use even though I’ve found Google Chrome to be a great browser, I think I should go all the way in avoiding Google. There’s no description in the app of the differences between the browsers, the ffupdater web page has information about the browsers [4].
  • I use Tusky for Mastodon which is a replacement for Twitter. My Mastodon address is @etbe@mastodon.nzoss.nz. Currently Mastodon needs more users, there are plenty of free servers out there and the New Zealand Open Source Society is just one I have contact with.
  • I have used ConnectBot for ssh connections from Android for over 10 years, previously via the Play Store but it’s also in F-droid. To get the hash of a key from a server in the way ConnectBot displays it run “ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_ed25519_key.pub“.
  • I initially changed keyboard from MS Swiftkey to the Celia keyboard that came with the phone. But its spelling correction was terrible, almost never suggesting words with apostrophes when appropriate, and it had no apparent option to disable adult words. I’m now using OpenBoard, which is a port of the Google Android keyboard and works well.
  • I’ve just installed “primitive ftpd” for file transfer, it supports ftp and sftp protocols and is well written.
  • I’ve installed the mpv video player which plays FullHD video at high quality using hardware decoding. I don’t need to do that sort of thing (the screen is too small to make it worth FullHD video), but it’s nice to have.
  • For barcodes and QR codes I’m using Binary Eye which seems better than the Play Store one I had used previously.
  • For playing music I’ve tried the Simple Music Player (which is nice for mp3s), but it doesn’t play m4a or webm files. Auxio and Music Player Go play mp3 and m4a but not webm. So far the only programs I’ve found that can play webm are VLC and MPV, so I’m trying out VLC as a music player, which basically works, but a program with the same audio features and no menu options about video would be better. Webm is important to me because I have some music videos downloaded from YouTube, and webm allows me to put a binary copy of the audio data into an audio file.

Future Plans

The current main things I’m missing are a calendar, a contact list, and a shared note taking system (like Google Keep). For calendaring and a contact list the CalDAV and CardDAV protocols seem best. The most common implementation on the server side appears to be DAViCal [5]. The Nextcloud system supports CalDAV, CardDAV, and web editing of notes and documents (including LibreOffice if you install that plugin) [6]. But it is huge and demands write access to all its own code (bad for security), and it’s not packaged for Debian. Also, in my tests it gave me an error 401 when I tried to authenticate to it from the Android Nextcloud client. I’ve seen a positive review of Radicale, a simple CalDAV and CardDAV server that doesn’t need a database [7]. I prefer the Unix philosophy of keeping things simple with file storage unless there’s a real need for anything else. I don’t think that anything I ever do with calendaring will require the PostgreSQL database that DAViCal uses.

I’ll give Radicale a go for CalDAV and CardDAV, but I still need something for shared notes (shopping lists etc). Suggestions welcome.
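For reference, a minimal Radicale setup of the kind described above might look something like the following sketch. The port, the htpasswd file location, and the storage path are all assumptions for illustration, not a tested configuration:

```ini
# Hypothetical ~/.config/radicale/config for a small personal
# CalDAV/CardDAV server backed by plain files, no database.
[server]
hosts = localhost:5232

[auth]
type = htpasswd
htpasswd_filename = ~/.config/radicale/users
htpasswd_encryption = md5

[storage]
filesystem_folder = ~/.var/lib/radicale/collections
```

CalDAV/CardDAV clients would then point at http://localhost:5232/ using the credentials from the htpasswd file.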

Current Status

Lack of a contacts list is a major loss of functionality in a phone. I could store contacts in the phone memory or on the SIM, but I would still have to get all my old contacts in there, and getting something half working reduces the motivation for getting it working properly. Lack of a calendar is also a problem; again, I could work around that by exporting all my Google calendars as iCal URLs, but I’d rather get it working correctly.

The lack of shared notes may be a harder problem to solve given the failure of Nextcloud. For that I would consider just having the keep.google.com web site always open in Mozilla at least in the short term.

At the moment I require two phones, my new Android phone without Google and the old one for my contacts list etc. Hopefully in a week or so I’ll have my new phone doing contacts, calendaring, and notes. Then my old phone will just be for proprietary apps which I don’t need most of the time and I can leave it at home when I don’t need that sort of thing.

20 April, 2022 09:04AM by etbe

Petter Reinholdtsen

geteltorito makes CD firmware upgrades a breeze

Recently I wanted to upgrade the firmware of my thinkpad, and located the firmware download page from Lenovo (which annoyingly does not allow access via Tor, forcing me to hand them more personal information than I would like). The download from Lenovo is a bootable ISO image, which is a bit of a problem when all I have available is a USB memory stick. I tried booting the ISO as a USB stick, but this did not work. But genisoimage came to the rescue.

The geteltorito program in the genisoimage binary package is able to convert the bootable ISO image to a bootable USB stick using a simple command line recipe, which I then can write to the most recently inserted USB stick:

geteltorito -o usbstick.img lenovo-firmware.iso
sudo dd bs=10M if=usbstick.img of=$(ls -tr /dev/sd?|tail -1)

This USB stick booted the firmware upgrader just fine, and in a few minutes my machine had the latest and greatest BIOS firmware in place.

20 April, 2022 09:00AM

April 19, 2022

hackergotchi for Bits from Debian

Bits from Debian

ITP Prizren Platinum Sponsor of DebConf22

itplogo

We are very pleased to announce that ITP - Innovation and Training Park Prizren has committed to supporting DebConf22 as a Platinum sponsor. Also, ITP Prizren will host the Conference for all 15 days!

ITP Prizren intends to be a driving and boosting element in the ICT, agro-food and creative industries, through the creation and management of a favourable environment and efficient services for SMEs, exploiting different kinds of innovation that can help Kosovo improve its level of development in industry and research, bringing benefits to the economy and society of the country as a whole.

ITP Prizren is a focal point in the Balkan region for innovation, business and skills development, and a source of innovative and successful ideas.

With this commitment as Platinum Sponsor, ITP Prizren is contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much ITP Prizren, for your support of DebConf22!

Become a sponsor too!

DebConf22 will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th.

And DebConf22 is still accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf22 website at https://debconf22.debconf.org/sponsors/become-a-sponsor.

DebConf22 banner open registration

19 April, 2022 07:00AM by The Debian Publicity Team

hackergotchi for Steve McIntyre

Steve McIntyre

Firmware - what are we going to do about it?

TL;DR: firmware support in Debian sucks, and we need to change this. See the "My preference, and rationale" Section below.

In my opinion, the way we deal with (non-free) firmware in Debian is a mess, and this is hurting many of our users daily. For a long time we've been pretending that supporting and including (non-free) firmware on Debian systems is not necessary. We don't want to have to provide (non-free) firmware to our users, and in an ideal world we wouldn't need to. However, it's very clearly no longer a sensible path when trying to support lots of common current hardware.

Background - why has (non-free) firmware become an issue?

Firmware is the low-level software that's designed to make hardware devices work. Firmware is tightly coupled to the hardware, exposing its features, providing higher-level functionality and interfaces for other software to use. For a variety of reasons, it's typically not Free Software.

For Debian's purposes, we typically separate firmware from software by considering where the code executes (does it run on a separate processor? Is it visible to the host OS?) but it can be difficult to define a single reliable dividing line here. Consider the Intel/AMD CPU microcode packages, or the U-Boot firmware packages as examples.

In times past, all necessary firmware would normally be included directly in devices / expansion cards by their vendors. Over time, however, it has become more and more attractive (and therefore more common) for device manufacturers to not include complete firmware on all devices. Instead, some devices just embed a very simple set of firmware that allows for upload of a more complete firmware "blob" into memory. Device drivers are then expected to provide that blob during device initialisation.

There are a couple of key drivers for this change:

  • Cost: it's typically cheaper to fit smaller flash memory (or no flash at all) onto a device. The cost difference may seem small in many cases, but reducing the bill of materials (BOM) even by a few cents can make a substantial difference to the economics of a product. For most vendors, they will have to implement device drivers anyway and it's not difficult to include firmware in that driver.

  • Flexibility: it's much easier to change the behaviour of a device by simply changing to a different blob. This can potentially cover lots of different use cases:

    • separating deadlines for hardware and software in manufacturing (drivers and firmware can be written and shipped later);
    • bug fixes and security updates (e.g. CPU microcode changes);
    • changing configuration of a device for different users or products (e.g. potentially different firmware to enable different frequencies on a radio product);
    • changing fundamental device operation (e.g. switching between RAID and JBOD functionality on a disk controller).

Due to these reasons, more and more devices in a typical computer now need firmware to be uploaded at runtime for them to function correctly. This has grown:

  • Going back 10 years or so, most computers only needed firmware uploads to make WiFi hardware work.

  • A growing number of wired network adapters now demand firmware uploads. Some will work in a limited way but depend on extra firmware to allow advanced features like TCP segmentation offload (TSO). Others will refuse to work at all without a firmware upload.

  • More and more graphics adapters now also want firmware uploads to provide any non-basic functions. A simple basic (S)VGA-compatible framebuffer is not enough for most users these days; modern desktops expect 3D acceleration, and a lot of current hardware will not provide that without extra firmware.

  • Current generations of common Intel-based laptops also need firmware uploads to make audio work (see the firmware-sof-signed package).

At the beginning of this timeline, a typical Debian user would be able to use almost all of their computer's hardware without needing any firmware blobs. It might have been inconvenient to not be able to use the WiFi, but most laptops had wired ethernet anyway. The WiFi could always be enabled and configured after installation.

Today, a user with a new laptop from most vendors will struggle to use it at all with our firmware-free Debian installation media. Modern laptops normally don't come with wired ethernet now. There won't be any usable graphics on the laptop's screen. A visually-impaired user won't get any audio prompts. These experiences are not acceptable, by any measure. There are new computers still available for purchase today which don't need firmware to be uploaded, but they are growing less and less common.

Current state of firmware in Debian

For clarity: obviously not all devices need extra firmware uploading like this. There are many devices that depend on firmware for operation, but we never have to think about them in normal circumstances. The code is not likely to be Free Software, but it's not something that we in Debian must spend our time on as we're not distributing that code ourselves. Our problems come when our users need extra firmware to make their computers work, and they need/expect us to provide it.

We have a small set of Free firmware binaries included in Debian main, and these are included on our installation and live media. This is great - we all love Free Software and this works.

However, there are many more firmware binaries that are not Free. If we are legally able to redistribute those binaries, we package them up and include them in the non-free section of the archive. As Free Software developers, we don't like providing or supporting non-free software for our users, but we acknowledge that it's sometimes a necessary thing for them. This tension is acknowledged in the Debian Free Software Guidelines.

This tension extends to our installation and live media. As non-free is officially not considered part of Debian, our official media cannot include anything from non-free. This has been a deliberate policy for many years. Instead, we have for some time been building a limited parallel set of "unofficial non-free" images which include non-free firmware. These non-free images are produced by the same software that we use for the official images, and by the same team.

There are a number of issues here that make developers and users unhappy:

  1. Building, testing and publishing two sets of images takes more effort.
  2. We don't really want to be providing non-free images at all, from a philosophy point of view. So we mainly promote and advertise the preferred official free images. That can be a cause of confusion for users. We do link to the non-free images in various places, but they're not so easy to find.
  3. Using non-free installation media will cause more installations to use non-free software by default. That's not a great story for us, and we may end up with more of our users using non-free software and believing that it's all part of Debian.
  4. A number of users and developers complain that we're wasting their time by publishing official images that are just not useful for a lot (a majority?) of users.

We should do better than this.

Options

The status quo is a mess, and I believe we can and should do things differently.

I see several possible options that the images team can choose from here. However, several of these options could undermine the principles of Debian. We don't want to make fundamental changes like that without the clear backing of the wider project. That's why I'm writing this...

  1. Keep the existing setup. It's horrible, but maybe it's the best we can do? (I hope not!)

  2. We could just stop providing the non-free unofficial images altogether. That's not really a promising route to follow - we'd be making it even harder for users to install our software. While ideologically pure, it's not going to advance the cause of Free Software.

  3. We could stop pretending that the non-free images are unofficial, and maybe move them alongside the normal free images so they're published together. This would make them easier to find for people that need them, but is likely to cause users to question why we still make any images without firmware if they're otherwise identical.

  4. The images team technically could simply include non-free into the official images, and add firmware packages to the input lists for those images. However, that would still leave us with problem 3 from above (non-free generally enabled on most installations).

  5. We could split out the non-free firmware packages into a new non-free-firmware component in the archive, and allow a specific exception only to allow inclusion of those packages on our official media. We would then generate only one set of official media, including those non-free firmware packages.

    (We've already seen various suggestions in recent years to split up the non-free component of the archive like this, for example into non-free-firmware, non-free-doc, non-free-drivers, etc. Disagreement (bike-shedding?) about the split caused us to not make any progress on this. I believe this project should be picked up and completed. We don't have to make a perfect solution here immediately, just something that works well enough for our needs today. We can always tweak and improve the setup incrementally if that's needed.)
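To make option 5 concrete from a user's point of view, an apt sources line on installed systems might simply gain the new component alongside main. The component name non-free-firmware is taken from the proposal above; the suite name and mirror here are just examples:

```
deb http://deb.debian.org/debian bookworm main non-free-firmware
```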

These are the most likely possible options, in my opinion. If you have a better suggestion, please let us know!

I'd like to take this set of options to a GR, and do it soon. I want to get a clear decision from the wider Debian project as to how to organise firmware and installation images. If we do end up changing how we do things, I want a clear mandate from the project to do that.

My preference, and rationale

Mainly, I want to see how the project as a whole feels here - this is a big issue that we're overdue solving.

What would I choose to do? My personal preference would be to go with option 5: split the non-free firmware into a special new component and include that on official media.

Does that make me a sellout? I don't think so. I've been passionately supporting and developing Free Software for more than half my life. My philosophy here has not changed. However, this is a complex and nuanced situation. I firmly believe that sharing software freedom with our users comes with a responsibility to also make our software useful. If users can't easily install and use Debian, that helps nobody.

By splitting things out here, we would enable users to install and use Debian on their hardware, without promoting/pushing higher-level non-free software in general. I think that's a reasonable compromise. This is simply a change to recognise that hardware requirements have moved on over the years.

Further work

If we do go with the changes in option 5, there are other things we could do here for better control of and information about non-free firmware:

  1. Along with adding non-free firmware onto media, when the installer (or live image) runs, we should make it clear exactly which firmware packages have been used/installed to support detected hardware. We could link to docs about each, and maybe also to projects working on Free re-implementations.

  2. Add an option at boot to explicitly disable the use of the non-free firmware packages, so that users can choose to avoid them.

Acknowledgements

Thanks to people who reviewed earlier versions of this document and/or made suggestions for improvement, in particular:

  • Cyril Brulebois
  • Matthew Garrett
  • David Leggett
  • Martin Michlmayr
  • Andy Simpkins
  • Neil Williams

19 April, 2022 12:24AM

April 17, 2022

Russ Allbery

First 2022 haul post

I haven't posted one of these in a while. Here's the (mostly new) stuff that's come out that caught my interest in the past few months. Some of these I've already read and reviewed.

Tom Burgis — Kleptopia (non-fiction)
Angela Chen — Ace (non-fiction)
P. Djèlí Clark — A Dead Djinn in Cairo (sff)
P. Djèlí Clark — The Haunting of Tram Car 015 (sff)
P. Djèlí Clark — A Master of Djinn (sff)
Brittney C. Cooper — Eloquent Rage (non-fiction)
Madeleine Dore — I Didn't Do the Thing Today (non-fiction)
Saad Z. Hossain — The Gurkha and the Lord of Tuesday (sff)
George F. Kennan — Memoirs, 1925-1950 (non-fiction)
Kiese Laymon — How to Slowly Kill Yourself and Others in America (non-fiction)
Adam Minter — Secondhand (non-fiction)
Amanda Oliver — Overdue (non-fiction)
Laurie Penny — Sexual Revolution (non-fiction)
Scott A. Snook — Friendly Fire (non-fiction)
Adrian Tchaikovsky — Elder Race (sff)
Adrian Tchaikovsky — Shards of Earth (sff)
Tor.com (ed.) — Some of the Best of Tor.com: 2021 (sff anthology)
Charlie Warzel & Anne Helen Petersen — Out of Office (non-fiction)
Robert Wears — Still Not Safe (non-fiction)
Max Weber — The Vocation Lectures (non-fiction)

Lots and lots of non-fiction in this mix. Maybe a tiny bit better than normal at not buying tons of books that I don't have time to read, although my reading (and particularly my reviewing) rate has been a bit slow lately.

17 April, 2022 03:49AM

hackergotchi for Matthew Garrett

Matthew Garrett

The Freedom Phone is not great at privacy

The Freedom Phone advertises itself as a "Free speech and privacy first focused phone". As documented on the features page, it runs ClearOS, an Android-based OS produced by Clear United (or maybe one of the bewildering array of associated companies, we'll come back to that later). It's advertised as including Signal, but what's shipped is not the version available from the Signal website or any official app store - instead it's this fork called "ClearSignal".

The first thing to note about ClearSignal is that the privacy policy link from that page 404s, which is not a great start. The second thing is that it has a version number of 5.8.14, which is strange because upstream went from 5.8.10 to 5.9.0. The third is that, despite Signal being GPL 3, there's no source code available. So, I grabbed jadx and started looking for differences between ClearSignal and the upstream 5.8.10 release. The results were, uh, surprising.

First up is that they seem to have integrated ACRA, a crash reporting framework. This feels a little odd - in the absence of a privacy policy, it's unclear what information this gathers or how it'll be stored. Having a piece of privacy software automatically uploading information about what you were doing in the event of a crash with no notification other than a toast that appears saying "Crash Report" feels a little dubious.

Next is that Signal (for fairly obvious reasons) warns you if your version is out of date and eventually refuses to work unless you upgrade. ClearSignal has dealt with this problem by, uh, simply removing that code. The MacOS version of the desktop app they provide for download seems to be derived from a release from last September, which for an Electron-based app feels like a pretty terrible idea. Weirdly, for Windows they link to an official binary release from February 2021, and for Linux they tell you how to use the upstream repo properly. I have no idea what's going on here.

They've also added support for network backups of your Signal data. This involves the backups being pushed to an S3 bucket using credentials that are statically available in the app. It's ok, though, each upload has some sort of nominally unique identifier associated with it, so it's not trivial to just download other people's backups. But, uh, where does this identifier come from? It turns out that Clear Center, another of the Clear family of companies, employs a bunch of people to work on a ClearID[1], some sort of decentralised something or other that seems to be based on KERI. There's an overview slide deck here which didn't really answer any of my questions and as far as I can tell this is entirely lacking any sort of peer review, but hey it's only the one thing that stops anyone on the internet being able to grab your Signal backups so how important can it be.

The final thing, though? They've extended Signal's invitation support to encourage users to get others to sign up for Clear United. There's an exposed API endpoint called "get_user_email_by_mobile_number" which does exactly what you'd expect - if you give it a registered phone number, it gives you back the associated email address. This requires no authentication. But it gets better! The API to generate a referral link to send to others sends the name and phone number of everyone in your phone's contact list. There does not appear to be any indication that this is going to happen.

So, from a privacy perspective, going to go with things being some distance from ideal. But what's going on with all these Clear companies anyway? They all seem to be related to Michael Proper, who founded the Clear Foundation in 2009. They are, perhaps unsurprisingly, heavily invested in blockchain stuff, while Clear United also appears to be some sort of multi-level marketing scheme which has a membership agreement that includes the somewhat astonishing claim that:

Specifically, the initial focus of the Association will provide members with supplements and technologies for:

9a. Frequency Evaluation, Scans, Reports;

9b. Remote Frequency Health Tuning through Quantum Entanglement;

9c. General and Customized Frequency Optimizations;


- there's more discussion of this and other weirdness here. Clear Center, meanwhile, has a Chief Physics Officer? I have a lot of questions.

Anyway. We have a company that seems to be combining blockchain and MLM, has some opinions about Quantum Entanglement, bases the security of its platform on a set of novel cryptographic primitives that seem to have had no external review, has implemented an API that just hands out personal information without any authentication and an app that appears more than happy to upload all your contact details without telling you first, has failed to update this app to keep up with upstream security updates, and is violating the upstream license. If this is their idea of "privacy first", I really hate to think what their code looks like when privacy comes further down the list.

[1] Pointed out to me here

comment count unavailable comments

17 April, 2022 12:23AM

April 16, 2022

Petter Reinholdtsen

Playing and encoding AV1 in Debian Bullseye

Inspired by the recent news of AV1 hardware encoding support from Intel, I decided to look into the state of AV1 on Linux today. AV1 is a free and open standard as defined by Digistan without any royalty payment requirement, unlike its much-used competitor H.264. While looking, I came across a five-year-old question on askubuntu.com which in turn inspired me to check out how things are in Debian Stable regarding AV1. The test file listed in the question (askubuntu_test_aom.mp4) did not exist any more, so I tracked down a different set of test files on av1.webmfiles.org to test them with the various video tools I had installed on my machine. I was happy to discover that AV1 decoding and playback worked with almost every tool I tested:

mediainfo ok
dragonplayer ok
ffmpeg / ffplay ok
gnome-mplayer fail
mplayer ok
mpv ok
parole ok
vlc ok
firefox ok
chromium ok

AV1 encoding is available in Debian Stable from the aom-tools version 1.0.0.errata1-3 package, using the aomenc tool. The encoding using the package in Debian Stable is quite slow, with the frame rate for my 10 second test video at around 0.25 fps. My 10 second video test took 16 minutes and 11 seconds on my test machine.

I tested by first running ffmpeg and then aomenc using the recipe from the askubuntu question above. I had to remove the '--row-mt=1' option, as it was not supported in my 1.0.0 version. The encoding only used a single thread, according to top.

ffmpeg -i some-old-video.ogv -t 10 -pix_fmt yuv420p video.y4m
aomenc --fps=24/1 -u 0 --codec=av1 --target-bitrate=1000 \
  --lag-in-frames=25 --auto-alt-ref=1 -t 24 --cpu-used=8 \
  --tile-columns=2 --tile-rows=2 -o output.webm video.y4m

As version 1.0.0 currently has several unsolved security issues in Debian Stable, and to see if the recent backport provided in Debian is any quicker, I ran apt -t bullseye-backports install aom-tools to fetch the backported version and re-encoded the video using the latest version. This time the '--row-mt=1' option worked, and the encoding was done in 46 seconds with a frame rate of around 5.22 fps. This time it seemed to be using all four of my cores to encode. Encoding speed is still too low for streaming and real time, which would require frame rates above 25 fps, but might be good enough for offline encoding.
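As a quick cross-check of those numbers: a 10 second clip at 24 fps is 240 frames, and dividing by the two wall-clock times gives roughly the quoted rates:

```python
# Sanity-check the encoding frame rates quoted above:
# a 10 s clip at 24 fps is 240 frames.
frames = 24 * 10

stable_secs = 16 * 60 + 11   # 16 min 11 s with aom-tools 1.0.0 in Stable
backport_secs = 46           # with the bullseye-backports version

print(round(frames / stable_secs, 2))    # 0.25 fps
print(round(frames / backport_secs, 2))  # 5.22 fps
```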

I am very happy to see AV1 playback working so well with the default tools in Debian Stable. I hope the encoding situation improves too, allowing even a slow old computer like my 10 year old laptop to be used for encoding.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

16 April, 2022 06:40AM

April 15, 2022

Reproducible Builds (diffoscope)

diffoscope 210 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 210. This version includes the following changes:

[ Mattia Rizzolo ]
* Make sure that PATH is properly mangled for all diffoscope actions, not
  just when running comparators.

You can find out more by visiting the project homepage.

15 April, 2022 12:00AM

April 14, 2022

Reproducible Builds

Supporter spotlight: Amateur Radio Digital Communications (ARDC)

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about the project and the work that we do.

This is the third instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org.

We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation. Today, however, we’ll be talking with Dan Romanchik, Communications Manager at Amateur Radio Digital Communications (ARDC).


Chris Lamb: Hey Dan, it’s nice to meet you! So, for someone who has not heard of Amateur Radio Digital Communications (ARDC) before, could you tell us what your foundation is about?

Dan: Sure! ARDC’s mission is to support, promote, and enhance experimentation, education, development, open access, and innovation in amateur radio, digital communication, and information and communication science and technology. We fulfill that mission in two ways:

  1. We administer an allocation of IP addresses that we call 44Net. These IP addresses (in the 44.0.0.0/8 IP range) can only be used for amateur radio applications and experimentation.

  2. We make grants to organizations whose work aligns with our mission. This includes amateur radio clubs as well as other amateur radio-related organizations and activities. Additionally, we support scholarship programs for people who either have an amateur radio license or are pursuing careers in technology, STEM education and open-source software development projects that fit our mission, such as Reproducible Builds.


Chris: How might you relate the importance of amateur radio and similar technologies to someone who is non-technical?

Dan: Amateur radio is important in a number of ways. First of all, amateur radio is a public service. In fact, the legal name for amateur radio is the Amateur Radio Service, and one of the primary reasons that amateur radio exists is to provide emergency and public service communications. All over the world, amateur radio operators are prepared to step up and provide emergency communications when disaster strikes or to provide communications for events such as marathons or bicycle tours.

Second, amateur radio is important because it helps advance the state of the art. By experimenting with different circuits and communications techniques, amateurs have made significant contributions to communications science and technology.

Third, amateur radio plays a big part in technical education. It enables students to experiment with wireless technologies and electronics in ways that aren’t possible without a license. Amateur radio has historically been a gateway for young people interested in pursuing a career in engineering or science, such as network or electrical engineering.

Fourth — and this point is a little less obvious than the first three — amateur radio is a way to enhance international goodwill and community. Radio knows no boundaries, of course, and amateurs are therefore ambassadors for their country, reaching out to all around the world.

Beyond amateur radio, ARDC also supports and promotes research and innovation in the broader field of digital communication and information and communication science and technology. Information and communication technology plays a big part in our lives, be it for business, education, or personal communications. For example, think of the impact that cell phones have had on our culture. The challenge is that much of this work is proprietary and owned by large corporations. By focusing on open source work in this area, we help open the door to innovation outside of the corporate landscape, which is important to overall technological resiliency.


Chris: Could you briefly outline the history of ARDC?

Dan: Nearly forty years ago, a group of visionary ‘hams’ saw the future possibilities of what was to become the internet and requested an address allocation from the Internet Assigned Numbers Authority (IANA). That allocation included more than sixteen million IPv4 addresses, 44.0.0.0 through 44.255.255.255. These addresses have been used exclusively for amateur radio applications and experimentation with digital communications techniques ever since. In 2011, the informal group of hams administering these addresses incorporated as a nonprofit corporation, Amateur Radio Digital Communications (ARDC). ARDC is recognized by IANA, ARIN and the other Internet Registries as the sole owner of these addresses, which are also known as AMPRNet or 44Net.

Over the years, ARDC has assigned addresses to thousands of hams on a long-term loan (essentially acting as a zero-cost lease), allowing them to experiment with digital communications technology. Using these IP addresses, hams have carried out some very interesting and worthwhile research projects and developed practical applications, including TCP/IP connectivity via radio links, digital voice, telemetry and repeater linking.

Even so, the amateur radio community never used much more than half the available addresses, and today, less than one third of the address space is assigned and in use. This is one of the reasons that ARDC, in 2019, decided to sell one quarter of the address space (or approximately 4 million IP addresses) and establish an endowment with the proceeds. This endowment now funds ARDC's suite of grants, including scholarships, research projects, and of course amateur radio projects. Initially, ARDC was restricted to awarding grants to organizations in the United States, but is now able to provide funds to organizations around the world.


Chris: How does the Reproducible Builds effort help ARDC achieve its goals?

Dan: Our aspirational goals include:

  • Broad reach. We look for projects that will benefit the widest community possible.

  • Social over commercial benefit. We prioritize giving to organizations and funding projects that may not have a viable business model, but provide a clear benefit to society.

  • Preservation of the right to innovate. We oppose any efforts that restrict innovation to protect the status quo and keep the current winners in privileged positions.

We think that the Reproducible Builds’ efforts in helping to ensure the safety and security of open source software closely align with those goals.


Chris: Are there any specific ‘success stories’ that ARDC is particularly proud of?

Dan: We are really proud of our grant to the Hoopa Valley Tribe in California. With a population of nearly 2,100, their reservation is the largest in California. Like everywhere else, the COVID-19 pandemic hit the reservation hard, and the lack of broadband internet access meant that 130 children on the reservation were unable to attend school remotely.

The ARDC grant allowed the tribe to address the immediate broadband needs in the Hoopa Valley, as well as encourage the use of amateur radio and other two-way communications on the reservation. The tribe was able to deploy a network that provides broadband access to approximately 90% of the residents in the valley. And, in addition to bringing remote education to those 130 children, the Hoopa now use the network for remote medical monitoring and consultation, adult education, and other applications.

Other successes include our grants to:

  • The ARRL Foundation, which has awarded dozens of scholarships over the past couple of years to amateur radio operators both young and old.

  • The Chippewa Valley Amateur Club, who used the grant to build an emergency communications trailer and improve the emergency communication infrastructure in their community.

  • Woodridge Middle School, who have developed innovative, hands-on, radio-related projects that have dramatically increased the test scores of the kids in their STEM program.

  • The Make Operating Radio Easier (MORE) Project, which aims to reduce both gender and age imbalances in ham radio, through education and hands-on activities. Led by the IEEE Central Jersey Section, MORE provides mentoring, proactive intervention, and inclusivity to groups that are under-represented in amateur radio.


Chris: ARDC supports a number of other existing projects and initiatives, not all of them in the open source world. How much do you feel being a part of the broader ‘free culture’ movement helps you achieve your aims?

Dan: In general, we find it challenging that most digital communications technology is proprietary and closed-source. It’s part of our mission to fund open source alternatives. Without them, we are solely reliant, as a society, on corporate interests for our digital communication needs. It makes us vulnerable and it puts us at risk of increased surveillance. Thus, ARDC supports open source software wherever possible, and our grantees must make a commitment to share their work under an open source license or otherwise make it as freely available as possible.


Chris: Thanks so much for taking the time to talk to us today. Now, if someone wanted to know more about ARDC or to get involved, where might they go to look?

To learn more about ARDC in general, please visit our website at https://www.ampr.org.

To learn more about 44Net, go to https://wiki.ampr.org/wiki/Main_Page.

And, finally, to learn more about our grants program, go to https://www.ampr.org/apply/



For more about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

14 April, 2022 10:00AM

hackergotchi for Jonathan Dowland

Jonathan Dowland

hledger

This year I've decided to bite the bullet and properly try out hledger for personal accounting. It seems I need to commit to it properly if I'm to figure out whether it will work for me or not.

Up until now I'd been strictly separating my finances into two buckets: family and personal. I'd been using GNUCash for a couple of years for my personal finances, initially to evaluate it for use for the family, but I had not managed to adopt it for that.

I set up a new git repository to track the ledger file, as well as a notes.txt "diary" that documents my planning around its structure and how to use it, and an import.txt which documents what account data I have imported and confirmed that the resulting balances match those reported on monthly statements.

For this evaluation, I decided to track both family and personal finances at the same time. I'm still keeping them conceptually very separate. To reflect that, I've organised my account names accordingly: all accounts relating to the family are prefixed family:, and likewise personal ones jon:.1 Some example accounts:

family:assets:shared    - shared bank account
family:dues:jon         - I owe to family
family:expenses:cat     - budget category for the cat
income                  - where money enters this universe
jon:assets:current      - my personal account
jon:dues:peter          - money Peter owes me
jon:expenses:snacks     - budget category for coffees etc
jon:liabilities:amex    - a personal credit card

I decided to make the calendar year a strict cut-over point: my personal opening balances in hledger are determined by what GNUCash reports. It's possible those will change over this year, as adjustments are made to last year's data: but it's easy enough to go in and update the opening balances in hledger to reflect that.

Credit cards are a small exception. January's credit card bills are paid in January but cover transactions from mid-December. I import those transactions into hledger to balance the credit card payment. As a consequence, the "spend per month" view of my data is a bit skewed: All the transactions in December should be thought of as in January since that's when they were paid. I need to explore options to fix this.

When I had family and personal managed separately, occasionally something would be paid for on the wrong card and end up in the wrong data. The solution I used last year was to keep an account dues:family to which I posted those and periodically I'd settle it with a real-world bank transfer.

I've realised that this doesn't work so well when I manage both together: I can't track both dues and expense categorisation with just one posting. The solution I'm using for now is hledger's unbalanced virtual postings: a third posting for the transaction to the budget category, which is not balanced, e.g.:

2022-01-02 ZTL*RELISH
    family:liabilities:creditcard      £ -3.00
    family:dues:jon                     £ 3.00
    (jon:expenses:snacks)               £ 3.00

This works, but it completely side-steps double-entry bookkeeping, which is the whole point of using a double-entry system. There's also no check and balance that the figure I put in the virtual posting (£3) matches the figure in the rest of the transaction. I'm therefore open to other ideas.
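One possible alternative, sketched here with a hypothetical jon:budget contra account, is hledger's balanced virtual postings (square brackets): bracketed postings must sum to zero among themselves, which restores at least a partial check on the virtual figures.

```
2022-01-02 ZTL*RELISH
    family:liabilities:creditcard      £ -3.00
    family:dues:jon                     £ 3.00
    [jon:expenses:snacks]               £ 3.00
    [jon:budget]                       £ -3.00
```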


  1. there are a couple of places in hledger where default account names are used, such as the default place that expenses are posted to during CSV imports: expenses:unknown, which obviously doesn't fit my family:/jon: prefix scheme. The solution is to make sure I specify a default posting-to account in all my CSV import rules.
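For reference, a minimal sketch of such a rules file (the file name and field layout are hypothetical); account2 is hledger's default posting-to account for CSV imports:

```
# amex.csv.rules (hypothetical)
skip 1
fields date, description, amount
account1 jon:liabilities:amex
# default posting-to account, replacing the built-in expenses:unknown
account2 jon:expenses:unknown
```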

14 April, 2022 09:07AM

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

drat 0.2.3 on CRAN: Arm M1 Support

drat user

A new minor release of drat arrived on CRAN today. drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found widespread adoption among R users because repositories with marked releases are the better way to distribute code. See below for a few custom reference examples.

Because for once it really is as your mother told you: Friends don’t let friends install random git commit snapshots. Properly rolled-up releases it is. Just how CRAN shows us: a model that has demonstrated for two-plus decades how to do this. And you can too: drat is easy to use, documented by six vignettes and just works. Detailed information about drat is at its documentation site.

This release adds support for the macOS Arm M1 architecture, initially supplied in a PR last fall and now finalized with additional tests. The NEWS file summarises the release as follows:

Changes in drat version 0.2.3 (2022-04-13)

  • Arm M1 repos are now supported (#126 and #131 fixing #125)

  • A vignette typo has been fixed (#130)

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page as well as at the documentation site.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 April, 2022 01:16AM

April 13, 2022

Antoine Beaupré

Tuning my wifi radios

After listening to an episode of the 2.5 admins podcast, I realized there was some sort of low-hanging fruit I could pick to better tune my WiFi at home. You see, I'm kind of a fraud in WiFi: I only started a WiFi mesh in Montreal (now defunct), I don't really know how any of that stuff works. So I was surprised to hear one of the podcast host say "it's all about airtime" and "you want to reduce the power on your access points" (APs). It seemed like sound advice: better bandwidth means less time on air, means less collisions, less latency, and less power also means less collisions. Worth a try, right?

Frequency

So the first thing I looked at was WifiAnalyzer to see if I had any optimisation I could do there. Normally, I try to avoid having nearby APs on the same frequency to avoid collisions, but who knows, maybe I had messed that up. And turns out I did! Both APs were on "auto" for 5GHz, which typically means "do nothing or worse".

5GHz is really interesting, because, in theory, there are LOTS of channels to pick from, it goes up to 196!! And both my APs were on 36, what gives?

So the first thing I did was to set it to channel 100, as there was that long gap in WifiAnalyzer where no other AP was. But that just broke 5GHz on the AP. The OpenWRT GUI (luci) would just say "wireless not associated" and the ESSID wouldn't show up in a scan anymore.

At first, I thought this was a problem with OpenWRT or my hardware, but I could reproduce the problem with both my APs: a TP-Link Archer A7 v5 and a Turris Omnia (see also my review).

As it turns out, that's because that range of the WiFi band interferes with trivial things like satellites and radar, which makes the actually very useful radar maps look like useless christmas trees. So those channels require DFS to operate. DFS works by first listening on the frequency for a certain amount of time (1-2 minutes, but could be as high as 10) to see if there's something else transmitting at all.

So typically, that means they just don't operate at all in those bands, especially if you're near any major city which generally means you are near a weather radar that will transmit on that band.

In the system logs, if you have such a problem, you might see this:

Apr  9 22:17:39 octavia hostapd: wlan0: DFS-CAC-START freq=5500 chan=100 sec_chan=1, width=0, seg0=102, seg1=0, cac_time=60s
Apr  9 22:17:39 octavia hostapd: DFS start_dfs_cac() failed, -1

... and/or this:

Sat Apr  9 18:05:03 2022 daemon.notice hostapd: Channel 100 (primary) not allowed for AP mode, flags: 0x10095b NO-IR RADAR
Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Configured channel (100) not found from the channel list of current mode (2) IEEE 802.11a
Sat Apr  9 18:05:03 2022 daemon.warn hostapd: wlan0: IEEE 802.11 Hardware does not support configured channel

Here, it clearly says RADAR (in all caps too, which means it's really important). NO-IR is also important, I'm not sure what it means but it could be that you're not allowed to transmit in that band because of other local regulations.

There might be a way to work around those by changing the "region" in the Luci GUI, but I didn't mess with that, because I figured that other devices will have that already configured. So using a forbidden channel might make it more difficult for clients to connect (although it's possible this is enforced only on the AP side).

In any case, 5GHz is promising, but in reality, you only get from channel 36 (5.170GHz) to 48 (5.250GHz), inclusively. Fast counters will notice that is exactly 80MHz, which means that if an AP is configured for that hungry, all-powerful 80MHz, it will effectively take up all 5GHz channels at once.
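As a cross-check of those band edges: 5GHz channel numbers map to center frequencies as 5000 + 5 × channel (in MHz), so the four 20MHz-wide channels 36, 40, 44 and 48 together cover exactly the 5170-5250MHz (80MHz) range:

```python
# 5 GHz channel number to center frequency: 5000 + 5 * ch (MHz).
# Each channel is 20 MHz wide, i.e. center +/- 10 MHz.
def center_mhz(ch):
    return 5000 + 5 * ch

edges = [(center_mhz(ch) - 10, center_mhz(ch) + 10) for ch in (36, 40, 44, 48)]
print(edges[0][0], edges[-1][1], edges[-1][1] - edges[0][0])  # 5170 5250 80
```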

This, in other words, is as bad as 2.4GHz, where you also have only two 40MHz channels. (Really, what did you expect: this is an unregulated frequency controlled by commercial interests...)

So the first thing I did was to switch to 40MHz. This gives me two distinct channels in 5GHz at no noticeable bandwidth cost. (In fact, I couldn't find hard data on what the bandwidth ends up being on those frequencies, but I could still get 400Mbps which is fine for my use case.)

Power

The next thing I did was to fiddle with power. By default, both radios were configured to transmit as much power as they needed to reach clients, which means that if a client gets farther away, the AP would boost its transmit power which, in turn, would mean the client would still connect to it instead of failing and properly roaming to the other AP.

The higher power also means more interference with neighbors and other APs, although that matters less if they are on different channels.

On 5GHz, power was about 20dBm (100 mW) -- and more on the Turris! -- when I first looked, so I tried to lower it drastically to 5dBm (3mW) just for kicks. That didn't work so well, so I bumped it back up to 14 dBm (25 mW) and that seems to work well: clients hit about -80dBm when they get far enough from the AP, which gets close to the noise floor (and where the neighbor APs are), which is exactly what I want.

On 2.4GHz, I lowered it down even further, to 10 dBm (10mW): since it's better at going through walls, I figured it would need less power. And anyways, I'd rather people use the 5GHz APs, so maybe that will act as an encouragement to switch. I was still able to connect correctly to the APs at that power as well.
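For reference, the milliwatt figures in parentheses follow from the standard conversion P(mW) = 10^(dBm/10):

```python
# Convert transmit power from dBm to milliwatts: P(mW) = 10 ** (dBm / 10).
def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

for dbm in (20, 14, 10, 5):
    print(f"{dbm} dBm = {dbm_to_mw(dbm):.0f} mW")
```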

Other tweaks

I disabled the "Allow legacy 802.11b rates" setting in the 5GHz configuration. According to this discussion:

Checking the "Allow b rates" affects what the AP will transmit. In particular it will send most overhead packets including beacons, probe responses, and authentication / authorization as the slow, noisy, 1 Mb DSSS signal. That is bad for you and your neighbors. Do not check that box. The default really should be unchecked.

This, in particular, "will make the AP unusable to distant clients, which again is a good thing for public wifi in general". So I just unchecked that box and I feel happier now. I didn't make tests to see the effect separately however, so this is mostly just a guess.

13 April, 2022 08:56PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Another Green World

Brian Eno's classic 1975 album Another Green World, with the iconic cover crop from Tom Phillips's After Raphael. This is a recent pressing. I try to avoid buying new vinyl, and I think I got this as part of a trade-in swap two years ago when I went to get one of my first Covid vaccinations in Newcastle. It was the first time I'd been anywhere near a record shop (which was adjacent to the temporary vaccine centre) in a year or more, and I took the opportunity to bring in some records to sell. I definitely left with fewer records than I went in with, at least…

Another Green World on the turntable

What to say about this album? It's a classic, it's weirdly compelling, it dances over the line between engaging your attention and something you can have on in the background. Many of the tracks are quite short relative to the ideas they express: I imagine some remixer could have a lot of fun with it. The Big Ship is perhaps the standout, but I really like the title track, and opener Sky Saw too.

It's probably the first album written using the Oblique Strategies card system.

Card reading 'You are an Engineer'

13 April, 2022 08:41AM

April 12, 2022

Sven Hoexter

Emulating Raspi2 like hardware with RaspiOS in 2022

Update of my notes from 2020.

# Download a binary device tree file and matching kernel a good soul uploaded to github
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/kernel-qemu-4.4.1-vexpress
wget https://github.com/vfdev-5/qemu-rpi2-vexpress/raw/master/vexpress-v2p-ca15-tc1.dtb
# Download the official RaspiOS image without X
wget https://downloads.raspberrypi.org/raspios_lite_armhf/images/raspios_lite_armhf-2022-04-07/2022-04-04-raspios-bullseye-armhf-lite.img.xz
unxz 2022-04-04-raspios-bullseye-armhf-lite.img.xz
# Convert it from the raw image to a qcow2 image and add some space
qemu-img convert -f raw -O qcow2 2022-04-04-raspios-bullseye-armhf-lite.img rasbian.qcow2
qemu-img resize rasbian.qcow2 4G
# make sure we get a user account setup
echo "me:$(echo 'test123'|openssl passwd -6 -stdin)" > userconf
sudo guestmount -a rasbian.qcow2 -m /dev/sda1 /mnt
sudo mv userconf /mnt
sudo guestunmount /mnt
# start qemu
qemu-system-arm -m 2048M -M vexpress-a15 -cpu cortex-a15 \
 -kernel kernel-qemu-4.4.1-vexpress -no-reboot \
 -smp 2 -serial stdio \
 -dtb vexpress-v2p-ca15-tc1.dtb -sd rasbian.qcow2 \
 -append "root=/dev/mmcblk0p2 rw rootfstype=ext4 console=ttyAMA0,115200 loglevel=8" \
 -nic user,hostfwd=tcp::5555-:22
# login at the serial console as user me with password test123
sudo -i
# enable ssh
systemctl enable ssh
systemctl start ssh
# resize partition and filesystem
parted /dev/mmcblk0 resizepart 2 100%
resize2fs /dev/mmcblk0p2

Now I can login via ssh and start to play:

ssh me@localhost -p 5555

12 April, 2022 09:27AM

April 11, 2022

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

k3xec.com/patty: Go bindings to patty

AX.25 is a tough protocol to use on UNIX systems. A lot of the support in Linux, specifically, is pretty hard to use, and tends to be built into the reptilian brain of the kernel. xan built a userland AX.25 stack called patty, on top of which I have now built some Go bindings.

Code needed to create AX.25 Sockets via Go can be found at github.com/k3xec/go-patty, and imported by Go source as k3xec.com/patty.

Overview

Client patty programs (including consumers of this Go library) work by communicating with a userland daemon (pattyd) via a UNIX domain socket. That daemon will communicate with a particular radio using a KISS TNC serial device.

The Go bindings implement as many standard Go library interfaces as is practical, allowing for the “plug and play” use of patty (and AX.25) in places where you would expect a network socket (such as TCP) to work, such as Go’s http library.

Example

package main

import (
	"fmt"
	"log"
	"net"
	"time"

	"k3xec.com/patty"
)

func main() {
	callsign := "N0CALL-10"
	client, err := patty.Open("patty.sock")
	if err != nil {
		panic(err)
	}
	l, err := client.Listen("ax25", callsign)
	if err != nil {
		panic(err)
	}
	for {
		log.Printf("Listening for requests to %s", l.Addr())
		conn, err := l.Accept()
		if err != nil {
			log.Printf("Error accepting: %s", err)
			continue
		}
		go handle(conn)
	}
}

func handle(c net.Conn) error {
	defer c.Close()
	log.Printf("New connection from %s (local: %s)", c.RemoteAddr(), c.LocalAddr())
	fmt.Fprintf(c, `

Hello! This is Paul's experimental %s node. Feel free
to poke around. Let me know if you spot anything funny.

Five pings are to follow!

`, c.LocalAddr())
	for i := 0; i < 5; i++ {
		time.Sleep(time.Second * 5)
		fmt.Fprintf(c, "Ping!\n")
	}
	return nil
}

11 April, 2022 11:33PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

With Teeth

Sometimes people ask me which is the best entry point for Nine Inch Nails, and I have trouble answering. I've eventually decided that it's 2005's With Teeth.

Picture of 'With Teeth' playing on my turntable

I've pulled this from my pile of records to sell. Most Nine Inch Nails stuff seems to increase in value over time, and it's got to the stage that some of the ones in my collection are now valuable enough that I'm nervous to play them, which makes them a little pointless. A couple of years ago, a handful of the albums were re-issued (including With Teeth, here, and The Fragile, which hadn't been repressed since 1999). I had a vague plan to sell off my original pressings to someone who would appreciate them more and use the takings to buy the re-issues, but I didn't move fast enough, and now the re-issues are (for the most part) also sought-after.

11 April, 2022 08:53AM

April 10, 2022

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppRedis 0.2.1: Maintenance

A month after the major release 0.2.0 bringing pub/sub and other goodies to our RcppRedis package, a new version 0.2.1 arrived on CRAN yesterday. RcppRedis is one of several packages connecting R to the fabulous Redis in-memory data structure store (and much more). RcppRedis does not pretend to be feature complete, but it may do some things faster than the other interfaces, and also offers an optional coupling with MessagePack binary (de)serialization via RcppMsgPack. The package has carried production loads for several years now.

This release updated the rredis suggestion by adding an Additional_repositories entry as Bryan decided to retire the rredis package. You can still install it via install.packages("rredis") by setting the additional repo, for example repos=c("https://ghrr.github.io/drat", getOption("repos")) as documented in the package and at our ghrr drat repo.

The detailed changes list follows.

Changes in version 0.2.1 (2022-04-09)

  • The rredis package can be installed via the repo listed in Additional_repositories; the pubsub.R test file makes rredis optional and conditional; all demos now note that the optional rredis package is installable via the drat listed in Additional_repositories.

  • The fallback-compilation of hiredis has been forced to override compilation flags because CRAN knows better than upstream.

  • The GLOBEX pub/sub example has small updates.

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 April, 2022 02:02PM

April 08, 2022

RcppEigen 0.3.3.9.2 on CRAN: Maintenance

A new release 0.3.3.9.2 of RcppEigen arrived on CRAN today (and already went to Debian). Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.

This update was (as it happens) requested by CRAN as R aims to bring the Fortran / C interface to best practices. We call dgesdd twice in one example and use a character argument, and the-powers-that-be now prefer better control over that character argument. So we did. Another change, kindly contributed by Mikael Jagan, switches row and column indices for R_xlen_t allowing for greater range. Plus some more small tweaks mostly to CI, see the NEWS entry below for full details.

And again as we said for the previous three releases:

One additional and recent change was the accomodation of a recent CRAN Policy change to not allow gcc or clang to mess with diagnostic messages. A word of caution: this may make your compilation of packages using RcppEigen very noisy so consider adding -Wno-ignored-attributes to the compiler flags added in your ~/.R/Makevars.

We still find this requirement rather annoying. Eigen is only usable if you set, say,

-Wno-deprecated-declarations -Wno-parentheses -Wno-ignored-attributes -Wno-unused-function

as options in ~/.R/Makevars. But CRAN makes the rules. Maybe if a few of us gently and politely nudge them they may relent one day. One can only hope.

The complete NEWS file entry follows.

Changes in RcppEigen version 0.3.3.9.2 (2022-04-05)

  • Added test coverage in continuous integration

  • Added new tests to increase test coverage

  • Small improvement to the RcppEigen.package.skeleton() code

  • Small updates and edits to README.md and inst/CITATION

  • Use R_xlen_t for vector rows and columns (by Mikael Jagan)

  • Support USE_FC_LEN_T by adding FCONE to two dgesdd calls

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 April, 2022 11:31PM

Reproducible Builds

Reproducible Builds in March 2022

Welcome to the March 2022 report from the Reproducible Builds project! In our monthly reports we outline the most important things that we have been up to over the past month.


The in-toto project was accepted as an “incubating project” within the Cloud Native Computing Foundation (CNCF). in-toto is a framework that protects the software supply chain by collecting and verifying relevant data. It does so by enabling libraries to collect information about software supply chain actions and then allowing software users and/or project managers to publish policies about software supply chain practices that can be verified before deploying or installing software. The CNCF hosts a number of critical components of the global technology infrastructure under the auspices of the Linux Foundation. (View full announcement.)


Hervé Boutemy posted to our mailing list with an announcement that the Java Reproducible Central has hit the milestone of “500 fully reproduced builds of upstream projects”. Indeed, at the time of writing, according to the nightly rebuild results, 530 releases were found to be fully reproducible, with 100% reproducible artifacts.


GitBOM is a relatively new project to enable build tools to trace every source file that is incorporated into build artifacts. As an experiment and/or proof-of-concept, the GitBOM developers are rebuilding Debian to generate side-channel build metadata for versions of Debian that have already been released. This only works because Debian is (partially) reproducible, so one can be sure that, in the case where build artifacts are identical, any metadata generated during these instrumented builds also applies to the binaries that were built and released in the past. More information on their approach is available in the README file in the bomsh repository.


Ludovic Courtes has published an academic paper discussing how the performance requirements of high-performance computing are not (as usually assumed) at odds with reproducible builds. The received wisdom is that vendor-specific libraries and platform-specific CPU extensions have resulted in a culture of local recompilation to ensure the best performance, rendering the property of reproducibility unobtainable or even meaningless. In his paper, Ludovic explains how Guix has:

[…] implemented what we call “package multi-versioning” for C/C++ software that lacks function multi-versioning and run-time dispatch […]. It is another way to ensure that users do not have to trade reproducibility for performance. (full PDF)


Kit Martin posted to the FOSSA blog a post titled The Three Pillars of Reproducible Builds. Inspired by the “shock of infiltrated or intentionally broken NPM packages, supply chain attacks, long-unnoticed backdoors”, the post goes on to outline the high-level steps that lead to a reproducible build:

It is one thing to talk about reproducible builds and how they strengthen software supply chain security, but it’s quite another to effectively configure a reproducible build. Concrete steps for specific languages are a far larger topic than can be covered in a single blog post, but today we’ll be talking about some guiding principles when designing reproducible builds. []

The article was discussed on Hacker News.


Finally, Bernhard M. Wiedemann noticed that the GNU Helloworld project builds differently depending on whether it is built during a full moon! (Reddit announcement, openSUSE bug report)


Events

There will be an in-person “Debian Reunion” in Hamburg, Germany later this year, taking place from 23 — 30 May. Although this is a “Debian” event, there will be some folks from the broader Reproducible Builds community and, of course, everyone is welcome. Please see the event page on the Debian wiki for more information.

Bernhard M. Wiedemann posted to our mailing list about a meetup for Reproducible Builds folks at the openSUSE conference in Nuremberg, Germany.

It was also recently announced that DebConf22 will take place this year as an in-person conference in Prizren, Kosovo. The pre-conference meeting (or “Debcamp”) will take place from 10 — 16 July, and the main talks, workshops, etc. will take place from 17 — 24 July.

Misc news

Holger Levsen updated the Reproducible Builds website to improve the documentation for the SOURCE_DATE_EPOCH environment variable, both by expanding parts of the existing text [][] as well as clarifying meaning by removing text in other places []. In addition, Chris Lamb added a Twitter Card to our website’s metadata too [][][].

On our mailing list this month:


Distribution work

In Debian this month:

  • Johannes Schauer Marin Rodrigues posted to the debian-devel list mentioning that he exploited the property of reproducibility within Debian to demonstrate that automatically converting a large number of packages to a new internal “source version” did not change the resulting packages. The proposed change could therefore be applied without causing breakage:

So now we have 364 source packages for which we have a patch and for which we can show that this patch does not change the build output. Do you agree that with those two properties, the advantages of the 3.0 (quilt) format are sufficient such that the change shall be implemented at least for those 364? []

In openSUSE, Bernhard M. Wiedemann posted his usual monthly reproducible builds status report.


Tooling

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 207, 208 and 209 to Debian unstable, as well as made the following changes to the code itself:

  • Update minimum version of Black to prevent test failure on Ubuntu jammy. []

  • Updated the R test fixture for the 4.2.x series of the R programming language. []

Brent Spillner also worked on adding graceful handling for UNIX sockets and named pipes to diffoscope. [][][]. Vagrant Cascadian also updated the diffoscope package in GNU Guix. [][]

reprotest is the Reproducible Builds project’s end-user tool to build the same source code twice in widely different environments and check whether the binaries produced by the builds have any differences. This month, Santiago Ruano Rincón added a new --append-build-command option [], which was subsequently uploaded to Debian unstable by Holger Levsen.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Holger Levsen:

    • Replace a local copy of the dsa-check-running-kernel script with a packaged version. []
    • Don’t hide the status of offline hosts in the Jenkins shell monitor. []
    • Detect undefined service problems in the node health check. []
    • Update the sources.lst file for our mail server as it's still running Debian buster. []
    • Add our mail server to our node inventory so it is included in the Jenkins maintenance processes. []
    • Remove the debsecan package everywhere; it got installed accidentally via the Recommends relation. []
    • Document the usage of the osuosl174 host. []

Regular node maintenance was also performed by Holger Levsen [], Vagrant Cascadian [][][] and Mattia Rizzolo.


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

08 April, 2022 08:14AM

Jacob Adams

The Unexpected Importance of the Trailing Slash

For many using Unix-derived systems today, we take for granted that /some/path and /some/path/ are the same. Most shells will even add a trailing slash for you when you press the Tab key after the name of a directory or a symbolic link to one.

However, many programs treat these two paths as subtly different in certain cases, which I outline below, as all three have tripped me up in various ways1.

POSIX and Coreutils

Perhaps the trickiest use of the trailing slash in a distinguishing way is in POSIX2 which states:

When the final component of a pathname is a symbolic link, the standard requires that a trailing <slash> causes the link to be followed. This is the behavior of historical implementations3. For example, for /a/b and /a/b/, if /a/b is a symbolic link to a directory, then /a/b refers to the symbolic link, and /a/b/ refers to the directory to which the symbolic link points.

This leads to some unexpected behavior. For example, if you have the following structure of a directory dir containing a file dirfile with a symbolic link link pointing to dir. (which will be used in all shell examples throughout this article):

$ ls -lR
.:
total 4
drwxr-xr-x 2 jacob jacob 4096 Apr  3 00:00 dir
lrwxrwxrwx 1 jacob jacob    3 Apr  3 00:00 link -> dir

./dir:
total 0
-rw-r--r-- 1 jacob jacob 0 Apr  3 00:12 dirfile

On Unixes such as MacOS, FreeBSD or Illumos4, you can move a directory through a symbolic link by using a trailing slash:

$ mv link/ otherdir
$ ls
link	otherdir

On Linux5, mv will not “rename the indirectly referenced directory and not the symbolic link” when given a symbolic link with a trailing slash as the source to be renamed, despite the coreutils documentation’s claims to the contrary6, instead failing with Not a directory:

$ mv link/ other
mv: cannot move 'link/' to 'other': Not a directory
$ mkdir otherdir
$ mv link/ otherdir
mv: cannot move 'link/' to 'otherdir/link': Not a directory
$ mv link/ otherdir/
mv: cannot move 'link/' to 'otherdir/link': Not a directory
$ mv link otherdirlink
$ ls -l otherdirlink
lrwxrwxrwx 1 jacob jacob 3 Apr  3 00:13 otherdirlink -> dir

This is probably for the best, as it is very confusing behavior. There is still one advantage the trailing slash has when using mv, even on Linux, in that it does not allow you to move a file to a non-existent directory, or move a file that you expect to be a directory that isn’t.

$ mv dir/dirfile nonedir/
mv: cannot move 'dir/dirfile' to 'nonedir/': Not a directory
$ touch otherfile
$ mv otherfile/ dir
mv: cannot stat 'otherfile/': Not a directory
$ mv otherfile dir
$ ls dir
dirfile  otherfile

However, Linux still exhibits some confusing behavior of its own, like when you attempt to remove link recursively with a trailing slash:

rm -rvf link/

Neither link nor dir are removed, but the contents of dir are removed:

removed 'link/dirfile'

Whereas if you remove the trailing slash, you just remove the symbolic link:

$ rm -rvf link
removed 'link'

While on MacOS, FreeBSD or Illumos4, rm will also remove the source directory:

$ rm -rvf link
link/dirfile
link/
$ ls
link

The find and ls commands, in contrast, behave the same on all three operating systems.

The find command only searches the contents of the directory a symbolic link points to if the trailing slash is added:

$ find link -name dirfile
$ find link/ -name dirfile
link/dirfile

The ls command acts similarly, showing information on just a symbolic link by itself unless a trailing slash is added, at which point it shows the contents of the directory that it links to:

$ ls -l link
lrwxrwxrwx 1 jacob jacob 3 Apr  3 00:13 link -> dir
$ ls -l link/
total 0
-rw-r--r-- 1 jacob jacob 0 Apr  3 00:13 dirfile
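The same POSIX rule is visible programmatically; here is a small Python sketch (recreating the dir/link layout above inside a throwaway temporary directory) showing that lstat() describes the link itself without the trailing slash, but follows it when one is present:

```python
import os
import stat
import tempfile

# Recreate the dir/link layout from the shell examples in a temp directory.
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, "dir"))
os.symlink("dir", os.path.join(tmp, "link"))
link = os.path.join(tmp, "link")

# Without a trailing slash, lstat() describes the symbolic link itself.
print(stat.S_ISLNK(os.lstat(link).st_mode))        # True

# With a trailing slash, the final component is resolved first,
# so we end up stat()ing the directory the link points to.
print(stat.S_ISDIR(os.lstat(link + "/").st_mode))  # True
```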

rsync

The command rsync handles a trailing slash in an unusual way that trips up many new users. The rsync man page notes:

You can think of a trailing / on a source as meaning “copy the contents of this directory” as opposed to “copy the directory by name”, but in both cases the attributes of the containing directory are transferred to the containing directory on the destination.

That is to say, if we had two folders a and b each of which contained some files:

$ ls -R .
.:
a  b

./a:
a1  a2

./b:
b1  b2

Running rsync -av a b copies the entire directory a into directory b:

$ rsync -av a b
sending incremental file list
a/
a/a1
a/a2

sent 181 bytes  received 58 bytes  478.00 bytes/sec
total size is 0  speedup is 0.00
$ ls -R b
b:
a  b1  b2

b/a:
a1  a2

While running rsync -av a/ b copies the contents of directory a into b:

$ rsync -av a/ b
sending incremental file list
./
a1
a2

sent 170 bytes  received 57 bytes  454.00 bytes/sec
total size is 0  speedup is 0.00
$ ls b
a1  a2	b1  b2

Dockerfile COPY

The Dockerfile COPY command also cares about the presence of the trailing slash, using it to determine whether the destination should be considered a file or directory.

The Docker documentation explains the rules of the command thusly:

COPY [--chown=<user>:<group>] <src>... <dest>

If <src> is a directory, the entire contents of the directory are copied, including filesystem metadata.

Note: The directory itself is not copied, just its contents.

If <src> is any other kind of file, it is copied individually along with its metadata. In this case, if <dest> ends with a trailing slash /, it will be considered a directory and the contents of <src> will be written at <dest>/base(<src>).

If multiple <src> resources are specified, either directly or due to the use of a wildcard, then <dest> must be a directory, and it must end with a slash /.

If <dest> does not end with a trailing slash, it will be considered a regular file and the contents of <src> will be written at <dest>.

If <dest> doesn’t exist, it is created along with all missing directories in its path.

This means if you had a COPY command that copied file to a nonexistent containerfile without the slash, it would create containerfile as a file with the contents of file.

COPY file /containerfile
container$ stat -c %F containerfile
regular empty file

Whereas if you add a trailing slash, then file will be added as a file under the new directory containerdir:

COPY file /containerdir/
container$ stat -c %F containerdir
directory

Interestingly, at no point can you copy a directory completely, only its contents. Thus if you wanted to make a directory in the new container, you need to specify its name in both the source and the destination:

COPY dir /dirincontainer
container$ stat -c %F /dirincontainer
directory

Dockerfiles do also make good use of the trailing slash to ensure they’re doing what you mean by requiring a trailing slash on the destination of multiple files:

COPY file otherfile /othercontainerdir

results in the following error:

When using COPY with more than one source file, the destination must be a directory and end with a /
  1. I’m sure there are probably more than just these three cases, but these are the three I’m familiar with. If you know of more, please tell me about them!

  2. Some additional relevant sections are the Path Resolution Appendix and the section on Symbolic Links

  3. The sentence “This is the behavior of historical implementations” implies that this probably originated in some ancient Unix derivative, possibly BSD or even the original Unix. I don’t really have a source on that though, so please reach out if you happen to have any more knowledge on what this refers to. 

  4. I tested on MacOS 11.6.5, FreeBSD 12.0 and OmniOS 5.11

  5. “unless the source is a directory trailing slashes give -ENOTDIR” 

  6. In fairness to the coreutils maintainers, it seems to be true on all other Unix platforms, but it probably deserves a mention in the documentation when Linux is the most common platform on which coreutils is used. I should submit a patch. 

08 April, 2022 12:00AM

April 07, 2022

hackergotchi for Gunnar Wolf

Gunnar Wolf

How is the free firmware for the Raspberry progressing?

Raspberry Pi computers require a piece of non-free software to boot — the infamous raspi-firmware package. But for almost as long as there has been a Raspberry Pi to talk of (this year it turns 10 years old!), there have been efforts to get it to boot using only free software. How is it progressing?

Michael Bishop (IRC user clever) explained today in the #debian-raspberrypi channel in OFTC that it advances far better than I expected: It is even possible to boot a usable system under the RPi2 family! Just… There is somewhat incomplete hardware support: For his testing, he has managed to use an xfce environment — but over the composite (NTSC) video output, as HDMI initialization support is not there.

However, he shared with me several interesting links and videos, and I told him I’d share them — there are still many issues; I do not believe it is currently worth it to make Debian images with this firmware.

Before anything else: Go visit the librerpi/lk-overlay repository. Its README outlines hardware support for each of the RPi families; there is a binary build available with nixos if you want to try it out, and instructions to build it.

But what clever showed me that made me write this post… is the amount of stuff you can do with the RPi’s VPU (why Vision Vector Processing Unit and not the more familiar GPU, Graphical Processing Unit? I don’t really know… But I trust clever’s definitions beyond how I trust my own 😉) before it loads an operating system:

There’s not too much I can add to this. I was just… Truly amazed. And I hope to see the remaining hurdles for “regular” Linux booting on this range of machines with purely free software quickly go away!

Packaging this for Debian? Well, not yet… not so fast ☹ I first told clever we could push this firmware to experimental instead of unstable, as it is not yet ready for most production systems. However, pabs asked some spot-on further questions. And… yes, it requires installing three(!) different cross-compilers, one of which, vc4-toolchain for the VPU, is free software but not yet upstreamed, and hence not available in Debian.

Anyway, the talk continued long after I had to go. I have gone a bit over the backlog, but I have to leave now – so that will be it as for this blog post 😉

07 April, 2022 06:40PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Ubuntu plocate security review

Seemingly, the Ubuntu security team made a (quick!) review of plocate prior to inclusion in main. I'm pretty happy about the result:

I reviewed plocate 1.1.15-1ubuntu2 as checked into jammy. This shouldn't be
considered a full audit but rather a quick gauge of maintainability.

plocate is a locate implementation based on posting lists and io_uring,
intended as a drop-in replacement for mlocate.

- No CVE History.
- Build-Depends on liburing and libzstd
- The pre/post inst/rm scripts adds a plocate group, sets up
  alternatives to place it as the locate, and sets up the systemd timer.
  Things are cleaned up in the pre/post-rm scripts.
- No init scripts.
- One systemd timer and service to run updatedb
- No dbus services
- No setuid binaries, plocate binary is setgid.
- binaries in PATH: plocate, plocate-build, and updatedb.plocate
- No sudo fragments
- No polkit files
- No udev rules
- test
  - no unit or other build-time tests
  - autopkgtests: a basic test plus a more complex test that tests
    visibility across differing users.
- One cron job that exits immediately because systemd timers are available.
- No build warnings or errors, lintian with one minor warning:
  command-with-path-in-maintainer-script

- No processes spawned.
- Memory management is okay, generally uses C++ style
  allocations / deallocations.
- File IO is mostly performed on static names or parsed out of
  /proc/self/mountinfo. The exception is the db argument to plocate;
  however, if alternate db files are passed, a child process that drops
  privilege is forked to search the passed db file.
- Logging is mostly done by perror, and is done safely.
- Environment variable usage is okay.
- Privileged functions (setgid) are used to drop privs and are okay
  (returned errors are checked for).
- No use of cryptography / random number sources.
- Sole use of temp files in database-builder is okay, uses O_TMPFILE if
  available.
- No use of networking.
- No use of WebKit.
- No use of PolicyKit.

- No significant cppcheck results.
- No significant Coverity results, a couple of issues that could possibly
  warrant further investigation. Recommend upstream project make use of
  the public https://scan.coverity.com service.

Code generally feels modern and readable.

Security team ACK for promoting plocate to main.

Not much is really happening in plocate these days, for the simple reason that most things work the way I'd want them to. Simple utilities like that reach a saturation point, and I guess that's fine.

07 April, 2022 03:13PM

April 06, 2022

Thorsten Alteholz

My Debian Activities in March 2022

FTP master

This month I accepted 332 and rejected 15 packages. This ratio gives a reason to hope. The overall number of packages that got accepted was 342.

Debian LTS

This was my ninety-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 40h. During that time I did LTS and normal security uploads of:

  • [DLA 2932-1] tiff security update for three CVEs
  • [DLA 2931-1] cyrus-sasl2 security update for one CVE
  • [DLA 2966-1] libgc security update for one CVE
  • [#1006493] bullseye-pu: htmldoc debdiff was approved and package uploaded
  • [#1006493] buster-pu: htmldoc debdiff was approved and package uploaded
  • [#1007938] buster-pu: cups/2.2.10-6+deb10u5
  • [#1007938] buster-pu: cups debdiff was approved and package uploaded
  • [#1008577] bullseye-pu: golang-github-russellhaering-goxmldsig/1.1.0-1+deb11u1
  • [#1008578] buster-pu: golang-github-russellhaering-goxmldsig/0.0~git20170911.b7efc62-1+deb10u1
  • [unstable] minidlna security update for one CVE

All my PU bugs for Buster and Bullseye, that accumulated over the last months, were part of the latest point release. So new ones have to be created now :-).

I also continued to work on security support for golang packages. As a result #1008577 and #1008578 were the first real tests with a simple package.

Debian ELTS

This month was the forty-fifth ELTS month.

During my allocated time I uploaded:

  • ELA-573-1 for cyrus-sasl2
  • ELA-589-1 for libgc

Unfortunately uploads have to be done for younger releases first, so I had to withhold some uploads for ELTS. Hopefully they can be done in April. Probably this policy needs to be reconsidered.

Last but not least I did some days of frontdesk duties.

Debian Printing

This month I uploaded new upstream versions or improved packaging of:

In order to make the Debian Edu team happy, I uploaded a new version of cups-filters with an adapted AppArmor file to Unstable and Bullseye.

Debian Astro

This month I uploaded new upstream versions or improved packaging of:

Other stuff

This month I uploaded new upstream versions or improved packaging of:

In order to avoid an AUTORM of some Osmocom packages, I also had to NMU:

06 April, 2022 05:01PM by alteholz

hackergotchi for Bits from Debian

Bits from Debian

Infomaniak Platinum Sponsor of DebConf22

infomaniaklogo

We are very pleased to announce that Infomaniak has committed to support DebConf22 as a Platinum sponsor. This is the fourth year in a row that Infomaniak is sponsoring The Debian Conference at the highest tier!

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

With this commitment as Platinum Sponsor, Infomaniak contributes to make possible our annual conference, and directly supports the progress of Debian and Free Software helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Infomaniak, for your support of DebConf22!

Become a sponsor too!

DebConf22 will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th.

And DebConf22 is still accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf22 website at https://debconf22.debconf.org/sponsors/become-a-sponsor.

DebConf22 banner open registration

06 April, 2022 10:30AM by The Debian Publicity Team

April 05, 2022

hackergotchi for Matthew Garrett

Matthew Garrett

Bearer tokens are just awful

As I mentioned last time, bearer tokens are not super compatible with a model in which every access is verified to ensure it's coming from a trusted device. Let's talk about that in a bit more detail.

First off, what is a bearer token? In its simplest form, it's simply an opaque blob that you give to a user after an authentication or authorisation challenge, and then they show it to you to prove that they should be allowed access to a resource. In theory you could just hand someone a randomly generated blob, but then you'd need to keep track of which blobs you've issued and when they should be expired and who they correspond to, so frequently this is actually done using JWTs which contain some base64 encoded JSON that describes the user and group membership and so on and then have a signature associated with them so whenever the user presents one you can just validate the signature and then assume that the contents of the JSON are trustworthy.

One thing to note here is that the crypto is purely between whoever issued the token and whoever validates the token - as far as the server is concerned, any client who can just show it the token is just fine as long as the signature is verified. There's no way to verify the client's state, so one of the core ideas of Zero Trust (that we verify that the client is in a trustworthy state on every access) is already violated.

Can we make things not terrible? Sure! We may not be able to validate the client state on every access, but we can validate the client state when we issue the token in the first place. When the user hits a login page, we do state validation according to whatever policy we want to enforce, and if the client violates that policy we refuse to issue a token to it. If the token has a sufficiently short lifetime then an attacker is only going to have a short period of time to use that token before it expires and then (with luck) they won't be able to get a new one because the state validation will fail.

Except! This is fine for cases where we control the issuance flow. What if we have a scenario where a third party authenticates the client (by verifying that they have a valid token issued by their ID provider) and then uses that to issue their own token that's much longer lived? Well, now the client has a long-lived token sitting on it. And if anyone copies that token to another device, they can now pretend to be that client.

This is, sadly, depressingly common. A lot of services will verify the user, and then issue an oauth token that'll expire some time around the heat death of the universe. If a client system is compromised and an attacker just copies that token to another system, they can continue to pretend to be the legitimate user until someone notices (which, depending on whether or not the service in question has any sort of audit logs, and whether you're paying any attention to them, may be once screenshots of your data show up on Twitter).

This is a problem! There's no way to fit a hosted service that behaves this way into a Zero Trust model - the best you can say is that a token was issued to a device that was, around that time, apparently trustworthy, and now it's some time later and you have literally no idea whether the device is still trustworthy or if the token is still even on that device.

But wait, there's more! Even if you're nowhere near doing any sort of Zero Trust stuff, imagine the case of a user having a bunch of tokens from multiple services on their laptop, and then they leave their laptop unlocked in a cafe while they head to the toilet and whoops it's not there any more, better assume that someone has access to all the data on there. How many services has our opportunistic new laptop owner gained access to as a result? How do we revoke all of the tokens that are sitting there on the local disk? Do you even have a policy for dealing with that?

There isn't a simple answer to all of these problems. Replacing bearer tokens with some sort of asymmetric cryptographic challenge to the client would at least let us tie the tokens to a TPM or other secure enclave, and then we wouldn't have to worry about them being copied elsewhere. But that wouldn't help us if the client is compromised and the attacker simply keeps using the compromised client. The entire model of simply proving knowledge of a secret being sufficient to gain access to a resource is inherently incompatible with a desire for fine-grained trust verification on every access, but I don't see anything changing until we have a standard for third party services to be able to perform that trust verification against a customer's policy.
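Reduced to its bare shape, that challenge–response idea looks something like the sketch below. Again this is illustrative only: a plain in-memory key stands in for one sealed in a TPM or secure enclave, and HMAC stands in for a real asymmetric signature:

```python
import hashlib
import hmac
import os

# In the real scheme this key would live inside a TPM/secure enclave and
# never leave the device; a bare bytes object stands in for it here.
client_key = os.urandom(32)

def server_challenge():
    # A fresh nonce per access, so a captured response can't be replayed.
    return os.urandom(16)

def client_respond(nonce):
    # The device proves possession of the key without revealing it.
    return hmac.new(client_key, nonce, hashlib.sha256).digest()

def server_verify(nonce, response):
    expected = hmac.new(client_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

nonce = server_challenge()
print(server_verify(nonce, client_respond(nonce)))  # True
```

Unlike a bearer token, the response is useless away from the device holding the key; but, as noted above, it still doesn't help once the device itself is compromised.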

Still, at least this means I can just run weird Android IoT apps through mitmproxy, pull the bearer token out of the request headers and then start poking the remote API with curl. It may all be broken, but it's also got me a bunch of bug bounty credit, so, it;s impossible to say if its bad or not,

(Addendum: this suggestion that we solve the hardware binding problem by simply passing all the network traffic through some sort of local enclave that could see tokens being set and would then sequester them and reinject them into later requests is OBVIOUSLY HORRIFYING and is also probably going to be at least three startup pitches by the end of next week)

comment count unavailable comments

05 April, 2022 06:54AM