June 23, 2021

Dirk Eddelbuettel

RcppGSL 0.3.9: Polish and More Builds

Release 0.3.9 of the RcppGSL package arrived at CRAN today, pretty much exactly one year since the last upload. The RcppGSL package provides an interface from R to the GNU GSL by relying on the Rcpp package.

This release brings some small documentation and CI polish, and enables builds on the newer (and still experimental) Windows ‘UCRT’ flavor (which will bring native UTF-8 characters to Windows; see this and this write-up), thanks to a PR by Jeroen.

Changes in version 0.3.9 (2021-06-23)

  • The pdf vignette was extended by a small subsection (Dirk).

  • The CI setup was updated to use run.sh from r-ci (Dirk).

  • The Windows build was updated to GSL 2.7, and UCRT support was added (Jeroen in #28).

Courtesy of CRANberries, a summary of changes to the most recent release is also available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 June, 2021 10:33PM

Enrico Zini

Transilience check mode

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

I added check mode to Transilience, to do everything except perform changes, like Ansible does:

$ ./provision --help
usage: provision [-h] [-v] [--debug] [-C] [--to-python role]

Provision my VPS

optional arguments:
  -h, --help        show this help message and exit
  -v, --verbose     verbose output
  --debug           verbose output
  -C, --check       do not perform changes, but check if changes would be  ← NEW!
                    needed                                                 ← NEW!

It was quite straightforward to add a new field to the base Action class, and to tweak the implementations not to perform changes when it is True:

# Imports needed by this snippet
from dataclasses import dataclass, field
from typing import Any

# Shortcut function to annotate dataclass fields with documentation metadata
def doc(default: Any, doc: str, **kw):
    return field(default=default, metadata={"doc": doc})


@dataclass
class Action:
    ...
    check: bool = doc(False, "when True, check if the action would perform changes, but do nothing")
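
As an illustration, a hypothetical action implementation (not Transilience's actual code; the class and method names below are made up) could honour the flag like this:

import os
from dataclasses import dataclass

# Hypothetical action, for illustration only: it reuses the Action base
# class and the doc() helper from the snippet above.
@dataclass
class Touch(Action):
    path: str = doc("", "file to create if it does not exist")

    def run(self):
        if os.path.exists(self.path):
            return      # already in the desired state, nothing to do
        # A change would be needed: in check mode we stop here,
        # otherwise we actually perform it.
        if self.check:
            return
        with open(self.path, "wb"):
            pass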

Like with Ansible, check mode takes about the same time as a normal run which does not perform changes.

Unlike Ansible, with Transilience this is actually pretty fast! ;)

23 June, 2021 09:22PM

Louis-Philippe Véronneau

Hardening Weechat Relays Against RCE on Bullseye

I've been using weechat to connect to IRC since late 2016 and one of its killer features is relays. They let you use other frontends like the Weechat Android app or the amazing Glowing Bear (packaged in Debian Bullseye by yours truly).

Sadly, relays also used to be somewhat of a security risk: anyone with access to a relay¹ could run scripts on the machine running weechat by using commands such as /exec or /script. Not great.

Since version 2.5 (Buster had version 2.3), you can mitigate this risk by setting a command allowlist for relays. Later versions implemented a sane default by blocking the following commands:

  • /exec
  • /fset
  • /set
  • /unset
  • /plugin
  • /script
  • /python
  • /perl
  • /ruby
  • /lua
  • /tcl
  • /guile
  • /javascript
  • /php
  • /secure
  • /upgrade
  • /quit

Sadly, this default didn't make it into Bullseye. If you are running weechat and are using the relays feature, I would recommend you run the following commands in the weechat TUI after upgrading to Bullseye:

/set relay.weechat.commands *,!exec,!fset,!set,!unset,!plugin,!script,!python,!perl,!ruby,!lua,!tcl,!guile,!javascript,!php,!secure,!upgrade,!quit
/save

  1. For example, someone steals your phone and connects to IRC via the Weechat app... 

23 June, 2021 07:45PM by Louis-Philippe Véronneau

June 22, 2021

Lisandro Damián Nicanor Pérez Meyer

Creating an app with QML: a heater control

Last week I took the ICS course "Building an Embedded Application with Qt" and now it's time to put the gained knowledge into action. I decided to create an application to (simulate?) a heater control. Why? Because I have a very basic one at home, and I always dreamed of getting something better. So time to implement it.


Requirements

General

Try to do the business logic in C++ as much as possible.

Thermostat

  • 0.1 ºC resolution.
  • Adjustable hysteresis in the range ±[0.1, 1.0] ºC.

Temperature profiles

  • 3 profiles to select (in fact I think 2 would be enough, but one more shouldn't hurt).
  • Each profile will have its own target temperature and hysteresis.

Profile selection

The system should be able to configure the desired profile in 15-minute slots for each day (Monday to Sunday), so 96 slots per day (24 hours × 4 slots per hour).

Manual button

Sometimes I want to override the predefined configuration for a special situation. This button should allow me to set a new target temperature in a time slot from 15 minutes to 3 hours.

Heater on indicator

We want to know when the heater is being commanded to turn on.

Temperature sensor

This is not strictly decided yet, so the ability to use different kinds of sensors, one at a time, would be nice.


User interface

I did some sketches of what I would want as a UI. But I'm not a graphic designer, so I will first do a very simple but functional UI and then try to switch to a better-designed one. I dream of emulating a horizontal disc gauge, the kind where the user sees the edge of the disc and its center shows the current temperature, perhaps even with a magnifier in the middle. Something like this, but with the needle fixed in the center.


Implementing the idea

The backend

Code repository at GitLab.

My first steps were to implement the C++ backend: a Settings class and a TemperatureEngine class.

For the temperature sensor I decided to make a very simple AbstractTemperatureSensor class and also implement a FakeTemperatureSensor. The latter will come in handy for running tests.

Later on I can implement other temperature sources, like reading an analog voltage from some GPIO, getting the data through MQTT, etc.
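
As a rough illustration (a hypothetical sketch, not the actual code in the repository), the sensor abstraction could look something like this:

#include <QObject>

// Hypothetical sketch of the sensor abstraction; the real classes in the
// repository may differ.
class AbstractTemperatureSensor : public QObject
{
    Q_OBJECT
public:
    using QObject::QObject;
    // Latest reading in ºC (0.1 ºC resolution).
    virtual double temperature() const = 0;

signals:
    void temperatureChanged(double celsius);
};

// Fake sensor: lets tests (or a simulation) inject readings by hand.
class FakeTemperatureSensor : public AbstractTemperatureSensor
{
    Q_OBJECT
public:
    using AbstractTemperatureSensor::AbstractTemperatureSensor;
    double temperature() const override { return m_celsius; }

    void setTemperature(double celsius)
    {
        m_celsius = celsius;
        emit temperatureChanged(m_celsius);
    }

private:
    double m_celsius = 20.0;
};

A GPIO- or MQTT-backed sensor would then simply be another subclass emitting the same signal.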

The GUI

That's definitely Work In Progress :-)

22 June, 2021 05:48PM by Lisandro Damián Nicanor Pérez Meyer

Jamie McClelland

How to Meet Online with Simultaneous Interpretation

May First Movement Technology has been running a public Jitsi Meet instance since well before the pandemic to support Internet-based, video meetings for folks who don't want to rely on corporate and proprietary infrastructure.

However (until this week - see below), we haven't been using it for our own meetings for one main reason: simultaneous interpretation. We're an international organization with roots in the US and Mexico and we are committed to building a bi-national leadership with a movement strategy that recognizes the symbolic and practical disaster of the US/Mexico border.

As a result, we simply can't hold a meeting without simultaneous interpretation between English and Spanish.

Up to now, we've worked out a creative way to have mumble meetings with simultaneous interpretation. In short: we have a room for interpretation. If you move into the interpretation room, you hear the interpreter. If you move into the main room, you hear the live voices of the participants. You can switch between rooms as needed. This approach is rock solid, and we benefit from mumble's excellent performance in low bandwidth situations and the availability of mumble clients on both Android and iPhones.

However, there are limitations, which include:

  • You can't hear both the live voices and the interpretation at the same time: it's one or the other. If you are in a face-to-face meeting and receiving interpretation via headphones, you can see the person talking and even remove the headphones from one ear to get a sense of the tone and emotions of the speaker. Not with mumble. In fact, you can't even tell who is speaking.

  • Two chat rooms: If you chat to the live group, it's only seen by the live group. If you chat with the interpretation group, it's only seen by the interpretation group.

  • No video in mumble: well, some people consider this a positive. I'll leave it at that.

After years of reviewing the many dead-end threads and issue requests around simultaneous interpretation on the Jitsi boards (and the Big Blue Button boards for that matter) I finally came across the thread that led to the pull request that changed everything.

With the ability to control local volume via the Jitsi Meet API, I was able to pull together a very small amount of code to produce Jitsi Simultaneous Interpretation (JSI) - a way to run your Jitsi Meet server with an interpretation slider at the top allowing you to set the volume of the interpreter at any time during the meeting.
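
To give a flavour of what that small amount of code does, here is a hedged sketch against the Jitsi Meet IFrame API; it is not JSI's actual source, and the setParticipantVolume call (volume between 0 and 1) is my assumption about the method that pull request exposed:

// Hypothetical sketch, not JSI's actual code.
// Assumes external_api.js from your Jitsi Meet server is already loaded.
const api = new JitsiMeetExternalAPI('meet.example.org', {
    roomName: 'interpreted-meeting',
    parentNode: document.querySelector('#meet'),
});

// Remember the interpreter's participant id as people join
// (matching by display name is simplistic, purely for illustration).
let interpreterId = null;
api.addEventListener('participantJoined', ({ id, displayName }) => {
    if (displayName === 'Interpreter') interpreterId = id;
});

// The slider at the top of the page sets the interpreter's local volume.
const slider = document.querySelector('#interpreter-volume'); // <input type="range" min="0" max="100">
slider.addEventListener('input', () => {
    if (interpreterId !== null) {
        // Assumption: setParticipantVolume() takes a float between 0 and 1.
        api.setParticipantVolume(interpreterId, slider.value / 100);
    }
});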

It's still not perfect - the main problem is that you can't use any of the Jitsi Meet apps - so it runs well on most desktops, but when it comes to cell phones, it only runs (in the browser) on modern Android phones.

22 June, 2021 01:15PM

June 21, 2021

Junichi Uekawa

Updated my simple web recording app to support multiple cameras and selecting between them.

Updated my simple web recording app to support multiple cameras and selecting between them. WebRecord. I didn't need it until today because I didn't often use a device with more than one camera, but I got hold of an HDMI USB capture device and that changed the game.

21 June, 2021 11:32PM by Junichi Uekawa

Shirish Agarwal

Accessibility, Freenode and American imperialism.

Accessibility

This is perhaps one of the strangest and yet also perhaps the most straightforward ways to start the blog post. For the past weeks/months, I have had a strange experience. I have been using a Logitech wireless keyboard and mouse for almost a decade. Now, for the past few months and weeks, we have observed a somewhat rare ‘phenomenon’. Between us we have a single desktop computer, so my mum and I take turns on the desktop. At times, however, the system sits idle and goes to low-power/sleep mode after 30 minutes.

Then, when you want to come back, you obviously have to give your login credentials. At times, the keyboard refuses to input any data in the login screen. Interestingly, the mouse still functions. Much more interesting is the fact that both the mouse and the keyboard use the same transceiver sensor to send data. And I had changed batteries to ensure it was not a power issue but still no input :(.

While my mother uses the power switch (I did teach her how to hold it for a few minutes and then let it go), I tried another thing myself. Using the mouse, I logged off the session, thinking that perhaps some race condition or something in the session was not letting keystrokes be inputted into the system, and that having a new session might resolve it. But this was not to be 😦

Luckily, on the screen you do have the option to reboot or power off. I did a reboot and, lo and behold, the system was able to input characters again. And this has happened time and again. I tried to find GOK, forgetting that GOK had been retired. I looked up the accessibility page on the Debian wiki. Very interesting and very detailed, but sadly it did not and does not provide the backup I needed. I tried out florence but found that the app is buggy.

Moreover, the instructions provided on the lightdm screen do not work. I do not get the on-screen keyboard even though I followed the instructions. Just to be clear, this is all on Debian testing, which is gonna be Debian stable soonish 🙂

I even tried the same with xvkbd, but to no avail. I do use MATE as my desktop environment, so maybe the instructions need some refinement ????

$ cat /etc/lightdm/lightdm-gtk-greeter.conf | grep keyboard
# a11y-states = states of accessibility features: "name" - save state on exit, "-name" - disabled at start (default value for unlisted), "+name" - enabled at start. Allowed names: contrast, font, keyboard, reader.
keyboard=xvkbd --no-gnome --focus &
# keyboard-position = x y[;width height] ("50%,center -0;50% 25%" by default) Works only for "onboard"
#keyboard=

Interestingly, Debian does provide two more on-screen keyboards, matchbox as well as ‘onboard’, which comes from Ubuntu, and I have both of them installed. I find xvkbd to be enough for my work; the only issue seems to be that I cannot get it from the accessibility drop-down box at the login screen. Just to make sure that I have not switched to the GNOME display manager, I did run

$ sudo dpkg-reconfigure gdm3

Only to find out that I am indeed running lightdm. So I am a bit confused why it doesn’t come up as an option when I have the login window/login manager running. FWIW I do run metacity as the window manager as it plays nice with all the various desktop environments I have, almost all of them.
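
For what it's worth, my best guess so far (untested, and the onboard flags are an assumption on my part) is that the greeter wants an explicit stanza along these lines in /etc/lightdm/lightdm-gtk-greeter.conf:

[greeter]
# show and enable the keyboard accessibility item by default (per the a11y-states comment above)
a11y-states = +keyboard
# embed onboard into the greeter; --xid makes onboard render into a window the greeter can embed
keyboard = onboard --xid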

So this is where I’m stuck. If I do get any help, I probably would also add those instructions to the wiki page, so it would be convenient to the next person who comes with the same issue.

I also need to figure out some way to know whether there is some race condition or something else happening; I have no clue how I would go about that without generating a whole lot of noise. I am sure there are others who may have more of an idea. FWIW, I did search unix.stackexchange as well as reddit/debian to see if I could find any meaningful posts, but came up empty.

Freenode

I had not been using IRC for quite some time now. The reason has been multiple issues with Riot (now Element) taking up the whole space on my desktop. I only got alerted to the whole thing about a week after it went down, when somebody sent me a DM. I *think* I had put up a thread or a mini-thread about IRC or something in response to somebody praising Telegram/WhatsApp or one of those apps. That probably triggered the DM.

It took me a couple of minutes to hit upon this. I was angry and depressed, seeing the behavior of the new overlords of freenode. I did see that a lot of channels moved over to Libera.

It was also interesting to see that some communities were thinking of moving to some other obscure platform, which again could be held hostage to the same thing. One could argue one way or the other, but that would be tiresome, and the fact is any network needs a lot of help to be grown and nurtured, whether it is online or offline.

I also saw that Libera is using a piece of software called Solanum, which is IRCv3 compliant. Having done this initial investigation, it was time to move to an IRC client. The Libera documentation is and was pretty helpful in telling which IRC clients would work well with their network. So I first tried hexchat: I installed it and tried to add the Libera server credentials, but it didn't work. I did see that they had fixed the bug in sid/unstable and that the fix is now in testing, but at the time the fix was only in sid and I wanted something which just ran the damn thing. I chanced upon quassel. I had played around with quassel quite a number of times before, so I knew I could use it. Hence, I installed it and was able to use it on the first try. I did use the encrypted server and just had to tweak some settings, with some help from their documentation, before I could use it. Although, I have to say that even quassel upstream needs to get its documentation in order: it is just all over the place, and they haven't put any effort into streamlining it so that finding things becomes easier. But that can be said of many upstream projects.

There is one thing, though, that all of these IRC clients lack: a password manager. Until that is fixed it will suck, because you need another secure place to put your password(s). You either put it on your desktop somewhere (insecure) or store it in the cloud somewhere (somewhat secure, but then you again need to remember that password); whatever you do is extra work. I am sure there will be a day when authenticating with NickServ will be an automated task and people can just get on with talking on channels and figuring out how to be part of the various communities. As can be seen, even now there is a bit of a learning curve for both newbies and people who know a bit about systems to get it working.

Now, I know there are a lot of things that need to be fixed on the anonymity and security front, if I put on that sort of hat. For example, wouldn't it be cool if either the IRC client or one of its add-ons gave out throwaway usernames and passwords? The passwords would be complex. This would make things easier for those who are paranoid about security, and many are and would be. As an example, consider Fuchs.

Now if the gentleman or lady is working in a professional capacity and others were to come to know their real identity and perceive, rightly or wrongly, the role of that person, it would affect their career. Now, should it? I am sure a lot of people would be divided on the issue. Personally, as far as I am concerned, I would say no, because whether right or wrong, whatever they were doing they were doing on their own time, not on company time. So it doesn't concern the company at all. If we were to let companies police behavior outside working hours, individuals would be in a lot of trouble.

Although, I have to say that is a trend that has been seen in companies, which are firing people on both the left and the right. A recent example that comes to mind is Emily Wilder, who was fired by the Associated Press. Interestingly, she was interviewed by Democracy Now!, and it did come out that she is a Jew. As can be seen and understood, there is a lot of nuance to her story, though not to the way she was fired. It doesn't leave a good taste in the mouth, but then getting fired never does. On a few forums, people shared stories of people getting fired from their jobs because they were dancing (cops). Again, it all depends; for me, again, hats off to anybody who feels like dancing or whatever, because there are just so many depressing stories all around.

Banned and FOE

On a few forums I was banned because I was talking about Brexit and American imperialism, both of which seem to ruffle a few feathers in quite a few places. For instance, many people, for obvious reasons, do not like this video –

Now I’m sorry I am not able to and have not been able to give invidious links for the past few months. The reason being invidious itself went through some changes and the changes are good and bad. For e.g. now you need to share your google id with a third-party which at least to my mind is not a good idea. But that probably is another story altogether and it probably will need its own place.

Coming back to the video itself, it was shared by Anthony Hazard and the title is “The Atlantic slave trade: What too few textbooks told you”. I did see this video quite a few years ago and still find it hard to swallow that tens of millions of Africans were brought as slaves to the Americas, although to be fair it does start with the Spanish settlement in the land which would later be called the U.S., and they brought slaves with them. They even enslaved the American natives, i.e. people from the different tribes which made up America at that point. One point to note is that the U.S. got its independence on July 4, 1776, so all the people before that were called European settlers, for want of a better word. Some or many of these European settlers would have been convicts who were sent from the UK. But, as shared in the article, that discussion will only happen once the U.S. itself is mature and open enough for it. Going back to the original point though, these ‘European or American settlers’ brought a lot of slaves from Africa. The video also sheds light on some of the cruelty the Europeans or Americans inflicted on the slaves, men and women, in different ways. The most revelatory part though, which I also forget many a time, is that because a lot of people were taken from Africa, and many of them men, it led to imbalances in African societies, not just in marriages but in economics in general. It also gave rise to a theory that tries to paint Africans as an inferior race, because otherwise how would Christianity work, when their own good book says ‘All men are born equal’.

That does in part explain why the African countries are still so far behind their European or American counterparts. But Africa can still be proud, as they are richer than us, yup, India. Sadly, I don't think America is ready to have that conversation anytime soon, if ever. And if it were to, it would have to out-do any truth and reconciliation committee which the world has seen. A mere apology or two would just not cut it. The problems of America sadly are not limited to just Africans but also the natives of the land, for example the Lakota people. In 1868, the U.S. signed a treaty stating it would give the land to the Lakota people forever, but then the gold rush happened. In 2007, when the Lakota stated their proposal for independence, the U.S. denied it through force. So much for the paper it was written on.

Now, from what I have come to know over the years, the American natives are called ‘First Nations’. Time and time again the American Govt. has been foul towards them. Some examples include the ‘Yucca Mountain nuclear waste repository’. The same is and was the case with the Keystone pipeline, which is now dead. Now, one could say that these are America's internal matters, and I would fully agree, but when they speak of the internal matters of other countries, then we should have the same freedom.

But this is not restricted to just internal matters, sadly. Since the 1950s, i.e. the advent of the Cold War, America's foreign policy has driven regime changes all around the world. Sharing some of the examples from the Cold War –

Iran 1953
Guatemala 1954
Democratic Republic of the Congo 1960
Republic of Ghana 1966
Iraq 1968
Chile 1973
Argentina 1976
Afghanistan 1978-1980s
Grenada
Nicaragua 1981-1990
1. Destabilization through CIA assets
2. Arming the Contras
El Salvador 1980-92
Philippines 1986

Even after the Cold War ended the situation was analogous, meaning they still continued with their old behavior.

After the end of Cold War

Guatemala 1993
Serbia 2000
Iraq 2003-
Afghanistan – 2001 – ongoing

There is a helpful Wikipedia article titled ‘History of the CIA’ which basically lists most of the covert regime changes done by the U.S. The above is merely a subset of the actions done by the U.S.

Now, are all the behaviors above those of a ‘civilized nation’? And if one cares to notice, one would notice that all the countries in the list above which had regime changes had either oil or precious metals. So the U.S. is and was being what it accuses China of being: a profiteer. But this isn't just a U.S. – China story; it is more about the American abuse of its power.

My own country, India, paid IMF loans till 1991, and we paid through the nose. There were economic sanctions against India. But then, this is again not just about the U.S. and India. Even with Europe, or more precisely Norway, which didn't want to side with America because their intelligence showed that no WMDs were present in Iraq, the relationship still has issues.

Pandemic and the World

So I do find this whole blaming of China by the U.S. quite theatrical and full of double and triple standards. Very early during the debates, it came to light that the ‘Spanish Flu’ actually originated in Kansas, U.S.

What was also interesting, as I found in the ‘Pentagon Papers’ (which came out before ‘The Watergate scandal’), is that the U.S. had realized that China would be more of a competitor than Russia. And this was in the 1960s. This shows the level of intelligence that the Americans had. From what I can recollect of whatever I have read of that era, China was still mostly an agrarian economy. So how the U.S. was able to deduce that China would surpass other economies is beyond me even now. They surely must have known something that even we today do not.

One of the other interesting observations and understandings that I got while researching is that every year we transfer an average of 7,500 diseases from animals to humans, and that should be a scary figure. I think more than anything else, loss of habitat and the use of animals for everything from food to clothing to medicine is probably the reason we are getting such diseases. I am also sure that there probably is, and has been, a similar number of transfers of diseases from humans to animals, but because of well-known biases and whatnot those studies are either not done or are under-funded. There are and have been reports of something like 850,000 undiscovered viruses which various mammals and birds carry.

Also, I did find that the origins of most such pandemics are hard to identify; for example, SARS 1 took about 15 years, and for Ebola we don't know to this date where it came from. Even HIV still holds questions for us. Hell, even why hearing goes away is a mystery to us. In all of this, we want to say China is culpable. And while China may or may not be culpable (only time will tell), this is surely the opportunity for all countries to spend on and build capacity in public health. Countries which take lessons from it and improve their public healthcare models will hopefully not suffer like those who are suffering and will continue to suffer now 😦

To those who feel that habitat loss of animals is untrue, I would suggest watching ‘Sherni’, which depicts the human/animal conflict in all its brutality. I am gonna warn in advance that the ending is not nice, but what can you expect from a country in which forest cover has constantly declined and the Govt. itself is only interested in headline management 😦

The only positive story I can share from India is that the Modi Govt. has finally said it will do free vaccine immunization for everybody, although the pace is nothing to write home about. One additional thing they relaxed is that instead of going to CoWIN or any other portal, people can simply walk in using their identity papers. Still, given the pace of vaccinations, it is going to take anywhere between 13 and 18 months or more, depending on the availability of vaccines.

Looking forward to any and all replies on how to have a virtual keyboard at the login screen, preferably xvkbd, as that is good enough for my use-case.

21 June, 2021 09:04PM by shirishag75

Lisandro Damián Nicanor Pérez Meyer

First steps into QML

After years of using and maintaining Qt there was a piece of the SDK that I never got to use as a developer: QML. Thanks to ICS I took the free (in the sense of cost) course QML Programming — Fundamentals and Beyond.

It consists of seven sessions, which can be easily done in a few days. I did them all in 4 days, but with enough time available you can do them even faster. Of course some previous knowledge of Qt comes handy.

The only drawback was the need for a corporate e-mail address in order to register (or at least the webpage says so). Apart from that it is really worth the effort. So, if you are planning on getting into QML this is definitely a nice way to start.

21 June, 2021 05:48PM by Lisandro Damián Nicanor Pérez Meyer

Russ Allbery

Review: Demon Lord of Karanda

Review: Demon Lord of Karanda, by David Eddings

Series: The Malloreon #3
Publisher: Del Rey
Copyright: September 1988
Printing: February 1991
ISBN: 0-345-36331-0
Format: Mass market
Pages: 404

This is the third book of the Malloreon, which in turn is a sequel trilogy to The Belgariad. Eddings, unlike most series authors, does a great job of reminding you what's happening with prologues in each book, but you definitely do not want to start reading here.

When we last left our heroes, they had been captured. (This is arguably a spoiler for King of the Murgos, but it's not much of one, nor one that really matters.) This turns out to be an opportunity to meet the Emperor of Mallorea, the empire from which their adversary Zandramas (and, from the earlier trilogy, the god Torak) comes. This goes much better than one might expect, continuing the trend in this series of showing the leaders of the enemy countries as substantially similar to the leaders of the supposedly good countries.

This sounds like open-mindedness on Eddings's part, and I suppose it partly is. It's at least a change from the first series, in which the bad guys were treated more like orcs. But the deeper I read into this series, the more obvious it becomes how invested Eddings is in a weird sort of classism. Garion and the others get along with Zakath in part because they're all royalty, or at least run in those circles. They just disagree about how to be a good ruler (and not as much as one might think, or hope). The general population of any of the countries is rarely of much significance. Zakath is a bit more cynical than Garion and company and has his own agenda, but he's not able to overcome the strong conviction of this series that the Prophecy and the fight between the Child of Light and the Child of Dark is the only thing of importance that's going on, and other people matter only to the extent that they're involved in that story.

When I first read these books as a teenager, I was one of the few who liked the second series and didn't mind that its plot was partly a rehash of the first. I found, and still find, the blatantness with which Eddings manipulates the plot by making prophecy a character in the novel amusing. What I had forgotten, however, was how much of a slog the middle of this series is. It takes about three quarters of this book before there are any significant plot developments, and that time isn't packed with interesting diversions. It's mostly the heroes having conversations with each other or with Zakath, being weirdly sexist, rehashing their personality quirks, or shrugging about horrific events that don't matter to them personally.

The last is a reference to the plague that appears in this book, and which I had completely forgotten. To be fair to my memory, that's partly because none of the characters seem to care much about it either. They're cooling their heels in a huge city, a plague starts killing people, they give Zakath amazingly brutal and bloodthirsty advice to essentially set fire to all the parts of the city with infected people, and then they blatantly ignore all the restrictions on movement because, well, they're important unlike all those other people and have places to go. It's rather stunningly unempathetic under the best of circumstances and seems even more vile in a 2021 re-read.

Eddings also manages to make Ce'Nedra even more obnoxious than she has been by turning her into a walking zombie with weird fits where she's obsessed with her child, and adding further problems (which would be a spoiler) on top of that. I have never been a Ce'Nedra fan (that Garion's marriage ever works at all appears to be by authorial decree), but in this book she's both useless and irritating while supposedly being a tragic figure.

You might be able to tell that I'm running sufficiently low on patience for Eddings's character quirks that I'm losing my enthusiasm for re-reading this bit of teenage nostalgia.

This is the third book of a five-book series, so while there's a climax of sorts just like there was in the third book of the Belgariad, it's a false climax. One of the secondary characters is removed, but nothing is truly resolved; the state of the plot isn't much different at the end of this book than it was at the start. And to get there, one has to put up with Garion being an idiot, Ce'Nedra being a basket case, the supposed heroes being incredibly vicious about a plague, and one character pretending to have an absolutely dreadful Irish accent for pages upon pages upon pages. (His identity is supposedly a mystery, but was completely obvious a hundred pages before Garion figured it out. Garion isn't the sharpest knife in the drawer.)

The one redeeming merit to this series is the dry voice in Garion's head and the absurd sight of the prophecy telling all the characters what to do, and we barely get any of that in this book. When it wasn't irritating or offensive, it was just a waste of time. The worst book of the series so far.

Followed by Sorceress of Darshiva.

Rating: 3 out of 10

21 June, 2021 03:57AM

June 20, 2021

Mike Gabriel

BBB Packaging for Debian, a short Heads-Up

Over the past days, I have received tons of positive feedback on my previous blog post about forming the Debian BBB Packaging Team [1]. Feedback arrived via mail, IRC, [matrix] and Mastodon. Awesome. Thanks for sharing your thoughts, folks...

Therefore, here comes a short ...

Heads-Up on the current Ongoings

... around packaging BigBlueButton for Debian:

  • While looking at Kurento Media Server, we stumbled over forked portions of code that are not uploadable to Debian as is (kmsjsoncpp, kms-gstreamer). These components contain old and unmaintained, but also KMS-patched, copies of code. Uploading those would violate Debian policy (which, in short, forbids code duplication in the archive). We are in touch with Kurento upstream, and they have the removal of these forked projects on their development roadmap, although it is unclear when this might be ready for productive use.
  • At the same time, we read BBB developer posts that suggested Janus WebRTC as a possible alternative to Kurento Media Server. As our main directive is getting BigBlueButton into Debian, we might drop the Kurento packaging initiative and spend resources on other components (as Janus WebRTC, thanks to Jonas Smedegaard, already is in Debian).
  • This coming week (what a great coincidence), BigBlueButton World 2021 [2] will be held online, and the schedule promises a lot of interesting talks and speakers. If interested, registration is free. Amongst others, Paulo Lanzarin will be speaking about "BigBlueButton’s Media Stack: Overview and the Road Ahead" (12:30 EST(!), June 23rd). For the complete schedule, see [3].

Credits

  • Thanks to Paul Wise, who joined the packaging effort last week.
  • Thanks to Ingo Wichmann from Linuxhotel, who is my point of contact to the sponsor of this endeavour (German Unix Users Group e.V.).

light+love
Mike Gabriel  
 
[1] https://sunweavers.net/blog/node/133
[2] https://bigbluebutton.org/event-page/
[3] https://docs.google.com/document/d/1kpYJxYFVuWhB84bB73kmAQoGIS59ari1_hn2...

20 June, 2021 07:51PM by sunweaver

Julian Andres Klode

Migrating away from apt-key

This is an edited copy of an email I sent to provide guidance to users of apt-key as to how to handle things in a post apt-key world.

The manual page already provides all you need to know for replacing apt-key add usage:

Note: Instead of using this command a keyring should be placed directly in the /etc/apt/trusted.gpg.d/ directory with a descriptive name and either “gpg” or “asc” as file extension

So it’s kind of surprising people need step by step instructions for how to copy/download a file into a directory.

I’ll also discuss the alternative security snakeoil approach with signed-by that’s become popular. Maybe we should not have added signed-by, people seem to forget that debs still run maintainer scripts as root.

Aside from this email, Debian users should look into extrepo, which manages curated external repositories for you.

Direct translation

Assume you currently have:

wget -qO- https://myrepo.example/myrepo.asc | sudo apt-key add -

To translate this directly for bionic and newer, you can use:

sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.asc https://myrepo.example/myrepo.asc

or to avoid downloading as root:

wget -qO-  https://myrepo.example/myrepo.asc | sudo tee -a /etc/apt/trusted.gpg.d/myrepo.asc

Older (and all) releases only support unarmored files with an extension .gpg. If you care about them, provide one, and use

sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.gpg https://myrepo.example/myrepo.gpg

Some people will tell you to download the .asc and pipe it to gpg --dearmor, but gpg might not be installed, so really, just offer a .gpg one instead that is supported on all systems.

wget might not be available everywhere so you can use apt-helper:

sudo /usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /etc/apt/trusted.gpg.d/myrepo.asc

or, to avoid downloading as root:

/usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /tmp/myrepo.asc && sudo mv /tmp/myrepo.asc /etc/apt/trusted.gpg.d

Pretending to be safer by using signed-by

People say it’s good practice to not use trusted.gpg.d and install the file elsewhere and then refer to it from the sources.list entry by using signed-by=<path to the file>. So this looks a lot safer, because now your key can’t sign other unrelated repositories. In practice, security increase is minimal, since package maintainer scripts run as root anyway. But I guess it’s better for publicity :)

As an example, here are the instructions to install signal-desktop from signal.org. As mentioned, gpg --dearmor use in there is not a good idea, and I’d personally not tell people to modify /usr as it’s supposed to be managed by the package manager, but we don’t have an /etc/apt/keyrings or similar at the moment; it’s fine though if the keyring is installed by the package. You can also just add the file there as a starting point, and then install a keyring package overriding it (pretend there is a signal-desktop-keyring package below that would override the .gpg we added).

# NOTE: These instructions only work for 64 bit Debian-based
# Linux distributions such as Ubuntu, Mint etc.

# 1. Install our official public software signing key
wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg
cat signal-desktop-keyring.gpg | sudo tee -a /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null

# 2. Add our repository to your list of repositories
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] https://updates.signal.org/desktop/apt xenial main' |\
  sudo tee -a /etc/apt/sources.list.d/signal-xenial.list

# 3. Update your package database and install signal
sudo apt update && sudo apt install signal-desktop

I do wonder why they do wget | gpg --dearmor, pipe that into the file and then cat | sudo tee it, instead of having that all in one pipeline. Maybe they want nicer progress reporting.

Scenario-specific guidance

We have three scenarios:

For system image building, shipping the key in /etc/apt/trusted.gpg.d seems reasonable to me; you are the vendor sort of, so it can be globally trusted.

Chrome-style debs and repository config debs: If you ship a deb, embedding the sources.list.d snippet (calling it $myrepo.list) and shipping a $myrepo.gpg in /usr/share/keyrings is the best approach. Whether you ship that in product debs aka vscode/chromium or provide a repository configuration deb (let’s call it myrepo-repo.deb) and then tell people to run apt update followed by apt install <package inside the repo> depends on how many packages are in the repo, I guess.

Manual instructions (signal style): The third case, where you tell people to run wget themselves, I find tricky. As we see with signal, just stuffing keyring files into /usr/share/keyrings is popular, despite /usr being supposed to be managed by the package manager. We don’t have another dir inside /etc (or /usr/local), so it’s hard to suggest something else. There’s no significant benefit from actually using signed-by, so it’s kind of extra work for little gain, though.

Addendum: Future work

This part is new, just for this blog post. Let’s look at upcoming changes and how they make things easier.

Bundled .sources files

Assuming I get my merge request merged, the next version of APT (2.4/2.3.something) will do away with all the complexity and allow you to embed the key directly into a deb822 .sources file (which have been available for some time now):

Types: deb
URIs: https://myrepo.example/ https://myotherrepo.example/
Suites: stable not-so-stable
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 mDMEYCQjIxYJKwYBBAHaRw8BAQdAD/P5Nvvnvk66SxBBHDbhRml9ORg1WV5CvzKY
 CuMfoIS0BmFiY2RlZoiQBBMWCgA4FiEErCIG1VhKWMWo2yfAREZd5NfO31cFAmAk
 IyMCGyMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQREZd5NfO31fbOwD6ArzS
 dM0Dkd5h2Ujy1b6KcAaVW9FOa5UNfJ9FFBtjLQEBAJ7UyWD3dZzhvlaAwunsk7DG
 3bHcln8DMpIJVXht78sL
 =IE0r
 -----END PGP PUBLIC KEY BLOCK-----

Then you can just provide a .sources file to users, they place it into sources.list.d, and everything magically works.

We’ll probably also add a nice apt add-source command for it, I guess.

Well, python-apt’s aptsources package still does not support deb822 sources, and never will, we’ll need an aptsources2 for that for backwards-compatibility reasons, and then port software-properties and other users to it.

OpenPGP vs aptsign

We do have a better, tighter replacement for gpg in the works which uses Ed25519 keys to sign Release files. It’s temporarily named aptsign, but it’s a generic signer for single-section deb822 files, similar to signify/minisign.

We believe that this solves the security nightmare that our OpenPGP integration is while reducing complexity at the same time. Keys are much shorter, so the bundled sources file above will look much nicer.

20 June, 2021 07:42AM

Russ Allbery

Review: The Magician's Nephew

Review: The Magician's Nephew, by C.S. Lewis

Illustrator: Pauline Baynes
Series: Chronicles of Narnia #6
Publisher: Collier Books
Copyright: 1955
Printing: 1978
ISBN: 0-02-044230-0
Format: Mass market
Pages: 186

The Magician's Nephew is the sixth book of the Chronicles of Narnia in the original publication order, but it's a prequel, set fifty years before The Lion, the Witch and the Wardrobe. It's therefore put first in the new reading order.

I have always loved world-building and continuities and, as a comic book reader (Marvel primarily), developed a deep enjoyment of filling in the pieces and reconstructing histories from later stories. It's no wonder that I love reading The Magician's Nephew after The Lion, the Witch and the Wardrobe. The experience of fleshing out backstory with detail and specifics makes me happy. If that's also you, I recommend the order in which I'm reading these books.

Reading this one first is defensible, though. One of the strongest arguments for doing so is that it's a much stronger, tighter, and better-told story than The Lion, the Witch and the Wardrobe, and therefore might start the series off on a better foot for you. It stands alone well; you don't need to know any of the later events to enjoy this, although you will miss the significance of a few things like the lamp post and you don't get the full introduction to Aslan.

The Magician's Nephew is the story of Polly Plummer, her new neighbor Digory Kirke, and his Uncle Andrew, who fancies himself a magician. At the start of the book, Digory's mother is bed-ridden and dying and Digory is miserable, which is the impetus for a friendship with Polly. The two decide to explore the crawl space of the row houses in which they live, seeing if they can get into the empty house past Digory's. They don't calculate the distances correctly and end up in Uncle Andrew's workroom, where Digory was forbidden to go. Uncle Andrew sees this as a golden opportunity to use them for an experiment in travel to other worlds.

MAJOR SPOILERS BELOW.

The Magician's Nephew, like the best of the Narnia books, does not drag its feet getting started. It takes a mere 30 pages to introduce all of the characters, establish a friendship, introduce us to a villain, and get both of the kids into another world. When Lewis is at his best, he has an economy of storytelling and a grasp of pacing that I wish was more common.

It's also stuffed to the brim with ideas, one of the best of which is the Wood Between the Worlds.

Uncle Andrew has crafted pairs of magic rings, yellow and green, and tricks Polly into touching one of the yellow ones, causing her to vanish from our world. He then uses her plight to coerce Digory into going after her, carrying two green rings that he thinks will bring people back into our world, and not incidentally also observing that world and returning to tell Uncle Andrew what it's like. But the world is more complicated than he thinks it is, and the place where the children find themselves is an eerie and incredibly peaceful wood, full of grass and trees but apparently no other living thing, and sprinkled with pools of water.

This was my first encounter with the idea of a world that connects other worlds, and it remains the most memorable one for me. I love everything about the Wood: the simplicity of it, the calm that seems in part to be a defense against intrusion, the hidden danger that one might lose one's way and confuse the ponds for each other, and even the way that it tends to make one lose track of why one is there or what one is trying to accomplish. That quiet forest filled with pools is still an image I use for infinite creativity and potential. It's quiet and nonthreatening, but not entirely inviting either; it's magnificently neutral, letting each person bring what they wish to it.

One of the minor plot points of this book is that Uncle Andrew is wrong about the rings because he's wrong about the worlds. There aren't just two worlds; there are an infinite number, with the Wood as a nexus, and our reality is neither the center nor one of an important pair. The rings are directional, but relative to the Wood, not our world. The kids, who are forced to experiment and who have open minds, figure this out quickly, but Uncle Andrew never shifts his perspective. This isn't important to the story, but I've always thought it was a nice touch of world-building.

Where this story is heading, of course, is the creation of Narnia and the beginning of all of the stories told in the rest of the series. But before that, the kids' first trip out of the Wood is to one of the best worlds of children's fantasy: Charn.

If the Wood is my mental image of a world nexus, Charn will forever be my image of a dying world: black sky, swollen red sun, and endless abandoned and crumbling buildings as far as the eye can see, full of tired silences and eerie noises. And, of course, the hall of statues, with one of the most memorable descriptions of history and empire I've ever read (if you ignore the racialized description):

All of the faces they could see were certainly nice. Both the men and women looked kind and wise, and they seemed to come of a handsome race. But after the children had gone a few steps down the room they came to faces that looked a little different. These were very solemn faces. You felt you would have to mind your P's and Q's, if you ever met living people who looked like that. When they had gone a little farther, they found themselves among faces they didn't like: this was about the middle of the room. The faces here looked very strong and proud and happy, but they looked cruel. A little further on, they looked crueller. Further on again, they were still cruel but they no longer looked happy. They were even despairing faces: as if the people they belonged to had done dreadful things and also suffered dreadful things.

The last statue is of a fierce, proud woman that Digory finds strikingly beautiful. (Lewis notes in an aside that Polly always said she never found anything specially beautiful about her. Here, as in The Silver Chair, the girl is the sensible one and things would have gone better if the boy had listened to her, a theme that I find immensely frustrating because Susan was the sensible one in the first two books of the series but then Lewis threw that away.)

There is a bell in the middle of this hall, and the pillar that holds that bell has an inscription on it that I think every kid who grew up on Narnia knows by heart.

Make your choice, adventurous Stranger;
Strike the bell and bide the danger,
Or wonder, till it drives you mad,
What would have followed if you had.

Polly has no intention of striking the bell, but Digory fights her and does it anyway, waking Jadis from where she sat as the final statue in the hall and setting off one of the greatest reimaginings of a villain in children's literature.

Jadis will, of course, become the White Witch who holds Narnia in endless winter some thousand Narnian years later. But the White Witch was a mediocre villain at best, the sort of obvious and cruel villain common in short fairy tales where the author isn't interested in doing much characterization. She exists to be evil, do bad things, and be defeated. She has a few good moments in conflict with Aslan, but that's about it. Jadis in this book is another matter entirely: proud, brilliant, dangerous, and creative.

The death of everything on Charn was Jadis's doing: an intentional spell, used to claim a victory of sorts from the jaws of defeat by her sister in a civil war. (I find it fascinating that Lewis puts aside his normally sexist roles here.) Despite the best attempts of the kids to lose her both in Charn and in the Wood (which is inimical to her, in another nice bit of world-building), she manages to get back to England with them. The result is a remarkably good bit of villain characterization.

Jadis is totally out of her element, used to a world-spanning empire run with magic and (from what hints we get) vaguely medieval technology. Her plan to take over their local country and eventually the world should be absurd and is played somewhat for laughs. Her magic, which is her great weapon, doesn't even work in England. But Jadis learns at a speed that the reader can watch. She's observant, she pays attention to things that don't fit her expectations, she changes plans, and she moves with predatory speed. Within a few hours in London she's stolen jewels and a horse and carriage, and the local police seem entirely overmatched. There's no way that one person without magic should be a real danger to England around the turn of the 20th century, but by the time the kids manage to pull her back into the Wood, you're not entirely sure England would have been safe.

A chaotic confrontation, plus the ability of the rings to work their magic through transitive human contact, ends up with the kids, Uncle Andrew, Jadis, a taxicab driver and his horse all transported through the Wood to a new world. In this case, literally a new world: Narnia at the point of its creation.

Here again, Lewis translates Christian myth, in this case the Genesis creation story, into a more vivid and in many ways more beautiful story than the original. Aslan singing the world into existence is an incredible image, as is the newly-created world so bursting with life that even things that normally could not grow will do so. (Which, of course, is why there is a lamp post burning in the middle of the western forest of Narnia for the Pevensie kids to find later.) I think my favorite part is the creation of the stars, but the whole sequence is great.

There's also an insightful bit of human psychology. Uncle Andrew can't believe that a lion is singing, so he convinces himself that Aslan is not singing, and thus prevents himself from making any sense of the talking animals later.

Now the trouble about trying to make yourself stupider than you really are is that you very often succeed.

As with a lot in Lewis, he probably meant this as a statement about faith, but it generalizes well beyond the religious context.

What disappointed me about the creation story, though, is the animals. I didn't notice this as a kid, but this re-read has sensitized me to how Lewis consistently treats the talking animals as less than humans even though he celebrates them. That happens here too: the newly-created, newly-awakened animals are curious and excited but kind of dim. Some of this is an attempt to show that they're young and are just starting to learn, but it also seems to be an excuse for Aslan to set up a human king and queen over them instead of teaching them directly how to deal with the threat of Jadis who the children inadvertently introduced into the world.

The other thing I dislike about The Magician's Nephew is that the climax is unnecessarily cruel. Once Digory realizes the properties of the newly-created world, he hopes to find a way to use that to heal his mother. Aslan points out that he is responsible for Jadis entering the world and instead sends him on a mission to obtain a fruit that, when planted, will ward Narnia against her for many years. The same fruit would heal his mother, and he has to choose Narnia over her. (It's a fairly explicit parallel to the Garden of Eden, except in this case Digory passes.)

Aslan, in the end, gives Digory the fruit of the tree that grows, which is still sufficient to heal his mother, but this sequence made me angry when re-reading it. Aslan knew all along that what Digory is doing will let him heal his mother as well, but hides this from him to make it more of a test. It's cruel and mean; Aslan could have promised to heal Digory's mother and then seen if he would help Narnia without getting anything in return other than atoning for his error, but I suppose that was too transactional for Lewis's theology or something. Meh.

But, despite that, the only reason why this is not the best Narnia book is because The Voyage of the Dawn Treader is the only Narnia book that also nails the ending. The Magician's Nephew, up through Charn, Jadis's rampage through London, and the initial creation of Narnia, is fully as good, perhaps better. It sags a bit at the end, partly because it tries too hard to make the Narnian animals humorous and partly because of the unnecessary emotional torture of Digory. But this still holds up as the second-best Narnia book, and one I thoroughly enjoyed re-reading. If anything, Jadis and Charn are even better than I remembered.

Followed by the last book of the series, the somewhat notorious The Last Battle.

Rating: 9 out of 10

20 June, 2021 04:03AM

Sean Whitton

transient-caps-lock

If you’re writing a lot of Common Lisp and you want to follow the convention of using all uppercase to refer to symbols in docstrings, comments etc., you really need something better than the shift key. Similarly if you’re writing C and you have VARIOUS_LONG_ENUMS.

The traditional way is a caps lock key. But that means giving up a whole keyboard key, all of the time, just for block capitalisation, which one hardly uses outside of programming. So a better alternative is to come up with some Emacs thing to get block capitalisation, as Emacs key binding is much more flexible than system keyboard layouts, and can let us get block capitalisation without giving up a whole key.

The simplest thing would be to bind some sequence of keys to just toggle caps lock. But I came up with something a bit fancier. With the following, you can type M-C, and then you get block caps until the point at which you’ve probably finished typing your symbol or enum name.

(defun spw/transient-caps-self-insert (&optional n)
  (interactive "p")
  (insert-char (upcase last-command-event) n))

(defun spw/activate-transient-caps ()
  "Activate caps lock while typing the current whitespace-delimited word(s).
This is useful for typing Lisp symbols and C enums which consist
of several all-uppercase words separated by hyphens and
underscores, such that M-- M-u after typing will not upcase the
whole thing."
  (interactive)
  (let* ((map (make-sparse-keymap))
         (deletion-commands '(delete-backward-char
                              paredit-backward-delete
                              backward-kill-word
                              paredit-backward-kill-word
                              spw/unix-word-rubout
                              spw/paredit-unix-word-rubout))
         (typing-commands (cons 'spw/transient-caps-self-insert
                                deletion-commands)))
    (substitute-key-definition 'self-insert-command
                               #'spw/transient-caps-self-insert
                               map
                               (current-global-map))
    (set-transient-map
     map
     (lambda ()
       ;; try to determine whether we are probably still about to try to type
       ;; something in all-uppercase
       (and (member this-command typing-commands)
            (not (and (eq this-command 'spw/transient-caps-self-insert)
                      (= (char-syntax last-command-event) ?\ )))
            (not (and (or (bolp) (= (char-syntax (char-before)) ?\ ))
                      (member this-command deletion-commands))))))))

(global-set-key "\M-C" #'spw/activate-transient-caps)

A few notes:

  • I have caps lock on left-ctrl on standard keyboard layouts, and on the sequence keypd-capslock-keypd on the Kinesis Advantage. These are about equally inconvenient, but good enough for those rare cases one needs caps lock outside of Emacs.

  • I also had the idea of binding this to Delete, because I don’t use that at all in Emacs, but Delete is relatively hard to hit on conventional keyboards, sometimes missing, and might not work in some text terminals.

  • I’ve actually trained myself to set the mark, type my symbol or enum and then just C-x C-u to upcase it, so I’m not actually using the above, but I thought someone else might like it, so still worth posting.

20 June, 2021 12:26AM

June 19, 2021

Andrew Cater

Debian 10.10 media checking - RELEASE - 202106192215

And that's it for another release. A few bugs but nothing show-stopping.

Thanks again to Sledge, RattusRattus and Isy, to Linux-Fan and schweer.

As the testing page notes, there's a bug in some arm64 installs - the fix should come out via debian-security shortly but you might want to be aware of this.

 Here's to the next one: only a short time until Bullseye

19 June, 2021 10:12PM by Andrew Cater (noreply@blogger.com)

Debian 10.10 media checking - 202106191837 - We're doing quite well

Linux-Fan and Schweer have just left us: Schweer has confirmed that all the Debian-Edu images are fine and working to his satisfaction.

 After a short break for food, we're all back in on testing: the Cambridge folks are working hard.  There have been questions on IRC about the release in Libera.Chat as well. Always good to do this: at some point in the next couple of months, we'll be doing this for Debian 11 [Bullseye] :)

Thanks as ever to all behind the scenes making each point release happen and to those folks supporting LTS and ELTS. It takes a huge amount of bug fixing, sometimes on the fly as issues are discovered, to make it work this seamlessly.


19 June, 2021 07:42PM by Andrew Cater (noreply@blogger.com)

Fixing Wayland failing to start when a desktop environment is installed but your machine needs firmware. ...

This came up in an install that I was just doing for Debian 10.10 media testing.

 I hadn't seen this before and it would be disconcerting to other people, though it is a known bug, I think.

 I was installing an image that had no network and no firmware. KDE failed to run and dropped me to a text mode prompt. This was because the Zotac SBC I'm using requires Radeon R600 firmware to work. There was a warning message on screen to that effect.

 The way round this was to plug in a network cable and edit /etc/apt/sources.list.

 Editing /etc/apt/sources.list was to add contrib and non-free to the appropriate lines to allow me to install  firmware-linux-nonfree and firmware-misc-nonfree which includes the appropriate AMD firmware for the embedded Radeon chipset.

 Since the machine hadn't been connected to a network at install time, I also needed to run a dhclient command to obtain a network lease and allow me to install the non-free metapackages over the network.
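
For reference, the whole sequence looks roughly like this on a buster system; the mirror lines and the interface name below are only examples and may differ on the machine in question, while the package names are the ones mentioned above:

# /etc/apt/sources.list: add contrib and non-free to the existing entries, e.g.
deb http://deb.debian.org/debian buster main contrib non-free
deb http://security.debian.org/debian-security buster/updates main contrib non-free

# then obtain a network lease and install the firmware packages
dhclient enp2s0
apt update
apt install firmware-linux-nonfree firmware-misc-nonfree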

Result: success: a full KDE desktop. [The machine is an old Zotac SBC with embedded graphics hardware: AMD E350 - specifically, it requires firmware-amd-graphics and amd64-microcode].

19 June, 2021 06:32PM by Andrew Cater (noreply@blogger.com)

hackergotchi for Chris Lamb

Chris Lamb

Raiders of the Lost Ark: 40 Years On

"Again, we see there is nothing you can possess which I cannot take away."

The cinema was a rare and expensive treat in my youth, so I first came across Raiders of the Lost Ark by recording it from television onto a poor quality VHS. I only mention this as it meant I watched a slightly different film to the one intended, as my copy somehow missed off the first 10 minutes. For those not as intimately familiar with the film as me, this is just in time to see a Belloq demand Dr. Jones hand over the Peruvian head (see above), just in time to learn that Indy loathes snakes, and just in time to see the inadvertent reproduction of two Europeans squabbling over the spoils of a foreign land.

What this truncation did to my interpretation of the film (released forty years ago today on June 19th 1981) is interesting to explore. Without Jones' physical and moral traits being demonstrated on-screen (as well as missing the weighing of the gold head and the rollercoaster boulder scene), it actually made the idea of 'Indiana Jones' even more of a mythical archetype. The film wisely withholds Jones' backstory, but my director's cut deprived him of even more, and counterintuitively imbued him with even more of a legendary hue as the elision made his qualities an assumption beyond question. Indiana Jones, if you can excuse the cliché, needed no introduction at all.

§

Good artists copy, great artists steal. And oh boy, does Raiders steal. I've watched this film about twenty times over the past two decades and it's now firmly entered into my personal canon. But watching it on its fortieth anniversary was different, not least because I could situate it in a broader cinematic context. For example, I now see the Gestapo officer in Major Strasser from Casablanca (1942), in fact just as I can with many of Raiders' other orientalist tendencies: not only in its breezy depictions of backwards sand people, but also of North Africa as an entrepôt and playground for a certain kind of Western gangster. The opening as well, set in an equally reductionist pseudo-Peru, now feels like Werner Herzog's Aguirre, the Wrath of God (1972) — but without, of course, any self-conscious colonial critique.

The imagery of the ark appears to be borrowed from James Tissot's The Ark Passes Over the Jordan, part of the fin de siecle fascination with the occult and (ironically enough given the background of Raiders' director), a French Catholic revival.

I can now also appreciate some of the finer edges that make this film just so much damn fun to watch. For instance, the comic book conceit that Jones and Belloq are a 'shadowy reflection' of each other and that they need 'only a nudge' to make one like the other. As is the idea that Belloq seems to be actually enjoying being evil. I also spotted Jones rejecting the martini on the plane. This feels less like a comment on the corrupting effect of alcohol (he drinks rather heavily elsewhere in the film) than a subtle distancing from James Bond. This feels especially important given that the action-packed cold open is, let us be honest for a second, ripped straight from the 007 franchise.

John Williams' soundtracks are always worth mentioning. The corny Raiders March does almost nothing for me, but the highly-underrated 'Ark theme' certainly does. I delight in its allusions to Gregorian chant, the diabolus in musica and the Hungarian minor scale, fusing the Christian doctrine of the Holy Trinity (the stacked thirds, get it?), the ars antiqua of the Middle Ages with an 'exotic' twist that the Russian Five associated with central European Judaism.

The best use of the ark leitmotif is, of course, when it is opened. Here, Indy and Marion are saved by not opening their eyes whilst the 'High Priest' Belloq and the rest of the Nazis are all melted away. I'm no Biblical scholar, but I'm almost certain they were alluding to Leviticus 16:2 here:

The Lord said to Moses: “Tell your brother Aaron that he is not to come whenever he chooses into the Most Holy Place behind the curtain in front of the atonement cover on the ark, or else he will die, for I will appear in the cloud above the mercy seat.”

But would it be too much of a stretch to see the myth of Orpheus and Eurydice here too? Orpheus's wife would only be saved from the underworld if he did not turn around until he came to his own house. But he turned round to look at his wife, and she instantly slipped back into the depths:

For he who overcome should turn back his gaze
Towards the Tartarean cave,
Whatever excellence he takes with him
He loses when he looks on those below.

Perhaps not, given that Marion and the ark are not lost in quite the same way. But whilst touching on gender, it was interesting to update my view of archaeologist René Belloq. To countermand his slight queer coding (a trope of Disney villains such as Scar, Jafar, Cruella, etc.), there is a rather clumsy subplot involving Belloq repeatedly (and half-heartedly) failing to seduce Marion. This disavows any idea that Belloq isn't firmly heterosexual, essential for the film's mainstream audience, but it is especially important in Raiders because, if we recall the relationship between Belloq and Jones: 'it would take only a nudge to make you like me'. (This would definitely put a new slant on 'Top men'.)

However, my favourite moment is where the Nazis place the ark in a crate in order to transport it to the deserted island. En route, the swastikas on the side of the crate spontaneously burn away, and a disturbing noise is heard in the background. This short scene has always fascinated me, partly because it's the first time in the film that the power of the ark is demonstrated first-hand, but also because it gives the object an other-worldly nature that, to the best of my knowledge, has no parallel in the rest of cinema.

Still, I had always assumed that the Ark disfigured the swastikas because of their association with the Nazis, interpreting the act as God's condemnation of the Third Reich. But now I catch myself wondering whether the ark would have disfigured any iconography as a matter of principle or whether their treatment was specific to the swastika. We later get a partial answer to this question, as the 'US Army' inscriptions in the Citizen Kane warehouse remain untouched.

Far from being an insignificant concern, the filmmakers appear to have wandered into a highly-contested theological debate. As in, if the burning of the swastika is God's moral judgement of the Nazi regime, then God is clearly both willing and able to intervene in human affairs. So why did he not, to put it mildly, prevent Auschwitz? From this perspective, Spielberg appears to be limbering up for some of the academic critiques surrounding Holocaust representations that will follow Schindler's List (1993).

§

Given my nostalgic and somewhat ironic attachment to Raiders, it will always be difficult for me to objectively appraise the film. Even so, it feels like it is underpinned by an earnest attempt to entertain the viewer, largely absent in the affected cynicism of contemporary cinema. And when considered in the totality of Hollywood's output, its tonal and technical flaws are not actually that bad — or at least Marion's muddled characterisation and its breezy chauvinism (for example) clearly have far worse examples.

Perhaps the most remarkable thing about the film in 2021 is that it hasn't changed that much at all. It spawned one good sequel (The Last Crusade), one bad one (The Temple of Doom), and one hardly worth mentioning at all, yet these adventures haven't affected the original Raiders in any meaningful way. In fact, if anything has affected the original text it is, once again, George Lucas himself, as knowing the impending backlash around the Star Wars prequels adds an inadvertent paratext to all his earlier works.

Yet in a 1978 discussion prior to the creation of Raiders, you can get a keen sense of how Lucas' childlike enthusiasm will always result in something either extremely good or something extremely bad — somehow no middle ground is quite possible. Yes, it's easy to rubbish his initial ideas — 'We'll call him Indiana Smith!' — but hasn't Lucas actually captured the essence of a heroic 'Americana' here, and isn't the final result simply a difference of degree, not kind?

19 June, 2021 05:01PM

Andrew Cater

Debian 10.10 release 202106191548

 Late blogging on this one.

Even as we wait for the final release of Bullseye [Debian 11], we're still producing updates for Debian 10 [Buster].

Today has thrown up a few problems: working with Steve, RattusRattus and Isy in Cambridge, Schweer and Linux-Fan somewhere else in the world.

A couple of build problems have meant that we started later than we otherwise might have, and a couple of image runs have had to be redone. We're there now and happily running tests.

As ever, it's good to be doing this. With practice, I can now repeat mistakes with 100% reliability and in shorter time :)

More updates later.

19 June, 2021 03:49PM by Andrew Cater (noreply@blogger.com)

hackergotchi for Joachim Breitner

Joachim Breitner

Leaving DFINITY

Last Thursday was my last working day at DFINITY. There are various reasons why I felt that after almost three years the DFINITY Foundation isn’t quite the right place for me anymore, and this plan has been in the making for a while. Primarily, there are personal pull factors that strongly suggest that I’ll take a break from full time employment, so I decided to see the launch of the Internet Computer through and then leave.

DFINITY has hired some amazing people, and it was a great pleasure to work with them. I learned a lot (some Rust, a lot of Nix, and just how merciless Conway’s law is), and I dare say I had the opportunity to do some good work, contributing my part to make the Internet Computer a reality.

I am especially proud of the Interface Specification and the specification-driven design principles behind it. It even comes with a model reference implementation and acceptance test suite, and although we didn’t quite get to do formalization, those familiar with the DeepSpec project will recognize some influence of their concept of “deep specifications”.

Besides that, there is of course my work on the Motoko programming language, where I built the backend, and the Candid interoperability layer, where I helped with the formalism, formulated a generic soundness criterion for Interface Description Languages in a higher-order setting and formally verified it in Coq. Fortunately, all of this work is now Free Software or at least Open Source.

With so much work poured into this project, I continue to care about it, and you’ll see me post on the developer forum and hack on Motoko. As the Internet Computer becomes gradually more open, I hope I can be gradually more involved again. But even without me contributing full-time I am sure that DFINITY and the Internet Computer will do well; when I left there were still plenty of smart, capable and enthusiastic people forging ahead.

So what’s next?

So far, I have rushed every professional transition in my life: When starting my PhD, when starting my postdoc, when starting my job at DFINITY, and every time I regretted it. So this time, I will take a proper break and will explore the world a bit (as far as that is possible given the pandemic). I will stresslessly contribute to various open source projects. I also hope to do more public outreach and teaching, writing more blog posts again, recording screencasts and giving talks and lectures. If you want to invite me to your user group/seminar/company/workshop, please let me know! Also, I might be up for small interesting projects in a while.

Beyond these, I have no concrete plans and am looking forward to the inspiration I might get from hiking through the Scandinavian wilderness. If you happen to stumble across my tent, please stop for a tea.

19 June, 2021 10:51AM by Joachim Breitner (mail@joachim-breitner.de)

June 18, 2021

hackergotchi for Gunnar Wolf

Gunnar Wolf

Fighting spam on roundcube with modsecurity

Every couple of months, one of my users falls prey to phishing attacks, and sends their login/password data to an unknown somebody who poses as… Well, as me, their always-friendly and always-helpful systems administrator.

What follows is, of course, me spending a week trying to get our systems out of all of the RBLs/DNSBLs. But, no matter how fast I act, there’s always disruption and lost mails (bounced or classified as spam) for my users.

Most of my users use the Webmail I have configured on our institute’s servers, Roundcube, for which I have the highest appreciation. Only that… Of course, when a user yields their username and password to an attacker, it is very successful at… Sending huge amounts of unrequested mail, leading to my server losing its reputation ☹

This week, I set up two mitigation strategies. The first one, most straightforward, was to ask Roundcube to disallow sending mails with over ten recipients. In a Debian install, this is as easy as setting up the following variable in /etc/roundcube/config.inc.php:

$config['max_recipients'] = 10;

However, a diligent spammer can still clog the server by sending many, many, many, many requests — maybe each of them with ten recipients only; last weekend, I got a new mail every three seconds or so.

Adding a rate limit to a specific Roundcube action is not easy, however, or at least it took me quite a bit of headbanging to get it right ☹. Roundcube is a very AJAX-y system where all (most, at least) actions are received by /index.php and there is quite a bit of parsing to do to understand the actions done. When sending a mail, of course, it is done using the POST HTTP verb, and the URI-specified variables include _task=mail&_unlock=loading<message_id> (of course, with changing message IDs).

After some poking here and there, I turned to SpiderLabs’ ModSecurity… Only that I am not yet well versed in writing rules for it. But after quite a bit of reading, poking, breaking… I was able to come up with the following rules:

# How often does the limit counter expire ⇒ ratelimit_client=60,
# every 60 seconds
SecRule REQUEST_LINE "@rx POST.*_task=mail&_unlock" id:10,phase:2,nolog,pass,setuid:%{tx.ua_hash},setvar:user.ratelimit_client=+1,expirevar:user.ratelimit_client=60

# How many requests do we allow in the specified time period? ⇒
# @gt 2: the third request within the window gets denied
SecRule user:ratelimit_client "@gt 2" chain,id:100009,phase:2,deny,status:429,setenv:RATELIMITED,log,msg:RATE-LIMITED

SecRule REQUEST_LINE "@rx POST.*_task=mail&_unlock"

The first rule matches request lines specifying the POST verb and including the _task=mail&_unlock fragment in the URL. It increments the ratelimit_client user variable, but expires it after 60 seconds.

The second rule verifies whether the above-specified variable (do note that it’s user: instead of user.) is greater than 2. If so, it sets the deny action, an HTTP return status of 429 (Too Many Requests), and logs the reason why the request was denied (rate-limited).

And… Given the way Roundcube works, this even works transparently! If a user hits the limit, the mail sending component will just wait and, after a while, time out. Then, the user can click Send again. If legitimate users are too productive and try to send over three mails in a minute, they won’t lose any of it; spammers will (hopefully!) find it unbearably slow and give up.

Logging is quite informative; I will probably later restrict it to show fewer parts (even if just for privacy sake, as it logs the full request!) For a complex permissions framework such as mod_security, having information such as the following is most welcome in order to find a possibly misbehaving rule:

--76659f4b-H--
Message: Access denied with code 429 (phase 2). Pattern match "POST.*_task=mail&_unlock" at REQUEST_LINE. [file "/etc/modsecurity/rate_limit_sender.conf"] [line "20"] [id "100009"] [msg "RATELIMITED BOT"]
Apache-Error: [file "apache2_util.c"] [line 273] [level 3] [client 192.168.1.48] ModSecurity: Access denied with code 429 (phase 2). Pattern match "POST.*_task=mail&_unlock" at REQUEST_LINE. [file "/etc/modsecurity/rate_limit_sender.conf"] [line "20"] [id "100009"] [msg "RATELIMITED BOT"] [hostname "my.server.mx"] [uri "/roundcube/"] [unique_id "YMzJLR9jVDMGsG@18kB1qAAAAAY"]
Action: Intercepted (phase 2)
Stopwatch: 1624033581838813 1204 (- - -)
Stopwatch2: 1624033581838813 1204; combined=352, p1=29, p2=140, p3=0, p4=0, p5=94, sr=81, sw=89, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.9.3 (http://www.modsecurity.org/).
Server: Apache
WebApp-Info: "default" "-" ""
Engine-Mode: "ENABLED"

I truly, truly hope this is the last time my server falls in the black pits of DNSBL/RBL lists ☹

18 June, 2021 04:40PM

Enrico Zini

Playbooks, host vars, group vars

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Host variables

Ansible allows specifying per-host variables, and I like that. Let's try to model a host as a dataclass:

from dataclasses import dataclass, field
from typing import Any, Dict

import transilience.system
from transilience.system import System


@dataclass
class Host:
    """
    A host to be provisioned.
    """
    name: str
    type: str = "Mitogen"
    args: Dict[str, Any] = field(default_factory=dict)

    def _make_system(self) -> System:
        cls = getattr(transilience.system, self.type)
        return cls(self.name, **self.args)

This should have enough information to create a connection to the host, and can be subclassed to add host-specific dataclass fields.

Host variables can then be provided as default constructor arguments when instantiating Roles:

    # Add host/group variables to role constructor args
    host_fields = {f.name: f for f in fields(host)}
    for field in fields(role_cls):
        if field.name in host_fields:
            role_kwargs.setdefault(field.name, getattr(host, field.name))

    role = role_cls(**role_kwargs)

Group variables

It looks like I can model groups and group variables by using dataclasses as mixins:

@dataclass
class Webserver:
    server_name: str = "www.example.org"

@dataclass
class Srv1(Webserver):
    ...

Doing things like filtering all hosts that are members of a given group can be done with a simple isinstance or issubclass test.
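
For instance, here is a small sketch of how that filtering could look, reusing the Host and Webserver dataclasses above (the concrete WebHost class and the host names are made up for the example; note the group mixin goes before Host so that the mandatory name field keeps coming first):

from dataclasses import dataclass

@dataclass
class WebHost(Webserver, Host):
    """A concrete host that is a member of the Webserver 'group'."""

inventory = [WebHost(name="www1.example.org"), Host(name="backup.example.org")]

# all hosts that belong to the Webserver group
webservers = [h for h in inventory if isinstance(h, Webserver)]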

Playbooks

So far Transilience is executing on one host at a time, and Ansible can execute on a whole host inventory.

Since most of the work of running a playbook is I/O bound, we can parallelize hosts using threads, without worrying too much about the performance impact of the GIL.

Let's introduce a Playbook class as the main entry point for a playbook:

class Playbook:
    def setup_logging(self):
        ...

    def make_argparser(self):
        description = inspect.getdoc(self)
        if not description:
            description = "Provision systems"

        parser = argparse.ArgumentParser(description=description)
        parser.add_argument("-v", "--verbose", action="store_true",
                            help="verbose output")
        parser.add_argument("--debug", action="store_true",
                            help="verbose output")
        return parser

    def hosts(self) -> Sequence[Host]:
        """
        Generate a sequence with all the systems on which the playbook needs to run
        """
        return ()

    def start(self, runner: Runner):
        """
        Start the playbook on the given runner.

        This method is called once for each host returned by hosts()
        """
        raise NotImplementedError(f"{self.__class__.__name__}.start is not implemented")

    def main(self):
        parser = self.make_argparser()
        self.args = parser.parse_args()
        self.setup_logging()

        # Start all the runners in separate threads
        threads = []
        for host in self.hosts():
            runner = Runner(host)
            self.start(runner)
            t = threading.Thread(target=runner.main)
            threads.append(t)
            t.start()

        # Wait for all threads to complete
        for t in threads:
            t.join()

And an actual playbook will now look like something like this:

from dataclasses import dataclass
import sys
from transilience import Playbook, Host


@dataclass
class MyServer(Host):
    srv_root: str = "/srv"
    site_admin: str = "enrico@enricozini.org"


class VPS(Playbook):
    """
    Provision my VPS
    """

    def hosts(self):
        yield MyServer(name="server", args={
            "method": "ssh",
            "hostname": "host.example.org",
            "username": "root",
        })

    def start(self, runner):
        runner.add_role("fail2ban")
        runner.add_role("prosody")
        runner.add_role(
                "mailserver",
                postmaster="enrico",
                myhostname="mail.example.org",
                aliases={...})


if __name__ == "__main__":
    sys.exit(VPS().main())

It looks quite straightforward to me, works on any number of hosts, and has a proper command line interface:

./provision  --help
usage: provision [-h] [-v] [--debug]

Provision my VPS

optional arguments:
  -h, --help     show this help message and exit
  -v, --verbose  verbose output
  --debug        verbose output

18 June, 2021 01:58PM

June 17, 2021

Elana Hashman

I'm hosting a Bug Scrub for Kubernetes SIG Node

It's been a long while since I last hosted a BSP, but 'tis the season.

Kubernetes SIG Node will be holding a bug scrub on June 24-25, and this is a great opportunity for you to get involved if you're interested in contributing to Kubernetes or SIG Node!

We will be hosting a global event with region captains for all timezones. I am one of the NASA captains (~17:00-01:00 UTC) and I'll be leading the kickoff. We will be working on Slack and Zoom. I hope you'll be able to drop in!

Details

I'm an existing contributor, what should I work on?

Work on triaging and closing SIG Node bugs. We have a lot of bugs!!

The goal of our event is to categorize, clean up, and resolve some of the 450+ issues in k/k for SIG Node.

Check out the event docs for more instructions.

I'm a new contributor and want to join but I have no idea what I'm doing!

At some point, that was all of us!

This is a great opportunity to get involved if you've never contributed to Kubernetes. We'll have dedicated mentors available to coordinate and help out new contributors.

If you've never contributed to Kubernetes before, I recommend you check out the Getting Started and Contributor Guide resources in advance of the event. You will want to ensure you've signed the contributor license agreement (CLA).

Remember, you don't have to code to make valuable contributions! Triaging the bug tracker is a great example of this.

See you there!

Happy hacking.

17 June, 2021 03:20PM by Elana Hashman

Enrico Zini

Reimagining Ansible variables

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

While experimenting with Transilience, I've been giving some thought about Ansible variables.

My gripes

I like the possibility to define host and group variables, and I like to have a set of variables that are autodiscovered on the target systems.

I do not like to have everything all blended in a big bucket of global variables.

Let's try some more prototyping.

My fiddlings

First, Role classes could become dataclasses, too, and declare the variables and facts that they intend to use (typed, even!):

class Role(role.Role):
    """
    Postfix mail server configuration
    """
    # Postmaster username
    postmaster: str = None
    # Public name of the mail server
    myhostname: str = None
    # Email aliases defined on this mail server
    aliases: Dict[str, str] = field(default_factory=dict)

Using dataclasses.asdict() I immediately gain context variables for rendering Jinja2 templates:

class Role:
    # [...]
    def render_file(self, path: str, **kwargs):
        """
        Render a Jinja2 template from a file, using as context all Role fields,
        plus the given kwargs.
        """
        ctx = asdict(self)
        ctx.update(kwargs)
        return self.template_engine.render_file(path, ctx)

    def render_string(self, template: str, **kwargs):
        """
        Render a Jinja2 template from a string, using as context all Role fields,
        plus the given kwargs.
        """
        ctx = asdict(self)
        ctx.update(kwargs)
        return self.template_engine.render_string(template, ctx)
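
For example, inside a role method one could then render a template from the role's own fields plus an extra per-call variable (a small usage sketch; the template string and the relayhost variable are made up):

# usage sketch: {{ postmaster }} and {{ myhostname }} come from the Role fields,
# while {{ relayhost }} is an ad-hoc variable passed just for this call
content = self.render_string(
    "myorigin = {{ myhostname }}\nroot: {{ postmaster }}\nrelayhost = {{ relayhost }}",
    relayhost="smtp.example.org",
)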

I can also model results from fact gathering into dataclass members:

# From ansible/module_utils/facts/system/platform.py
@dataclass
class Platform(Facts):
    """
    Facts from the platform module
    """
    ansible_system: Optional[str] = None
    ansible_kernel: Optional[str] = None
    ansible_kernel_version: Optional[str] = None
    ansible_machine: Optional[str] = None
    # [...]
    ansible_userspace_architecture: Optional[str] = None
    ansible_machine_id: Optional[str] = None

    def summary(self):
        return "gather platform facts"

    def run(self, system: transilience.system.System):
        super().run(system)
        # ... collect facts

I like that this way, one can explicitly declare what variables a Facts action will collect, and what variables a Role needs.

At this point, I can add machinery to allow a Role to declare what Facts it needs, and automatically have the fields from the Facts class added to the Role class. Then, when facts are gathered, I can make sure that their fields get copied over to the Role classes that use them.
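
A minimal sketch of what that machinery could look like, written as a class decorator (this is only an illustration of the idea, not the actual Transilience implementation, and it ignores default_factory fields for brevity):

from dataclasses import MISSING, dataclass, fields

def with_facts(facts_classes):
    """
    Copy the dataclass fields of each Facts class onto the decorated Role
    class, then turn the Role into a dataclass itself.
    """
    def wrapper(cls):
        # remember which Facts the role wants, so the runner can gather them first
        cls._facts = tuple(facts_classes)
        annotations = dict(getattr(cls, "__annotations__", {}))
        for fact_cls in facts_classes:
            for f in fields(fact_cls):
                if f.name not in annotations:
                    annotations[f.name] = f.type
                    setattr(cls, f.name, None if f.default is MISSING else f.default)
        cls.__annotations__ = annotations
        return dataclass(cls)
    return wrapper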

In a way, variables become role-scoped, and Facts subclasses can be used like some kind of Role mixin, that contributes only field members:

# Postfix mail server configuration
@role.with_facts([actions.facts.Platform])
class Role(role.Role):
    # Postmaster username
    postmaster: str = None
    # Public name of the mail server
    myhostname: str = None
    # Email aliases defined on this mail server
    aliases: Dict[str, str] = field(default_factory=dict)
    # All fields from actions.facts.Platform are inherited here!

    def have_facts(self, facts):
        # self.ansible_domain comes from actions.facts.Platform
        self.add(builtin.command(
            argv=["certbot", "certonly", "-d", f"mail.{self.ansible_domain}", "-n", "--apache"],
            creates=f"/etc/letsencrypt/live/mail.{self.ansible_domain}/fullchain.pem"
        ), name="obtain mail.* certificate")

        # the template context will have the Role variables, plus the variables
        # of all the Facts the Role uses
        with self.notify(ReloadPostfix):
            self.add(builtin.copy(
                dest="/etc/postfix/main.cf",
                content=self.render_file("roles/mailserver/templates/main.cf"),
            ), name="configure /etc/postfix/main.cf")

One can also fill in variables when instantiating Roles, making parameterized generic Roles possible and easy:

    runner.add_role(
            "mailserver",
            postmaster="enrico",
            myhostname="mail.enricozini.org",
            aliases={
                "me": "enrico",
            },
    )

Outcomes

I like where this is going: having well-defined variables for facts and roles means that the variables that come into play can be explicitly defined, well known, and documented.

I think this design lends itself quite well to role reuse:

  • Roles can use variables without risking interfering with each other.
  • Variables from facts can have well defined meanings across roles.
  • Roles are classes, and can easily be made inheritable.

I have a feeling that, this way, it may be much easier to create generic libraries of Roles that one can reuse to compose complex playbooks.

Since roles are just Python modules, we even already know how to package and distribute them!

Next step: Playbooks, host vars, group vars.

17 June, 2021 01:52PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Submit your ideas for Debian and +1 those that you find important

A while ago, I got a request from Kentaro Hayashi on the project I use to manage funding requests addressed to Freexian. He was keen to see some improvements on the way reimbursement requests are handled in Debian. In my opinion, the idea is certainly good but he’s not part of the treasurer team and was not willing to implement the project either, so it was not really ready to be submitted to us.

To be able to fund a useful project, we need either someone that is willing to do the work and try to push it further in Debian, or we need a Debian team interested in the result of the project (and in that case, we can try to find someone willing to implement the project). In this case, it’s a bit sad that the treasurer team didn’t comment at all… but in general, what should we do with those suggestions?

It would still be interesting to have a list of such suggestions and have Debian developers be able to advocate (+1) those suggestions. It’s in this spirit that Kentaro created the “Grow your ideas” project on salsa.debian.org. Browse the list of issues, submit your own and click on 👍 (+1) to bump the suggestions that you find interesting. And click on the “Watch” button on the homepage to receive notifications when others submit suggestions so that you can evaluate those suggestions and possibly improve them through your comments.

In the end, this can only help those that are looking for ways to contribute to Debian: maybe some of them will formalize a concrete action plan and request funding for it! We have a pile of money sitting unused that’s clearly targeted to pay contributors to improve our infrastructure. And we have endless lists of (wishlist) bugs on all our infrastructure projects (dak, distro-tracker, …). What are you waiting for?

17 June, 2021 01:36PM by Raphaël Hertzog

hackergotchi for Joey Hess

Joey Hess

typed pipes in every shell

Powershell and nushell take unix piping beyond raw streams of text to structured or typed data. Is it possible to keep a traditional shell like bash and still get typed pipes?

I think it is possible, and I'm now surprised no one seems to have done it yet. This is a fairly detailed design for how to do it. I've not implemented it yet. RFC.

Let's start with a command called typed. You can use it in a pipeline like this:

typed foo | typed bar | typed baz

What typed does is discover the types of the commands to its left and its right, while communicating the type of the command it runs back to them. Then it checks if the types match, and runs the command, communicating the type information to it. Pipes are unidirectional, so it may seem hard to discover the type to the right, but I'll explain how it can be done in a minute.

Now suppose that foo generates json, and bar filters structured data of a variety of types, and baz consumes csv and pretty-prints a table. Then bar will be informed that its input is supposed to be json, and that its output should be csv. If bar didn't support json, typed foo and typed bar would both fail with a type error.

Writing "typed" in front of everything is annoying. But it can be made a shell alias like "t". It also possible to wrap programs using typed:

cat >~/bin/foo <<EOF
#!/usr/bin/typed /usr/bin/foo
EOF

Or program could import a library that uses typed, so it natively supports being used in typed pipelines. I'll explain one way to make such a library later on, once some more details are clear.

Which gets us back to a nice simple pipeline, now automatically typed.

foo | bar | baz

If one of the commands is not actually typed, the other ones in the pipe will treat it as having a raw stream of text as input or output. Which will sometimes result in a type error (yay, I love type errors!), but in other cases can do something useful.

find | bar | baz
# type error, bar expected json or csv

foo | bar | less
# less displays csv 

So how does typed discover the types of the commands to the left and right? That's the hard part. It has to start by finding the pids to its left and right. There is no really good way to do that, but on Linux, it can be done: Look at what /proc/self/fd/0 and /proc/self/fd/1 link to, which contains the unique identifiers of the pipes. Then look at other processes' fd/0 and fd/1 to find matching pipe identifiers. (It's also possible to do this on OSX, I believe. I don't know about BSDs.)

Searching through all processes would be a bit expensive (around 15 ms with an average number of processes), but there's a nice optimisation: The shell will have started the processes close together in time, so the pids are probably nearby. So look at the previous pid, and the next pid, and fan outward. Also, check isatty to detect the beginning and end of the pipeline and avoid scanning all the processes in those cases.
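
To make the pid discovery concrete, here is a naive sketch in Python (Linux only; it scans all of /proc rather than fanning out from nearby pids as suggested above, and it is only an illustration of the idea, not part of any existing implementation):

#!/usr/bin/python3
# Find the process at the other end of one of our standard streams by matching
# the pipe identifier of our fd against other processes' open fds.
import os
from typing import Optional

def pipe_peer(fd: int = 0) -> Optional[int]:
    target = os.readlink(f"/proc/self/fd/{fd}")   # e.g. "pipe:[123456]"
    if not target.startswith("pipe:"):
        return None                               # a tty: start or end of the pipeline
    for pid in os.listdir("/proc"):
        if not pid.isdigit() or int(pid) == os.getpid():
            continue
        try:
            for other in os.listdir(f"/proc/{pid}/fd"):
                if os.readlink(f"/proc/{pid}/fd/{other}") == target:
                    return int(pid)
        except OSError:
            continue
    return None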

To indicate the type of the command it will run, typed simply opens a file with an extension of ".typed". The file can be located anywhere, and can be an already existing file, or can be created as needed (eg in /run). Once it discovers the pid at the other end of a pipe, typed first looks at /proc/$pid/cmdline to see if it's also running typed. If it is, it looks at its open file handles to find the first ".typed" file. It may need to wait for the file handle to get opened, which is why it needs to verify the pid is running typed.

There also needs to be a way for typed to learn the type of the command it will run. Reading /usr/share/typed/$command.typed is one way. Or it can be specified at the command line, which is useful for wrapper scripts:

cat >~/bin/bar <<EOF
#!/usr/bin/typed --type="JSON | CSV" --output-type="JSON | CSV" /usr/bin/bar
EOF

And typed communicates the type information to the command that it runs. This way a command like bar can know what format its input should be in, and what format to use as output. This might be done with environment variables, eg INPUT_TYPE=JSON and OUTPUT_TYPE=CSV

I think that's everything typed needs, except for the syntax of types and how the type checking works. Which I should probably not try to think up off the cuff. I used Haskell ADT syntax in the example above, but don't think that's necessarily the right choice.

Finally, here's how to make a library that lets a program natively support being used in a typed pipeline. It's a bit tricky, because the program has to actually be run by typed, since typed checks /proc/$pid/cmdline as detailed above. So, check an environment variable. When it is not set yet, set it, and exec typed, passing it the path to the program, which it will re-exec. This should be done before the program does anything else.
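
As a rough sketch of that trick in Python (TYPED_WRAPPED is a made-up marker variable, and --type/--output-type are the options sketched earlier; this is not an existing library):

import os
import sys

def ensure_typed(input_type: str, output_type: str) -> None:
    # If we have not been wrapped yet, mark ourselves and re-exec under typed,
    # which will in turn re-exec this very program.
    if os.environ.get("TYPED_WRAPPED"):
        return
    os.environ["TYPED_WRAPPED"] = "1"
    os.execvp("typed", ["typed",
                        f"--type={input_type}",
                        f"--output-type={output_type}",
                        sys.executable, *sys.argv])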


This work was sponsored by Mark Reidenbach on Patreon.

17 June, 2021 01:09AM

June 16, 2021

hackergotchi for Julien Danjou

Julien Danjou

Python Tools to Try in 2021

Python Tools to Try in 2021

The Python programming language is one of the most popular and most in demand. It is free, has a large community, is intended for the development of projects of varying complexity, is easy to learn, and opens up great opportunities for programmers. To work comfortably with it, you need special Python tools, which are able to simplify your work. We have selected the best Python tools that will be relevant in 2021.

Python tools make life much easier for any developer and provide ample opportunities for creating effective applications or sites. These solutions help to automate different processes and minimize routine tasks.

In fact, their functionality varies considerably. Some are made for full-fledged complex multi-level development, while others have a simplified interface that allows you to develop individual modules and blocks. Before choosing a tool, you need to define objectives and understand goals. In this case, it will become clear what exactly to use.

Mailtrap

As you may probably know, in order to send an email, you need SMTP (Simple Mail Transfer Protocol). This is because you can't just send a letter to the recipient. It needs to be sent to the server from which the recipient will download this letter using IMAP and POP3.

Mailtrap provides an opportunity to send emails in Python. Moreover, Mailtrap provides a REST API to access current emails. It can be used to automate email testing, which will improve your email marketing campaigns. For example, you can check the password recovery form in a Selenium test and immediately see if an email was sent to the correct address. Then take the new password from the email and try to log in to the site with it. Cool, isn't it?
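
For illustration, here is a bare-bones smtplib sketch of the kind of message such a testing inbox would capture; the host, port and credentials below are placeholders, not real Mailtrap settings:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "app@example.org"
msg["To"] = "user@example.org"
msg["Subject"] = "Password recovery"
msg.set_content("Reset your password here: https://example.org/reset/TOKEN")

# placeholder SMTP sandbox settings; a real account provides its own
with smtplib.SMTP("smtp.example.test", 2525) as server:
    server.login("username", "password")
    server.send_message(msg)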

Pros

  • All emails are in one place.
  • Mailtrap provides multiple inboxes.
  • Shared access is present.
  • It is easy to set up.
  • RESTful API

Cons

No visible disadvantages were found.

Django

Python Tools to Try in 2021

Django is a free and open-source full-stack framework. It is one of the most important and popular among Python developers. It helps you move from a prototype to a ready-made working solution in a short time since its main task is to automate processes and speed up work through associations and libraries. It’s a great choice for a product launch.

You can use Django if at least a few of the following points interest you:

  • There is a need to develop the server-side of the API.
  • You need to develop a web application.
  • In the course of work, many changes are made, you have to constantly deploy the application and make edits.
  • There are many complex tasks that are difficult to solve on your own, and you will need the help of the community.
  • ORM support is needed to avoid accessing the database directly.
  • There is a need to integrate new technologies such as machine learning.

Django is a great Python Web Framework that does its job. It is not for nothing that it is one of the most popular, and is actively used by millions of developers.

Pros

Django has quite a few advantages. It contains a large number of ready-made solutions, which greatly simplifies development. Admin panel, database migration, various forms, user authentication tools are extremely helpful. The structure is very clear and simple.

A large community helps to solve almost any problem. Thanks to ORM, there is a high level of security and it is comfortable to work with databases.

Cons

Despite its powerful capabilities, Django's Python Web Framework has drawbacks. It is very massive, monolithic, therefore it develops slowly. Despite the many generic modules, the development speed of Django itself is reduced.

CherryPy

Python Tools to Try in 2021

CherryPy is a micro-framework. It is designed to solve specific problems, capable of running the program on any operating system. CherryPy is used in the following cases:

  • To create an application with small code size.
  • There is a need to manage several servers at the same time.
  • You need to monitor the performance of applications.

CherryPy refers to Python Frameworks, which are designed for specific tasks. It's clear, user-friendly, and ideal for Android development.

Pros

CherryPy Python tool has a friendly and understandable development environment. This is a functional and complete framework, which can be used to build good applications. The source code is open, so the platform is completely free for developers, and the community, although not too large, is very responsive, and always helps to solve problems.

Cons

There are not so many cons to this Python tool. It is not capable of performing complex tasks and functions, it is intended more for specific solutions, for example, for the development of certain plugins or modules.

Pyramid

Python Tools to Try in 2021

Python Pyramid tool is designed for programming complex objects and solving multifunctional problems. It is used by professional programmers and is traditionally used for identification and routing. It is aimed at a wide audience and is capable of developing API prototypes.

It is used in the following cases:

  • You need problem indicator tools to make timely adjustments and edits.
  • You use several programming languages at once;
  • You work with reporting and financial calculations, forecasting;
  • You need to quickly create a simple application.

At the same time, the Python Web Framework Pyramid allows you to create complex applications with great functionality, like translation software.

Pros

Pyramid does an excellent job of developing basic applications quickly. It is quite flexible and easy to learn. In fact, the key to the success of this framework is that it is completely based on fundamental principles, using simple and basic programming techniques. It is minimalistic, but at the same time offers users a lot of freedom of action. It is able to work with both small applications and powerful multifunctional programs.

Cons

It is difficult to deviate from the basic principles. This Python tool makes the decision for you. Simple programs are very easy to implement. But to do something complex and large-scale, you have to completely immerse yourself in the study of the environment and obey it.

Grok

Python Tools to Try in 2021

Grok is a Python tool that works with templates. Its main task is to eliminate repetitions in the code. If the element is repeated, then the template that was already created earlier is simply applied. This greatly simplifies and speeds up the work.

Grok suits developers in the following cases:

  • If a programmer has little experience and is not yet ready to develop his modules.
  • There is a need to quickly develop a simple application.
  • The functionality of the application is simple, straightforward, and the interface does not play a key role.

Pros

The Grok framework is a child of Zope3, which was released earlier. It has a simplified structure of work, easy installation of modules, more capabilities, and better flexibility. It is designed to develop small applications. Yes, it is not intended for complex work, but due to its functionality, it allows you to quickly implement a project.

Cons

The Grok community is not very large, as this Python tool has not gained widespread popularity. Nevertheless, it is used by Python adepts for comfortable development. It is impossible to implement complex tasks on it since the possibilities are quite limited.

Grok is one of the best Python Web Frameworks. It is understandable and has enough features for comfortable development.

Web2Py

Python Tools to Try in 2021

Web2Py is a Python tool that has its own IDE, which includes a code editor, a debugger, and deployment tools. It works great without the need for configuration or installation, provides a high level of data security, and is suitable for work on various platforms.

Web2Py is great in the following cases:

  • When there is a need to develop something on different operating systems.
  • If there is no way to install and configure the framework.
  • When a high level of data security is required, for example, when developing financial applications or sales performance management tools.
  • If you need to carefully track bugs right during development, and not during the testing phase.

Pros

Web2Py is capable of working with different protocols, has a built-in error tracker, and has a backward compatibility feature that helps to work on the basis of previous versions of the framework. This means that code maintenance becomes much easier and cheaper. It's free, open-source, and very flexible.

Cons

Among the many Python tools, there are not many that require the latest version of the language. Web2Py is one of those and won't work on Python versions below 3. Therefore, you need to constantly monitor the updates.

Web2Py does an excellent job of its tasks. It is quite simple and accessible to everyone.

BlueBream

Python Tools to Try in 2021

BlueBream used to be called Zope3 before. It copes well with tasks of the medium and high level of complexity and is suitable for working on serious projects.

Pros

The BlueBream build system is quite powerful and suitable for complex tasks. You can create functional applications on it, and the principle of reuse of components makes the code easier. At the same time, the speed of development increases. The software can be scaled, and a transactional object database provides an easy path to store it. This means that queries are processed quickly and database management is simple.

Cons

This is not a very flexible framework, it is better to know clearly in advance what is required of it. In addition, it cannot withstand heavy loads. When working with 1000 users at the same time, it can crash and give errors. Therefore, it should be used to solve narrow problems.

Python frameworks are often designed for specific tasks. BlueBream is one of these and is suitable for applications where database management plays a key role.

Conclusion

Python tools come in different forms and have vastly different capabilities. There are quite a few of them, but in 2021 these will be the most popular and in demand. Experienced programmers always choose several development tools for their comfortable work.

16 June, 2021 03:30PM by Andriy Zapisotskyi

Abhijith PA

Changing LCD screen of car infotainment system

I have a 2013 model used car that I bought two years ago. It came with a 7 inch touch screen infotainment system on its dash board, with features like navigation, Bluetooth phone connectivity and a good FM/AM radio. Apart from the radio, I rarely use the navigation or Bluetooth phone sync. After a couple of months the touch started to become non-responsive. Since all the important things such as call termination, mute and volume control have physical switches, I was happy with it.

During a periodic car check-up at a local workshop, the mechanic pulled the battery terminals, leaving the infotainment system locked. Now it asks for a 4 digit pass code. It's one of their anti-theft mechanisms, and in order to enter those digits you need a touch-responsive screen. So now I am completely locked out.

I visited the car's service center to get it changed; it turns out they don't repair it, only replace the whole unit, and that will cost me Rs 50,000. Considering my usage is restricted to the radio, that price is way too much.

So my next options were,

  1. Use a normal cheap car radio player. I dropped this plan, since stereo players come in very small sizes and I might need plastic placeholders to close the rest of the area left by the 7 inch screen. This can mess up the aesthetics of the dash board, and it would also affect the value of the car if I sell it in the future.

  2. Use a 7 inch third-party infotainment system available in the market. I did enquire with a local car accessories shop, and I dropped this plan as well because most of them are Android and come with an enormous number of pre-installed apps which I am never going to use anyway. And I don't need one more device that needs to be connected to the Internet. Also, the wiring at the back of these devices is different from what I already have, so it would need rewiring.

  3. The last option was to change the infotainment system parts on my own. I was under the impression that these units are made in-house, tightly assembled and sealed with screws you have never seen in your life, and that you would never be able to open one up, let alone fix parts.

A couple of articles showed me that car infotainment system sizes and the wiring at the back are actually standardized. This particular system's OEM is LG.

After gaining confidence from several YouTube videos and articles, I chose the 3rd option and started to disassemble the head unit. Luckily I had all the right screwdriver heads with me. After taking everything apart I could see that the back of the touch screen glass had bulged and was pushing against the display. The back of the display had a label with its manufacturer's name, Innolux, and its model number, AT070TN94. You might know this company from all those Raspberry Pi displays in the market. Tesla also uses Innolux displays in its infotainment units.

I was able to spot an online reseller in Hong Kong. I only needed the touch screen, but to be on the safe side I placed an order for the display as well (the price difference was negligible). It took 1 month to reach my hands due to COVID restrictions. I assembled everything together and collected the radio pass code the very same day I received the item. Everything is working now. Yay!

The touch screen + display cost me Rs 6000/-. I could easily get an average Android infotainment system for around this price, but it was the price I paid for avoiding e-waste.

So if you have a broken head unit in your pre-2015 era car, try to change the parts yourself. It's not as complicated as our mobile phones.

[Photos: back of the infotainment system, inside the infotainment unit, the damaged screen, after reassembling with the new screen, and the finished result]

16 June, 2021 05:53AM

June 15, 2021

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Nikon D4 repair

“Various error messages saved on sequence and aperture control unit in front module. Error also appears on testing here. The service period for this product is expired, and Nikon will not deliver parts. Thus, we can unfortunately not repair your camera. Charged for inspection.”

So, seriously, Nikon, I could understand it if this were a dinky $200 compact, or a phone which could no longer keep up with the burdens of the ecosystem it was part of, but this is a 2012 flagship DSLR. You pay $6000, and yet you can't even get parts nine years later? I'm usually not the one to complain the loudest about “planned obsolescence”, but I think stocking up on parts should be possible. :-)

Supposedly, a third-party repair shop still has D4 parts and the know-how to switch them without messing things up (which I don't). So with some luck, I'll get five more years or so out of it. Oh well, 131k exposures isn't bad at any rate…

15 June, 2021 08:25PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, May 2021

A Debian LTS logo

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In May, we again put aside 2100 EUR to fund Debian projects. No proposals for new projects were received, so we're looking forward to receiving more proposals from the various Debian teams! Please do not hesitate to submit a proposal if there is a project that could benefit from the funding!

Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In May, 12 contributors have been paid to work on Debian LTS, their reports are available:

  • Abhijith PA did 7.0h (out of 14h assigned and 12h from April), thus carrying over 19h to June.
  • Anton Gladky did 12h (out of 12h assigned).
  • Ben Hutchings did 16h (out of 13.5h assigned plus 4.5h from April), thus is carrying over 2h for June.
  • Chris Lamb did 18h (out of 18h assigned).
  • Holger Levsen‘s work was coordinating/managing the LTS team, he did 5.5h and gave back 6.5h to the pool.
  • Markus Koschany did 15h (out of 29.75h assigned and 15h from April), thus carrying over 29.75h to June.
  • Ola Lundqvist did 12h (out of 12h assigned and 4.5h from April), thus carrying over 4.5h to June.
  • Roberto C. Sánchez did 7.5h (out of 27.5h assigned and 27h from April), and gave back 47h to the pool.
  • Sylvain Beucler did 29.75h (out of 29.75h assigned).
  • Thorsten Alteholz did 29.75h (out of 29.75h assigned).
  • Utkarsh Gupta did 29.75h (out of 29.75h assigned).

Evolution of the situation

In May we released 33 DLAs and mostly skipped our public IRC meeting at the end of the month. In June we'll have another team meeting using video, as outlined on our LTS meeting page.
Also, two months ago we announced that Holger would step back from his coordinator role and today we are announcing that he is back for the time being, until a new coordinator is found.
Finally, we would like to remark once again that we are constantly looking for new contributors. Please contact Holger if you are interested!

The security tracker currently lists 41 packages with a known CVE and the dla-needed.txt file has 21 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.

15 June, 2021 12:04PM by Raphaël Hertzog

June 14, 2021

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, May 2021

In May I was assigned 13.5 hours of work by Freexian's Debian LTS initiative and carried over 4.5 hours from earlier months. I worked 16 hours and will carry over the remainder.

I finished reviewing the futex code in the PREEMPT_RT patchset for Linux 4.9, and identified several places where it had been mis-merged with the recent futex security fixes. I sent a patch for these upstream, which was accepted and applied in v4.9.268-rt180.

I have continued updating the Linux 4.9 package to later upstream stable versions, and backported some missing security fixes. I have still not made a new upload, but intend to do so this week.

14 June, 2021 09:47PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Opinionated IkiWiki v1

It's been more than a year since I wrote about Opinionated IkiWiki, a pre-configured, containerized deployment of Ikiwiki with opinions. My intention was to make something that is easy to get up and running if you are more experienced with containers than IkiWiki.

I haven't yet switched to Opinionated IkiWiki for this site, but that was my goal, and I think it's mature enough now that I can migrate over at some point, so it seems a good time to call it Version 1.0. I have been using it for my own private PIM systems for a while now.

You can pull built images from quay.io here: https://quay.io/repository/jdowland/opinionated-ikiwiki
The source lives here: https://github.com/jmtd/opinionated-ikiwiki
A description of some of the changes made to the IkiWiki version lives here: https://github.com/jmtd/ikiwiki/blob/opinionated-doc/README.md

14 June, 2021 08:58PM

Enrico Zini

Pipelining

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Running actions on a server is nice, but a network round trip for each action is not very efficient. If I need to run a linear sequence of actions, I can stream them all to the server, and then read replies streamed from the server as they get executed.

This technique is called pipelining and one can see it used, for example, in Redis, or Mitogen.
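
To illustrate the idea outside of Transilience, here is a small generic Python sketch of my own (not Transilience's actual wire protocol): all requests are written down the pipe first, and the replies are read back afterwards, so the whole batch costs roughly one round trip of latency.

import json
import socket
import threading


def server(conn: socket.socket) -> None:
    rfile = conn.makefile("rb")
    wfile = conn.makefile("wb")
    for line in rfile:
        request = json.loads(line)
        # "Execute" the action and stream the result back straight away
        wfile.write(json.dumps({"id": request["id"], "state": "noop"}).encode() + b"\n")
        wfile.flush()


client, remote = socket.socketpair()
threading.Thread(target=server, args=(remote,), daemon=True).start()

rfile = client.makefile("rb")
wfile = client.makefile("wb")

# Stream all the actions to the "server" without waiting for replies...
for i in range(5):
    wfile.write(json.dumps({"id": i, "action": "noop"}).encode() + b"\n")
wfile.flush()

# ...then read the replies back, in order, as they are produced.
for _ in range(5):
    print(json.loads(rfile.readline()))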

Roles

Ansible has the concept of "Roles" as a series of related tasks: I'll play with that. Here's an example role to install and set up fail2ban:

class Role(role.Role):
    def main(self):
        self.add(builtin.apt(
            name=["fail2ban"],
            state="present",
        ))

        self.add(builtin.copy(
            content=inline("""
                [postfix]
                enabled = true
                [dovecot]
                enabled = true
            """),
            dest="/etc/fail2ban/jail.local",
            owner="root",
            group="root",
            mode=0o644,
        ), name="configure fail2ban")

I prototyped roles as classes, with methods that push actions down the pipeline. If an action fails, all further actions for the same role won't be executed, and will be marked as skipped.

Since skipping is applied per-role, it means that I can blissfully stream actions for multiple roles to the server down the same pipe, and errors in one role will stop executing that role and not others. Potentially I can get multiple roles going with a single network round-trip:

#!/usr/bin/python3

import sys
from transilience.system import Mitogen
from transilience.runner import Runner


@Runner.cli
def main():
    system = Mitogen("my server", "ssh", hostname="server.example.org", username="root")

    runner = Runner(system)

    # Send roles to the server
    runner.add_role("general")
    runner.add_role("fail2ban")
    runner.add_role("prosody")

    # Run until all roles are done
    runner.main()

if __name__ == "__main__":
    sys.exit(main())

That looks like a playbook, using Python as glue rather than YAML.

Decision making in roles

Besides filing a series of actions, a role may need to take decisions based on the results of previous actions, or on facts discovered from the server. In that case, we need to wait until the results we need come back from the server, and then decide if we're done or if we want to send more actions down the pipe.

Here's an example role that installs and configures Prosody:

from transilience import actions, role
from transilience.actions import builtin
from .handlers import RestartProsody


class Role(role.Role):
    """
    Set up prosody XMPP server
    """
    def main(self):
        self.add(actions.facts.Platform(), then=self.have_facts)

        self.add(builtin.apt(
            name=["certbot", "python-certbot-apache"],
            state="present",
        ), name="install support packages")

        self.add(builtin.apt(
            name=["prosody", "prosody-modules", "lua-sec", "lua-event", "lua-dbi-sqlite3"],
            state="present",
        ), name="install prosody packages")

    def have_facts(self, facts):
        facts = facts.facts  # Malkovich Malkovich Malkovich!

        domain = facts["domain"]
        ctx = {
            "ansible_domain": domain
        }

        self.add(builtin.command(
            argv=["certbot", "certonly", "-d", f"chat.{domain}", "-n", "--apache"],
            creates=f"/etc/letsencrypt/live/chat.{domain}/fullchain.pem"
        ), name="obtain chat certificate")

        with self.notify(RestartProsody):
            self.add(builtin.copy(
                content=self.template_engine.render_file("roles/prosody/templates/prosody.cfg.lua", ctx),
                dest="/etc/prosody/prosody.cfg.lua",
            ), name="write prosody configuration")

            self.add(builtin.copy(
                src="roles/prosody/templates/firewall-ruleset.pfw",
                dest="/etc/prosody/firewall-ruleset.pfw",
            ), name="write prosody firewall")

    # ...

This files some general actions down the pipe, with a hook that says: when the results of this action come back, run self.have_facts().

At that point, the role can use the results to build certbot command lines, render prosody's configuration from Jinja2 templates, and file further actions down the pipe.

Note that this way, while the server is potentially still busy installing prosody, we're already streaming prosody's configuration to it.

If anything goes wrong with the installation of prosody's package, the role will be marked as failed, and all further actions of the same role, even those filed by have_facts(), will be skipped.

Notify and handlers

In the previous example self.notify() also appears: that's my attempt to model the equivalent of Ansible's handlers. If any of the actions inside the with block produce changes, then the RestartProsody role will be executed, potentially filing more actions at the end of the playbook.

The runner will take care of collecting all the triggered role classes in a set, which discards duplicates, and then running the main() method of all resulting roles, which will cause more actions to be filed down the pipe.
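
For completeness, here is a hedged sketch of what a handler like RestartProsody could look like, assuming (as described above) that a handler is simply another role whose main() files the actions to run when it is triggered; the systemctl invocation is only an illustration:

from transilience import role
from transilience.actions import builtin


class RestartProsody(role.Role):
    """
    Handler role: runs when a role that notified it produced changes.
    """
    def main(self):
        # File the action(s) to perform when the handler is triggered
        self.add(builtin.command(
            argv=["systemctl", "restart", "prosody"],
        ), name="restart prosody")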

Action conditions

Sometimes some actions are only meaningful as consequences of other actions. Let's take, for example, enabling buster-backports as an extra apt source:

        a = self.add(builtin.copy(
            owner="root",
            group="root",
            mode=0o644,
            dest="/etc/apt/sources.list.d/debian-buster-backports.list",
            content="deb [arch=amd64] https://mirrors.gandi.net/debian/ buster-backports main contrib",
        ), name="enable backports")

        self.add(builtin.apt(
            update_cache=True
        ), name="update after enabling backports",
           # Run only if the previous copy changed anything
           when={a: ResultState.CHANGED},
        )

Here we want to update Apt's cache, which is a slow operation, only after we actually write /etc/apt/sources.list.d/debian-buster-backports.list. If the file was already there from a previous run, we can skip downloading the new package lists.

The when= attribute adds an annotation to the action that is sent down the pipeline, saying that it should only be run if the state of a previous action matches the given one.

In this case, when it is the turn of "update after enabling backports" on the remote, it gets skipped unless the state of the previous "enable backports" action is CHANGED.

Effects of pipelining

I ported enough of Ansible's modules to be able to run the provisioning scripts of my VPS entirely via Transilience.

This is the playbook run as plain Ansible:

$ time ansible-playbook vps.yaml
[...]
servername       : ok=55   changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

real    2m10.072s
user    0m33.149s
sys 0m10.379s

This is the same playbook run with Ansible speeded up via the Mitogen backend, which makes Ansible more bearable:

$ export ANSIBLE_STRATEGY=mitogen_linear
$ time ansible-playbook vps.yaml
[...]
servername       : ok=55   changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

real    0m24.428s
user    0m8.479s
sys 0m1.894s

This is the same playbook ported to Transilience:

$ time ./provision
[...]
real    0m2.585s
user    0m0.659s
sys 0m0.034s

Doing nothing went from 2 minutes down to 3 seconds!

That's the kind of running time that finally makes me comfortable with maintaining my VPS by editing the playbook only, and never logging in to mess with the system configuration by hand!

Next steps

I'm quite happy with what I have: I can now maintain my VPS with a simple script with quick iterative cycles.

I might use it to develop new playbooks, and port them to Ansible only when they're tested and need to be shared with infrastructure that has to rely on something more solid and battle-tested than a prototype provisioning system.

I might also keep working on it as I have more interesting ideas that I'd like to try. I feel like Ansible reached some architectural limits that are hard to overcome without a major redesign, and that are in many ways hardcoded in its playbook configuration. It's nice to be able to try out new designs without that baggage.

I'd love it if even just the library of Transilience actions could grow and gain widespread use. Ansible modules standardized a set of management operations that I think became the way people think about system management, and that should really be broadly available outside of Ansible.

If you are interested in playing with Transilience, such as:

  • polishing the packaging, adding a setup.py, publishing to PIP, packaging in Debian
  • adding example playbooks
  • porting more Ansible modules to Transilience actions
  • improving the command line interface
  • test other ways to feed actions to pipelines
  • test other pipeline primitives
  • add backends besides Local and Mitogen
  • prototype a parser to turn a subset of YAML playbook syntax into Transilience actions
  • adopt it into your multinational organization infrastructure to speed up provisioning times by orders of magnitude at the cost of the development time that it takes to turn this prototype into something solid and road tested
  • create a startup and get millions in venture capital to disrupt the provisioning ecosystem

do get in touch or send a pull request! :)

Next step: Reimagining Ansible variables.

14 June, 2021 03:40PM

Use ansible actions in a script

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

I like many of the modules provided with Ansible: they are convenient, platform-independent implementations of common provisioning steps. They'd be fantastic to have in a library that I could use in normal programs.

This doesn't look easy to do with Ansible code as it is. Also, the code quality of various Ansible modules isn't what I'd want in a standard library of cross-platform provisioning functions.

Modeling Actions

I want to keep the declarative, idempotent aspect of describing actions on a system. A good place to start could be a hierarchy of dataclasses that hold the same parameters as ansible modules, plus a run() method that performs the action:

@dataclass
class Action:
    """
    Base class for all action implementations.

    An Action is the equivalent of an ansible module: a declarative
    representation of an idempotent operation on a system.

    An Action can be run immediately, or serialized, sent to a remote system,
    run, and sent back with its results.
    """
    uuid: str = field(default_factory=lambda: str(uuid.uuid4()))
    result: Result = field(default_factory=Result)

    def summary(self):
        """
        Return a short text description of this action
        """
        return self.__class__.__name__

    def run(self, system: transilience.system.System):
        """
        Perform the action
        """
        self.result.state = ResultState.NOOP

I like that Ansible tasks have names, and I hate having to give names to trivial tasks like "Create directory /foo/bar", so I added a summary() method so that trivial tasks like that can take care of naming themselves.

Dataclasses make it possible to introspect fields and annotate them with extra metadata, and together with docstrings, I can make actions reasonably self-documenting.

I ported some of Ansible's modules over: see complete list in the git repository.
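
To make this concrete, here is a hedged sketch of what a tiny action built on that base class could look like (the Touch name and the import path are assumptions made for illustration; this is not one of the ported modules):

import os
from dataclasses import dataclass

# Assumed import path: Action and ResultState are the classes shown above
from transilience.actions import Action, ResultState


@dataclass
class Touch(Action):
    """
    Ensure that a file exists, similar to `touch`, in an idempotent way.
    """
    # Needs a default value because the base class fields already have defaults
    path: str = ""

    def summary(self):
        return f"Touch {self.path!r}"

    def run(self, system):
        super().run(system)  # sets the result state to NOOP
        if os.path.exists(self.path):
            # Nothing to do: the file is already there
            return
        with open(self.path, "wb"):
            pass
        self.result.state = ResultState.CHANGED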

Running Actions in a script

With a bit of glue code I can now run Ansible-style functions from a plain Python script:

#!/usr/bin/python3

from transilience.runner import Script

script = Script()

for i in range(10):
    script.builtin.file(state="touch", path=f"/tmp/test{i}")

Running Actions remotely

Dataclasses have an asdict function that makes them trivially serializable. If their members stick to data types that can be serialized with Mitogen and the run implementation doesn't use non-pure, non-stdlib Python modules, then I can trivially run actions on all sorts of remote systems using Mitogen:

#!/usr/bin/python3

from transilience.runner import Script
from transilience.system import Mitogen

script = Script(system=Mitogen("my server", "ssh", hostname="machine.example.org", username="user"))

for i in range(10):
    script.builtin.file(state="touch", path=f"/tmp/test{i}")
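
As a small illustration of the serialization point above (a sketch only; the exact wire format Transilience uses may well differ, and it assumes builtin.file can be called directly to build an Action instance, as roles do with builtin.apt and builtin.copy):

from dataclasses import asdict

from transilience.actions import builtin

# Build an action without running it, then turn it into a plain,
# serializable dict of its fields (uuid, result, parameters, ...).
action = builtin.file(state="touch", path="/tmp/test0")
print(asdict(action))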

How fast would that be, compared to Ansible?

$ time ansible-playbook test.yaml
[...]
real    0m15.232s
user    0m4.033s
sys 0m1.336s

$ time ./test_script

real    0m4.934s
user    0m0.547s
sys 0m0.049s

With a network round-trip for each single operation I'm already 3x faster than Ansible, and it can run on nspawn containers, too!

I always wanted to have a library of ansible modules useable in normal scripts, and I've always been angry with Ansible for not bundling their backend code in a generic library. Well, now there's the beginning of one!

Sweet! Next step, pipelining.

14 June, 2021 02:35PM

François Marier

Self-hosting an Ikiwiki blog

8.5 years ago, I moved my blog to Ikiwiki and Branchable. It's now time for me to take the next step and host my blog on my own server. This is how I migrated from Branchable to my own Apache server.

Installing Ikiwiki dependencies

Here are all of the extra Debian packages I had to install on my server:

apt install ikiwiki ikiwiki-hosting-common gcc libauthen-passphrase-perl libcgi-formbuilder-perl libcrypt-ssleay-perl libjson-xs-perl librpc-xml-perl python-docutils libxml-feed-perl libsearch-xapian-perl libmailtools-perl highlight-common libsearch-xapian-perl xapian-omega
apt install --no-install-recommends ikiwiki-hosting-web libgravatar-url-perl libmail-sendmail-perl libcgi-session-perl
apt purge libnet-openid-consumer-perl

Then I enabled the CGI module in Apache:

a2enmod cgi

and un-commented the following in /etc/apache2/mods-available/mime.conf:

AddHandler cgi-script .cgi

Creating a separate user account

Since Ikiwiki needs to regenerate my blog whenever a new article is pushed to the git repo or a comment is accepted, I created a restricted user account for it:

adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog

git setup

Thanks to Branchable storing blogs in git repositories, I was able to import my blog using a simple git clone in /home/blog (the srcdir):

git clone --bare git://feedingthecloud.branchable.com/ source.git

Note that the name of the directory (source.git) is important for the ikiwikihosting plugin to work.

Then I pulled the .setup file out of the setup branch in that repo and put it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the setup branch and the origin remote from that clone:

git branch -d setup
git remote rm origin

Following the recommended git configuration, I created a working directory (the repository) for the blog user to modify the blog as needed:

cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud

I added my own ssh public key to /home/blog/.ssh/authorized_keys so that I could push to the srcdir from my laptop.

Finally, I generated a new ssh key without a passphrase:

ssh-keygen -t ed25519

and added it as deploy key to the GitHub repo which acts as a read-only mirror of my blog.

Ikiwiki config

While I started with the Branchable setup file, I changed the following things in it:

adminemail: webmaster@fmarier.org
srcdir: /home/blog/FeedingTheCloud
destdir: /var/www/blog
url: https://feeding.cloud.geek.nz
cgiurl: https://feeding.cloud.geek.nz/blog.cgi
cgi_wrapper: /var/www/blog/blog.cgi
cgi_wrappermode: 675
add_plugins:
- goodstuff
- lockedit
- comments
- blogspam
- sidebar
- attachment
- favicon
- format
- highlight
- search
- theme
- moderatedcomments
- flattr
- calendar
- headinganchors
- notifyemail
- anonok
- autoindex
- date
- relativedate
- htmlbalance
- pagestats
- sortnaturally
- ikiwikihosting
- gitpush
- emailauth
disable_plugins:
- brokenlinks
- fortune
- more
- openid
- orphans
- passwordauth
- progress
- recentchanges
- repolist
- toggle
- txt
sslcookie: 1
cookiejar:
  file: /home/blog/.ikiwiki/cookies
useragent: ikiwiki
git_wrapper: /home/blog/source.git/hooks/post-update
urlalias:
- http://feeds.cloud.geek.nz/
- http://www.feeding.cloud.geek.nz/
owner: francois@fmarier.org
hostname: feeding.cloud.geek.nz
emailauth_sender: login@fmarier.org
allowed_attachments: admin()

Then I created the destdir:

mkdir /var/www/blog
chown blog:blog /var/www/blog

and generated the initial copy of the blog as the blog user:

ikiwiki --setup .ikiwiki/FeedingTheCloud.setup --wrappers --rebuild

One thing that failed to generate properly was the tag cloud (from the pagestats plugin). I have not been able to figure out why it fails to generate any output when run this way, but if I push to the repo and let the git hook handle the rebuilding of the wiki, the tag cloud is generated correctly. Consequently, fixing this is not high on my list of priorities, but if you happen to know what the problem is, please reach out.

Apache config

Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem

    Header set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"

    Include /etc/fmarier-org/blog-common
</VirtualHost>

<VirtualHost *:443>
    ServerName www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem

    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>

<VirtualHost *:80>
    ServerName feeding.cloud.geek.nz
    ServerAlias www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz

    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>

and the common config I put in /etc/fmarier-org/blog-common:

ServerAdmin webmaster@fmarier.org

DocumentRoot /var/www/blog

LogLevel core:info
CustomLog ${APACHE_LOG_DIR}/blog-access.log combined
ErrorLog ${APACHE_LOG_DIR}/blog-error.log

AddType application/rss+xml .rss

<Location /blog.cgi>
        Options +ExecCGI
</Location>

before enabling all of this using:

a2ensite blog
apache2ctl configtest
systemctl restart apache2.service

The feeds.cloud.geek.nz domain used to be pointing to Feedburner and so I need to maintain it in order to avoid breaking RSS feeds from folks who added my blog to their reader a long time ago.

Server-side improvements

Since I'm now in control of the server configuration, I was able to make several improvements to how my blog is served.

First of all, I enabled the HTTP/2 and Brotli modules:

a2enmod http2
a2enmod brotli

and enabled Brotli compression by putting the following in /etc/apache2/conf-available/compression.conf:

<IfModule mod_brotli.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION BROTLI_COMPRESS
  </IfDefine>
</IfModule>
<IfModule mod_deflate.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION DEFLATE
  </IfDefine>
</IfModule>
<IfDefine TRANSFER_COMPRESSION>
  <IfModule mod_filter.c>
    AddOutputFilterByType ${TRANSFER_COMPRESSION} text/html text/plain text/xml text/css text/javascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/x-javascript application/javascript application/ecmascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/rss+xml
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/xml
  </IfModule>
</IfDefine>

and replacing /etc/apache2/mods-available/deflate.conf with the following:

# Moved to /etc/apache2/conf-available/compression.conf as per https://bugs.debian.org/972632

before enabling this new config:

a2enconf compression

Next, I made my blog available as a Tor onion service by putting the following in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    ServerAlias xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion

    Header set Onion-Location "http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion:443"; ma=315360000; persist=1'
    ... 

<VirtualHost *:80>
    ServerName xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Include /etc/fmarier-org/blog-common
</VirtualHost>

Then I followed the Mozilla Observatory recommendations and enabled the following security headers:

Header set Content-Security-Policy: "default-src 'none'; report-uri https://fmarier.report-uri.com/r/d/csp/enforce ; style-src 'self' 'unsafe-inline' ; img-src 'self' https://seccdn.libravatar.org/ ; script-src https://feeding.cloud.geek.nz/ikiwiki/ https://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ 'unsafe-inline' 'sha256-pA8FbKo4pYLWPDH2YMPqcPMBzbjH/RYj0HlNAHYoYT0=' 'sha256-Kn5E/7OLXYSq+EKMhEBGJMyU6bREA9E8Av9FjqbpGKk=' 'sha256-/BTNlczeBxXOoPvhwvE1ftmxwg9z+WIBJtpk3qe7Pqo=' ; base-uri 'self'; form-action 'self' ; frame-ancestors 'self'"
Header set X-Frame-Options: "SAMEORIGIN"
Header set Referrer-Policy: "same-origin"
Header set X-Content-Type-Options: "nosniff"

Note that the Mozilla Observatory is mistakenly identifying HTTP onion services as insecure, so you can ignore that failure.

I also used the Mozilla TLS config generator to improve the TLS config for my server.

Then I added security.txt and gpc.json to the root of my git repo and then added the following aliases to put these files in the right place:

Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt

I also followed these instructions to create a sitemap for my blog with the following alias:

Alias /sitemap.xml /var/www/blog/sitemap/index.rss

Finally, I simplified a few error pages to save bandwidth:

ErrorDocument 301 " "
ErrorDocument 302 " "
ErrorDocument 404 "Not Found"

Monitoring 404s

Another advantage of running my own web server is that I can monitor the 404s easily using logcheck by putting the following in /etc/logcheck/logcheck.logfiles:

/var/log/apache2/blog-error.log 

Based on that, I added a few redirects to point bots and users to the location of my RSS feed:

Redirect permanent /atom /index.atom
Redirect permanent /comments.rss /comments/index.rss
Redirect permanent /comments.atom /comments/index.atom
Redirect permanent /FeedingTheCloud /index.rss
Redirect permanent /feed /index.rss
Redirect permanent /feed/ /index.rss
Redirect permanent /feeds/posts/default /index.rss
Redirect permanent /rss /index.rss
Redirect permanent /rss/ /index.rss

and to tell them to stop trying to fetch obsolete resources:

Redirect gone /~ff/FeedingTheCloud
Redirect gone /gittip_button.png
Redirect gone /ikiwiki.cgi

I also used these 404s to discover a few old Feedburner URLs that I could redirect to the right place using archive.org:

Redirect permanent /feeds/1572545745827565861/comments/default /posts/watch-all-of-your-logs-using-monkeytail/comments.atom
Redirect permanent /feeds/1582328597404141220/comments/default /posts/news-feeds-rssatom-for-mythtvorg-and/comments.atom
...
Redirect permanent /feeds/8490436852808833136/comments/default /posts/recovering-lost-git-commits/comments.atom
Redirect permanent /feeds/963415010433858516/comments/default /posts/debugging-openwrt-routers-by-shipping/comments.atom

I also put the following robots.txt in the git repo in order to stop a bunch of authentication errors coming from crawlers:

User-agent: *
Disallow: /blog.cgi
Disallow: /ikiwiki.cgi

Future improvements

There are a few things I'd like to improve on my current setup.

The first one is to remove the ikiwikihosting and gitpush plugins and replace them with a small script which would simply git push to the read-only GitHub mirror. Then I could uninstall the ikiwiki-hosting-common and ikiwiki-hosting-web packages since that's all I use them for.

Next, I would like to have proper support for signed git pushes. At the moment, I have the following in /home/blog/source.git/config:

[receive]
    advertisePushOptions = true
    certNonceSeed = "(random string)"

but I'd like to also reject unsigned pushes.

While my blog now has a CSP policy which doesn't rely on unsafe-inline for scripts, it does rely on unsafe-inline for stylesheets. I tried to remove this but the actual calls to allow seemed to be located deep within jQuery and so I gave up. Patches for this would be very welcome of course.

Finally, I'd like to figure out a good way to deal with articles which don't currently have comments. At the moment, if you try to subscribe to their comment feed, it returns a 404. For example:

[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client 66.249.66.70:57381] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom

This is obviously not ideal since many feed readers will refuse to add a feed which is currently not found even though it could become real in the future. If you know of a way to fix this, please let me know.

14 June, 2021 07:20AM

Sergio Durigan Junior

I am not on Freenode anymore

This is a quick public announcement to say that I am not on the Freenode IRC network anymore. My nickname (sergiodj), which was more than a decade old, has just been deleted (along with every other nickname and channel in that network) from their database today, 2021-06-14.

For your safety, you should assume that everybody you knew at Freenode is not there either, even if you see their nicknames online. Do not trust without verifying. In fact, I would strongly encourage that you do not join Freenode anymore: their new policies are absolutely questionable and their disregard for their users is blatant.

If you would like to chat with me, you can find me at OFTC (preferred) and Libera.

14 June, 2021 04:00AM by Sergio Durigan Junior

June 13, 2021

Vincent Fourmond

Solution for QSoas quiz #2: averaging several Y values for the same X value

This post describes two similar solutions to the Quiz #2, using the data files found there. The two solutions described here rely on split-on-values. The first solution is the one that came naturally to me, and is by far the most general and extensible, but the second one is shorter, and doesn't require external script files.

Solution #1

The key to both solutions is to separate the original data into a series of datasets that only contain data at a fixed value of x (which corresponds here to a fixed pH), and then to process each dataset one by one to extract the average and standard deviation. This first step is done thus:
QSoas> load kcat-vs-ph.dat
QSoas> split-on-values pH x /flags=data
After these commands, the stack contains a series of datasets bearing the data flag, each containing a single column of data, as can be seen from the beginning of the output of a show-stack command:
QSoas> k
Normal stack:
	 F  C	Rows	Segs	Name	
#0	(*) 1	43	1	'kcat-vs-ph_subset_22.dat'
#1	(*) 1	44	1	'kcat-vs-ph_subset_21.dat'
#2	(*) 1	43	1	'kcat-vs-ph_subset_20.dat'
...
Each of these datasets has a meta-data named pH whose value is the original x value from kcat-vs-ph.dat. Now, the idea is to run a stats command on the resulting datasets, extracting the average value of x and its standard deviation, together with the value of the meta pH. The most natural and general way to do this is to use run-for-datasets, using the following script file (named process-one.cmds):
stats /meta=pH /output=true /stats=x_average,x_stddev
So the command looks like:
QSoas> run-for-datasets process-one.cmds flagged:data
This command produces an output file containing, for each flagged dataset, a line containing x_average, x_stddev, and pH. Then, it is just a matter of loading the output file and shuffling the columns in the right order to get the data in the form asked. Overall, this looks like this:
l kcat-vs-ph.dat
split-on-values pH x /flags=data
output result.dat /overwrite=true
run-for-datasets process-one.cmds flagged:data
l result.dat
apply-formula tmp=y2;y2=y;y=x;x=tmp
dataset-options /yerrors=y2
The slight improvement over what is described above is the use of the output command to write the output to a dedicated file (here result.dat), instead of out.dat and ensuring it is overwritten, so that no data remains from previous runs.

Solution #2

The second solution is almost the same as the first one, with two improvements:
  • the stats command can work with datasets other than the current one, by supplying them to the /buffers= option, so that it is not necessary to use run-for-datasets;
  • the use of the output file can be replaced by the use of the accumulator.
This yields the following, smaller, solution:
l kcat-vs-ph.dat
split-on-values pH x /flags=data
stats /meta=pH /accumulate=* /stats=x_average,x_stddev /buffers=flagged:data
pop
apply-formula tmp=y2;y2=y;y=x;x=tmp
dataset-options /yerrors=y2


About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

13 June, 2021 09:14PM by Vincent Fourmond (noreply@blogger.com)

June 12, 2021

hackergotchi for Norbert Preining

Norbert Preining

Future of Cinnamon in Debian

OK, this is not an easy post. I have been maintaining Cinnamon in Debian for quite some time, since around the time version 4 came out. The soon (hahaha) to be released Bullseye will carry the last release of the 4.x track, but version 5 is already waiting. After Bullseye, the future of Cinnamon in Debian currently looks bleak.

Since my switch to KDE/Plasma, I haven't used Cinnamon in months. Only occasionally have I tested new releases, but I never gave them a real-world test. Having left Gnome3 for its complete lack of usability for pro-users, I escaped to Cinnamon and found a good home there for quite some time – using modern technology but keeping user interface changes conservative. For a long time I hadn't even contemplated using KDE, having been burned during the bad days of KDE3/4 when bloat-as-bloat-can-be was the best description.

What a revelation it was that KDE/Plasma was more lightweight, faster, more responsive, integrated, and customizable, all in all simply great. Since my switch to KDE/Plasma I have not missed anything from the Gnome3 or Cinnamon world, not even for a second.

And that means I will most probably NOT be packaging Cinnamon 5, nor doing any real packaging work on Cinnamon for Debian in the future. Of course, I will try to keep maintaining the current set of packages for Bullseye, but for the next release, I think it is time that someone new steps in. Cinnamon packaging taught me a lot about how to deal with multiple related packages, which is of great use in the KDE packaging world.

If someone steps forward, I will surely be around for support and help, but as long as nobody takes up the banner, it will mean the end of Cinnamon in Debian.

Please contact me if you are interested!

12 June, 2021 01:19PM by Norbert Preining

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

fabre.debian.net has moved to Debian.net Team Infrastructure

Today, fabre.debian.net has moved to Debian.net Team Infrastructure

So far, fabre.debian.net was sponsored by FOSSHOST, which had provided us a VPS instance since January 2021, located at the OSU Open Source Lab. It worked pretty well. Thanks, FOSSHOST, for the sponsorship!

Now, fabre.debian.net uses a VPS instance provided by the Debian.net Team Infrastructure (still non-DSA managed). It is hosted at Hetzner Cloud.

About fabre.debian.net

fabre.debian.net is an experimental service to demonstrate how to improve the user experience of finding and fixing Debian unstable related bugs, making the "unstable life" more comfortable.

Thanks to the Debian.net Team for sponsoring!

12 June, 2021 12:00PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Wrote a quick hack to open chroot in emacs tramp.

Wrote a quick hack to open chroot in emacs tramp. I wrote a mode for cros_sdk and it was relatively simple. I figured that chroot must be easier. I could write one in about 30 minutes. I need to mount proc and home inside the chroot to make it useful, but here goes. chroot-tramp.el

12 June, 2021 08:32AM by Junichi Uekawa

June 11, 2021

hackergotchi for Mike Gabriel

Mike Gabriel

New: The Debian BBB Packaging Team (and: Kurento Media Server goes Debian)

Today, Fre(i)e Software GmbH has been contracted for packaging Kurento Media Server for Debian. This packaging project will be funded by GUUG e.V. (the German Unix User Group e.V.). A big thanks to the people from GUUG e.V. for making this packaging project possible.

About Kurento Media Server

Kurento is an open source software project providing a platform suitable for creating modular applications with advanced real-time communication capabilities. For knowing more about Kurento, please visit the Kurento project website: https://www.kurento.org.

Kurento is part of FIWARE. For further information on the relationship of FIWARE and Kurento check the Kurento FIWARE Catalog Entry. Kurento is also part of the NUBOMEDIA research initiative.

Kurento Media Server is a WebRTC-compatible server that processes audio and video streams, doing composable pipeline-based processing of media.

About BigBlueButton

As some of you may know, Kurento Media Server is one of the core components of the BigBlueButton software, an ,,Open Source Virtual Classroom Software''.

The context of the KMS funding is - after several other steps - getting the complete software component stack of BigBlueButton (aka BBB) into Debian some day, so that we can provide BBB as native Debian packages. On Debian. (Currently, one needs to use an Ubuntu version that is always a bit outdated.)

Due to this greater context, I just created the Debian BBB Packaging Team on salsa.debian.org.

Outlook and Appreciation

The current project (uploading Kurento Media Server to Debian) will very likely be extended to one year of package maintenance for all Kurento Media Server components in Debian. Extending this maintenance funding to a second year has also been discussed and seems a possible option.

Probably most Debian Developer colleagues will agree with me when I say that Debian packaging is not a one-time shot that ends once the first uploads of software packages have landed and settled. Debian package maintenance is a long-term responsibility and requires long-term commitment. I am very glad that the people at GUUG e.V. are on the same page with me (with us) regarding this. This is much and dearly appreciated. Thank you!!!

What else?

Well, we have also talked about another BigBlueButton component that is not yet in Debian: FreeSwitch. But more of that, when time has come.

How to Join the Debian BBB Packaging Team?

Please ping me via IRC (sunweaver on OFTC IRC) or [matrix] (@sunweaver:matrix.org).

How to Support the Debian BBB Packaging Team?

If you, your organization, your company, your municipality, your university, etc. feels like supporting the effort of packaging BigBlueButton for Debian, please get in touch with: mike.gabriel@freiesoftware.gmbh

And yes, the company homepage is not online yet, but it is in the making...

light+love
Mike (aka sunweaver)

11 June, 2021 09:35PM by sunweaver

hackergotchi for Colin Watson

Colin Watson

SSH quoting

A while back there was a thread on one of our company mailing lists about SSH quoting, and I posted a long answer to it. Since then a few people have asked me questions that caused me to reach for it, so I thought it might be helpful if I were to anonymize the original question and post my answer here.

The question was why a sequence of commands involving ssh and fiddly quoting produced the output they did. The first example was this:

$ ssh user@machine.local bash -lc "cd /tmp;pwd"
/home/user

Oh hi, my dubious life choices have been such that this is my specialist subject!

This is because SSH command-line parsing is not quite what you expect.

First, recall that your local shell will apply its usual parsing, and the actual OS-level execution of ssh will be like this:

[0]: ssh
[1]: user@machine.local
[2]: bash
[3]: -lc
[4]: cd /tmp;pwd

Now, the SSH wire protocol only takes a single string as the command, with the expectation that it should be passed to a shell by the remote end. The OpenSSH client deals with this by taking all its arguments after things like options and the target, which in this case are:

[0]: bash
[1]: -lc
[2]: cd /tmp;pwd

It then joins them with a single space:

bash -lc cd /tmp;pwd

This is passed as a string to the server, which then passes that entire string to a shell for evaluation, so as if you’d typed this directly on the server:

sh -c 'bash -lc cd /tmp;pwd'

The shell then parses this as two commands:

bash -lc cd /tmp
pwd

The directory change thus happens in a subshell (actually it doesn’t quite even do that, because bash -lc cd /tmp in fact ends up just calling cd because of the way bash -c parses multiple arguments), and then that subshell exits, then pwd is called in the outer shell which still has the original working directory.

The second example was this:

$ ssh user@machine.local bash -lc "pwd;cd /tmp;pwd"
/home/user
/tmp

Following the logic above, this ends up as if you’d run this on the server:

sh -c 'bash -lc pwd; cd /tmp; pwd'

The third example was this:

$ ssh user@machine.local bash -lc "cd /tmp;cd /tmp;pwd"
/tmp

And this is as if you’d run:

sh -c 'bash -lc cd /tmp; cd /tmp; pwd'

Now, I wouldn’t have implemented the SSH client this way, because I agree that it’s confusing. But /usr/bin/ssh is used as a transport for other things so much that changing its behaviour now would be enormously disruptive, so it’s probably impossible to fix. (I have occasionally agitated on openssh-unix-dev@ for at least documenting this better, but haven’t made much headway yet; I need to get round to preparing a documentation patch.) Once you know about it you can use the proper quoting, though. In this case that would simply be:

ssh user@machine.local 'cd /tmp;pwd'

Or if you do need to specifically invoke bash -l there for some reason (I’m assuming that the original example was reduced from something more complicated), then you can minimise your confusion by passing the whole thing as a single string in the form you want the remote sh -c to see, in a way that ensures that the quotes are preserved and sent to the server rather than being removed by your local shell:

ssh user@machine.local 'bash -lc "cd /tmp;pwd"'
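
If you are generating such commands from a program rather than typing them, it may be less error-prone to let a library do the quoting. A small Python sketch (my addition, not part of the original discussion) using the standard library's shlex.quote:

import shlex
import subprocess

remote_command = "cd /tmp; pwd"

# Quote the remote command once, so the remote `sh -c` sees
# `bash -lc 'cd /tmp; pwd'` exactly as intended.
argument = "bash -lc " + shlex.quote(remote_command)
subprocess.run(["ssh", "user@machine.local", argument], check=True)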

Shell parsing is hard.

11 June, 2021 10:22AM by Colin Watson

hackergotchi for Mike Gabriel

Mike Gabriel

Linux on Acer Spin 3

Recently, I bought an Acer Spin 3 Convertible Notebook for the company and provided it to Robert Tari for his daily work on Ayatana Indicators (which currently is funded by the UBports Foundation via my company Fre(i)e Software GmbH).

Some days ago Robert reported back about a sleepless night he spent with that machine... He got stuck with a tricky issue regarding the installation of Manjaro GNU/Linux on that machine, which could -- in the end -- be resolved by a not so well documented trick.

Before anyone else spends another sleepless night on this, we thought we'd better share Robert's solution.

So, the below applies to the Acer Spin 3 series (and probably to other Spin models, perhaps even some other Acer laptops):

Acer Spin 3 Pre-Inst Cheat Codes

Before you even plug in the USB install media:

  1. Go to UEFI settings (i.e. BIOS for us elderly people) [F2]
  2. Security -> Set Supervisor Password [Enabled]
  3. Enter the password you'll use
  4. Boot -> Secure Boot -> [Disabled] (you can't disable it without a set supervisor password)
  5. Exit -> Exit Saving Changes
  6. Restart and go to UEFI settings again [F2]
  7. Main -> [Now press CTRL + S] -> VMD Controller -> [Disabled]
  8. Exit -> Exit Saving Changes
  9. Now plug in the install USB and restart

Especially the disabling of the VMD Controller is essential. Otherwise, GRUB won't find any partitions or EFI-registered boot items after the installation and will drop into the EFI recovery shell.

Robert hasn't tested the Wacom pen that comes with the device, nor the fingerprint reader, yet.

Everything else works out-of-the-box.

light+love
Mike Gabriel (aka sunweaver)

11 June, 2021 06:31AM by sunweaver

June 10, 2021

Petter Reinholdtsen

Nikita version 0.6 released - free software archive API server

I am very pleased to be able to share with you the announcement of a new version of the archiving system Nikita published by its lead developer Thomas Sødring:

It is with great pleasure that we can announce a new release of nikita. Version 0.6 (https://gitlab.com/OsloMet-ABI/nikita-noark5-core). This release makes new record keeping functionality available. This really is a maturity release. Both in terms of functionality but also code. Considerable effort has gone into refactoring the codebase and simplifying the code. Notable changes for this release include:

  • Significantly improved OData parsing
  • Support for business specific metadata and national identifiers
  • Continued implementation of domain model and endpoints
  • Improved testing
  • Ability to export and import from arkivstruktur.xml

We are currently in the process of reaching an agreement with an archive institution to publish their picture archive using nikita with business specific metadata and we hope that we can share this with you soon. This is an interesting project as it allows the organisation to bring an older picture archive back to life while using the original metadata values stored as business specific metadata. Combined with OData means the scope and use of the archive is significantly increased and will showcase both the flexibility and power of Noark.

I really think we are approaching a version 1.0 of nikita, even though there is still a lot of work to be done. The notable work at the moment is to implement access-control and full text indexing of documents.

My sincere thanks to everyone who has contributed to this release!

- Thomas

Release 0.6 2021-06-10 (d1ba5fc7e8bad0cfdce45ac20354b19d10ebbc7b)

  • Refactor metadata entity search
  • Remove redundant security configuration
  • Make OpenAPI documentation work
  • Change database structure / inheritance model to a more sensible approach
  • Make it possible to move entities around the fonds structure
  • Implemented a number of missing endpoints
  • Make sure yml files are in sync
  • Implemented/finalised storing and use of
    • Business Specific Metadata
    • Norwegian National Identifiers
    • Cross Reference
    • Keyword
    • StorageLocation
    • Author
    • Screening for relevant objects
    • ChangeLog
    • EventLog
  • Make generation of updated docker image part of successful CI pipeline
  • Implement pagination for all list requests
    • Refactor code to support lists
    • Refactor code for readability
    • Standardise the controller/service code
  • Finalise File->CaseFile expansion and Record->registryEntry/recordNote expansion
  • Improved Continuous Integration (CI) approach via gitlab
  • Changed conversion approach to generate tagged PDF documents
  • Updated dependencies
    • For security reasons
    • Brought codebase to spring-boot version 2.5.0
    • Remove import of necessary dependencies
    • Remove non-used metrics classes
  • Added new analysis to CI including
  • Implemented storing of Keyword
  • Implemented storing of Screening and ScreeningMetadata
  • Improved OData support
    • Better support for inheritance in queries where applicable
    • Brought in more OData tests
    • Improved OData/hibernate understanding of queries
    • Implement $count, $orderby
    • Finalise $top and $skip
    • Make sure & is used between query parameters
  • Improved Testing in codebase
    • A new approach for integration tests to make test more readable
    • Introduce tests in parallel with code development for TDD approach
    • Remove test that required particular access to storage
  • Implement case-handling process from received email to case-handler
    • Develop required GUI elements (digital postroom from email)
    • Introduced leader, quality control and postroom roles
  • Make PUT requests return 200 OK not 201 CREATED
  • Make DELETE requests return 204 NO CONTENT not 200 OK
  • Replaced 'oppdatert*' with 'endret*' everywhere to match latest spec
  • Upgrade Gitlab CI to use python > 3 for CI scripts
  • Bug fixes
    • Fix missing ALLOW
    • Fix reading of objects from jar file during start-up
    • Reduce the number of warnings in the codebase
    • Fix delete problems
    • Make better use of cascade for "leaf" objects
    • Add missing annotations where relevant
    • Remove the use of ETAG for delete
    • Fix missing/wrong/broken rels discovered by runtest
    • Drop unofficial convertFil (konverterFil) end point
    • Fix regex problem for dateTime
    • Fix multiple static analysis issues discovered by coverity
    • Fix proxy problem when looking for object class names
    • Add many missing translated Norwegian to English (internal) attribute/entity names
    • Change UUID generation approach to allow code also set a value
    • Fix problem with Part/PartParson
    • Fix problem with empty OData search results
    • Fix metadata entity domain problem
  • General Improvements
    • Makes future refactoring easier as coupling is reduced
    • Allow some constant variables to be set from property file
    • Refactor code to make reflection work better across codebase
    • Reduce the number of @Service layer classes used in @Controller classes
    • Be more consistent on naming of similar variable types
    • Start printing rels/href if they are applicable
    • Cleaner / standardised approach to deleting objects
    • Avoid concatenation when using StringBuilder
    • Consolidate code to avoid duplication
    • Tidy formatting for a more consistent reading style across similar class files
    • Make throw a log.error message not an log.info message
    • Make throw print the log value rather than printing in multiple places
    • Add some missing pronom codes
    • Fix time formatting issue in Gitlab CI
    • Remove stale / unused code
    • Use only UUID datatype rather than combination String/UUID for systemID
    • Mark variables final and @NotNull where relevant to indicate intention
  • Change Date values to DateTime to maintain compliance with Noark 5 standard
  • Domain model improvements using Hypersistence Optimizer
    • Move @Transactional from class to methods to avoid borrowing the JDBC Connection unnecessarily
    • Fix OneToOne performance issues
    • Fix ManyToMany performance issues
    • Add missing bidirectional synchronization support
    • Fix ManyToMany performance issue
  • Make List and Set use final-keyword to avoid potential problems during update operations
  • Changed internal URLs, replaced "hateoas-api" with "api".
  • Implemented storing of Precedence.
  • Corrected handling of screening.
  • Corrected _links collection returned for list of mixed entity types to match the specific entity.
  • Improved several internal structures.

If free and open standardized archiving API sound interesting to you, please contact us on IRC (#nikita on irc.oftc.net) or email (nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

10 June, 2021 03:10PM

Vincent Bernat

Serving WebP & AVIF images with Nginx

WebP and AVIF are two image formats for the web. They aim to produce smaller files than JPEG and PNG. They both support lossy and lossless compression, as well as alpha transparency. WebP was developed by Google and is a derivative of the VP8 video format.1 It is supported on most browsers. AVIF is using the newer AV1 video format to achieve better results. It is supported by Chromium-based browsers and has experimental support for Firefox.2

Converting and optimizing images

For this blog, I am using the following shell snippets to convert and optimize JPEG and PNG images. Skip to the next section if you are only interested in the Nginx setup.

JPEG images

JPEG images are converted to WebP using cwebp.

find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -q 84 -af '{}' -o '{}'.webp

They are converted to AVIF using avifenc from libavif:

find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      avifenc --codec aom --yuv 420 --min 20 --max 25 '{}' '{}'.avif

Then, they are optimized using jpegoptim built with Mozilla’s improved JPEG encoder, via Nix. This is one reason I love Nix.

jpegoptim=$(nix-build --no-out-link \
      -E 'with (import <nixpkgs>{}); jpegoptim.override { libjpeg = mozjpeg; }')
find media/images -type f -name '*.jpg' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      ${jpegoptim}/bin/jpegoptim --max=84 --all-progressive --strip-all

PNG images

PNG images are down-sampled to 8-bit RGBA-palette using pngquant. The conversion reduces file sizes significantly while being mostly invisible.

find media/images -type f -name '*.png' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      pngquant --skip-if-larger --strip \
               --quiet --ext .png --force

Then, they are converted to WebP with cwebp in lossless mode:

find media/images -type f -name '*.png' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -z 8 '{}' -o '{}'.webp

No conversion is done to AVIF: lossless compression is not as efficient as pngquant and lossy compression is only marginally better than what I get with WebP.

Keeping only the smallest files

I am only keeping WebP and AVIF images if they are at least 10% smaller than the original format: decoding is usually faster for JPEG and PNG; and JPEG images can be decoded progressively.3

for f in media/images/**/*.{webp,avif}; do
  orig=$(stat --format %s ${f%.*})
  new=$(stat --format %s $f)
  (( orig*0.90 > new )) || rm $f
done

I only keep AVIF images if they are smaller than WebP.

for f in media/images/**/*.avif; do
  [[ -f ${f%.*}.webp ]] || continue
  orig=$(stat --format %s ${f%.*}.webp)
  new=$(stat --format %s $f)
  (( $orig > $new )) || rm $f
done

We can compare how many images are kept when converted to WebP or AVIF:

printf "     %10s %10s %10s\n" Original WebP AVIF
for format in png jpg; do
  printf " ${format:u} %10s %10s %10s\n" \
    $(find media/images -name "*.$format" | wc -l) \
    $(find media/images -name "*.$format.webp" | wc -l) \
    $(find media/images -name "*.$format.avif" | wc -l)
done

AVIF is better than MozJPEG for most JPEG files while WebP beats MozJPEG only for one file out of two:

       Original       WebP       AVIF
 PNG         64         47          0
 JPG         83         40         74

Further reading

I didn’t detail my choices for quality parameters and there is not much science in it. Here are two resources providing more insight on AVIF:

Serving WebP & AVIF with Nginx

To serve WebP and AVIF images, there are two possibilities:

  1. use <picture> to let the browser pick the format it supports, or
  2. use content negotiation to let the server send the best-supported format.

I use the second approach. It relies on inspecting the Accept HTTP header in the request. For Chrome, it looks like this:

Accept: image/avif,image/webp,image/apng,image/*,*/*;q=0.8

I configure Nginx to serve the AVIF image, then the WebP image, and to fall back to the original JPEG/PNG image depending on what the browser advertises:4

http {
  map $http_accept $webp_suffix {
    default        "";
    "~image/webp"  ".webp";
  }
  map $http_accept $avif_suffix {
    default        "";
    "~image/avif"  ".avif";
  }
}
server {
  # […]
  location ~ ^/images/.*\.(png|jpe?g)$ {
    add_header Vary Accept;
    try_files $uri$avif_suffix$webp_suffix $uri$avif_suffix $uri$webp_suffix $uri =404;
  }
}

For example, let’s suppose the browser requests /images/ont-box-orange@2x.jpg. If it supports WebP but not AVIF, $webp_suffix is set to .webp while $avif_suffix is set to the empty string. The server tries to serve the first existing file in this list:

  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg

If the browser supports both AVIF and WebP, Nginx walks the following list:

  • /images/ont-box-orange@2x.jpg.webp.avif (it never exists)
  • /images/ont-box-orange@2x.jpg.avif
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg

Eugene Lazutkin explains in more detail how this works. I have only presented a variation of his setup supporting both WebP and AVIF.
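
To check that the negotiation behaves as expected, one can request the same image with different Accept headers and compare the returned Content-Type. A small sketch (my addition; the URL is a placeholder to adapt to your own server):

import urllib.request

# Placeholder URL: point it at an image served by the Nginx vhost above
URL = "https://example.com/images/ont-box-orange@2x.jpg"

for accept in ("image/avif,image/webp,*/*", "image/webp,*/*", "*/*"):
    request = urllib.request.Request(URL, method="HEAD", headers={"Accept": accept})
    with urllib.request.urlopen(request) as response:
        print(f"{accept:28} -> {response.headers.get('Content-Type')}")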


  1. VP8 is only used for lossy compression. Lossless compression is using an unrelated format↩︎

  2. Firefox support was scheduled for Firefox 86 but because of the lack of proper color space support, it is still not enabled by default. ↩︎

  3. Progressive decoding is not planned for WebP but could be implemented using low-quality thumbnail images for AVIF. See this issue for a discussion. ↩︎

  4. The Vary header ensures an intermediary cache (a proxy or a CDN) checks the Accept header before using a cached response. Internet Explorer has trouble with this header and may not be able to cache the resource properly. There is a workaround but Internet Explorer’s market share is now so small that it is pointless to implement it. ↩︎

10 June, 2021 01:39PM by Vincent Bernat

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

New Desktop Computer

I built my last desktop computer what seems like ages ago. In 2011, I was in a very different place, both financially and as a person. At the time, I was earning minimum wage at my school's café to pay rent. Since the café was owned by the school cooperative, I had an employee discount on computer parts. This gave me a chance to build my first computer from spare parts at a reasonable price.

After 10 years of service1, the time has come to upgrade. Although this machine was still more than capable for day to day tasks like browsing the web or playing casual video games, it started to show its limits when time came to do more serious work.

Old computer specs:

CPU: AMD FX-8530
Memory: 8GB DDR3 1600Mhz
Motherboard: ASUS TUF SABERTOOTH 990FX R2.0
Storage: Samsung 850 EVO 500GB SATA

I first started considering an upgrade in September 2020: David Bremner was kindly fixing a bug in ledger that kept me from balancing my books and since it seemed like a class of bug that would've been easily caught by an autopkgtest, I decided to add one.

After adding the necessary snippets to run the upstream testsuite (an easy task I've done multiple times now), I ran sbuild and ... my computer froze and crashed. Somehow, what I thought was a simple Python package was maxing all the cores on my CPU and using all of the 8GB of memory I had available.2

A few months later, I worked on jruby and the builds took 20 to 30 minutes — long enough to completely disrupt my flow. The same thing happened when I wanted to work on lintian: the testsuite would take more than 15 minutes to run, making quick iterations impossible.

Sadly, the pandemic completely wrecked the computer hardware market and prices here in Canada have only recently started to go down again. As a result, I had to wait longer than I would've liked to avoid paying scalper prices.

New computer specs:

CPU: AMD Ryzen 5900X
Memory: 64GB DDR4 3200MHz
Motherboard: MSI MPG B550 Gaming Plus
Storage: Corsair MP600 500 GB Gen4 NVME

The difference between the two machines is pretty staggering: I've gone from a CPU with 2 cores and 8 threads, to one with 12 cores and 24 threads. Not only that, but single-threaded performance has also vastly increased in those 10 years.

A good example would be building grammalecte, a package I've recently sponsored. I feel it's a good benchmark, since the build relies on single-threaded performance for the normal Python operations, while being threaded when it compiles the dictionaries.

On the old computer:

Build needed 00:10:07, 273040k disk space

And as you can see, on the new computer the build time has been significantly reduced:

Build needed 00:03:18, 273040k disk space

Same goes for things like the lintian testsuite. Since it's a very multi-threaded workload, it now takes less than 2 minutes to run; a 750% improvement.

All this to say I'm happy with my purchase. And — lo and behold — I can now build ledger without a hitch, even though it maxes my 24 threads and uses 28GB of RAM. Who would've thought...

Screen capture of htop showing how much resources ledger takes to build


  1. I managed to fry that PC's motherboard in 2016 and later replaced it with a brand new one. I also upgraded the storage along the way, from a very cheap cacheless 120GB SSD to a larger Samsung 850 EVO SATA drive. 

  2. As it turns out, ledger is mostly written in C++ :) 

10 June, 2021 04:00AM by Louis-Philippe Véronneau

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#33: Collaborative Editing and Execution in Shared Byobu Sessions

Welcome to the 33rd post in the rigorously raconteuring R recommendations series, or R4 for short. This post is also a post in the T4 series of tips, tricks, tools, and toys as it picks up and extends earlier posts on byobu. And it fits nicely in the more recent ESS-Intro series as we show some Emacs. You can find earlier R4 posts here, and the T4 posts here; the ESS-Intro series is here.

The focus of this short video (and slides) is on collaboration using files, but also entire sessions, execution and all aspects of joint exploration, development or debugging. Anything you can do in a terminal you can also do shared in a terminal. The video contains a brief lightning talk, and a shared session jointly with Grant McDermott and Vincent Arel-Bundock. My big big thanks to both of them for prodding and encouragement, as well as fearless participation in the joint section of the video:

The corresponding pdf slides are here.
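
As a minimal sketch of the simplest setup (all collaborators logged into the same account on the same host; the session name here is made up), byobu passes these subcommands straight through to tmux:

# first person creates a named session
byobu new-session -s pairing
# everybody else attaches to it from their own ssh login
byobu attach-session -t pairing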

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 June, 2021 12:49AM

June 09, 2021

hackergotchi for Michael Prokop

Michael Prokop

efivars is gone with Debian/bullseye #newinbullseye

Continuing with #newinbullseye, it's worth being aware that the efivars module is gone with the kernel version shipped as of Debian/bullseye.

Quoting from wiki.debian.org/UEFI:

The Linux kernel gives access to the UEFI configuration variables via a set of files under /sys, using two different interfaces.

The older interface was showing files under /sys/firmware/efi/vars, and this is what was used by default in both Wheezy and Jessie.

The new interface is efivarfs, which will expose things in a slightly different format under /sys/firmware/efi/efivars.
This is the new preferred way of using UEFI configuration variables, and Debian switched to it by default from Stretch onwards.

Now, CONFIG_EFI_VARS is no longer enabled in Debian due to commit 20146398c4 (shipped as such with Debian kernel package versions >=5.10.1-1~exp1).

As a result, the kernel module efivars is no longer available on systems running Debian kernels >=5.10 (which includes Debian/bullseye). Now, when running such a system in EFI mode, chroot-ing into a system and executing e.g. efibootmgr, it might fail with:

# efibootmgr
EFI variables are not supported on this system.

This is caused by /sys/firmware/efi/vars no longer being available, because of the disabled CONFIG_EFI_VARS. To get this working again, you need to make efivarfs available via:

# mount -t efivarfs efivarfs /sys/firmware/efi/efivars

Then efibootmgr and further tools relying on efivars should work again.
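
If you hit this while repairing a system from a live environment, the efivarfs mount is just one more step of the usual chroot preparation. A rough sketch, with device names and mount points purely as examples:

# mount the target system and the usual pseudo-filesystems
mount /dev/sda2 /mnt
mount /dev/sda1 /mnt/boot/efi
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
# make the EFI variables available inside the chroot, then run efibootmgr there
chroot /mnt mount -t efivarfs efivarfs /sys/firmware/efi/efivars
chroot /mnt efibootmgr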

FYI: if you’re a user of Grml’s grml-chroot tool, this is going to be handled out of the box for you.

09 June, 2021 05:16PM by mika

Thorsten Alteholz

My Debian Activities in May 2021

FTP master

This month I accepted 85 and rejected 6 packages. The overall number of packages that got accepted was only 88. Yeah, Debian is frozen but hopefully will unfreeze soon.

Debian LTS

This was my eighty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 29.75h. During that time I did LTS and normal security uploads of:

  • [DLA 2650-1] exim4 security update for 17 CVEs
  • [DLA 2665-1] ring security update for one CVE
  • [DLA 2669-1] libxml2 security update for one CVE
  • the fix for tnef/CVE-2019-18849 had been approved and I could do the PU-upload

I also made some progress with gpac and struggle with dozens of issues here.

Last but not least I did some days of frontdesk duties, which for whatever reason was rather time-consuming this month.

Debian ELTS

This month was the thirty-fifth ELTS month.

During my allocated time I uploaded:

  • ELA-420-1 for exim4
  • ELA-435-1 for python2.7
  • ELA-436-1 for libxml2

I also made some progress with python3.4

Last but not least I did some days of frontdesk duties.

Other stuff

On my neverending golang challenge I again uploaded some packages either for NEW or as source upload.

Last but not least I adopted gnucobol.

09 June, 2021 03:33PM by bladmin

June 08, 2021

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Encoding AVIF from Perl

AVIF is basically AV1 in a HEIF container; it's one of several contenders (among WebP, WebP2, HEIC, JPEG XL and probably others) for the next-generation still image codec on the web.

I wanted to try it out, but it turns out that ImageMagick (which is what my image gallery uses behind the scenes) in Debian is too old to support writing it, and besides, I'm not sure if I'd trust it to get everything right. Here's the code for 4:2:0:

      my ($fh, $raw_filename) = File::Temp::tempfile('tmp.XXXXXXXX', DIR => $dirname, SUFFIX => '.yuv');
      # Write a Y4M header, so that we get the chroma siting and color space correct.
      printf $fh "YUV4MPEG2 W%d H%d F25:1 Ip A1:1 C420jpeg XYSCSS=420JPEG XCOLORRANGE=FULL\nFRAME\n", $nwidth, $nheight;
      my %parms = (
        file => $fh,
        filename => $raw_filename,
        'sampling-factor' => '2x2'
      ); 
      $cimg->write(%parms);
      close($fh);

      my $ivf_filename;
      ($fh, $ivf_filename) = File::Temp::tempfile('tmp.XXXXXXXX', DIR => $dirname, SUFFIX => '.ivf');
      close($fh);

      system('aomenc', '--quiet', '--cpu-used=0', '--bit-depth=10', '--end-usage=q', '--cq-level=10',
             '--target-bitrate=0', '--good', '--aq-mode=1', '--matrix-coefficients=bt601',
             '-o', $ivf_filename, $raw_filename);
      unlink($raw_filename);
      system('MP4Box', '-quiet', '-add-image', "$ivf_filename:primary",
             '-ab', 'avif', '-ab', 'miaf', '-new', $cachename);

I eventually figured 4:2:0 wasn't going to cut it for all of my images, though, so I switched to 4:4:4, which confusingly needs rather different settings on the ImageMagick side (the aomenc side is unchanged):

      my ($fh, $raw_filename) = File::Temp::tempfile('tmp.XXXXXXXX', DIR => $dirname, SUFFIX => '.ycbcr');
      # Write a Y4M header, so that we get the chroma range correct.
      printf $fh "YUV4MPEG2 W%d H%d F25:1 Ip A1:1 C444 XYSCSS=444 XCOLORRANGE=FULL\nFRAME\n", $nwidth, $nheight;
      my %parms = (
        file => $fh,
        filename => $raw_filename,
        interlace => 'Plane'
      );
      $cimg->write(%parms);
      # and so on...

Unfortunately, I am unable to find a setting where AVIF 4:4:4 consistently outperforms regular JPEG 4:4:4 (even with libjpeg); it's fantastic at lower bitrates, but I want visually lossless quality, and at those bitrates, it's either smoothing way too much structure or using way too many bits. So next out is probably trying JPEG XL, or maybe figuring out how to hook up mozjpeg…

08 June, 2021 11:15PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

LaTeX draft documents

I'm writing up a PhD deliverable (which will show up here eventually) using LaTeX, which is my preferred tool for such things, since I also use it for papers, and will eventually be using it for my thesis itself. For this last document, I experimented with a few packages and techniques for organising the document which I found useful, so I thought I'd share them.

What version is this anyway?

I habitually store and develop my documents in git repositories. From time to time I generate a PDF and copy it off elsewhere to review (e.g., an iPad). Later on it can be useful to be able to figure out exactly what source built the PDF. I achieve this using

\newcommand{\version}{\input|"git describe --always --dirty"}

And \version\ somewhere in the header of the document.

Draft mode

The common document classes all accept a draft argument, to enable Draft Mode.

\documentclass[12pt,draft]{article}

Various other packages behave differently if Draft Mode is enabled. The graphicx package, for example, doesn't actually draw pictures in draft mode, which I don't find useful. So for that package, I force it to behave as if we were in "Final Mode" at all times:

\usepackage[final]{graphicx}

I want to also include some different bits and pieces in Draft Mode. Although the final version won't need it, I find having a Table of Contents very helpful during the writing process. The ifdraft package adds a convenience macro to query whether we are in draft or not. I use it like so:

\ifdraft{
This page will be cut from the final report.
\tableofcontents
\newpage
}{}

For this document, I have been given the section headings I must use and the number of pages each section must run to. When drafting, I want to include the page budget in the section names (e.g. Background (2 pages)). I also force new pages at the beginning of each Section, to make it easier to see how close I am to each section's page budget.

\ifdraft{\newpage}{}
\section{Work completed\ifdraft{ (1 page)}{}} % 1 Page

todonotes

Two TODO items in the margin

Collated TODOs in a list

The todonotes package is one of many that offer macros to make managing in-line TODO notes easier. Within the source of my document, I can add a TODO right next to the relevant text with \todo{something to do}. In the document, by default, this is rendered in the right-hand margin. With the right argument, the package will only render the notes in draft mode.

\usepackage[textsize=small,obeyDraft]{todonotes}

todonotes can also collate all the TODOs together into a single list. The list items are hyperlinked back to the page where the relevant item appears.

\ifdraft{
\newpage
\section{TODO}
  This page will be cut from the final report.
  \listoftodos
}{}

08 June, 2021 12:35PM

Pavit Kaur

GSoC: About my Project and Community Bonding Period

alt text

To start writing about updates regarding my GSoC project, the first obvious thing I need to do is to explain what my project really is. So let’s get started.

About my project

What is debci?

Directly stating from the official docs:

The Debian continuous integration (debci) is an automated system that coordinates the execution of automated tests against packages in the Debian system.

Let’s try decoding it:

Debian is a huge system with thousands of packages and within these packages exist inter-package dependencies. So if any package is updated, it is important to test if that package is working correctly but it is equally important to test that all the packages which are dependent on this updated package are working correctly too.

Debci is a platform serving this purpose of automated testing for the entire Debian archive whenever a new version of the package, or of any package in its dependency chain is available. It comes with a UI that lets developers easily run tests and see their results if they pass or not.

For my GSoC project, I am working to implement some incremental improvements to debci making it easier to use and maintain.

Community Bonding Period

The debci community

Everyone I have come across till now in the community is very nice. The debci community in itself is a small but active community. It really feels good to be a part of conversations here.

Weekly call set up

I have two mentors for this project Antonio Terceiro and Paul Gevers and they have set up a weekly sync call with me in which I will share my updates regarding the work done in the past week, any issues I am facing, and discuss the work for next week. In addition to this, I can always contact them on IRC for any issue I am stuck in.

Work till now

The first thing I did in the community bonding period was setting up this blog. I have wanted one for a long time and this seemed like a really nice opportunity to start. And the fact that it has been added to Planet Debian makes me even happier to write. I am still trying to get the hang of this and definitely need to learn how to spend less time writing.

I also worked on my already opened merge requests during this period and got them merged.

Since I am already familiar with the codebase, I started a bit early on my first deliverable, before the official coding period began: migrating the logins to Salsa, Debian's GitLab instance.

Currently, debci uses Debian SSO client certificates for logging in, but that is deprecated so it needs to be migrated to use Salsa as an identity provider.

The OmniAuth library is being used to implement this, with the help of the ruby-omniauth-gitlab strategy. I explored a great deal about integrating OmniAuth with our application and ran into quite a few issues when I began implementing it. Once I am done integrating Salsa authentication with debci, I am planning to write a separate tutorial on that, which could be helpful to other people using OmniAuth with their applications.

With that, the community bonding period has ended on 7th June and the coding period officially begins and for now, I will be continuing working on my first deliverable.

That’s all for now. See you next time!

08 June, 2021 12:00PM by Pavit Kaur (pavitk1@gmail.com)

hackergotchi for Norbert Preining

Norbert Preining

KDE/Plasma 5.22 for Debian

Today, KDE released version 5.22 of the Plasma desktop with the usual long list of updates and improvements. And packages for Debian are ready for consumption! Ah and yes, KDE Gear 21.04 is also ready!

As usual, I am providing packages via my OBS builds. If you have used my packages till now, then you only need to change the plasma521 line to read plasma522. Just for your convenience, if you want the full set of stuff, here are the apt-source entries:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma522/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2104/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

and for testing, use the same entries with Debian_Unstable replaced by Debian_Testing. As usual, don't forget that you need to import my OBS gpg key to make these repos work!
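
If this is your first time using these repositories, importing the key goes roughly like this; the URL follows the usual OBS convention of publishing a Release.key per repository (check the OBS project page if in doubt), and the target filename is just an example:

# as root: fetch the repository key and install it where apt trusts it
curl -fsSL https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/Release.key \
  | gpg --dearmor > /etc/apt/trusted.gpg.d/obs-npreining.gpg
apt update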

The sharp-eyed might also have noticed the apps2104 line: yes, the newly renamed KDE Gear suite of packages is also available in my OBS builds (and in Debian/experimental).

Uploads to Debian

Currently, the frameworks and most of the KDE Gear (Apps) 21.04 are in Debian/experimental. I will upload Plasma 5.22 to experimental as soon as NEW processing of two packages is finished (which might take another few months).

After the release of bullseye, all the current versions will be uploaded to unstable as usual.

Enjoy the new Plasma!

08 June, 2021 11:47AM by Norbert Preining

hackergotchi for Bits from Debian

Bits from Debian

Registration for DebConf21 Online is Open

DebConf21 banner

The DebConf team is glad to announce that registration for DebConf21 Online is now open.

The 21st Debian Conference is being held Online, due to COVID-19, from August 22 to August 29, 2021. It will also sport a DebCamp from August 15 to August 21, 2021 (preceding the DebConf).

To register for DebConf21, please visit the DebConf website at https://debconf21.debconf.org/register

Reminder: Creating an account on the site does not register you for the conference, there's a conference registration form to complete after signing in.

Participation in DebConf21 is conditional on your respect of our Code of Conduct. We require you to read, understand and abide by this code.

A few notes about the registration process:

  • We need to know attendees' locations to better plan the schedule around timezones. Please make sure you fill in the "Country I call home" field in the registration form accordingly. It's especially important to have this data for people who submitted talks, but also for other attendees.

  • We are offering limited amounts of financial support for those who require it in order to attend. Please refer to the corresponding page on the website for more information.

Any questions about registration should be addressed to registration@debconf.org.

See you online!

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsors Lenovo and Infomaniak, and our Gold Sponsor Matanel Foundation.

08 June, 2021 08:00AM by Stefano Rivera

June 07, 2021

hackergotchi for Mike Gabriel

Mike Gabriel

UBports: Packaging of Lomiri Operating Environment for Debian (part 05)

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. Since 27th Feb 2020, Unity8 has now become Lomiri.

Recent Uploads to Debian related to Lomiri (Feb - May 2021)

Over the past 4 months I attended 14 of the weekly scheduled UBports development sync sessions and worked on the following bits and pieces regarding Lomiri in Debian:

  • Bundle upload to Debian of all Ayatana Indicators packages, including lomiri-url-dispatcher
  • Upload to Debian unstable: ayatana-indicator-session 0.8.2-1 (fix parallel build problem)
  • lomiri-ui-toolkit: Consulting on non-free font usage and other issues with upstream side of the Ubuntu Touch's UI Toolkit
  • Upload to Debian unstable: wlcs 1.2.1-1
  • qtmir update: 0.6.1-6 (fixing qml-demo-shell)
  • Upload to Debian unstable: qtmir 0.6.1-7
  • Upload to Debian unstable as NEW: deviceinfo 0.1.0-1
  • Upload to Debian unstable: mir 1.8.0+dfsg1-16 (fixing #985503)
  • Discuss with upstream how to handle hard-coded /opt/click.ubuntu.com path in src:pkg click
  • File upstream MR: https://gitlab.com/ubports/core/click/-/issues/2
  • Upload to Debian experimental as NEW: click 0.5.0-1
  • Upload to Debian experimental as NEW: libusermetrics 1.2.0-1
  • Port libusermetrics over to pkgkde-symbolshelper
  • Upload to Debian experimental as NEW: hfd-service 0.1.0-1 (This included upstream development: upstart to systemd conversion, a D-Bus security fix, etc.)
  • Upload to Debian experimental as NEW: libayatana-common 0.9.1-1
  • Upload to Debian experimental: deviceinfo (0.1.0-2): Update .symbols file for non-amd64 Debian architectures
  • Upload to Debian experimental: ayatana-indicator-keyboard 0.7.901-1
  • Test-Build lomiri-ui-toolkit (after several Qt5.15 fixes on upstream's side); exclude broken unit tests from building lomiri-ui-toolkit as recommended by upstream
  • Upstream MR (libusermetrics): Amend URL in Lomiri upstream path (https://gitlab.com/ubports/core/libusermetrics/-/merge_requests/3)
  • Upstream bug hunting (lomiri-ui-toolkit, unit test failures in unit/components/tst_haptics.qml)
  • Prepare content-hub packaging for initial build tests. However, this requires lomiri-ui-toolkit to be packaged first
  • Upstream MRs related to lomiri-ui-toolkit:
    https://gitlab.com/ubports/core/lomiri-ui-toolkit/-/merge_requests/24 (grammar fixes)
    https://gitlab.com/ubports/core/lomiri-ui-toolkit/-/merge_requests/25 (fix misspelled property name)
    https://gitlab.com/ubports/core/lomiri-ui-toolkit/-/merge_requests/26 (fix misspelled word in CONTEXT_TRACE() call)
    https://gitlab.com/ubports/core/lomiri-ui-toolkit/-/merge_requests/27 (drop license test)
    https://gitlab.com/ubports/core/lomiri-ui-toolkit/-/merge_requests/28 (app-launch-profile: Use lomiri namespace in binary file names)
  • Upload to Debian experimental: lomiri-ui-toolkit 1.3.3000+dfsg1-1
  • Upload to Debian unstable: lomiri-download-manager 0.1.0-8 (Fix wrong package dependency, #988808)
  • Upload to Debian unstable: lomiri-app-launch 0.0.90-8 (Update .symbols file for alpha and hppa archs)
  • Upload to Debian experimental: hfd-service 0.1.0-2 (systemd build-requirement fix)
  • Upload to Debian experimental: deviceinfo 0.1.0-3 (Update .symbols for s390x arch)
  • Prepare for upload to Debian: content-hub 1.0.0
  • For content-hub to be buildable, we needed another packaging fix for lomiri-ui-toolkit which received an...
  • Upload to Debian expermental AS NEW (2): lomiri-ui-toolkit 1.3.3000+dfsg-2 (font package dependency fix, but also a d/copyright update with correct artwork license)

The largest amount of work (and time) went into getting lomiri-ui-toolkit ready for upload. That code component is an absolutely massive beast and deeply intertwined with Qt5 (and unit tests fail with every new warning a new Qt5.x introduces). This bit of work I couldn't do alone (see below in "Credits" section).

The next projects / packages ahead are some smaller ones (content-hub, gmenuharness, etc.) before we finally get to lomiri (i.e. the main bit of the Lomiri Operating Environment) itself.

Credits

Many big thanks go to everyone on the UBports project, but especially to Ratchanan Srirattanamet who lived inside of lomiri-ui-toolkit for more than two weeks, it seemed.

Also, thanks to Florian Leeber for being my point of contact for topics regarding my cooperation with the UBports Foundation.

Packaging Status

The current packaging status of Lomiri related packages in Debian can be viewed at:
https://qa.debian.org/developer.php?login=team%2Bubports%40tracker.debia...

light+love
Mike Gabriel (aka sunweaver)

07 June, 2021 12:23PM by sunweaver

Russell Coker

Dell PowerEdge T320 and Linux

I recently bought a couple of PowerEdge T320 servers, so now to learn about setting them up. They are a little newer than the R710 I recently setup (which had iDRAC version 6), they have iDRAC version 7.

RAM Speed

One system has an E5-2440 CPU with 2*16G DDR3 DIMMs and a Memtest86+ speed of 13,043MB/s, the other is essentially identical but with an E5-2430 CPU and 4*16G DDR3 DIMMs and a Memtest86+ speed of 8,270MB/s. I had expected that more DIMMs means better RAM performance but this isn't what happened. I first upgraded the BIOS; as I expected it didn't make a difference, but it's a good thing to try first.

On the E5-2430 I tried removing a DIMM after it was pointed out on Facebook that the CPU has 3 memory channels (here's a link to a great site with information on that CPU and many others [1]). When I did that I was prompted to disable advanced ECC (which treats pairs of DIMMs as a single unit for ECC, allowing more than 1-bit errors to be corrected) and I had to move the 3 remaining DIMMs to different slots. That improved the performance to 13,497MB/s. I then put the spare DIMM into the E5-2440 system and the performance increased to 13,793MB/s. When I installed 4 DIMMs in the E5-2440 system the performance remained at 13,793MB/s and the E5-2430 went down to 12,643MB/s.

This is a good result for me, I now have the most RAM and fastest RAM configuration in the system with the fastest CPU. I’ll sell the other one to someone who doesn’t need so much RAM or performance (it will be really good for a small office mail server and NAS).

Firmware Update

BIOS

The first issue was updating the BIOS. Unfortunately, the first link I found to the Dell web site didn't have a link to download the Linux installer. It offered a Windows binary, an EFI program, and a DOS binary. I'm not about to install Windows if there is any other option and EFI is somewhat annoying, so that leaves DOS. The first Google result for installing FreeDOS advised using “unetbootin”, which didn't work at all for me (it created a USB image that the Dell BIOS didn't recognise as bootable), and even if it had worked it wouldn't have been a good solution.

I went to the FreeDOS download page [2] and got the “Lite USB” zip file. That contained “FD12LITE.img” which I could just dd to a USB stick. I then used fdisk to create a second 32MB partition, used mkfs.fat to format it, and then copied the BIOS image file to it. I booted the USB stick and then ran the BIOS update program from drive D:. After the BIOS update this became the first system I’ve seen get a totally green result from “spectre-meltdown-checker“!
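
For reference, the whole USB preparation boils down to a few commands. This is a sketch with made-up device and file names (and it uses sfdisk instead of the interactive fdisk session described above), so double-check the target device before pointing dd at it:

# write the FreeDOS Lite image to the stick (this destroys everything on /dev/sdX)
dd if=FD12LITE.img of=/dev/sdX bs=4M conv=fsync
partprobe /dev/sdX
# append a 32MB FAT partition for the BIOS updater, format it, and copy the file over
echo ',32M,c' | sfdisk --append /dev/sdX
partprobe /dev/sdX
mkfs.fat /dev/sdX2
mount /dev/sdX2 /mnt && cp T320_BIOS.EXE /mnt/ && umount /mnt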

I found the link to the Linux installer for the new Dell BIOS afterwards, but it was still good to play with FreeDOS.

PERC Driver

I probably didn’t really need to update the PERC (PowerEdge Raid Controller) firmware as I’m just going to run it in JBOD mode. But it was easy to do, a simple bash shell script to update it.

Here are the perccli commands needed to access disks, it’s all hot-plug so you can insert disks and do all this without a reboot:

# show overview
perccli show
# show controller 0 details
perccli /c0 show all
# show controller 0 info with less detail
perccli /c0 show
# clear all "foreign" RAID members
perccli /c0 /fall delete
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0

The “perccli /c0 show” command gives the following summary of disk (“PD” in perccli terminology) information amongst other information. The EID is the enclosure, Slt is the “slot” (IE the bay you plug the disk into) and the DID is the disk identifier (not sure what happens if you have multiple enclosures). The allocation of device names (sda, sdb, etc) will be in order of EID:Slt or DID at boot time, and any drives added at run time will get the next letters available.

----------------------------------------------------------------------------------
EID:Slt DID State DG       Size Intf Med SED PI SeSz Model                     Sp 
----------------------------------------------------------------------------------
32:0      0 Onln   0  465.25 GB SATA SSD Y   N  512B Samsung SSD 850 EVO 500GB U  
32:1      1 Onln   1  465.25 GB SATA SSD Y   N  512B Samsung SSD 850 EVO 500GB U  
32:3      3 Onln   2   3.637 TB SATA HDD N   N  512B ST4000DM000-1F2168        U  
32:4      4 Onln   3   3.637 TB SATA HDD N   N  512B WDC WD40EURX-64WRWY0      U  
32:5      5 Onln   5 278.875 GB SAS  HDD Y   N  512B ST300MM0026               U  
32:6      6 Onln   6 558.375 GB SAS  HDD N   N  512B AL13SXL600N               U  
32:7      7 Onln   4   3.637 TB SATA HDD N   N  512B ST4000DM000-1F2168        U  
----------------------------------------------------------------------------------

The PERC controller is a MegaRAID with possibly some minor changes, there are reports of Linux MegaRAID management utilities working on it for similar functionality to perccli. The version of MegaRAID utilities I tried didn’t work on my PERC hardware. The smartctl utility works on those disks if you tell it you have a MegaRAID controller (so obviously there’s enough similarity that some MegaRAID utilities will work). Here are example smartctl commands for the first and last disks on my system. Note that the disk device node doesn’t matter as all device nodes associated with the PERC/MegaRAID are equal for smartctl.

# get model number etc on DID 0 (Samsung SSD)
smartctl -d megaraid,0 -i /dev/sda
# get all the basic information on DID 0
smartctl -d megaraid,0 -a /dev/sda
# get model number etc on DID 7 (Seagate 4TB disk)
smartctl -d megaraid,7 -i /dev/sda
# exactly the same output as the previous command
smartctl -d megaraid,7 -i /dev/sdc

I have uploaded etbemon version 1.3.5-6 to Debian which has support for monitoring smartctl status of MegaRAID devices and NVMe devices.

IDRAC

To update IDRAC on Linux there’s a bash script with the firmware in the same file (binary stuff at the end of a shell script). To make things a little more exciting the script insists that rpm be available (running “apt install rpm” fixes that for a Debian system). It also creates and runs other shell scripts which start with “#!/bin/sh” but depend on bash syntax. So I had to make /bin/sh a symlink to /bin/bash. You know you need this if you see errors like “typeset: not found” and “[: -eq: unexpected operator” and then the system reboots. Dell people, please test your scripts on dash (the Debian /bin/sh) or just specify #!/bin/bash.
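
A less hacky alternative is to flip the system shell via debconf and flip it back afterwards, along these lines:

# answer "No" so that /bin/sh points at bash while running the Dell scripts
dpkg-reconfigure dash
# ... run the IDRAC updater ...
# then answer "Yes" again to restore dash as /bin/sh
dpkg-reconfigure dash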

If the IDRAC update works it will take about 8 minutes.

Lifecycle Controller

The Lifecycle Controller is apparently for installing OS and firmware updates. I use Linux tools to update Linux and I generally don’t plan to update the firmware after deployment (although I could do so from Linux if needed). So it doesn’t seem to offer anything useful to me.

Setting Up IDRAC

For extra excitement I decided to try to setup IDRAC from the Linux command-line. To install the RAC setup tool you run “apt install srvadmin-idracadm7 libargtable2-0” (because srvadmin-idracadm7 doesn’t have the right dependencies).

# srvadmin-idracadm7 is missing a dependency
apt install srvadmin-idracadm7 libargtable2-0
# set the IP address, netmask, and gatewat for IDRAC
idracadm7 setniccfg -s 192.168.0.2 255.255.255.0 192.168.0.1
# put my name on the front panel LCD
idracadm7 set System.LCD.UserDefinedString "Russell Coker"

Conclusion

This is a very nice deskside workstation/server. It’s extremely quiet with hardly any fan noise and the case is strong enough to contain the noise of hard drives. When running with 3* 3.5″ SATA disks and 2*10k 2.5″ SAS disks on a wooden floor it wasn’t annoyingly loud. Without the SAS disks it was as quiet as you can expect any PC to be, definitely not the volume you expect from a serious server! I bought the T320 systems loaded with SAS disks which made them quite loud, I immediately put the disks on ebay and installed SATA SSDs and hard drives which gives me more performance and more space than the SAS disks with less cost and almost no noise.

8*3.5″ drive bays gives room for expansion. I currently have 2*SATA SSDs and 3*SATA disks, the SSDs are for the root filesystem (including /home) and the disks are for a separate filesystem for large files.

07 June, 2021 07:18AM by etbe

Russ Allbery

Review: Stoneskin

Review: Stoneskin, by K.B. Spangler

Series: Deep Witches #0
Publisher: A Girl and Her Fed Books
Copyright: September 2017
ASIN: B075PHK498
Format: Kindle
Pages: 226

Stoneskin is a prequel to the Deep Witches Trilogy, which is why I have it marked as book 0 of the series. Unlike most prequels, it was written and published before the series and there doesn't seem to be any reason not to read it first.

Tembi Moon is an eight-year-old girl from the poor Marumaru area on the planet of Adhama. Humanity has spread to the stars and first terraformed the worlds and then bioformed themselves to live there. The differences are subtle, but Tembi's skin becomes thicker and less sensitive when damaged (either physically or emotionally) and she can close her ears against dust storms. One day, she wakes up in an unknown alley and finds herself on the world of Miha'ana, sixteen thousand light-years away, where she is rescued and brought home by a Witch named Matindi.

In this science fiction future, nearly all interstellar travel is done through the Deep. The Deep is not like the typical hand-waved science fiction subspace, most notably in that it's alive. No one is entirely sure where it came from or what sort of creature it is. It sometimes manages to communicate in words, but abstract patterns with feelings attached are more common, and it only communicates with specific people. Those people are Witches, who are chosen by the Deep via some criteria no one understands. Witches can use the Deep to move themselves or anything else around the galaxy. All interstellar logistics rely on them.

The basics of Tembi's story are not that unusual; she's been chosen by the Deep to be a Witch. What is remarkable is that she's young and she's poor, completely disconnected from the power structures of the galaxy. But, once chosen, her path as far as the rest of the galaxy is concerned is fixed: she will go to Lancaster to be trained as a Witch. Matindi is able to postpone this for a time by keeping an eye on her, but not forever.

I bought this book because of the idea of the Deep, and that is indeed the best part of the book. There is a lot of mystery about its exact nature, most of which is not resolved in this book, but it mostly behaves like a giant and extremely strange dog, and it's awesome. Replacing the various pseudo-scientific explanations for faster than light travel with interactions with a dream-weird giant St. Bernard with too many paws that talks in swirls of colored bubbles and is very eager to please its friends is brilliant.

This book covers a lot of years of Tembi's life and is, as advertised, a prelude to a story that is not resolved here. It's a coming of age story in which she does eventually end up at Lancaster, learns and chafes at the elaborate and very conservative structures humans have put in place to try to make interactions with the Deep predictable and reliable, and eventually gets drawn into the politics of war and the question of when people have a responsibility to intervene. Tembi, and the reader, also have many opportunities to get extremely upset at how the Deep is treated and how much entitlement the Witches have about their access and control, although how the Deep feels about it is left for a future book.

Not all of this story is as good as the premise. There are some standard coming of age tropes that I'm not fond of, such as Tembi's predictable temporary falling out with the Deep (although the Deep's reaction is entertaining). It's also not at all a complete story, although that's clearly signaled by the subtitle. But as an introduction to the story universe and an extended bit of scene-setting, it accomplishes what it sets out to do. It's also satisfyingly thoughtful about the moral trade-offs around stability and the value of preserving institutions. I know which side I'm on within the universe, but I appreciated how much nuance and thoughtfulness Spangler puts into the contrary opinion.

I'm hooked on the universe and want to learn more about the Deep, enough so that I've already bought the first book of the main trilogy.

Followed by The Blackwing War.

Rating: 7 out of 10

07 June, 2021 04:56AM

June 06, 2021

Iustin Pop

Goodbye Travis, hello GitHub Actions

My very cyclical open-source work

For some reason, I only manage to do coding at home every some months - mostly 6 months apart, so twice a year (even worse than my blogging frequency :P). As such, I missed the whole discussion about travis-ci (the .org version) going away, etc.

So when I finally did some work on corydalis a few weeks ago, and had a travis build failure (restoring a large cache simply timed out, either travis or S3 had some hiccup - it cleared by itself a day later), I opened the travis-ci interface to see the scary banner (“travis-ci.org is shutting down”), and asked myself what was happening. The deadline was in less than a week even 😧…

Long story short, Travis was a good home for many years, but they were bought and are doing significant changes to their OSS support, so it’s time to move on. I anyway wanted to learn GitHub Actions for a while (ahem, free time intervened), so this was a good (forced) “opportunity”.

Proper composable CI

The advantage of Travis’ infrastructure was that the build configuration was really simple. It had very few primitives: pre-steps (before_install), install steps, and the actual things to do to test, post-steps (after_sucess) and a few other small helpers (caching, apt packages, etc.) This made it really easy to just pick up and write a config, plus it had the advantage of allowing to test configs from the web UI without needing to push.

This simplicity was unfortunately also its significant limiter: the way to do complex things in steps was simply to add more shell commands.

GitHub actions, together with its marketplace, changes this entirely. There are no “built-in” actions, the language just defines the build/job/step hierarchy, and one glues together whatever steps they want. This has the disadvantage that even checking out the code needs to be explicitly written in all workflows (so boilerplate, if you don’t need customisation), but it opens up a huge opportunity for composition, since it allows people to publish actions (steps) that you just import, encapsulating all the work.

So, after learning how to write a moderately complicated workflow (complicated as in multiple Python versions, some of them needing a different OS version, and multi-OS), it was straightforward to port this to all my projects - just somewhat tedious. I've now shut down all builds on Travis, I just can't find a way to delete my account 😅

Better multi-OS, worse (missing) multi-arch

In theory, Travis supports Linux, MacOS, FreeBSD and Windows, but I’ve found that support for non-Linux is not quite as good. Maybe I missed things, but multi-version Python builds on MacOS were not as nicely supported as Linux; Windows is quite early, and very limited; and I haven’t tested FreeBSD.

GitHub is more restrictive - Linux, MacOS and Windows - but I found support for MacOS and Windows better for my use cases. If your use case is testing multiple MacOS versions, Travis wins, if it’s more varied languages/etc. on the single available MacOS version, GitHub works better.

On the multi-arch side, Travis wins hands-down. Four different native architectures, and enabling one more is just adding another key to the arch list. With GitHub, if I understand right, you either have to use docker+emulation, or use self-hosted runners.

So here it really matters what is more important to you. Maybe in the future GitHub will support more arches, but right now, Travis wins for this use-case.

Summary

For my specific use-case, GitHub Actions is a better fit right now. The marketplace has advantages (I’ll explain better in a future post), the actions are a very nice way to encapsulate functionality, and it’s still available/free (up to a limit) for open source projects. I don’t know what the future of Travis for OSS will be, but all I heard so far is very concerning.

However, I’ll still miss a few things.

For example, an overall dashboard for all my projects, like this one:

Travis dashboard Travis dashboard

I couldn’t find any such thing on GitHub, so I just use my set of badges.

Then cache management. Travis allows you to clear the cache, and it does auto-update the cache. GitHub caches are immutable once built, so you have to:

  • watch if changed dependencies/dependency chains result in things that are no longer cached;
  • if so, need to manually bump the cache key, resulting in a commit for purely administrative purposes.

For languages where you have a clean full chain of dependencies recorded (e.g. node’s package-lock.json, stack’s stack.yaml.lock), this is trivial to achieve, but gets complicated if you add OS dependencies, languages which don’t record all this, etc.

Hmm, maybe I should just embed the year/month in the cache name - cheating, but automated cheating.
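
A sketch of that cheat as a workflow step (the step id and key layout are made up, and it uses the set-output mechanism current at the time of writing):

# in a "run:" step with id "prefix": expose the current month as an output
echo "::set-output name=month::$(date +%Y-%m)"
# the actions/cache step can then build its key from it, e.g.
#   key: ${{ steps.prefix.outputs.month }}-stack-${{ hashFiles('stack.yaml.lock') }}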

With all said and done, I think GHA is much less refined, but with more potential. Plus, the pace of innovation on Travis side was quite slow (likely money problems, hence them being bought, etc.)…

So one TODO done: “learn GitHub Actions”, even if a bit unplanned 😂

06 June, 2021 09:00PM

Russell Coker

Netflix and IPv6

It seems that Netflix has an ongoing issue of not working well with IPv6, apparently they have some sort of region checking code that doesn’t correctly identify IPv6 prefixes. To fix this I wrote the following script to make a small zone file with only A records for Netflix and no AAAA records. The $OUT.header file just has the SOA record for my fake netflix.com domain.

#!/bin/bash

OUT=/etc/bind/data/netflix.com
HEAD=$OUT.header

cp $HEAD $OUT
dig -t a www.netflix.com @8.8.8.8|sed -n -e "s/^.*IN/www IN/p"|grep [0-9]$ >> $OUT
dig -t a android.prod.cloud.netflix.com @8.8.8.8|sed -n -e "s/^.*IN/android.prod.cloud IN/p"|grep [0-9]$ >> $OUT
/usr/sbin/rndc reload > /dev/null
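
Assuming the fake zone is declared in the BIND configuration and your clients use this resolver, a quick way to confirm the override works is to query it directly and check that no AAAA records come back:

# should print nothing: no AAAA records are served for the fake zone
dig +short -t aaaa www.netflix.com @127.0.0.1
# should print the A records the script copied from 8.8.8.8
dig +short -t a www.netflix.com @127.0.0.1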

Update

I updated this post to add a line for android.prod.cloud.netflix.com which is the address used by Android devices.

06 June, 2021 11:36AM by etbe

hackergotchi for Evgeni Golov

Evgeni Golov

Controlling Somfy roller shutters using an ESP32 and ESPHome

Our house has solar powered, remote controllable roller shutters on the roof windows, built by the German company named HEIM & HAUS. However, when you look closely at the remote control or the shutter motor, you'll see another brand name: SIMU. As the shutters don't have any wiring inside the house, the only way to control them is via the remote interface. So let's go on the Internet and see how one can do that, shall we? ;)

First thing we learn is that SIMU remote stuff is just re-branded Somfy. Great, another name! Looking further we find that Somfy uses some obscure protocol to prevent (replay) attacks (spoiler: it doesn't!) and there are tools for RTL-SDR and Arduino available. That's perfect!

Always sniff with RTL-SDR first!

Given the two re-brandings in the supply chain, I wasn't 100% sure our shutters really use the same protocol. So the first "hack" was to listen and decrypt the communication using RTL-SDR:

$ git clone https://github.com/dimhoff/radio_stuff
$ cd radio_stuff
$ make -C converters am_to_ook
$ make -C decoders decode_somfy
$ rtl_fm -M am -f 433.42M -s 270K | ./am_to_ook -d 10 -t 2000 -  | ./decode_somfy
<press some buttons on the remote>

The output contains the buttons I pressed, but also the id of the remote and the command counter (which is supposed to prevent replay attacks). At this point I could just use the id and the counter to send own commands, but if I'd do that too often, the real remote would stop working, as its counter won't increase and the receiver will drop the commands when the counters differ too much.

But that's good enough for now. I know I'm looking for the right protocol at the right frequency. As the end result should be an ESP32, let's move on!

Acquiring the right hardware

Unlike an RTL-SDR, a spare ESP32 with a 433MHz radio is not something one usually has at home, so I went shopping: a NodeMCU-32S clone and a CC1101. The CC1101 is important as most 433MHz chips for Arduino/ESP only work at 433.92MHz, but Somfy uses 433.42MHz and using the wrong frequency would result in really bad reception. The CC1101 is essentially an SDR, as you can tune it to a huge spectrum of frequencies.

Oh and we need some cables, a bread board, the usual stuff ;)

The wiring is rather simple:

ESP32 wiring for a CC1101

And the end result isn't too beautiful either, but it works:

ESP32 and CC1101 in a simple case

Acquiring the right software

In my initial research I found an Arduino sketch and was totally prepared to port it to ESP32, but luckily somebody already did that for me! Even better, it's explicitly using the CC1101. Okay, okay, I cheated, I actually ordered the hardware after I found this port and the reference to CC1101. ;)

As I am using ESPHome for my ESPs, the idea was to add a "Cover" that's controlling the shutters to it. Writing some C++, how hard can it be?

Turns out, not that hard. You can see the code in my GitHub repo. It consists of two (relevant) files: somfy_cover.h and somfy.yaml.

somfy_cover.h essentially wraps the communication with the Somfy_Remote_Lib library into an almost boilerplate Custom Cover for ESPHome. There is nothing too fancy in there. The only real difference to the "Custom Cover" example from the documentation is the split into SomfyESPRemote (which inherits from Component) and SomfyESPCover (which inherits from Cover) -- this is taken from the Custom Sensor documentation and allows me to define one "remote" that controls multiple "covers" using the add_cover function. The first two params of the function are the NVS name and key (think database table and row), and the third is the rolling code of the remote (stored in somfy_secrets.h, which is not in Git).

In ESPHome a Cover shall define its properties as CoverTraits. Here we call set_is_assumed_state(true), as we don't know the state of the shutters - they could have been controlled using the other (real) remote - and setting this to true allows issuing open/close commands at all times. We also call set_supports_position(false) as we can't tell the shutters to move to a specific position.

The one additional feature compared to a normal Cover interface is the program function, which allows to call the "program" command so that the shutters can learn a new remote.

somfy.yaml is the ESPHome "configuration", which contains information about the used hardware, WiFi credentials etc. Again, mostly boilerplate. The interesting parts are the loading of the additional libraries and attaching the custom component with multiple covers and the additional PROG switches:

esphome:
  name: somfy
  platform: ESP32
  board: nodemcu-32s
  libraries:
    - SmartRC-CC1101-Driver-Lib@2.5.6
    - Somfy_Remote_Lib@0.4.0
    - EEPROM
  includes:
    - somfy_secrets.h
    - somfy_cover.h


cover:
  - platform: custom
    lambda: |-
      auto somfy_remote = new SomfyESPRemote();
      somfy_remote->add_cover("somfy", "badezimmer", SOMFY_REMOTE_BADEZIMMER);
      somfy_remote->add_cover("somfy", "kinderzimmer", SOMFY_REMOTE_KINDERZIMMER);
      App.register_component(somfy_remote);
      return somfy_remote->covers;

    covers:
      - id: "somfy"
        name: "Somfy Cover"
      - id: "somfy2"
        name: "Somfy Cover2"

switch:
  - platform: template
    name: "PROG"
    turn_on_action:
      - lambda: |-
          ((SomfyESPCover*)id(somfy))->program();
  - platform: template
    name: "PROG2"
    turn_on_action:
      - lambda: |-
          ((SomfyESPCover*)id(somfy2))->program();

The switch to trigger the program mode took me a bit. As the Cover interface of ESPHome does not offer any additional functions besides movement control, I first wrote code to trigger "program" when "stop" was pressed three times in a row, but that felt really cumbersome and also had the side effect that the remote would send more than needed, sometimes confusing the shutters. I then decided to have a separate button (well, switch) for that, but the compiler yelled at me I can't call program on a Cover as it does not have such a function. Turns out, you need to explicitly cast to SomfyESPCover and then it works, even if the code becomes really readable, NOT. Oh and as the switch does not have any code to actually change/report state, it effectively acts as a button that can be pressed.

At this point we can just take an existing remote, press PROG for 5 seconds, see the blinds move shortly up and down a bit and press PROG on our new ESP32 remote and the shutters will learn the new remote.

And thanks to the awesome integration of ESPHome into HomeAssistant, this instantly shows up as a new controllable cover there too.

Future Additional Work

I started writing this post about a year ago… And the initial implementation had some room for improvement…

More than one remote

The initial code only created one remote and one cover element. Sure, we could attach that to all shutters (there are 4 of them), but we really want to be able to control them separately.

Thankfully I managed to read enough ESPHome docs, and learned how to operate std::vector to make the code dynamically accept new shutters.

Using ESP32's NVS

The ESP32 has a non-volatile key-value storage which is much nicer than throwing bits at an emulated EEPROM. The first library I used for that explicitly used EEPROM storage and it would have required quite some hacking to make it work with NVS. Thankfully the library I am using now has a pluggable storage interface, and I could just write the NVS backend myself and upstream now supports that. Yay open-source!

Remaining issues

Real state is unknown

As noted above, the ESP does not know the real state of the shutters: a command could have been lost in transmission (the Somfy protocol is send-only, there is no feedback) or the shutters might have been controlled by another remote. At least the second part could be solved by listening all the time and trying to decode commands heard over the air, but I don't think this is worth the time -- worst that can happen is that a closed (opened) shutter receives another close (open) command and that is harmless as they have integrated endstops and know that they should not move further.

Can't program new remotes with ESP only

To program new remotes, one has to press the "PROG" button for 5 seconds. This was not exposed in the old library, but the new one does support "long press", I just would need to add another ugly switch to the config and I currently don't plan to do so, as I do have working remotes for the case I need to learn a new one.

06 June, 2021 09:22AM by evgeni

June 05, 2021

Utkarsh Gupta

FOSS Activites in May 2021

Here’s my (twentieth) monthly update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 29th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

Interesting month, surprisingly. Lots of things happening and lots of moving parts; becoming the “new normal”, I believe. Anyhow, working on Ubuntu full-time has its own advantage and one of them is being able to work on Debian stuff! 🥰

So whilst I couldn’t upload a lot of packages because of the freeze, here’s what I worked on:

Uploads and bug fixes:

Other $things:

  • Mentoring for newcomers and assisting people in BSP.
  • Moderation of -project mailing list.

Ubuntu

This was my 4th month of actively contributing to Ubuntu. Now that I’ve joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

This month, by all means, was dedicated mostly to PHP 8.0, transitioning from PHP 7.4 to 8.0. Naturally, it had so many moving parts and moments of utmost frustration, shared w/ Bryce. :D

So even though I can’t upload anything, I worked on the following stuff & asked for sponsorship.
But before, I’d like to take a moment to stress how kind and awesome Gianfranco Costamagna, a.k.a. LocutusOfBorg is! He’s been sponsoring a bunch of my things & helping with re-triggers, et al. Thanks a bunch, Gianfranco; beers on me whenever we meet! 🍻

Merges:

Uploads & Syncs:

MIRs:

Seed Operations:


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my twentieth month as a Debian LTS and eleventh month as a Debian ELTS paid contributor.
I was assigned 29.75 hours for LTS and 40.00 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

  • Front-desk duty from 24-05 until 30-05 for both LTS and ELTS.
  • Triaged rails, libimage-exiftool-perl, hivex, graphviz, glibc, libexosip2, impacket, node-ws, thunar, libgrss, nginx, postgresql-9.6, ffmpeg, composer, and curl.
  • Mark CVE-2019-9904/graphviz as ignored for stretch and jessie.
  • Mark CVE-2021-32029/postgresql-9.6 as not-affected for stretch.
  • Mark CVE-2020-24020/ffmpeg as not-affected for stretch.
  • Mark CVE-2020-22020/ffmpeg as postponed for stretch.
  • Mark CVE-2020-22015/ffmpeg as ignored for stretch.
  • Mark CVE-2020-21041/ffmpeg as postponed for stretch.
  • Mark CVE-2021-33574/glibc as no-dsa for stretch & jessie.
  • Mark CVE-2021-31800/impacket as no-dsa for stretch.
  • Mark CVE-2021-32611/libexosip2 as no-dsa for stretch.
  • Mark CVE-2016-20011/libgrss as ignored for stretch.
  • Mark CVE-2021-32640/node-ws as no-dsa for stretch.
  • Mark CVE-2021-32563/thunar as no-dsa for stretch.
  • [LTS] Help test and review bind9 update for Emilio.
  • [LTS] Suggest and add DEP8 tests for bind9 for stretch.
  • [LTS] Sponsored upload of htmldoc to buster for Havard as a consequence of #988289.
  • [ELTS] Fix triage order for jetty and graphviz.
  • [ELTS] Raise issue upstream about cloud-init; mock tests instead.
  • [ELTS] Write to private ELTS list about triage ordering.
  • [ELTS] Review Emilio’s new script and write back feedback, mentioning extra file created, et al.
  • [ELTS/LTS] Raise upgrade problems from LTS -> LTS+1 to the list. Thread here.
    • Further help review and raise problems that could occur, et al.
  • [LTS] Help explain path forward for firmware-nonfree update to Ola. Thread here.
  • [ELTS] Revert entries of TEMP-0000000-16B7E7 and TEMP-0000000-1C4729; CVEs assigned & fix ELTS tracker build.
  • Auto EOL’ed linux, libgrss, node-ws, and inspircd for jessie.
  • Attended monthly Debian LTS meeting, which didn’t happen, heh.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.

Until next time.
:wq for today.

05 June, 2021 08:40PM