August 29, 2023


Matthew Garrett

Unix sockets, Cygwin, SSH agents, and sadness

Work involves supporting Windows (there's a lot of specialised hardware design software that's only supported under Windows, so this isn't really avoidable), but also involves git, so I've been working on extending our support for hardware-backed SSH certificates to Windows and trying to glue that into git. In theory this doesn't sound like a hard problem, but in practice oh good heavens.

Git for Windows is built on top of msys2, which in turn is built on top of Cygwin. This is an astonishing artifact that allows you to build roughly unmodified POSIXish code on top of Windows, despite the terrible impedance mismatches inherent in this. One of those mismatches is that, until 2017, Windows had no native support for Unix sockets. That's kind of a big deal for compatibility purposes, so Cygwin worked around it. It's, uh, kind of awful. If you're not a Cygwin/msys app but you want to implement a socket they can communicate with, you need to implement this undocumented protocol yourself. This isn't impossible, but ugh.
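
To give a flavour of what that involves: the emulated socket is a marker file on disk reading !<socket >PORT s GUID, backed by a TCP listener on localhost, with a GUID-echo and a credential exchange before any payload flows. Below is a hedged Go sketch of the client side, pieced together from how existing bridging tools do it rather than from any documentation; treat every detail as best-effort observation.

package main

import (
    "encoding/binary"
    "fmt"
    "io"
    "log"
    "net"
    "os"
)

// dialCygwinSocket connects to a Cygwin-emulated AF_UNIX socket at path.
func dialCygwinSocket(path string) (net.Conn, error) {
    marker, err := os.ReadFile(path)
    if err != nil {
        return nil, err
    }
    // Observed marker format: !<socket >59110 s 282F93E1-9E2D051A-46B57EFD-64A1852F
    var port int
    var g [4]uint32
    if _, err := fmt.Sscanf(string(marker), "!<socket >%d s %8x-%8x-%8x-%8x",
        &port, &g[0], &g[1], &g[2], &g[3]); err != nil {
        return nil, fmt.Errorf("not a cygwin socket marker: %w", err)
    }
    conn, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", port))
    if err != nil {
        return nil, err
    }
    // Round 1: send the GUID as four little-endian words; the peer echoes it back.
    key := make([]byte, 16)
    for i, word := range g {
        binary.LittleEndian.PutUint32(key[i*4:], word)
    }
    if _, err := conn.Write(key); err != nil {
        conn.Close()
        return nil, err
    }
    if _, err := io.ReadFull(conn, key); err != nil {
        conn.Close()
        return nil, err
    }
    // Round 2: exchange pid/uid/gid as three uint32s. Non-Cygwin clients
    // send made-up uid/gid values here; the peer replies with its own.
    cred := make([]byte, 12)
    binary.LittleEndian.PutUint32(cred[0:], uint32(os.Getpid()))
    binary.LittleEndian.PutUint32(cred[4:], 0) // fake uid
    binary.LittleEndian.PutUint32(cred[8:], 0) // fake gid
    if _, err := conn.Write(cred); err != nil {
        conn.Close()
        return nil, err
    }
    if _, err := io.ReadFull(conn, cred); err != nil {
        conn.Close()
        return nil, err
    }
    return conn, nil // from here on it's a plain byte stream
}

func main() {
    // In an msys shell, SSH_AUTH_SOCK typically points at such a marker file.
    conn, err := dialCygwinSocket(os.Getenv("SSH_AUTH_SOCK"))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    log.Println("handshake complete; the stream now carries whatever protocol the socket speaks")
}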

But going to all this trouble helps you avoid another problem! The Microsoft version of OpenSSH ships an SSH agent that doesn't use Unix sockets, but uses a named pipe instead. So if you want to communicate between Cygwinish OpenSSH (as is shipped with git for Windows) and the SSH agent shipped with Windows, you need something that bridges between those. The state of the art seems to be to use npiperelay with socat, but if you're already writing something that implements the Cygwin socket protocol you can just use npipe to talk to the shipped ssh-agent and then export your own socket interface.
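
For the record, the npiperelay-plus-socat bridge is usually wired up something like this from the msys/WSL side (the socket path is arbitrary, and npiperelay.exe needs to be reachable; the pipe name is the one the Windows OpenSSH agent uses):

socat UNIX-LISTEN:/tmp/openssh-agent.sock,fork \
    EXEC:"npiperelay.exe -ei -s //./pipe/openssh-ssh-agent",nofork &
export SSH_AUTH_SOCK=/tmp/openssh-agent.sock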

And, amazingly, this all works? I've managed to hack together an SSH agent (using Go's SSH agent implementation) that can satisfy hardware-backed queries itself, but forward things on to the Windows agent for compatibility with other tooling. Now I just need to figure out how to plumb it through to WSL. Sigh.
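
A hedged sketch of the forwarding half of such an agent, using golang.org/x/crypto/ssh/agent plus github.com/Microsoft/go-winio (this is not Matthew's code; the hardware-backed signing and the Cygwin-protocol listener are left out, with a plain TCP listener standing in):

package main

import (
    "log"
    "net"

    winio "github.com/Microsoft/go-winio"
    "golang.org/x/crypto/ssh/agent"
)

func main() {
    // A localhost TCP listener stands in for the Cygwin-protocol socket
    // you'd really export to msys/Cygwin OpenSSH.
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        log.Fatal(err)
    }
    log.Println("agent proxy listening on", ln.Addr())
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go func(c net.Conn) {
            defer c.Close()
            // Dial the named pipe the Microsoft ssh-agent service exposes.
            pipe, err := winio.DialPipe(`\\.\pipe\openssh-ssh-agent`, nil)
            if err != nil {
                log.Println("dial agent pipe:", err)
                return
            }
            defer pipe.Close()
            // ServeAgent answers agent-protocol requests on c by relaying
            // them to the upstream agent behind the pipe. A real hybrid
            // agent would wrap this Agent and answer some requests itself.
            if err := agent.ServeAgent(agent.NewClient(pipe), c); err != nil {
                log.Println("session ended:", err)
            }
        }(conn)
    }
}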


29 August, 2023 06:57AM

August 28, 2023

Andrew Cater

20230826 - OMGWTFBBQ - Breakfast is happening more or less

And nothing changes: rediscovered from past Andrew at his first Cambridge BBQ, in almost the first blog post here:

"Thirty second rule on sofa space - if you left for more than about 30 seconds you had to sit on the floor when you got back (I jammed myself onto a corner of the sofa once I realised I'd barely get through the crush :) )
[Forget students in a mini / UK telephone box - how many DDs can you fit into a very narrow kitchen :) ]

It's a huge, dysfunctional family with its own rules, geeky humour and in-jokes but it's MINE - it's the people I want to hang out with and, as perverse as it sounds, just being there gave me a whole new reaffirmed sense of identity and a large amount of determination to carry on "wasting my time with Linux" and Debian"

The *frightening* thing - this is from August 31st 2009 ... where have the years gone in between?

28 August, 2023 10:51AM by Andrew Cater (noreply@blogger.com)


Jonathan McDowell

OMGWTFBBQ 2023

A person wearing an OMGWTFBBQ Catering t-shirt standing in front of a BBQ. Various uncooked burgers are visible.

As is traditional for the UK August Bank Holiday weekend I made my way to Cambridge for the Debian UK BBQ. As was pointed out we’ve been doing this for more than 20 years now, and it’s always good to catch up with old friends and meet new folk.

Thanks to Collabora, Codethink, and Andy for sponsoring a bunch of tasty refreshments. And, of course, thanks to Steve for hosting us all.

28 August, 2023 08:01AM

August 27, 2023


Shirish Agarwal

FSCKing /home

There is a bit of context that needs to be shared before I get to this, and it would be a long one. For reasons known and unknown, I have a lot of sudden electricity outages. Not just me: all those who are on my line. A discussion with a lineman revealed that around 200+ families and businesses are on the same line, and when the electricity goes, for whatever reason, it goes for all. Even some of the traffic lights don't work. This affects software more than hardware, or in some cases both, and more specifically HDDs are vulnerable. I had bought an APC UPS unit several years ago for precisely this, but over a period of time it just stopped functioning and now trips whenever the electricity goes out. It's been 6-7 years, so I can't even ask customer service to fix the issue, and from whatever discussions I have had with APC personnel, the only meaningful option is to buy a new unit; even then I'm not sure this is an issue that can be resolved.

That brings me to the issue that happens once in a while, where the system fsck is unable to repair /home and you need to use an external pen drive for the job. This is how my HDD stacks up –
/ is on /dev/sda7, /boot is on /dev/sda6, /boot/efi is on /dev/sda2 and /home is on /dev/sda8, so theoretically, if /home for some reason doesn't work I should be able to drop down to /dev/sda7, unmount /dev/sda8, run fsck and carry on with my work. I tried it a number of times but it didn't work. I was dropping down to tty1 and attempting the same; no dice, as root/superuser getting the barest x-term. So first I tried asking a couple of friends who live near me. Unfortunately, both are MS-Windows users and both use what are called 'company-owned laptops'. Surfing on those systems was a nightmare, especially with the number of ad pop-ups that the web has become. And to think how much harassment uBlock Origin has saved me over the years. One of the more 'interesting' bits from both their devices was that any and all downloads from FossHub showed up as malware. I dunno how much of that is true, as I haven't had to use it: most software we get through the Debian archives or, if needed, download from GitHub or wherever and run/install it and you are in business. Some of them even get compiled into a good .deb package, but that's outside the conversation atm. My only experience with FossHub was a few years before the pandemic, and that was good. I dunno if FossHub really has malware or Malwarebytes was giving false positives. It also isn't easy to upload a 600 MB+ ISO file somewhere to see whether it really has malware or not. I used to know of a site or two where you could upload a suspicious file and almost 20-30 famous and known antivirus and anti-malware engines would check it and tell you the result. Unfortunately, I have forgotten the URL, and seeing things from an MS-Windows perspective, things have gotten way worse than before.
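
In principle, the recovery from a root shell, with nothing using /home, is just the following (device names as above); this is exactly the sequence that refused to work for me from tty1:

umount /dev/sda8
fsck -y /dev/sda8
mount /dev/sda8 /home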

So, left with no choice, I turned to the local LUG for help. Fortunately, my mobile does have e-mail and I could use Gmail to solicit help. Any number of live CDs could have helped, but one of my first experiences with GNU/Linux was Knoppix, which I had got from Linux For You (now known as OSFY) sometime in 2003. IIRC, I had read an interview of Mr. Klaus Knopper as well and was impressed by it. In those days Debian wasn't accessible to non-technical users, and Knoppix was a good tool to see it. In fact, I think he was the first to come up with the idea of a live CD and run with it, while Canonical/Ubuntu took another 2 years to do it. I think both the CD and the interview by DistroWatch were shared by LFY in those early days. Of course, later the story changes after he got married, but I think that is more about Adriane rather than Knoppix. So Vishal Rao helped me out. I got an HP USB 3.2 32GB Type C OTG Flash Drive x5600c (Grey & Black) from a local hardware dealer at around a similar price point. The dealer is a big one and has almost 200+ people scattered around the city doing channel sales, who in turn sell to end users. I asked one of the representatives for their opinion on stopping electronic imports (apparently more things were added later to the list, including all sorts of sundry items from digital cameras to shavers and whatnot). The gentleman replied that he hopes it will not happen, otherwise more than 90% would have to leave their jobs. They have already started into lighting fixtures (LED bulbs, tubelights etc.) but even those would come under the same ban 😦

The main argument, as I have shared before, is that the Indian Govt. thinks we need our own home-grown CPU, and while I have no issues with that, except for RISC-V there is no other space where India could look into doing that. Especially after the CHIPS Act, Biden has made it so that any new fabs or anything new in chip fabrication will be shared with the Five Eyes only. Also, while India is looking to generate about 2000 GW by 2030 from solar, China has an ambitious 20,000 GW generation capacity by the end of this year, and the Chinese are the ones who are actually driving down module prices. The Chinese are also automating their factories as if there's no tomorrow. The end result of both is that China will continue to be the world's factory floor for the foreseeable future, and whoever may try whatever policies, it is probably going to be difficult to compete with them on prices of electronic products. That's the reason the U.S. has been trying to make sure China doesn't get the latest technology, but that perhaps is a story for another day.

HP USB 3.2 Type C OTG Flash Drive x5600c

People who have read this blog know that most of the flash drives today are MLC drives and do not have the longevity of SLC drives. For those who are new, this short brochure/explainer from Kingston should enhance your understanding. SLC drives are rare and expensive. There are also a huge number of counterfeit flash drives available in the market, and almost all the companies' efforts, whether Kingston, HP or any other manufacturer, have been like a drop in the bucket. Coming back to the topic at hand: there are some tools that can help you figure out whether a pen drive is genuine or not (basically by probing the memory controller and the info you get from that), but that probably is a discussion left for another day. It took me a couple of days before I was finally able to find time to go to Vishal's place. The journey back and forth lasted almost 6 hours, with crazy traffic jams. Tells you why Pune, or specifically the Swargate-Hadapsar patch, really needs a Metro. While an in-principle nod has been given, it is probably 5-7 years or more before we actually have a functioning metro. Even the current route the Metro has was supposed to be done almost 5 years ago to the date, and even the modified plan is from 3 years ago. And even now most of the stations still need a lot of work to be done; PMC and Deccan, as examples, still have loads to be done. Even PMT (Pune Municipal Transport), which is supposed to do the last-mile connections via its buses, has been putting in half-hearted attempts 😦

Vishal Rao

While Vishal had apparently seen me, and perhaps we had also interacted, this was my first memory of him, although we have been on a few boards now and then, including Stack Exchange. He was genuine and warm and shared 4-5 distros with me, including Knoppix and SystemRescue, as shared by Arun Khan. This was the first time I had heard about Ventoy; apparently Vishal has been using it for a couple of years now. It's a simple shell script that you download and run on your pen drive, and then you just dump in all the .iso images. The easiest way to explain Ventoy is that it looks and feels like GRUB. Which also reminds me of an interaction I had with Vishal on mobile. While troubleshooting the issue, I was unsure whether it was the filesystem that was the issue or whether systemd was also corrupted. Vishal suggested adding fastboot to the kernel parameters to see if I was able to boot without fsck and get into userspace, i.e. /home (see the example below). Although journalctl and systemctl were responding even on tty1, I was still a bit apprehensive. Using fastboot I was able to mount the whole thing and get into userspace, and that told me that it's only some of the inodes that need clearing, and there probably are some orphaned inodes. Vishal has a mini-PC that he uses as a server: he downloads stuff to it and then downloads stuff from it. For both privacy and backup etc. it is a better way to do things, but then you need a laptop to access it. I am sure he probably uses it for virtualization and in other ways as well, but we just didn't have time for that discussion. Also, a mini-PC can set you back anywhere from 25 to 40k depending on the mini-PC and the RAM and the SSD. And you need either a lappy or a Raspberry Pi with some kind of visual display to interact with the mini-PC. While he did share some of the things, there probably could have been a far longer interaction just on that, but probably best left for another day.
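
For anyone wanting to try the same fastboot trick: at the GRUB menu, press e, find the line beginning with linux and append fastboot to it (on systemd installs, fsck.mode=skip does the same), then boot with Ctrl+x. The edited line looks something like this, with the kernel version being illustrative:

linux /boot/vmlinuz-6.1.0-10-amd64 root=/dev/sda7 ro quiet fastboot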

Now at my end, the system I had bought is about 5-6 years old. At that time it only had 6 USB 2.0 ports and 2 USB 3.0 (A) ports.

The above image does tell of the various form factors. One of the other things is that I found the pen drive and its connectors to be extremely fiddly. It took me a number of tries fiddling around with it before I was finally able to put it in and access the pen drive partitions. Unfortunately, I was unable to see/use SystemRescue, but Knoppix booted up fine. I mounted the partitions briefly to see what was where, and sure enough /dev/sda8 showed my /home files and folders. I unmounted it, then ran fsck -y /dev/sda8, and was back in business.

This concludes what happened.

Updates – Quite a bit was left out of the original post, part of which I didn't know, and partly stuff which is interesting and perhaps needs a blog post of its own. It's sad I won't be part of DebConf, otherwise who knows what else I would have come to know.

1. One of the interesting bits that I came to know about last week is the Alibaba T-Head TH1520 RISC-V CPU; I saw it first being demoed on a laptop and then a standalone tablet. The laptop is an interesting proposition considering Alibaba opened up its chip operation only a couple of years ago. To have an SoC within 18 months, and then in production for lappies and tablets, is practically unheard of, especially for a newbie/startup. Even AMD took 3-4 years for its first chip. It seems they (Alibaba) would be parceling them out by quarter-end 2023 and another 1000 pieces/units in the first quarter next year. While the scale is nothing compared to the behemoths, I think this would be more a matter of getting feedback on both the hardware and the software. The value proposition is much better than what most of us get, at least in India. For example, they are doing a warranty for 5 years and also giving spare parts. RISC-V has been having a lot of resurgence in China, in part as it's an open standard, and partly because development will be far cheaper and faster than trying x86 or x86-64. If you look at both the incumbent manufacturers, due to monopoly, both of them now give 5-8% increments per year, and if you look back in history, you would find that when more chips were in competition, they used to give 15-20% performance increments per year.

2. While Vishal did share with me what he used and the various ways he uses the mini-PC, I did have fun speculating on what he could use it for. As Romane has shared of his own case, the first thing that came to my mind was backups. Filesystems are notorious in the sense that they can be corrupted, or are prone to being corrupted, very easily, as can be seen above 😉 . Backups certainly make a lot of sense, especially with rsync.

The other thing that came to my mind was having some sort of A.I. and chat server. IIRC, somebody has put quite a bit of open-source, public-domain data on Debian servers that could be used to run either a chatbot or an A.I. or both, similar to ChatGPT but with a much more limited scope than what ChatGPT uses. I was also thinking of a media server, which Vishal did share he does. I may visit him sometime to see what choices he made and what he learned in the process, if anything.

Another thing that could be done is to just take a dump of any of the commodity markets, or any markets, and have some sort of predictive A.I. or whatever. A whole bunch of people have scammed thousands of Indian users on this, but you could do it on your own, for your own purposes, to aid you in buying and selling stocks or whatever commodity you may fancy. After all, nowadays markets themselves are virtual.

While Vishal's mini-PC doesn't have any graphics, if it were an AMD APU mini-PC, something like this, he could have hosted games in the thick-server, thin-client way, where all graphics processing happens on the server rather than the client. With virtual reality I think the same case, or an even stronger one, could be made. The only problem with VR/AR is that we don't really have mass-market goggles, eyepieces or headsets. The only notable project that Google has/had in that space is the Google Cardboard VR headset, and the experience is not that great, or at least was not a few years back when I could see and experience the same. Most VR headsets, say for example the Meta Quest 2, go for around INR 44k/- while the Quest 3 is INR 50k+ and officially not available. As I have shared before, the holy grail of VR would be when it falls below INR 10k/- so it becomes just another accessory, not something you really have to save for. There also isn't much content, but then that is the whole chicken-and-egg situation. This again is a non-stop discussion; so much has been happening in that space that it needs its own blog post/article.

Till later.

27 August, 2023 11:31PM by shirishag75


Steve McIntyre

We're back!

It's August Bank Holiday Weekend, we're in Cambridge. It must be the Debian UK OMGWTFBBQ!

We're about halfway through, and we've already polished off lots and lots of good food and beer. Lars is making pancakes as I write this. :-) We had an awesome game of Mao last night. People are having fun!

Many thanks to a number of awesome friendly people for again sponsoring the important refreshments for the weekend. It's hungry/thirsty work celebrating like this!

27 August, 2023 01:03PM


Gunnar Wolf

Interested in adopting the RPi images for Debian?

Back in June 2018, Michael Stapelberg put the Raspberry Pi image building up for adoption. He created the first set of “unofficial, experimental” Raspberry Pi images for Debian. I promptly answered him, and while it took me some time to actually wrap my head around Michael’s work, I managed to eventually do so. By December, I had started pushing some updates.

Not only that: I didn’t think much about it in the beginning, as the needed non-free package was called raspi3-firmware, but… by early 2019, I had it running for all of the then-available Raspberry families (so the package was naturally renamed to raspi-firmware). I got my Raspberry Pi 4 at DebConf19 (thanks to Andy, who brought it from Cambridge), and it soon joined the happy Debian family. The images are built daily, and are available at https://raspi.debian.net.

In the process, I also adopted Lars’ great vmdb2 image building tool, and have kept it decently up to date (yes, I’m currently lagging behind, but I’ll get to it soonish™).

Anyway… This year, I have been seriously neglecting the Raspberry builds. I have simply not had time to regularly test built images, nor to debug why the builder has not picked up building for trixie (testing). And my time availability is not going to improve any time soon.

We are close to one month away from moving for six months to Paraná (Argentina), where I’ll be focusing on my PhD. And while I do contemplate taking my Raspberries along, I do not foresee being able to put much energy into them.

So… This is basically a call for adoption for the Raspberry Debian images building service. I do intend to stick around and try to help. It’s not only me (although I’m responsible for the build itself) — we have a nice and healthy group of Debian people hanging out in the #debian-raspberrypi channel in OFTC IRC.

Don’t be afraid, and come ask. I hope giving this project in adoption will breathe new life into it!

27 August, 2023 12:46AM

August 26, 2023


Christoph Berg

PostgreSQL Popularity Contest

Back in 2015, when PostgreSQL 9.5 alpha 1 was released, I had posted the PostgreSQL data from Debian's popularity contest.

8 years and 8 PostgreSQL releases later, the graph now looks like this:

Currently, the most popular PostgreSQL on Debian systems is still PostgreSQL 13 (shipped in Bullseye), followed by PostgreSQL 11 (Buster). At the time of writing, PostgreSQL 9.6 (Stretch) and PostgreSQL 15 (Bookworm) share the third place, with 15 rising quickly.

26 August, 2023 09:49PM

Andrew Cater

20230826 - OMGWTFBBQ - BBQ still in full swing

 There's been a very successful barbeque running in the garden: burgers, sausages, beer, vegetarian dishes and then ice cream.

The chance to catch up with people you only meet in IRC. Talking and laughter - and probably a couple of games of Mao.

Thanks also to our sponsors - Collabora, Codethink and RattusRattus for contributions to food and drink.


26 August, 2023 08:35PM by Andrew Cater (noreply@blogger.com)

20230826 OMGWTFBBQ - Cambridge is waking up

 The meat has been fetched: those of us in the house are about to get bacon sandwiches. Pepper the dog is in the garden. Time for the mayhem to start, I think.

Various folk are travelling here so it will soon be crowded: the weather is sunny but cool and it looks good for a three day weekend.

This is a huge effort that falls to Steve and Jo and a huge disruption for them each year - for which many thanks, as ever. [And, as is traditional on this blog, the posts only ever seem to appear from Cambridge].

26 August, 2023 10:57AM by Andrew Cater (noreply@blogger.com)

August 25, 2023

Scarlett Gately Moore

KDE Snaps Weekly report, Debian recommenced!

Now that all the planets are fixed, please see what you missed here!

https://www.scarlettgatelymoore.dev/kde-a-day-in-the-life-the-kde-snapcrafter-part-2/

EXTREMELY IMPORTANT: I am still looking for a super awesome team lead for a super amazing project involving KDE and Snaps. Time is running out, and the KDE world will be a better place if this project goes through! I would like to clarify: this is a paid position! A current KDE developer would be ideal, as it is a small team, so your time will be split between managing and coding alike. If you or anyone you know might be interested, please contact me ASAP!

Lots of news on the snap front: 23.04.3 is now complete, with new snaps! I know, just in time for 23.08.0. I have fixed some major issues in this release, so 23.08 should go much quicker. Even quicker if my per-repo snapcraft files get approved!

  • kirigami-gallery
  • Itinerary

We have more PIM snaps; however, I am waiting for reserved name approvals from the snap store.

I was approached about decoupling the qt and frameworks SDK snaps, and I have agreed, given that security updates are near impossible when new versions are released. Conversation here:

https://forum.snapcraft.io/t/proposal-for-changes-to-kde-content-snap-and-extension

I have started qt5 here https://github.com/ScarlettGatelyMoore/qt-5-15-10-snap

And some exciting news – I have started the KF6 content pack! I am doing it like the above, and I am using the qt6 content pack Jarred Wilson has made. This is a requirement to start the plasma snap. Progress can be tracked here: https://github.com/ScarlettGatelyMoore/kf6-snap

I still have an ongoing request for snapcraft files in their respective repositories. While defending my request I have tested some options. Snapcraft files in the repository do allow for proper snap recipes in Launchpad, by mirroring the repo in Launchpad -> create snap recipe. I created a recipe based on the stable branch and it created and published the snap as expected.

After being pointed to the Flatpak workflow, I discovered snaps have a similar store feature with GitHub; however, I will need to create a GitHub repo for each snap, which is tempting. I want to avoid duplication of snapcraft files, but I guess this is what they do for Flatpak? I never received an answer.

Snapcraft: some more tidying of the qmake plugin, and some review conversations resolved.

Debian!

I am back to getting things into Debian proper, starting with the golang packages I was working on for bubble-gum, a cool console beautification application. As each one passes through NEW I will keep uploading. I will be checking in with the qt-kde team to see what needs doing. I am looking into whether openvoices is still a viable replacement for mycroft; hopefully all that work isn’t wasted time.

And finally, I do hate having to ask, but as we quickly approach September, I have not come close to earning enough to pay my pesky bills, required to have a place to live and eat! I am seeking employment as a backup if my amazing project falls through. I tried to enable ads, but that broke my planet feeds, and I can’t have that! So without further ado… anything helps! Also please share! Thanks for your consideration.

25 August, 2023 05:40PM by sgmoore


Debian Brasil

Debian Day 30 years online in Brazil

In 2023 the traditional Debian Day is being celebrated in a special way; after all, on August 16th Debian turned 30 years old!

To celebrate this special milestone in Debian's life, the Debian Brasil community organized a week of online talks from August 14th to 18th. The event was named Debian 30 years.

Two talks were held per night, from 7:00 pm to 10:00 pm, streamed on the Debian Brasil channel on YouTube, totaling 10 talks. The recordings are also available on the Debian Brasil channel on PeerTube.

We had the participation of 9 DDs, 1 DM, and 3 contributors across the 10 activities. The live audience varied a lot, and the peak was during the preseed talk with Eriberto Mota, when we had 47 people watching.

Thank you to all participants for the contribution you made to the success of our event.

  • Antonio Terceiro
  • Aquila Macedo
  • Charles Melara
  • Daniel Lenharo de Souza
  • David Polverari
  • Eriberto Mota
  • Giovani Ferreira
  • Jefferson Maier
  • Lucas Kanashiro
  • Paulo Henrique de Lima Santana
  • Sergio Durigan Junior
  • Thais Araujo
  • Thiago Andrade

See below the photos of each activity:

Nova geração: uma entrevista com iniciantes no projeto Debian

Instalação personalizada e automatizada do Debian com preseed

Manipulando patches com git-buildpackage

debian.social: Socializando Debian do jeito Debian

Proxy reverso com WireGuard

Celebração dos 30 anos do Debian!

Instalando o Debian em disco criptografado com LUKS

O que a equipe de localização já conquistou nesses 30 anos

Debian - Projeto e Comunidade!

Design Gráfico e Software livre, o que fazer e por onde começar

25 August, 2023 04:00PM


Ian Jackson

I cycled to all the villages in alphabetical order

This last weekend I completed a bike rides project I started during the first Covid lockdown in 2020:

I’ve cycled to every settlement (and radio observatory) within 20km of my house, in alphabetical order.

Stir crazy

In early 2020, during the first lockdown, I was going a bit stir crazy. Clare said “you’re going very strange, you have to go out and get some exercise”. After a bit of discussion, we came up with this plan: I’d visit all the local villages, in alphabetical order.

Choosing the radius

I decided that I would pick a round number of kilometers, as the crow flies, from my house. 20km seemed about right. 25km would have included Ely, which would have been nice, but it would have added a great many places, all of them quite distant.

Software

I wrote a short Rust program to process OSM data into a list of places to visit, and their distances and bearings.

You can download a tarball of the alphabetical villages scanner. (I haven’t published the git history because it has my house’s GPS coordinates in it, and because I committed the output files from which that location can be derived.)
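
Not my Rust program, but for flavour, the core "as the crow flies" test it performs is just the haversine formula; here's a minimal sketch in Go with placeholder coordinates (my real ones, as noted, stay private):

package main

import (
    "fmt"
    "math"
)

const earthRadiusKm = 6371.0

// haversineKm returns the great-circle distance in km between two
// latitude/longitude points given in degrees.
func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
    const rad = math.Pi / 180
    dLat := (lat2 - lat1) * rad
    dLon := (lon2 - lon1) * rad
    a := math.Sin(dLat/2)*math.Sin(dLat/2) +
        math.Cos(lat1*rad)*math.Cos(lat2*rad)*math.Sin(dLon/2)*math.Sin(dLon/2)
    return 2 * earthRadiusKm * math.Asin(math.Sqrt(a))
}

func main() {
    // Placeholder coordinates: somewhere near Cambridge, and a candidate place.
    homeLat, homeLon := 52.20, 0.12
    placeLat, placeLon := 52.35, 0.05
    d := haversineKm(homeLat, homeLon, placeLat, placeLon)
    fmt.Printf("%.1f km as the crow flies; within 20 km: %v\n", d, d <= 20)
}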

The Rides

I set off on my first ride, to Aldreth, on Sunday the 31st of May 2020. The final ride collected Yelling, on Saturday the 19th of August 2023.

I did quite a few rides in June and July 2020 - more than one a week. (I’d read the lockdown rules, and although some of the government messaging said you should stay near your house, that wasn’t in the legislation. Of course I didn’t go into any buildings or anything.)

I’m not much of a morning person, so I often set off after lunch. For the longer rides I would usually pack a picnic. Almost all of the rides I did just by myself. There were a handful where I had friends along:

Dry Drayton, which I collected with Clare, at night. I held my bike up so the light shone at the village sign, so we could take a photo of it.

Madingley, Melbourn and Meldreth, which was quite an expedition with my friend Ben. We went out as far as Royston and nearby Barley (both outside my radius and not on my list) mostly just so that my project would have visited Hertfordshire.

The Hemingfords, where I had my friend Matthew along, and we had a very nice pub lunch.

Girton and Wilburton, where I visited friends. Indeed, I stopped off in Wilburton on one or two other occasions.

And, of course, Yelling, for which there were four of us, again with a nice lunch (in Eltisley).

I had relatively little mechanical trouble. My worst ride for this was Exning: I got three punctures that day. Luckily the last one was close to home.

I often would stop to take lots of photos en-route. My mum in particular appreciated all the pretty pictures.

Rules

I decided on these rules:

I would cycle to each destination, in order, and it would count as collected if I rode both there and back. I allowed collecting multiple villages in the same outing, provided I did them in the right order. (And obviously I was allowed to pass through places out of order, without counting them.)

I tried to get a picture of the village sign, where there was one. Failing that, I got a picture of something in the village with the village’s name on it. I think the only one I didn’t manage this for was Westley Bottom; I had to make do with the word “Westley” on some railway level crossing equipment. In Barway I had to make do with a planning application, stuck to a pole.

I tried not to enter and leave a village by the same road, if possible.

Edge cases

I had to make some decisions:

I decided that I would consider the project complete if I visited everywhere whose centre was within my radius. But the centre of a settlement is rather hard to define. I needed a hard criterion for my OpenStreetMap data mining: a place counted if there was any node, way or relation, with the relevant place tag, any part of which was within my ambit. That included some places that probably oughtn’t to have counted, but, fine.

I also decided that I wouldn’t visit suburbs of Cambridge, separately from Cambridge itself. I don’t consider them separate settlements, at least, not if they’re conurbated with Cambridge. So that excluded Trumpington, for example. But I decided that Girton and Fen Ditton were (just) separable. Although the place where I consider Girton and Cambridge to nearly touch, is administratively well inside Girton, I chose to look at land use (on the ground, and in OSM data), rather than administrative boundaries.

But I did visit both Histon and Impington, and all each of the Shelfords and Stapleford, as separate entries in my list. Mostly because otherwise I’d have to decide whether to skip (say) Impington, or Histon. Whereas skipping suburbs of Cambridge in favour of Cambridge itself was an easy decision, and it also got rid of a bunch of what would have been quite short, boring, urban expeditions.

I sorted all the Greats and Littles under G and L, rather than (say) “Shelford, Great”, which seemed like it would be cheating because then I would be able to do “Shelford, Great” and “Shelford, Little” in one go.

Northstowe turned from mostly a building site into something that was arguably a settlement, during my project. It wasn’t included in the output of my original data mining. Of course it’s conurbated with Oakington - but happily, Northstowe inserts right before Oakington in the alphabetical list, so I decided to add it, visiting both the old and new in the same day.

There are a bunch of other minor edge cases. Some villages have an outlying hamlet. Mostly I included these. There are some individual farms, which I generally didn’t count.

Some stats

I visited 150 villages plus the Lords Bridge radio observatory. The project took 3 years and 3 months to complete.

There were 96 rides, totalling about 4900km. So my mean distance was around 51km. The median distance per ride was a little higher, at around 52 km, and the median duration (including stoppages) was about 2h40. The total duration, if you add them all up, including stoppages, was about 275h, giving a mean speed including photo stops, lunches and all, of 18kph.

The longest ride was 89.8km, collecting Scotland Farm, Shepreth, and Six Mile Bottom, so riding across the Cam valley. The shortest ride was 7.9km, collecting Cambridge (obviously); and I think that’s the only one I did on my Brompton. The rest were all on my trusty Thorn Audax.

My fastest ride (ranking by distance divided by time spent in motion) was to collect Haddenham, where I covered 46.3km in 1h39, giving an average speed in motion of 28.0kph.

The most I collected in one day was 5 places: West Wickham, West Wratting, Westley Bottom, Westley Waterless, and Weston Colville. That was the day of the Wests. (There’s only one East: East Hatley.)

Map

Here is a pretty picture of all of my tracklogs:

Edited 2023-08-25 01:32 BST to correct a slip.



25 August, 2023 12:33AM

Reproducible Builds (diffoscope)

diffoscope 248 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 248. This version includes the following changes:

[ Greg Chabala ]
* Merge Docker "RUN" commands into single layer.

You can find out more by visiting the project homepage.

25 August, 2023 12:00AM

August 24, 2023


Debian Brasil

Debian Day 30 years in Belo Horizonte - Brazil

For the first time, the city of Belo Horizonte held a Debian Day to celebrate the anniversary of the Debian Project.

The communities Debian Minas Gerais and Free Software Belo Horizonte and Region felt motivated to celebrate this special date due to the 30 years of the Debian Project in 2023, and they organized a meeting on August 12th at the UFMG Knowledge Space.

The Debian Day organization in Belo Horizonte received important support from the UFMG Computer Science Department, which booked the room used for the event.

Three activities were scheduled:

  • Talk: The Debian project wants you! - Paulo Henrique de Lima Santana
  • Talk: Customizing Debian for use in PBH schools: the history of Libertas - Fred Guimarães
  • Discussion: the next steps to grow a Free Software community in BH - Bruno Braga Fonseca

In total, 11 people were present and we took a photo with those who stayed until the end.

Attendees at Debian Day 2023 in BH

24 August, 2023 11:00PM


Debian Day 30 years in Curitiba - Brazil

As we all know, this year is a very special year for the Debian project: the project turns 30!
The Brazilian community joined in and, during the anniversary week, organized some online activities through the Debian Brasil YouTube channel.
Information about the talks given can be seen on the commemoration website.
Talks have also been published individually on the Debian social PeerTube and on YouTube.

After this week of celebration, the Debian community in Curitiba decided to get together for a celebratory lunch among some local members.
The gathering took place at The Barbers restaurant. The menu was a traditional feijoada (rice, beans with pork, fried banana, cabbage, orange and farofa). The meeting was full of laughs, conversations, fun, caipirinha and draft beer!

We can only thank the Debian Project for providing great moments!

A small photographic record of the people present!

Group photo

24 August, 2023 08:00PM

Lukas Märdian

Netplan v0.107 is now available

I’m happy to announce that Netplan version 0.107 is now available on GitHub and is soon to be deployed into a Linux installation near you! Six months and more than 200 commits after the previous version (including a .1 stable release), this release is brought to you by 8 free software contributors from around the globe.

Highlights

Highlights of this release include the new configuration types for veth and dummy interfaces:

network:
  version: 2
  virtual-ethernets:
    veth0:
      peer: veth1
    veth1:
      peer: veth0
  dummy-devices:
    dm0:
      addresses:
        - 192.168.0.123/24
      ...
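
As a usage reminder: after dropping a YAML file like this under /etc/netplan/, the configuration can be exercised with the usual commands; netplan try rolls back automatically unless confirmed:

sudo netplan try    # apply with automatic rollback on timeout
sudo netplan apply  # apply permanently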

Furthermore, we implemented CFFI-based Python bindings on top of libnetplan’s API that can easily be consumed by 3rd-party applications (see the full cffi-bindings.py example):

from netplan import Parser, State, NetDefinition
from netplan import NetplanException, NetplanParserException

parser = Parser()

# Parse the full, existing YAML config hierarchy
parser.load_yaml_hierarchy(rootdir='/')

# Validate the final parser state
state = State()
try:
    # validation of current state + new settings
    state.import_parser_results(parser)
except NetplanParserException as e:
    print('Error in', e.filename, 'Row/Col', e.line, e.column, '->', e.message)
except NetplanException as e:
    print('Error:', e.message)

# Walk through ethernet NetdefIDs in the state and print their backend
# renderer, to demonstrate working with NetDefinitionIterator &
# NetDefinition
for netdef in state.ethernets.values():
    print('Netdef', netdef.id, 'is managed by:', netdef.backend)
    print('Is it configured to use DHCP?', netdef.dhcp4 or netdef.dhcp6)


24 August, 2023 12:59PM by slyon

August 23, 2023


Jo Shields

Retirement

Apparently it’s nearly four years since I last posted to my blog. Which is, to a degree, the point here. My time, and priorities, have changed over the years. And this led me to the decision that my available time and priorities in 2023 aren’t compatible with being a Debian or Ubuntu developer, and realistically, haven’t been for years. As of earlier this month, I quit as a Debian Developer and Ubuntu MOTU.

I think a lot of my blogging energy got absorbed by social media over the last decade, but with the collapse of Twitter and Reddit due to mismanagement, I’m trying to allocate more time for blog-based things instead. I may write up some of the things I’ve achieved at work (.NET 8 is now snapped for release Soon™). I might even blog about work-adjacent controversial topics, like my changed feelings about the entire concept of distribution packages. But there’s time for that later. Maybe.

I’ll keep tagging vaguely FOSS related topics with the Debian and Ubuntu tags, which cause them to be aggregated in the Planet Debian/Ubuntu feeds (RSS, remember that from the before times?!) until an admin on those sites gets annoyed at the off-topic posting of an emeritus dev and deletes them.

But that’s where we are. Rather than ignore my distro obligations, I’ve admitted that I just don’t have the energy any more. Let someone less perpetually exhausted than me take over. And if they don’t, maybe that’s OK too.

23 August, 2023 03:52PM by directhex

August 22, 2023

Scarlett Gately Moore

KDE: A Day in the Life of the KDE Snapcrafter Part 2

KDE Mascot

Much to my dismay, I figured out that my blog has been disabled on the Ubuntu planet since May. If you are curious about what I have been up to, please go to the handy links -> and read up! This post is a continuation of last week's: https://www.scarlettgatelymoore.dev/kde-a-day-in-the-life-of-the-kde-snapcrafter/

IMPORTANT: I am still looking for a super awesome team lead for a super amazing project involving KDE and Snaps. Time is running out, and the KDE world will be a better place if this project goes through! I would like to clarify: this is a paid position! A current KDE developer would be ideal, as it is a small team, so your time will be split between managing and coding alike. If you or anyone you know might be interested, please contact me ASAP!

Snaps: I am wrapping up the 23.04.3 KDE applications release! Head on over to https://snapcraft.io/search?q=KDE and enjoy! We are now up to 180 snaps! PIM snaps will be slowly rolling in as they go through manual reviews for D-Bus.

Snapcraft: minor fix in qmake plugin found by ruff.

Launchpad: I almost have approval for per-application repository snapcraft files, but I have to prove it will work to our benefit and not cause loads of polling etc. So I have been testing various methods of achieving such a task, and so far I have come up with Launchpad's ability to watch and download release tarballs into a project. I will then need to script getting the tarball and pushing it to a bzr branch, from which I can create a proper snap recipe (see the sketch below). Unfortunately, my proper snap recipe fails! Hopefully a very helpful cjwatson will chime in, or if anyone wants to take a gander please chime in here: https://bugs.launchpad.net/launchpad/+bug/2031307
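
In outline, the tarball-to-bzr step would look something like this (the package, version and branch path are invented for illustration):

wget https://download.kde.org/stable/release-service/23.04.3/src/kcalc-23.04.3.tar.xz
tar xf kcalc-23.04.3.tar.xz
cd kcalc-23.04.3
bzr init && bzr add . && bzr commit -m "Import kcalc 23.04.3 release tarball"
bzr push lp:~USER/+junk/kcalc-snap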

As reality sets in that my project may not happen if I don't find anyone, I need help surviving until I find work or funding to continue my snap work (still much to do!). If you or anyone you know enjoys our snaps, please consider a donation; anything helps! Please share! Thank you for your consideration!

22 August, 2023 05:55PM by sgmoore

August 21, 2023

Melissa Wen

AMD Driver-specific Properties for Color Management on Linux (Part 1)

TL;DR:

Color is a visual perception. Human eyes can detect a broader range of colors than any device in the graphics chain. Since each device can generate, capture or reproduce a specific subset of colors and tones, color management controls color conversion and calibration across devices to ensure a more accurate and consistent color representation. We can expose a GPU-accelerated display color management pipeline to support this process and enhance results, and this is what we are doing on Linux to improve color management on Gamescope/SteamDeck. Even with the challenges of being external developers, we have been working on mapping AMD GPU color capabilities to the Linux kernel color management interface, which is a combination of DRM and AMD driver-specific color properties. This more extensive color management pipeline includes pre-defined Transfer Functions, 1-Dimensional LookUp Tables (1D LUTs), and 3D LUTs before and after the plane composition/blending.


The study of color is well-established and has been explored for many years. Color science and research findings have also guided technology innovations. As a result, color in Computer Graphics is a very complex topic that I’m putting a lot of effort into becoming familiar with. I always find myself rereading all the materials I have collected about color space and operations since I started this journey (about one year ago). I also understand how hard it is to find consensus on some color subjects, as exemplified by all explanations around the 2015 online viral phenomenon of The Black and Blue Dress. Have you heard about it? What is the color of the dress for you?

So, taking into account my skills with colors and building consensus, this blog post only focuses on GPU hardware capabilities to support color management :-D If you want to learn more about color concepts and color on Linux, you can find useful links at the end of this blog post.

Linux Kernel, show me the colors ;D

The DRM color management interface only exposes a small set of post-blending color properties. Proposals to enhance the DRM color API from different vendors have landed on the subsystem mailing list over the last few years. On one hand, we got some suggestions to extend the DRM post-blending/CRTC color API: DRM CRTC 3D LUT for R-Car (2020 version); DRM CRTC 3D LUT for Intel (draft - 2020); DRM CRTC 3D LUT for AMD by Igalia (v2 - 2023); DRM CRTC 3D LUT for R-Car (v2 - 2023). On the other hand, some proposals to extend the DRM pre-blending/plane API: DRM plane colors for Intel (v2 - 2021); DRM plane API for AMD (v3 - 2021); DRM plane 3D LUT for AMD - 2021. Finally, Simon Ser sent the latest proposal in May 2023: Plane color pipeline KMS uAPI, from discussions in the 2023 Display/HDR Hackfest, and it is still under evaluation by the Linux Graphics community.

All previous proposals seek a generic solution for expanding the API, but many seem to have stalled due to the uncertainty of matching well the hardware capabilities of all vendors. Meanwhile, the use of AMD color capabilities on Linux remained limited by the DRM interface, as the DCN 3.0 family color caps and mapping diagram below shows for the Linux/DRM color interface without driver-specific color properties [*]:

Bearing in mind that we need to know the variety of color pipelines in the subsystem to be clear about a generic solution, we decided to approach the issue from a different perspective and worked on enabling a set of Driver-Specific Color Properties for AMD Display Drivers. As a result, I recently sent another round of the AMD driver-specific color mgmt API.

For those who have been following the AMD driver-specific proposal since the beginning (see [RFC][V1]), the main new features of the latest version [v2] are the addition of pre-blending Color Transformation Matrix (plane CTM) and the differentiation of Pre-defined Transfer Functions (TF) supported by color blocks. For those who just got here, I will recap this work in two blog posts. This one describes the current status of the AMD display driver in the Linux kernel/DRM subsystem and what changes with the driver-specific properties. In the next post, we go deeper to describe the features of each color block and provide a better picture of what is available in terms of color management for Linux.

The Linux kernel color management API and AMD hardware color capabilities

Before discussing colors in the Linux kernel with AMD hardware, consider accessing the Linux kernel documentation (version 6.5.0-rc5). In the AMD Display documentation, you will find my previous work documenting AMD hardware color capabilities and the Color Management Properties. It describes how AMD Display Manager (DM) intermediates requests between the AMD Display Core component (DC) and the Linux/DRM kernel interface for color management features. It also describes the relevant function to call the AMD color module in building curves for content space transformations.

A subsection also describes hardware color capabilities and how they evolve between versions. This subsection, DC Color Capabilities between DCN generations, is a good starting point to understand what we have been doing on the kernel side to provide a broader color management API with AMD driver-specific properties.

Why do we need more kernel color properties on Linux?

Blending is the process of combining multiple planes (framebuffers abstraction) according to their mode settings. Before blending, we can manage the colors of various planes separately; after blending, we have combined those planes in only one output per CRTC. Color conversions after blending would be enough in a single-plane scenario or when dealing with planes in the same color space on the kernel side. Still, it cannot help to handle the blending of multiple planes with different color spaces and luminance levels. With plane color management properties, userspace can get a more accurate representation of colors to deal with the diversity of color profiles of devices in the graphics chain, bring a wide color gamut (WCG), convert High-Dynamic-Range (HDR) content to Standard-Dynamic-Range (SDR) content (and vice-versa). With a GPU-accelerated display color management pipeline, we can use hardware blocks for color conversions and color mapping and support advanced color management.

The current DRM color management API enables us to perform some color conversions after blending, but there is no interface to calibrate input space by planes. Note that here I’m not considering some workarounds in the AMD display manager mapping of DRM CRTC de-gamma and DRM CRTC CTM property to pre-blending DC de-gamma and gamut remap block, respectively. So, in more detail, it only exposes three post-blending features:

  • DRM CRTC de-gamma: used to convert the framebuffer’s colors to linear gamma;
  • DRM CRTC CTM: used for color space conversion;
  • DRM CRTC gamma: used to convert colors to the gamma space of the connected screen.

AMD driver-specific color management interface

We can compare the Linux color management API with and without the driver-specific color properties. From now, we denote driver-specific properties with the AMD prefix and generic properties with the DRM prefix. For visual comparison, I bring the DCN 3.0 family color caps and mapping diagram closer and present it here again:

Mixing AMD driver-specific color properties with DRM generic color properties, we have a broader Linux color management system with the following features exposed by properties in the plane and CRTC interface, as summarized by this updated diagram:

The blocks highlighted by red lines are the new properties in the driver-specific interface developed by me (Igalia) and Joshua (Valve). The red dashed lines are new links between API and AMD driver components implemented by us to connect the Linux/DRM interface to AMD hardware blocks, mapping components accordingly. In short, we have the following color management properties exposed by the DRM/AMD display driver:

  • Pre-blending - AMD Display Pipe and Plane (DPP):
    • AMD plane de-gamma: 1D LUT and pre-defined transfer functions; used to linearize the input space of a plane;
    • AMD plane CTM: 3x4 matrix; used to convert plane color space;
    • AMD plane shaper: 1D LUT and pre-defined transfer functions; used to delinearize and/or normalize colors before applying 3D LUT;
    • AMD plane 3D LUT: 17x17x17 size with 12 bit-depth; three dimensional lookup table used for advanced color mapping;
    • AMD plane blend/out gamma: 1D LUT and pre-defined transfer functions; used to linearize back the color space after 3D LUT for blending.
  • Post-blending - AMD Multiple Pipe/Plane Combined (MPC):
    • DRM CRTC de-gamma: 1D LUT (can’t be set together with plane de-gamma);
    • DRM CRTC CTM: 3x3 matrix (remapped to post-blending matrix);
    • DRM CRTC gamma: 1D LUT + AMD CRTC gamma TF; added to take advantage of driver pre-defined transfer functions;

Note: You can find more about AMD display blocks in the Display Core Next (DCN) - Linux kernel documentation, provided by Rodrigo Siqueira (Linux/AMD display developer) in a 2021-documentation series. In the next post, I’ll revisit this topic, explaining display and color blocks in detail.

How did we get a large set of color features from AMD display hardware?

So, looking at AMD hardware color capabilities in the first diagram, we can see no post-blending (MPC) de-gamma block in any hardware families. We can also see that the AMD display driver maps CRTC/post-blending CTM to pre-blending (DPP) gamut_remap, but there is post-blending (MPC) gamut_remap (DRM CTM) from newer hardware versions that include SteamDeck hardware. You can find more details about hardware versions in the Linux kernel documentation/AMDGPU Product Information.

I needed to rework these two mappings mentioned above to provide pre-blending/plane de-gamma and CTM for SteamDeck. I changed the DC mapping to detach stream gamut remap matrices from the DPP gamut remap block. That means mapping AMD plane CTM directly to DPP/pre-blending gamut remap block and DRM CRTC CTM to MPC/post-blending gamut remap block. In this sense, I also limited plane CTM properties to those hardware versions with MPC/post-blending gamut_remap capabilities since older versions cannot support this feature without clashes with DRM CRTC CTM.

Unfortunately, I couldn’t prevent conflict between AMD plane de-gamma and DRM plane de-gamma since post-blending de-gamma isn’t available in any AMD hardware versions until now. The fact is that a post-blending de-gamma makes little sense in the AMD color pipeline, where plane blending works better in a linear space, and there are enough color blocks to linearize content before blending. To deal with this conflict, the driver now rejects atomic commits if users try to set both AMD plane de-gamma and DRM CRTC de-gamma simultaneously.

Finally, we had no other clashes when enabling other AMD driver-specific color properties for our use case, Gamescope/SteamDeck. Our main work for the remaining properties was understanding the data flow of each property, the hardware capabilities and limitations, and how to shape the data for programming the registers - AMD color block capabilities (and limitations) are the topics of the next blog post. Besides that, we fixed some driver bugs along the way since it was the first Linux use case for most of the new color properties, and some behaviors are only exposed when exercising the engine.

Take a look at the Gamescope/Steam Deck Color Pipeline[**], and see how Gamescope uses the new API to manage color space conversions and calibration (please click on the image for a better view):

In the next blog post, I’ll describe the implementation and technical details of each pre- and post-blending color block/property on the AMD display driver.

* Thanks to Harry Wentland for helping with diagrams, color concepts and AMD capabilities.

** Thanks to Joshua Ashton for providing and explaining the Gamescope/Steam Deck color pipeline.

*** Thanks to the Linux Graphics community - explicitly Harry, Joshua, Pekka, Simon, Sebastian, Siqueira, Alex H. and Ville - for all the learning during this Linux DRM/AMD color journey. Also, thanks to Carlos and Tomas for organizing the 2023 Display/HDR Hackfest, where we had a great and immersive opportunity to discuss Color & HDR on Linux.

  1. Cinematic Color - 2012 SIGGRAPH course notes by Jeremy Selan: an introduction to color science, concepts and pipelines.
  2. Color management and HDR documentation for FOSS graphics by Pekka Paalanen: documentation and useful links on applying color concepts to the Linux graphics stack.
  3. HDR in Linux by Jeremy Cline: a blog post exploring color concepts for HDR support on Linux.
  4. Methods for conversion of high dynamic range content to standard dynamic range content and vice-versa by ITU-R: guideline for conversions between HDR and SDR contents.
  5. Using Lookup Tables to Accelerate Color Transformations by Jeremy Selan: Nvidia blog post about Lookup Tables on color management.
  6. The Importance of Being Linear by Larry Gritz and Eugene d’Eon: Nvidia blog post about gamma and color conversions.

21 August, 2023 11:13AM

hackergotchi for Jonathan Dowland

Jonathan Dowland

FreshRSS

Now that it's more convenient for me to run containers at home, I thought I'd write a bit about web apps I am enjoying.

First up, FreshRSS, a web feed aggregator. I used to make heavy use of Google Reader until Google killed it, and although a bunch of self-hosted clones sprang up very quickly afterwards, I didn't transition to any of them.

Then followed a number of years in which, in retrospect, I basically didn't do a great job of organising my reading of the web. This lasted until a couple of years ago when, on a whim, I tried out NetNewsWire for iOS. NetNewsWire is a well-established and much-loved feed reader for Mac which I've never used. I used the iOS version in isolation for a long time: only dipping into web feeds on my phone and never on another device.

I'd like to see the old web back, and do my part to make that happen. I've continually published RSS and Atom feeds for my own blog. I'm also trying to blog more: this might be easier now that Twitter (which, IMHO, took a lot of the energy for quick writing out of blogging) is mortally wounded.

So, I'm giving FreshRSS a go. Early signs are good: it's fast, lightweight, easy to use, vaguely resembles how I remember Google Reader, and has a native dark-mode. It's still early days building up a new list of feeds to follow. I'll be sure to share interesting ones as I discover them!

21 August, 2023 08:29AM

Russ Allbery

Review: Some Desperate Glory

Review: Some Desperate Glory, by Emily Tesh

Publisher: Tordotcom
Copyright: 2023
ISBN: 1-250-83499-6
Format: Kindle
Pages: 438

Some Desperate Glory is a far-future space... opera? That's probably the right genre classification given the setting, but this book is much more intense and character-focused than most space opera. It is Emily Tesh's first novel, although she has two previous novellas that were published as books.

The alien majo and their nearly all-powerful Wisdom have won the war by destroying Earth with an antimatter bomb. The remnants of humanity were absorbed into the sprawling majo civilization. Gaea Station is the lone exception: a marginally viable station deep in space, formed from a lifeless rocky planetoid and the coupled hulks of the last four human dreadnoughts. Gaea Station survives on military discipline, ruthless use of every available resource, and constant training, raising new generations of soldiers for the war that it refuses to let end.

While Earth's children live, the enemy shall fear us.

Kyr is a warbreed, one of a genetically engineered line of soldiers that, following an accident, Gaea Station has lost the ability to make except the old-fashioned way. Among the Sparrows, her mess group, she is the best at the simulated combat exercises they use for training. She may be the best of her age cohort except her twin Magnus. As this novel opens, she and the rest of the Sparrows are about to get their adult assignments. Kyr is absolutely focused on living up to her potential and the attention of her uncle Jole, the leader of the station.

Kyr's future will look nothing like what she expects.

This book was so good, and I despair of explaining why it was so good without unforgivable spoilers. I can tell you a few things about it, but be warned that I'll be reduced to helpless gestures and telling you to just go read it. It's been a very long time since I was this surprised by a novel, possibly since I read Code Name: Verity for the first time.

Some Desperate Glory follows Kyr in close third-person throughout the book, which makes the start of this book daring. If you're getting a fascist vibe from the setup, you're not wrong, and this is intentional on Tesh's part. But Kyr is a true believer at the start of the book, so the first quarter has a protagonist who is sometimes nasty and cruel and who makes some frustratingly bad decisions. Stay with it, though; Tesh knows exactly what she's doing.

This is a coming of age story, in a way. Kyr has a lot to learn and a lot to process, and Some Desperate Glory is about that process. But by the middle of part three, halfway through the book, I had absolutely no idea where Tesh was going with the story. She then pulled the rug out from under me, in the best way, at least twice more. Part five of this book is an absolute triumph, the payoff for everything that's happened over the course of the novel, and there is no way I could have predicted it in advance. It was deeply satisfying in that way where I felt like I learned some things along with the characters, and where the characters find a better ending than I could possibly have worked out myself.

Tesh does use some world-building trickery, which is at its most complicated in part four. That was the one place where I can point to a few chapters where I thought the world-building got a bit too convenient in order to enable the plot. But it also allows for some truly incredible character work. I can't describe that in detail because it would be a major spoiler, but it's one of my favorite tropes in fiction and Tesh pulls it off beautifully. The character growth and interaction in this book is just so good: deep and complicated and nuanced and thoughtful in a way that revises reader impressions of earlier chapters.

The other great thing about this book is that for a 400+ page novel, it moves right along. Both plot and character development are beautifully paced, with only a few lulls. Tesh also doesn't belabor conversations. This is a book that provides just the right amount of context for the reader to fully understand what's going on, and then trusts the reader to be following along and moves straight to the next twist. That makes it propulsively readable. I had so much trouble putting this book down at any time during the second half.

I can't give any specifics, again because of spoilers, but this is not just a character story. Some Desperate Glory has strong opinions on how to ethically approach the world, and those ethics are at the center of the plot. Unlike a lot of books with a moral stance, though, this novel shows the difficulty of the work of deriving that moral stance. I have rarely read a book that more perfectly captures the interior experience of changing one's mind with all of its emotional difficulty and internal resistance. Tesh provides all the payoff I was looking for as a reader, but she never makes it easy or gratuitous (with the arguable exception of one moment at the very end of the book that I think some people will dislike but that I personally needed).

This is truly great stuff, probably the best science fiction novel that I've read in several years. Since I read it (I'm late on reviews again), I've pushed it on several other people, and I've not had a miss yet. The subject matter is pretty heavy, and this book also uses several tropes that I personally adore and am therefore incapable of being objective about, but with those caveats, this gets my highest possible recommendation.

Some Desperate Glory is a complete story in one novel with a definite end, although I love these characters so much that I'd happily read their further adventures, even if those are thematically unnecessary.

Content warnings: Uh, a lot. Genocide, suicide, sexual assault, racism, sexism, homophobia, misgendering, and torture, and I'm probably forgetting a few things. Tesh doesn't linger on these long, but most of them are on-screen. You may have to brace yourself for this one.

Rating: 10 out of 10

21 August, 2023 04:30AM

August 20, 2023

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppRedis 0.2.4 on CRAN: Maintenance

Another minor release, now at 0.2.4, of our RcppRedis package arrived on CRAN yesterday. RcppRedis is one of several packages connecting R to the fabulous Redis in-memory datastructure store (and much more). RcppRedis does not pretend to be feature complete, but it may do some things faster than the other interfaces, and also offers an optional coupling with MessagePack binary (de)serialization via RcppMsgPack. The package has carried production loads on a trading floor for several years. It also supports pub/sub dissemination of streaming market data as per this earlier example.

This update is (just like the previous one) fairly mechanical. CRAN noticed a shortcoming of the default per-package help page in a number of packages; in our case it was a matter of adding one line for a missing alias to the Rd file. We also demoted the suggested (but retired) rredis package to a mere mention in the DESCRIPTION file, as a formal Suggests: entry, even with an added Additional_repositories, creates a NOTE. Life is simpler without those.

The detailed changes list follows.

Changes in version 0.2.4 (2023-08-19)

  • Add missing alias for ‘RcppRedis-package’ to rhiredis.Rd.

  • Remove Suggests: rredis which triggers a NOTE nag as it is only on an ‘Additional_repositories’.

Courtesy of my CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 August, 2023 10:16PM

Russell Coker

GPT Systems and Relationships

Sam Hartman wrote an interesting blog post about his work as a sex and intimacy educator and how GPT systems could impact that [1].

I’ve read some positive reviews of Replika – a commercial system that is somewhat promoted as a counsellor [2] – so I decided to try it out. In my brief trial it seemed to be using all the methods that Android pay-to-play games are known for: multiple types of in-game currency, paying to buy new clothes and other items for your friend, and so on. Basically it seems pretty horrible. I didn’t pay for it, and the erotic and romantic features all require payment, so I didn’t test those.

When thinking about this logically, having a system designed to deal with people when they are vulnerable (either being in a romantic relationship or getting counselling) that uses manipulative techniques to get money from them can’t have a good result. So a free software system seems the best option.

When I first learned of virtual girlfriends I never thought I would feel compelled to advocate for a free software virtual dating program, but that’s where the world has got to.

Virtual girlfriends have been around for years now. Several years ago I watched a documentary about their use in Japan. It seemed a bit strange when a group of men who had virtual girlfriends had a dinner party with their tablets and phones propped up so their girlfriends could join in, as they all appeared to be dating the same girl. The documentary didn’t go into enough detail to cover whether the girlfriend app could learn or be customised enough that they would seem to have different personalities.

Virtual boyfriends have also been around for a while apparently without most people noticing. I just Googled it and found a review of a virtual boyfriend app published in 2016!

One thing that will probably concern people is the possibility for virtual dating systems to be used for inappropriate things. That is a reasonable thing to be concerned about but I don’t think it’s possible to prevent technology that has already been released from doing such things. As a general rule technology can always be used for good and bad things so we need to just make it easy to do good things and let the legal system develop ways of dealing with the bad things.

20 August, 2023 04:32AM by etbe

August 18, 2023

Scarlett Gately Moore

KDE: A day in the life of the KDE snapcrafter!

KDE Mascot

As mentioned last week, I am still looking for a super awesome team lead for a super amazing project involving KDE and Snaps. Time is running out, and the KDE world will be a better place if this project goes through! I would like to clarify: this is a paid position! A current KDE developer would be ideal, as it is a small team and your time will be split between managing and coding alike. If you or anyone you know might be interested, please contact me ASAP!

On to snappy things I have achieved this week:

Most of 23.04.3 is done; I am just testing the packages now. New applications: kmymoney ( Thanks Carlos! ), kde-dev-utils, and kxstitch ( Thanks Jeremy! )

With that said, I have seen apps on the candidate channel being promoted on the internets. Please use this channel with utmost care, as these apps are still being tested and could quite possibly be very broken!

Still working on some QML issues with kirigami platform not found.

I have begun the journey into Launchpad build issues and have been kindly pointed to using snap recipes on Launchpad, so we aren’t doing public uploads, which create temporary recipes to build and cannot be bumped priority-wise. So I have sent the request into the kde-devel arena to revisit having per-repository snapcraft files ( rejected in the past ) as they do with flatpak files. So far I am getting positive feedback and hopefully this will go through. Once it does, I can move forward with fully automating new application releases. Hooray!

This week I jumped into the xdg-desktop-portals rabbithole while working on https://bugs.kde.org/show_bug.cgi?id=473003 for neochat. After fixing it by adding the password-manager-service plug, I was told that auto-connect on that one is discouraged and that libsecret should work out of the box with portals. I found, and joined just in time, a snapcrafter google meet, and we had a long conversation spitballing and testing our portal support. At least in Neon it appears to be broken. I now have some things to do and test to see that we get this functional. Most of our online apps are affected. For now though – snap connect neochat:password-manager-service :password-manager-service does work. Auto-connect was rejected as it exposes too much. Understandable.

I have started a new thread on the KDE forums for users to ask any questions, or let me know of any issues you may have related to snaps here: https://discuss.kde.org/t/all-things-snaps-questions-concerns-praise/4033 come join the conversation!

In the snapcraft arena I have fixed my PR for the much needed qmake plugin! This should be merged and rolled out in the very near future!

I would like to continue my hard work on snap things regardless of the project going through. Unfortunately, to do so, I must ask for donations as life isn’t free. I am working on self sufficiency but even that costs money to get started! KDE snaps are used by 1.7 million active devices! I do ask that if you use KDE snaps and find my work useful, or know someone that does, to please consider donating to keep my momentum going. There is still much work to be done with Qt6 rolling out. I would like to work on the KDE Plasma snap and KDE PIM suite of apps ( I have started on this ).

Even if you can’t help, please share! Thank you for your consideration! I have a new donation form for anyone that doesn’t like gofundme here:

Gofund me:

18 August, 2023 06:26PM by sgmoore

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#43: r2u Faster Than the Alternatives

Welcome to the 43th post in the $R^4 series.

And with that, a good laugh. When I set up Sunday’s post, I was excited enough about the (indeed exciting !!) topic of r2u via browser or vscode that I mistakenly labeled it as the 41th post. And overlooked the existing 41th post from July! So it really is as if Douglas Adams, Arthur Dent, and, for good measure, Dirk Gently, looked over my shoulder and declared there shall not be a 42th post!! So now we have two 41th posts: Sunday’s and July’s.

Back to the current topic, which is of course r2u. Earlier this week we had a failure in (an R based) CI run (using a default action which I had not set up). A package was newer in source than binary, so a build from source was attempted. And of course failed as it was a package needing a system dependency to build. Which the default action did not install.

I am familiar with the problem via my general use of r2u (or my r-ci which uses it under the hood). And there we use a bspm variable to prefer binary over possibly newer source. So I was curious how one would address this with the default actions. It so happens that the same morning I spotted a StackOverflow question on the same topic, where the original poster had suffered the exact same issue!

I offered my approach (via r2u) as a comment and was later notified of a follow-up answer by the OP. Turns out there is a new, more powerful action that does all this, potentially flipping to a newer version and building it, all while using a cache.

Now I was curious, and in the evening cloned the repo to study the new approach and compare the new action to what r2u offers. In particular, I was curious whether a use of caches would be beneficial on repeated runs. A screenshot of the resulting Actions and their times follows.

Turns out maybe not so much (yet ?). As the actions page of my cloned ‘comparison repo’ shows in this screenshot, r2u is consistently faster at always below one minute, compared to the new entrant at always over two minutes. (I should clarify that the original action sets up dependencies, then scrapes, and commits. I am timing only the setup of dependencies here.)

We can also extract the six datapoints and quickly visualize them.

Now, it is of course entirely possible that not all possible avenues for speedups were exploited in how the action was set up. If so, please file an issue at the repo and I will try to update accordingly. But for now it seems that a default setup of r2u is easily more than twice as fast as an otherwise very compelling alternative (with arguably much broader scope). However, where r2u chooses to play, on the increasingly common, popular and powerful Ubuntu LTS setup, it clearly continues to run circles around alternate approaches. So the saying remains:

r2u: fast, easy, reliable.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Originally posted 2023-08-13, minimally edited 2023-08-15, which changed the timestamp and URL.

18 August, 2023 02:18AM

August 17, 2023

hackergotchi for Jonathan Carter

Jonathan Carter

Debian 30th Birthday: Local Group event and Interview

Inspired by the fine Debian Local Groups all over the world, I’ve long since wanted to start one in Cape Town. Unfortunately, there’s been many obstacles over the years. Shiny distractions, an epidemic, DPL terms… these are just some of the things that got in the way.

Fortunately, things are starting to gain traction, and we’re well on our way to forming a bona fide local group for South Africa.

We got together at Woodstock Grill; they have a nice meeting room and good food and beverages, and it’s also reasonably central for most of us.

Cake

Starting with the important stuff: we got this Debian cake made, and it ended up much bigger than we expected, so at least we all got to take some home! (It tasted great too.)

Yes, cake.

Talk

This event was planned very last minute, so we didn’t do any kind of RSVP and I had no idea who exactly would show up, so I went ahead and prepared one of my usual introduction to Linux and Debian talks, and how these things are having an impact on the world out there. I also talked a bit about the community and how we intend to grow our local team here in South Africa. It turned out most of the audience were already experienced Linux users, but I was happy to see that they were very enthusiastic about the local group concept!

While reading through some material to find some inspiration for this talk, I came across an old quote from the original Debian Manifesto that I found very poignant again, so I feel compelled to share (didn’t use it in my talk this time though since I didn’t cover much current events):

“The time has come to concentrate on the future of Linux rather than on the destructive goal of enriching oneself at the expense of the entire Linux community and its future.” – Ian Murdock, 1994

Debian-ZA logo

Tammy spent some time creating a whole bunch of logo concepts, which she presented to us. They aren’t meant as final logo choices, but as initial concepts, and they worked well to provoke a very lively discussion about logos and design!

Here are just some of her designs that I cherry-picked, since they were the most discussed. We still haven’t decided if it will be Debian ZA or Debian-ZA or Debian South Africa, although the latter will probably cause the least confusion internationally.

Personally, the last one on this image that I referred to as “the carpet” is my personal favourite :-)

Happy Birthday Song

John and Heila wrote a happy birthday song. After seeing the lyrics (and considering myself an amateur lyricist) I thought they were way too tacky and I told John to put them away. But when the cake came out, someone said “we should sing!” and the lyrics quickly re-emerged and were handed out to everyone. It’s also meant to be a loopy clip for the upcoming DebConf23. I’ll concede that it worked out alright in the end! You judge for yourself:

Mousepads

People still use mousepads? That was my initial reaction when Heila told me that she was going to make some commemorative 30 year Debian mousepads for our birthday event, and they ended up being popular. It’s probably a safer surface to put dev boards on than on my desk directly, so at least I do have a use for one!

Group Photo

The group photo, complete with disco ball! We should’ve taken this earlier because it was already getting late and some people had to head back home. Lesson learned for next time!

30th Birthday Interview with The Changelog

I also did an interview with The Changelog for Debian’s 30th birthday. It was late, and I haven’t had a chance to listen to it yet, so I hope their producers managed to edit something coherent out of my usual Debian babbling:

More later

There’s so much more I’d like to say about Debian, the last 30 years, local groups, the eco-system that we find ourselves in. And, Lemmy! But I’ll see if I can get an instance up over the weekend and will then talk about that some more another time.

17 August, 2023 09:46PM by jonathan

August 16, 2023

Sam Hartman

A First Exercise with AI Training

Taking a hands-on low-level approach to learning AI has been incredibly rewarding. I wanted to create an achievable task that would motivate me to learn the tools and get practical experience training and using large language models. Just at the point when I was starting to spin up GPU instances, Llama2 was released to the public. So I elected to start with that model. As I mentioned, I’m interested in exploring how sex-positive AI can help human connection in positive ways. For that reason, I suspected that Llama2 might not produce good results without training: some of Meta’s safety goals run counter to what I’m trying to explore. I suspected that there might be more attention paid to safety in the chat variants of Llama2 rather than the text generation variants, and working against that might be challenging for a first project, so I started with Llama-2-13b as a base.

Preparing a Dataset

I elected to generate a fine tuning dataset using fiction. Long term, that might not be a good fit. But I’ve always wanted to understand how an LLM’s tone is adjusted—how you get an LLM to speak in a different voice. So much of fine tuning focuses on examples where a given prompt produces a particular result. I wanted to understand how to bring in data that wasn’t structured as prompts. The Huggingface course actually gives an example of how to adjust a model set up for masked language modeling trained on wikitext to be better at predicting the vocabulary of movie reviews. There though, doing sample breaks in the dataset at movie review boundaries makes sense. There’s another example of training an LLM from scratch based on a corpus of python code. Between these two examples, I figured out what I needed. It was relatively simple in retrospect: tokenize the whole mess, and treat everything as output. That is, compute loss on all the tokens.
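In code, the approach is only a few lines. Here is a minimal sketch using the Hugging Face datasets and transformers libraries; the corpus file name and block size are placeholders, and this follows the grouping recipe from the Huggingface course rather than reproducing my exact script:

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
dataset = load_dataset("text", data_files="corpus.txt")["train"]

block_size = 2048

def group_texts(examples):
    # Concatenate all tokens, then cut into fixed-size blocks.
    ids = sum(examples["input_ids"], [])
    total = (len(ids) // block_size) * block_size
    blocks = [ids[i:i + block_size] for i in range(0, total, block_size)]
    # Treat everything as output: labels are a copy of the inputs,
    # so loss is computed on all the tokens.
    return {"input_ids": blocks, "labels": [list(b) for b in blocks]}

tokenized = dataset.map(lambda batch: tokenizer(batch["text"]),
                        batched=True, remove_columns=["text"])
lm_dataset = tokenized.map(group_texts, batched=True,
                           remove_columns=tokenized.column_names)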

Long term, using fiction as a way to adjust how the model responds is likely to be the wrong starting point. However, it maximized focus on aspects of training I did not understand and allowed me to satisfy my curiosity.

Wrangling the Model

I decided to actually try to add additional training to the model directly rather than building an adapter and fine-tuning a small number of parameters. Partially this was because I had enough on my mind without understanding how LoRA adapters work. Partially, I wanted to gain an appreciation for the infrastructure complexity of AI training. I have enough of a cloud background that I ought to be able to work on distributed training. (As it turned out, using the BitsAndBytes 8-bit optimizer, I was just able to fit my task onto a single GPU.)

I wasn’t even sure that I could make a measurable difference in Llama-2-13b running 890,000 training tokens through a couple of training epochs. As it turned out I had nothing to fear on that front.

Getting everything to work was more tricky than I expected. I didn’t have an appreciation for exactly how memory intensive training was. The Transformers documentation points out that with typical parameters for mixed-precision training, it takes 18 bytes per model parameter. Using bfloat16 training and an 8-bit optimizer was enough to get things to fit.
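Concretely, that combination looks something like the following (continuing the dataset sketch above; the hyperparameters are illustrative rather than the values I actually used):

import torch
from transformers import (AutoModelForCausalLM, Trainer, TrainingArguments,
                          default_data_collator)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf", torch_dtype=torch.bfloat16)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=2,
    bf16=True,                  # bfloat16 training
    optim="adamw_bnb_8bit",     # the BitsAndBytes 8-bit optimizer
    learning_rate=1e-5,
    logging_steps=10,
)

trainer = Trainer(model=model, args=args, train_dataset=lm_dataset,
                  data_collator=default_data_collator)
trainer.train()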

Of course then I got to play with convergence. My initial optimizer parameters caused the model to diverge, and before I knew it, my model had turned to NaN and would only output newlines. Oops. But looking back over the logs, watching what happened to the loss, and looking at the math in the optimizer to understand how I ended up getting something that rounded to a divide by zero gave me a much better intuition for what was going on.

The results.

This time around I didn’t do anything in the way of quantitative analysis of what I achieved. Empirically I definitely changed the tone of the model. The base Llama-2 model tends to steer away from sexual situations. It’s relatively easy to get it to talk about affection and sometimes attraction. Unsurprisingly, given the design constraints, it takes a bit to get it to wander into sexual situations. But if you hit it hard enough with your prompt, it will go there, and the results are depressing. At least for the prompts I used, it tended to view sex fairly negatively, and it tended to be less coherent than with other prompts. One inference managed to pop out, in the middle of some text that wasn’t hanging together well, “Chapter 7 - Rape.”

With my training, I did manage to achieve my goal of getting the model to use more positive language and emotional signaling when talking about sexual situations. More importantly, I gained a practical understanding of many ways training can go wrong.

  • There were overfitting problems: names of characters from my dataset got more attention than I wished they did. As a model for interacting with some of the universes I used as input, that was kind of cool, but if I was looking to just adjust how the model talked about intimate situations, I ended up making things far too specific.

  • I gained a new appreciation for how easy it is to trigger catastrophic forgetting.

  • I began to appreciate how this sort of unsupervised training could best be paired with supervised training to help correct model confusion. Playing with the model, I often ran into cases where my reaction was like “Well, I don’t want to train it to give that response, but if it ever does wander into this part of the state space, I’d like to at least get it to respond more naturally.” And I think I understand how to approach that, either with custom loss functions or by manipulating which tokens compute loss and which ones do not (see the sketch after this list).

  • And of course realized I need to learn a lot about sanitizing and preparing datasets.
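To make that last idea concrete: in the Hugging Face convention, any position whose label is set to -100 is ignored by the cross-entropy loss, so you can choose exactly which tokens contribute to the loss. A minimal sketch (the helper below is illustrative, not from my actual code):

def mask_labels(input_ids, keep_mask):
    # Positions where keep_mask is False get label -100 and are
    # ignored by the loss; the rest are trained on normally.
    return [tok if keep else -100
            for tok, keep in zip(input_ids, keep_mask)]

# e.g. compute loss only on the second half of a block:
ids = list(range(10))                        # stand-in token ids
labels = mask_labels(ids, [i >= 5 for i in range(10)])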

A lot of articles I’ve been reading about training make more sense. I have better intuition for why you might want to do training a certain way, or why mechanisms for countering some problem will be important.

Future Activities:

  • Look into LoRA adapters; having understood what happens when you manipulate the model directly, I can now move on to intelligent solutions.

  • Look into various mechanisms for rewards and supervised training.

  • See how hard it is to train a chat based model out of some of its safety constraints.

  • Construct datasets; possibly looking at sources like relationship questions/advice.




16 August, 2023 02:13PM

Simon Josefsson

Enforcing wrap-and-sort -satb

For Debian package maintainers, the wrap-and-sort tool is one of those nice tools that I use once in a while, and every time I have to re-read the documentation to conclude that I want to use the --wrap-always --short-indent --trailing-comma --sort-binary-package options (or -satb for short). Every time, I also wish that I could automate this and have it always be invoked to keep my debian/ directory tidy, so I don’t have to do this manually once every blue moon. I haven’t found a way to achieve this automation that is non-obtrusive and interacts well with my git-based packaging workflow. Ideally I would like for something like the lintian-hook during gbp buildpackage to check for this – ideas?

Meanwhile, I have come up with a way to make sure I don’t forget to run wrap-and-sort for long, and that others who work on the same package won’t either: create an autopkgtest which is invoked during the Salsa CI/CD pipeline using the following as debian/tests/wrap-and-sort:

#!/bin/sh

set -eu

TMPDIR=$(mktemp -d)
trap "rm -rf $TMPDIR" 0 INT QUIT ABRT PIPE TERM

cp -a debian $TMPDIR
cd $TMPDIR
wrap-and-sort -satb
diff -ur $OLDPWD/debian debian

Add the following to debian/tests/control to invoke it – it is intentionally not indented properly, so that the self-test will fail and you will learn how it behaves.

Tests: wrap-and-sort
Depends: devscripts, python3-debian
Restrictions: superficial

Now I will get build failures in the pipeline once I upload the package into Salsa, which I usually do before uploading into Debian. I will get a diff output, and it won’t be happy until I push a commit with the output of running wrap-and-sort with the parameters I settled with.

While autopkgtest is intended to test the installed package, the tooling around autopkgtest is powerful and easily allows this mild abuse of its purpose for a pleasant QA improvement.

Thoughts? Happy hacking!

16 August, 2023 09:00AM by simon

hackergotchi for Bits from Debian

Bits from Debian

Debian Celebrates 30 years!

Debian 30 years by Jeff Maier

Over 30 years ago the late Ian Murdock wrote to the comp.os.linux.development newsgroup about the completion of a brand-new Linux release which he named "The Debian Linux Release".

He built the release by hand, from scratch, so to speak. Ian laid out guidelines for how this new release would work, what approach the release would take regarding its size, manner of upgrades, and installation procedures; and with great care and consideration for users without an Internet connection.

Unaware that he had sparked a movement in the fledgling F/OSS community, Ian worked on and continued to work on Debian. The release, now aided by volunteers from the newsgroup and around the world, grew and continues to grow as one of the largest and oldest FREE operating systems that still exist today.

Debian at its core is comprised of Users, Contributors, Developers, and Sponsors, but most importantly, People. Ian's drive and focus remains embedded in the core of Debian, it remains in all of our work, it remains in the minds and hands of the users of The Universal Operating System.

The Debian Project is proud and happy to share our anniversary not exclusively unto ourselves, instead we share this moment with everyone, as we come together in celebration of a resounding community that works together, effects change, and continues to make a difference, not just in our work but around the world.

Debian is present in cluster systems, datacenters, desktop computers, embedded systems, IoT devices, laptops, servers, it may possibly be powering the web server and device you are reading this article on, and it can also be found in Spacecraft.

Closer to earth, Debian fully supports projects for accessibility: Debian Edu/Skolelinux - an operating system designed for educational use in schools and communities, Debian Science - providing free scientific software across many established and emerging fields, Debian Hamradio - for amateur radio enthusiasts, Debian-Accessibility - a project focused on the design of an operating system suited to fit the requirements of people with disabilities, and Debian Astro - focused on supporting professional and hobbyist astronomers.

Debian strives to give, reach, embrace, mentor, share, and teach with internships through many programs internally and externally such as the Google Summer of Code, Outreachy, and the Open Source Promotion Plan.

None of this could be possible without the vast amount of support, care, and contributions from what started as and is still an all volunteer project. We celebrate with each and every one who has helped shape Debian over all of these years and toward the future.

Today we all certainly celebrate 30 years of Debian, but know that Debian celebrates with each and every one of you all at the same time.

Over the next few days Celebration parties are planned to take place in Austria, Belgium, Bolivia, Brazil, Bulgaria, Czech Republic, France, Germany (CCCcamp), India, Iran, Portugal, Serbia, South Africa, and Turkey.

You are of course, invited to join us!

Check out, attend, or form your very own DebianDay 2023 Event.

See you then!

Thank you, thank you all so very much.

With Love,

The Debian Project

16 August, 2023 09:00AM by Jean-Pierre Giraud, Donald Norwood, Grzegorz Szymaszek, Debian Publicity Team

hackergotchi for Wouter Verhelst

Wouter Verhelst

Perl test suites in GitLab

I've been maintaining a number of Perl software packages recently. There's SReview, my video review and transcoding system of which I split off Media::Convert a while back; and as of about a year ago, I've also added PtLink, an RSS aggregator (with future plans for more than just that).

All these come with extensive test suites which can help me ensure that things continue to work properly when I play with things; and all of these are hosted on salsa.debian.org, Debian's gitlab instance. Since we're there anyway, I configured GitLab CI/CD to run a full test suite of all the software, so that I can't forget, and also so that I know sooner rather than later when things start breaking.

GitLab has extensive support for various test-related reports, and while it took a while to be able to enable all of them, I'm happy to report that today, my perl test suites generate all three possible reports. They are:

  • The coverage regex, which captures the total reported coverage for all modules of the software; it will show the test coverage on the right-hand side of the job page (as in this example), and it will show what the delta in that number is in merge request summaries (as in this example)
  • The JUnit report, which tells GitLab in detail which tests were run, what their result was, and how long the test took (as in this example)
  • The cobertura report, which tells GitLab which lines in the software were ran in the test suite; it will show up coverage of affected lines in merge requests, but nothing more. Unfortunately, I can't show an example here, as the information seems to be no longer available once the merge request has been merged.

Additionally, I also store the native perl Devel::Cover report as job artifacts, as they show some information that GitLab does not.

It's important to recognize that not all data is useful. For instance, the JUnit report allows for a test name and for details of the test. However, the module that generates the JUnit report from TAP test suites does not make a distinction here; both the test name and the test details are reported as the same. Additionally, the time a test took is measured as the time between the end of the previous test and the end of the current one; there is no "start" marker in the TAP protocol.

That being said, it's still useful to see all the available information in GitLab. And it's not even all that hard to do:

test:
  stage: test
  image: perl:latest
  coverage: '/^Total.* (\d+.\d+)$/'
  before_script:
    - cpanm ExtUtils::Depends Devel::Cover TAP::Harness::JUnit Devel::Cover::Report::Cobertura
    - cpanm --notest --installdeps .
    - perl Makefile.PL
  script:
    - cover -delete
    - HARNESS_PERL_SWITCHES='-MDevel::Cover' prove -v -l -s --harness TAP::Harness::JUnit
    - cover
    - cover -report cobertura
  artifacts:
    paths:
    - cover_db
    reports:
      junit: junit_output.xml
      coverage_report:
        path: cover_db/cobertura.xml
        coverage_format: cobertura

Let's expand on that a bit.

The first three lines should be clear for anyone who's used GitLab CI/CD in the past. We create a job called test; we start it in the test stage, and we run it in the perl:latest docker image. Nothing spectacular here.

The coverage line contains a regular expression. This is applied by GitLab to the output of the job; if it matches, then the first bracket match is extracted, and whatever that contains is assumed to be the code coverage percentage for the code; it will be reported as such in the GitLab UI for the job that was run, and graphs may be drawn to show how the coverage changes over time. Additionally, merge requests will show the delta in the code coverage, which may help deciding whether to accept a merge request. This regular expression will match a line that the cover program generates on standard output.

The before_script section installs various perl modules we'll need later on. First, we install ExtUtils::Depends. My code uses ExtUtils::MakeMaker, which ExtUtils::Depends depends on (no pun intended); obviously, if your perl code doesn't use that, then you don't need to install it. The next three modules -- Devel::Cover, TAP::Harness::JUnit and Devel::Cover::Report::Cobertura -- are necessary for the reports, and you should include them if you want to copy what I'm doing.

Next, we install declared dependencies, which is probably a good idea for you as well, and then we run perl Makefile.PL, which will generate the Makefile. If you don't use ExtUtils::MakeMaker, update that part to do what your build system uses. That should be fairly straightforward.

You'll notice that we don't actually use the Makefile. This is because we only want to run the test suite, which in our case (since these are PurePerl modules) doesn't require us to build the software first. One might consider that this makes the call of perl Makefile.PL useless, but I think it's a useful test regardless; if that fails, then obviously we did something wrong and shouldn't even try to go further.

The actual tests are run inside a script snippet, as is usual for GitLab. However we do a bit more than you would normally expect; this is required for the reports that we want to generate. Let's unpack what we do there:

cover -delete

This deletes any coverage database that might exist (e.g., due to caching or some such). We don't actually expect any coverage database, but it doesn't hurt.

HARNESS_PERL_SWITCHES='-MDevel::Cover'

This tells the TAP harness that we want it to load the Devel::Cover addon, which can generate code coverage statistics. It stores that in the cover_db directory, and allows you to generate all kinds of reports on the code coverage later (but we don't do that here, yet).

prove -v -l -s

Runs the actual test suite, with verbose output, shuffling (aka, randomizing) the test suite, and adding the lib directory to perl's include path. This works for us, again, because we don't actually need to compile anything; if you do, then -b (for blib) may be required.

ExtUtils::MakeMaker creates a test target in its Makefile, and usually this is how you invoke the test suite. However, it's not the only way to do so, and indeed if you want to generate a JUnit XML report then you can't do that. Instead, in that case, you need to use prove, so that you can tell it to load the TAP::Harness::JUnit module by way of the --harness option, which will then generate the JUnit XML report. By default, the JUnit XML report is generated in a file junit_output.xml. It's possible to customize the filename for this report, but GitLab doesn't care and neither do I, so I don't. Uploading the JUnit XML report tells GitLab which tests were run, what their results were, and how long each test took.

Finally, we invoke the cover script twice to generate two coverage reports; once we generate the default report (which generates HTML files with detailed information on all the code that was triggered in your test suite), and once with the -report cobertura parameter, which generates the cobertura XML format.

Once we've generated all our reports, we then need to upload them to GitLab in the right way. The native perl report, which is in the cover_db directory, is uploaded as a regular job artifact, which we can then look at through a web browser, and the two XML reports are uploaded in the correct way for their respective formats.

All in all, I find that doing this makes it easier to understand how my code is tested, and why things go wrong when they do.

16 August, 2023 08:05AM

hackergotchi for Charles Plessy

Charles Plessy

I forgot about the “make clean” command.

I can't remember the last time I used the make clean command. When I package for Debian, the work happens in a git repository, and I use the commands git clean -fdx ; git checkout . which, most of the time, I can recall from my command history via Ctrl-r. In the other cases, where the sources are not already in git, the commands git init . ; git add . ; git commit -m 'hopla' take care of the problem.

16 August, 2023 04:32AM

August 15, 2023

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#41: Using r2u in Codespaces

Welcome to the 41th post in the $R^4 series. This post draws on joint experiments first started by Grant building on the lovely work done by Eitsupi as part of our Rocker Project. In short, r2u is an ideal match for Codespaces, a Microsoft/GitHub service to run code ‘locally but in the cloud’ via browser or Visual Studio Code. This post co-serves as the README.md in the .devcontainer directory as well as a vignette for r2u.

So let us get into it. Starting from the r2u repository, the .devcontainer directory provides a small self-contained file devcontainer.json to launch an executable R environment using r2u. It is based on the example in Grant McDermott’s codespaces-r2u repo and reuses its documentation. It is driven by the Rocker Project’s Devcontainer Features repo, creating a fully functioning R environment for cloud use in a few minutes. And thanks to r2u you can easily add to this environment by installing new R packages in a fast and failsafe way.

Try it out

To get started, simply click on the green “Code” button at the top right. Then select the “Codespaces” tab and click the “+” symbol to start a new Codespace.

The first time you do this, it will open up a new browser tab where your Codespace is being instantiated. This first-time instantiation will take a few minutes (feel free to click “View logs” to see how things are progressing) so please be patient. Once built, your Codespace will deploy almost immediately when you use it again in the future.

After the VS Code editor opens up in your browser, feel free to open up the examples/sfExample.R file. It demonstrates how r2u enables us to install packages and their system dependencies with ease, here installing packages sf (including all its geospatial dependencies) and ggplot2 (including all its dependencies). You can run the code easily in the browser environment: highlight or hover over line(s) and execute them by hitting Cmd+Return (Mac) / Ctrl+Return (Linux / Windows).

(Both example screenshots reflect the initial codespaces-r2u repo as well as the personal scratchspace one we started with; both of course work here too.)

Do not forget to close your Codespace once you have finished using it. Click the “Codespaces” tab at the very bottom left of your code editor / browser and select “Close Current Codespace” in the resulting pop-up box. You can restart it at any time, for example by going to https://github.com/codespaces and clicking on your instance.

Extend r2u with r-universe

r2u offers “fast, easy, reliable” access to all of CRAN via binaries for Ubuntu focal and jammy. When using the latter (as is the default), it can be combined with r-universe and its Ubuntu jammy binaries. We demonstrate this in a second example file examples/censusExample.R, which installs both the cellxgene-census and tiledbsoma R packages as binaries from r-universe (along with about 100 dependencies), downloads single-cell data from Census and uses Seurat to create PCA and UMAP decomposition plots. Note that in order to run this you have to change the Codespaces default instance from ‘small’ (4gb ram) to ‘large’ (16gb ram).

Local DevContainer build

Codespaces are DevContainers running in the cloud (where DevContainers are themselves just Docker images running with some VS Code sugar on top). This gives you the very powerful ability to ‘edit locally’ but ‘run remotely’ in the hosted codespace. To test this setup locally, simply clone the repo and open it up in VS Code. You will need to have Docker installed and running on your system (see here). You will also need the Remote Development extension (you will probably be prompted to install it automatically if you do not have it yet). Select “Reopen in Container” when prompted. Otherwise, click the >< tab at the very bottom left of your VS Code editor and select this option. To shut down the container, simply click the same button and choose “Reopen Folder Locally”. You can always search for these commands via the command palette too (Cmd+Shift+p / Ctrl+Shift+p).

Use in Your Repo

To add this ability of launching Codespaces in the browser (or editor) to a repo of yours, create a directory .devcontainer in your selected repo, and add the file .devcontainer/devcontainer.json. You can customize it by enabling other features, or use the postCreateCommand field to install packages (while taking full advantage of r2u).

Acknowledgments

There are a few key “plumbing” pieces that make everything work here. Thanks to:

Colophon

More information about r2u is at its site, and we answered some questions in issues and at StackOverflow. More questions are always welcome!

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Originally posted 2023-08-13, minimally edited 2023-08-15, which changed the timestamp and URL.

15 August, 2023 05:25PM

Ian Jackson

DKIM: rotate and publish your keys

If you are an email system administrator, you are probably using DKIM to sign your outgoing emails. You should be rotating the key regularly and automatically, and publishing old private keys. I have just released dkim-rotate 1.0; dkim-rotate is tool to do this key rotation and publication.

If you are an email user, your email provider ought to be doing this. If this is not done, your emails are “non-repudiable”, meaning that if they are leaked, anyone (eg, journalists, haters) can verify that they are authentic, and prove that to others. This is not desirable (for you).

Non-repudiation of emails is undesirable

This problem was described at some length in Matthew Green’s article Ok Google: please publish your DKIM secret keys.

Avoiding non-repudiation sounds a bit like lying. After all, I’m advising creating a situation where some people can’t verify that something is true, even though it is. So I’m advocating casting doubt. Crucially, though, it’s doubt about facts that ought to be private. When you send an email, that’s between you and the recipient. Normally you don’t intend for anyone, anywhere, who happens to get a copy, to be able to verify that it was really you that sent it.

In practical terms, this verifiability has already been used by journalists to verify stolen emails. Associated Press provide a verification tool.

Advice for all email users

As a user, you probably don’t want your emails to be non-repudiable. (Other people might want to be able to prove you sent some email, but your email system ought to serve your interests, not theirs.)

So, your email provider ought to be rotating their DKIM keys, and publishing their old ones. At a rough guess, your provider probably isn’t :-(.

How to tell by looking at email headers

A quick and dirty way to guess is to have a friend look at the email headers of a message you sent. (It is important that the friend uses a different email provider, since often DKIM signatures are not applied within a single email system.)

If your friend sees a DKIM-Signature header then the message is DKIM signed. If they don’t, then it wasn’t. Most email traversing the public internet is DKIM signed nowadays; so if they don’t see the header probably they’re not looking using the right tools, or they’re actually on the same email system as you.
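If your friend would rather check programmatically, a minimal Python sketch (standard library only, run against a message saved in mbox format as described below) could look like this:

    import mailbox

    # Print any DKIM-related headers found in the saved message.
    for msg in mailbox.mbox("test-email.mbox"):
        for name, value in msg.items():
            if name.lower().startswith("dkim-signature"):
                print(f"{name}: {str(value)[:70]}...")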

In messages signed by a system running dkim-rotate, there will also be a header about the key rotation, to notify potential verifiers of the situation. Other systems that avoid non-repudiation-through-DKIM might do something similar. dkim-rotate’s header looks like this:

DKIM-Signature-Warning: NOTE REGARDING DKIM KEY COMPROMISE
 https://www.chiark.greenend.org.uk/dkim-rotate/README.txt
 https://www.chiark.greenend.org.uk/dkim-rotate/ae/aeb689c2066c5b3fee673355309fe1c7.pem

But an email system might do half of the job of dkim-rotate: regularly rotating the key would cause the signatures of old emails to fail to verify, which is a good start. In that case there probably won’t be such a header.

Testing verification of new and old messages

You can also try verifying the signatures. This isn’t entirely straightforward, especially if you don’t have access to low-level mail tooling. Your friend will need to be able to save emails as raw whole headers and body, un-decoded, un-rendered.

If your friend is using a traditional Unix mail program, they should save the message as an mbox file. Otherwise, ProPublica have instructions for attaching and transferring and obtaining the raw email. (Scroll down to “How to Check DKIM and ARC”.)

Checking that recent emails are verifiable

Firstly, have your friend test that they can in fact verify a DKIM signature. This will demonstrate that the next test, where the verification is supposed to fail, is working properly and fails for the right reasons.

Send your friend a test email now, and have them do this on a Linux system:

    # save the message as test-email.mbox
    apt install libmail-dkim-perl # or equivalent on another distro
    dkimproxy-verify <test-email.mbox

You should see output containing something like this:

    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: pass
    ...

If the output contains verify result: fail (body has been altered) then probably your friend didn’t manage to faithfully save the unaltered raw message.

Checking old emails cannot be verified

When you both have that working, have your friend find an older email of yours, from (say) a month ago. Perform the same steps.

Hopefully they will see something like this:

    originator address: ijackson@chiark.greenend.org.uk
    signature identity: @chiark.greenend.org.uk
    verify result: fail (bad RSA signature)

or maybe

    verify result: invalid (public key: not available)

This indicates that this old email can no longer be verified. That’s good: it means that anyone who steals a copy, can’t verify it either. If it’s leaked, the journalist who receives it won’t know it’s genuine and unmodified; they should then be suspicious.

If your friend sees verify result: pass, then they have verified that that old email of yours is genuine. Anyone who had a copy of the mail can do that. This is good for email thieves, but not for you.

For email admins: announcing dkim-rotate 1.0

I have been running dkim-rotate 0.4 on my infrastructure since last August, and I had entirely forgotten about it: it has run flawlessly for a year. I was reminded of the topic by seeing DKIM in other blog posts. Obviously, it is time to decree that dkim-rotate is 1.0.

If you’re a mail system administrator, your users are best served if you use something like dkim-rotate. The package is available in Debian stable, and supports Exim out of the box, but other MTAs should be easy to support too, via some simple ad-hoc scripting.

Limitation of this approach

Even with this key rotation approach, emails remain non-repudiable for a short period after they’re sent - typically, a few days.

Someone who obtains a leaked email very promptly, and shows it to the journalist (for example) right away, can still convince the journalist. This is not great, but at least it doesn’t apply to the vast bulk of your email archive.

There are possible email protocol improvements which might help, but they’re quite out of scope for this article.




15 August, 2023 12:16AM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Monthly report about Debian Long Term Support, July 2023 (by Santiago Ruano Rincón)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In July, 18 contributors have been paid to work on Debian LTS, their reports are available:

  • Abhijith PA did 0.0h (out of 0h assigned and 2.0h from previous period), thus carrying over 2.0h to the next month.
  • Adrian Bunk did 24.75h (out of 18.25h assigned and 6.5h from previous period).
  • Anton Gladky did 5.0h (out of 5.0h assigned and 10.0h from previous period), thus carrying over 10.0h to the next month.
  • Bastien Roucariès did 17.0h (out of 17.0h assigned and 3.0h from previous period), thus carrying over 3.0h to the next month.
  • Ben Hutchings did 14.0h (out of 24.0h assigned), thus carrying over 9.5h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Emilio Pozuelo Monfort did 24.0h (out of 24.75h assigned), thus carrying over 0.25h to the next month.
  • Guilhem Moulin did 23.25h (out of 24.75h assigned), thus carrying over 1.5h to the next month.
  • Jochen Sprickerhof did 10.0h (out of 20.0h assigned), thus carrying over 10.0h to the next month.
  • Lee Garrett did 16.0h (out of 9.75h assigned and 15.5h from previous period), thus carrying over 9.25h to the next month.
  • Markus Koschany did 24.75h (out of 24.75h assigned).
  • Ola Lundqvist did 0.0h (out of 13.0h assigned and 11.0h from previous period), thus carrying over 24.0h to the next month.
  • Roberto C. Sánchez did 19.25h (out of 14.75h assigned and 10.0h from previous period), thus carrying over 5.5h to the next month.
  • Santiago Ruano Rincón did 25.5h (out of 10.5h assigned and 15.25h from previous period), thus carrying over 0.25h to the next month.
  • Sylvain Beucler did 16.0h (out of 21.25h assigned and 3.5h from previous period), thus carrying over 8.75h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 16.0h (out of 16.0h assigned).
  • Utkarsh Gupta did 1.5h (out of 0h assigned and 13.75h from previous period), thus carrying over 12.25h to the next month.

Evolution of the situation

In July, we have released 35 DLAs.

LTS contributor Lee Garrett has continued his hard work to prepare a testing framework for Samba, which can now provision bootable VMs with little effort, both for Debian and for Windows. This work included the introduction of a new package to Debian, rhsrvany, which allows turning any Windows program or script into a Windows service. As the Samba testing framework matures, it will be possible to perform functional tests which cannot be performed with other available test mechanisms, and aspects of this framework will be generalizable to other package ecosystems beyond Samba.

July included a notable security update of bind9 by LTS contributor Chris Lamb. This update addressed a potential denial of service attack in this critical network infrastructure component.

Thanks to our sponsors

Sponsors that joined recently are in bold.

15 August, 2023 12:00AM by Santiago Ruano Rincón

August 14, 2023

hackergotchi for Jonathan McDowell

Jonathan McDowell

listadmin3: An imperfect replacement for listadmin on Mailman 3

One of the annoyances I had when I upgraded from Buster to Bullseye (yes, I’m talking about an upgrade I did at the end of 2021) is that I ended up moving from Mailman 2 to Mailman 3. Which is fine, I guess, but it meant I could no longer use listadmin to deal with messages held for moderation. At the time I looked around, couldn’t find anything, shrugged, and became incredibly bad at performing my list moderation duties.

Last week I finally accepted what I should have done at least a year ago and wrote some hopefully not too bad Python to web scrape the Mailman 3 admin interface. It then presents a list of subscribers and held messages that might need to be approved or discarded. It’s heavily inspired by listadmin, but not a faithful copy (partly because it’s been so long since I used it that I’m no longer familiar with its interface). Despite that I’ve called it listadmin3.

It currently meets the bar of “extremely useful to me” so I’ve tagged v0.1. You can get it on Github. I’d be interested in knowing if it actually works for / is useful to anyone else (I suspect it won’t be happy with interfaces configured to not be in English, but that should be solvable). Comment here or reply to my Fediverse announcement.

Example usage, cribbed directly from the README:

$ listadmin3
fetching data for partypeople@example.org ... 200 messages
(1/200) 5303: omgitsspam@example.org / March 31, 2023, 6:39 a.m.:
  The message is not from a list member: TOP PICK
(a)ccept, (d)iscard, (b)ody, (h)eaders, (s)kip, (q)uit? q
Moving on...
fetching data for admins@example.org ... 1 subscription requests
(1/1) "The New Admin" <newadmin@example.org>
(a)ccept, (d)iscard, (r)eject, (s)kip, (q)uit? a
1 messages
(1/1) 6560: anastyspamer@example.org / Aug. 13, 2023, 3:15 p.m.:
  The message is not from a list member: Buy my stuff!
(a)ccept, (d)iscard, (b)ody, (h)eaders, (s)kip, (q)uit? d
0 to accept, 1 to discard, proceed? (y/n) y
fetching data for announce@example.org ... nothing in queue
$

There’s Debian packaging in the repository (dpkg-buildpackage -uc -us -b will spit you out a .deb) but I’m holding off on filing an ITP + actually uploading until I know if it’s useful enough for others. You only really need the listadmin3 file and to ensure you have Python3 + MechanicalSoup installed.
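For the curious, the core scraping approach looks roughly like the sketch below. The URL paths and form field names are illustrative guesses about a Postorius install, not necessarily what listadmin3 actually does:

    import mechanicalsoup

    # Minimal sketch: log into the Mailman 3 (Postorius) web UI and list
    # held messages. Paths and field names below are assumptions.
    browser = mechanicalsoup.StatefulBrowser()
    browser.open("https://lists.example.org/accounts/login/")
    browser.select_form()                    # pick the login form
    browser["login"] = "admin@example.org"   # hypothetical field names
    browser["password"] = "hunter2"
    browser.submit_selected()

    page = browser.open(
        "https://lists.example.org/postorius/lists/partypeople.example.org/held_messages"
    )
    for row in page.soup.select("table tr"):
        print(row.get_text(" ", strip=True))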

(Yes, I still run mailing lists. Hell, I still run a Usenet server.)

14 August, 2023 05:28PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Encoding “The Legend of Sisyphus”

I watched this year's Assembly democompo live, as I usually do—or more precisely, I watched it on stream. I had heard that ASD would return with a demo for the first time in five years, and The legend of Sisyphus did not disappoint. At all. (No, it's not written in assembly; Assembly is the name of the party. :-) )

I do fear that this is the last time we will see such a “blockbuster” demo (meaning, roughly: a demo that is a likely candidate for best high-end demo of the year) at Assembly, or even any non-demoscene-specific party; the scene is slowly retracting into itself, choosing to clump together in their (our?) own places and seeing the presence of others mainly as an annoyance. But I digress.

Anyway, most demos these days are watched through video captures; even though Navis has done a remarkable job of making it playable on not-state-of-the-art GPUs (the initial part even runs quite well on my Intel embedded GPU, although it just stops working from there), there's something about the convenience. And for my own part, I don't even have Windows, so realtime is mostly off-limits anyway. And this is a demo that really hates your encoder; lots of motion, noise layers, sharp edges that move around erratically. So even though there are some decent YouTube captures out there (and YouTube's encoder does a surprisingly good job in most parts), I wanted to make a truly good capture. One that gives you the real feeling of watching the demo on a high-end machine, without the inevitable blocking you get from a bitrate-constrained codec in really difficult situations.

So, first problem is actually getting a decent capture; either completely uncompressed (well, losslessly compressed), or so lightly compressed that it doesn't really matter. neon put in a lot of work here with his 4070 Ti machine, but it proved to be quite difficult. For starters, it wouldn't run in .kkapture at all (I don't know the details). It ran under Capturinha, but in the most difficult places, the H.264 bitstream would simply be invalid, and an HEVC recording would not deliver frames for many seconds. Also, the recording was sometimes out-of-sync even though the demo itself played just fine (which we could compensate for, but it doesn't feel risk-free). An HDMI capture in 2560x1440 was problematic for other reasons involving EDIDs and capture cards.

So I decided to see if I could peek inside it to see if there's a recording mode in the .exe; I mean, they usually upload their own video versions to YouTube, so it would be natural to have one, right? I did find a lot of semi-interesting things (though I didn't care to dive into the decompression code, which I guess is a lot of what makes the prod work at all), but it was also clear that there was no capture mode. However, I've done one-off renders earlier, so perhaps this was a good chance?

But to do that, I would have to go from square one back to square zero, which is to have it run at all on my machine. No Windows, remember, and the prod wouldn't run at all in WINE (complaints about msvcp140.dll). Some fiddling with installing the right runtime through winetricks made it run (but do remember to use vcrun2022 and not vcrun2019, even though the DLL names are the same…), but everything was black. More looking inside the .exe revealed that this is probably an SDL issue; the code creates an “SDL renderer” (a way to software-render things onto the window), which works great on native Linux and on native Windows, but somehow not in WINE. But the renderer is never ever used for anything, so I could just nop out the call, and voila! Now I could see the dialog box. (Hey Navis, could you take out that call for the next time? You don't need BASS_Start either as long as you haven't called BASS_Stop. :-) ) The precalc is super-duper-slow under WINE due to some CPU usage issue; it seems there is maybe a lot of multi-threaded memory allocation that's not very fast on the WINE side. But whatever, I could wait those 15 minutes or so, and after that, the demo would actually run perfectly!

Next up would be converting my newfound demo-running powers into a render. I was a bit disappointed to learn that WINE doesn't contain any special function hook machinery (you would think these things are both easier and more useful in a layered implementation like that, right?), so I went for the classic hacky way of making a proxy DLL that would pretend to be some of the utility DLLs the program used, forwarding some calls and intercepting others. We need to intercept the SwapBuffers call, to save each frame to disk, and some timing functions, so that we can get perfect frame pacing with no drops, no matter how slow or fast our machine is. (You can call this a poor man's .kkapture, but my use of these techniques actually predates .kkapture by quite a bit. I was happy he made something more polished back in the day, though, as I don't want to maintain hacky Windows-only software forever.)

Thankfully for us, the strings “SDL2.dll” and “BASS.dll” are exactly the same length, so I hexedited both to say “hook.dll” instead, which supplied the remaining functions. SwapBuffers was easy; just do glReadPixels() and write the frame to disk. (I assumed I would need something faster with asynchronous readback to a PBO eventually, and probably also some PNG compression to save on I/O, but this was ludicrously fast as it was, so I never needed to change it.) Timing was trickier; the demo seems to use both _Xtick_get_time() (an internal MSVC timing function; I'd assume what Navis wrote was std::chrono, and then it got inlined into that call) and BASS. Every frame, it seems to compare those two timers, and then adjust its frame pacing to be a little faster or slower depending on which one is ahead. (Since its only options are delta * 0.97 or delta * 1.03, I'd guess it cannot ever run perfectly without jitter?) If it's more than 50 ms away from BASS, it even seeks the MP3 to match the real time! (I've heard complaints from some that the MP3 is skipping on their system, and I guess this is why.) I don't know exactly why this is done, but I'd guess there are some physical effects that can only run “from the start” (i.e., there is feedback from the previous frame) and isn't happy about too-large timesteps, so that getting a reliable time delta for each frame is important.

Anyhow, this means it's not enough to hook BASS' timer functions, but we also need _Xtick_get_time() to give the same value (namely our current frame number divided by 59.94, suitably adjusted for units). This was a bit annoying, since this function lives in a third library, and I wasn't up for manually proxying all of the MSVC runtime. After some mulling, I found an unused SDL import (showing a message box), repurposed it to be _Xtick_get_time() and simply hexedited the 2–3 call sites to point to that import. Easy peasy, and the demo rendered perfectly to 300+ gigabytes of uncompressed 1440p frames without issue. (Well, I had an overflow issue at one point that caused the demo to go awry half-way, so I needed two renders instead of one. But this was refreshingly smooth.)

I did consider hacking the binary to get an actual 2160p capture; Navis has been pretty clear that it looks better the more resolution you have, but it felt like tampering with his art in a disallowed way. (The demo gives you the choice between 720p, 1080p, and 1440p. There's also a “safe for work” option, but I'm not entirely sure what it actually does!)

That left only the small matter of the actual encoding, or, the entire point of the exercise in the first place. I had already experimented a fair bit with this based on neon's captures, and had realized the main challenge is to keep up the image quality while still having a file that people can actually play, which is much trickier than I assumed. I originally wanted to use 10-bit AV1, but even with dav1d, the players I tried could not reliably keep up the 1440p60 stream without dropping lots of frames. (It seemed to be somewhat single-thread bound, actually. mpv used 10% of that thread on updating its OSD, which sounded sort of wasted given that I didn't have it enabled.) I tried various 8- and 10-bit encodes with both aomenc and SVT-AV1, and it just wasn't going well, so I had to abandon it. The point of this exercise, after all, is to be able to conveniently view the demo in high quality without having a monster machine. (The AV1 common knowledge seems to be that you should use Av1an as a wrapper to save you a lot of pain, but it doesn't have prebuilt Linux binaries or packages, depends on a zillion Rust crates and instantly segfaulted on startup for me. I doubt it would affect the main issue much anyway.)

So I went a step down, to 10-bit HEVC, with an added bonus of a bit wider hardware support. (I know I wanted 10-bit, since 8-bit is frequently having issues in gradient-heavy content such as this, and I probably needed every bit of fidelity I could get anyway, assuming decoding could keep up.) I used Level 5.2 Main10 as a rough guide; it's what Quick Sync supports, and I would assume hardware UHD players can also generally deal with it. Level 5.2 (without the High tier), very roughly, caps the bitrate at maximum 60 Mbit/sec (I was generally using CRF encodes, but the max cap needed to come on top of that). Of course, if you just tell the encoder that 60 is max, it will happily “save up” bytes during the initial black segment (or generally, during anything that is easy) and then burst up to 500 Mbit/sec for a brief second when the cool explosions happen, so that's obviously out of the question—which means there are also buffer limitations (again very roughly, the rules say you can only use 60 Mbit on average during any given one-second window). Of course, software video players don't generally follow these specs (if they've even heard of them), so I still had some frame drops. I generally responded by tightening the buffer spec a bit, turning off a couple of HEVC features (surprisingly, the higher presets can make harder-to-decode videos!), and then accepting that slower machines (that also do not have hardware acceleration) will drop a few frames in the most difficult scenes.
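As a rough illustration of the shape of such an encode (not my exact command line: the CRF value here is made up, rates are in kbit/s, and pretend the captured frames are PNGs rather than the raw dumps I actually had):

    ffmpeg -framerate 59.94 -i frames/%06d.png -i soundtrack.mp3 \
      -map 0:v -map 1:a -c:a copy \
      -c:v libx265 -preset slow -crf 18 -pix_fmt yuv420p10le \
      -x265-params level-idc=5.2:vbv-maxrate=60000:vbv-bufsize=60000:no-sao=1 \
      output.mkv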

Over the course of a couple of days, I made dozens of test encodings using different settings, looking at them both in real time and sometimes using single-frame stepping. (Thank goodness for my 5950X!) More than once, I'd find that something that looked acceptable on my laptop was pretty bad on a 28" display, so a bit back and forth would be needed. There are many scenes that have the worst possible combination of things for a video encode; sharp edges, lots of motion, smooth gradients, motion… and actually, a bunch of scenes look smudgy (and sometimes even blocky) in a video compression-ish way, so having an uncompressed reference to compare to was useful.

Generally I try to stay away from applying too-specific settings; there's a lot of cargo culting in video encoding, and most of it is based more on hearsay than on solid testing. I will say that I chickened out and disabled SAO, though, based on “some guy said on a forum it's bad for high-sharpness high-bitrate content”, so I'm not completely guilt-free here. Also, for one scene I actually had to simply tell the encoder what to do; I added a “zone” just before it to allocate fewer bits to that (where it wasn't as noticeable), and then set the quality just right for the problematic scene to not run out of bits and go super-blocky mid-sequence. It's not perfect, but even the zones system in x265 does not allow going past the max rate (which would be outside the given level anyway, of course). I also added a little metadata to make sure hardware players know the right color primaries etc.; at least one encoding on YouTube seems to have messed this up somehow, and is a bit too bright.

Audio was easy; I just dropped in the MP3 wholesale. I didn't see the point of encoding it down to something lower-bitrate, given that anything that can decode 10-bit HEVC can also easily decode MP3, and it's maybe 0.1% of my total file size. For an AV1 encode, I'd definitely transcode to Opus since that's the WebM ecosystem for you, but this is HEVC. In a Matroska mux, though, not MP4 (better metadata support, for one).

All in all, I'm fairly satisfied with the result; it looks pretty good and plays OK on my laptop's software decoding most of the time (it plays great if I enable hardware acceleration), although I'm sure I've messed up something and it just barfs out on someone's machine. The irony is not lost on me that the file size ended up at 2.7 GB, after complaints that the demo itself is a bit over 600 MB compressed. (I do believe The Legend of Sisyphus actually defends its file size OK, although I would perhaps personally have preferred some sort of interpolation instead of storing all key frames separately at 30 Hz. I am less convinced about e.g. Pyrotech's 1+ GB production from the same party, though, or Fairlight's Mechasm. But as long as party rules allow arbitrary file sizes, we'll keep seeing these huge archives.) Even the 1440p YouTube video (in VP9) is about 1.1 GB, so perhaps I shouldn't have been surprised, but the bits really start piling on quickly for this kind of resolution and material.

If you want to look at the capture (and thus, the demo), you can find it here. And now I think finally the boulder is resting at the top of the mountain for me, after having rolled down so many times. Until the next interesting demo comes along :-)

14 August, 2023 04:30PM

August 13, 2023

hackergotchi for Gunnar Wolf

Gunnar Wolf

Back to online teaching

Mexico’s education sector had one of the longest lockdowns due to COVID: As everybody, we “went virtual” in March 2020, and it was only by late February 2022 that I went back to teach presentially at the University.

But for the semester starting next Tuesday, I’m going back to a full-online mode. Why? Because my family and I will be travelling to Argentina for six months, starting this October and until next March. When I went to ask for my teaching to be “frozen” for two semesters, the Head of Division told me he was actually looking for teachers wanting to do distance-teaching: with a student population of >380,000, no way to grow the physical infrastructure, and a city as big as Mexico City, where a person can take over 2hr to commute daily… it only makes sense to offer part of the courses online.

To be honest, I’m a bit nervous about this. The past couple of days, I’ve been setting up again the technological parts (i.e. spinning up a Jitsi instance, remembering my usual practices and programs). But… Well, I know that being videoconference-bound, my teaching will lose the dynamism that comes from talking face to face with students. I think I will miss it!

(but at the same time, I’m happy to try this anew: to go virtual, but where students choosing this modality do so by choice rather than because the world forced them to)

13 August, 2023 11:39PM

François Marier

Using iptables with systemd-networkd

I used to rely on ifupdown to bring up my iptables firewall automatically using a config like this in /etc/network/interfaces:

allow-hotplug eno1
iface eno1 inet dhcp
    pre-up iptables-restore /etc/network/iptables.up.rules

iface eno1 inet6 dhcp
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

but I wanted to modernize my network configuration and make use of systemd-networkd after upgrading one of my servers to Debian bookworm.

Since I already wrote an iptables dispatcher script for NetworkManager, I decided to follow the same approach for systemd-networkd.

I started by installing networkd-dispatcher:

apt install networkd-dispatcher

and then adding a script for the routable state in /etc/networkd-dispatcher/routable.d/iptables:

#!/bin/sh

LOGFILE=/var/log/iptables.log

if [ "$IFACE" = lo ]; then
    echo "$0: ignoring $IFACE for \`$STATE'" >> $LOGFILE
    exit 0
fi

case "$STATE" in
    routable)
        echo "$0: restoring iptables rules for $IFACE" >> $LOGFILE
        /sbin/iptables-restore /etc/network/iptables.up.rules >> $LOGFILE 2>&1
        /sbin/ip6tables-restore /etc/network/ip6tables.up.rules >> $LOGFILE 2>&1
        ;;
    *)
        echo "$0: nothing to do with $IFACE for \`$STATE'" >> $LOGFILE
        ;;
esac

before finally making that script executable (otherwise it won't run):

chmod a+x /etc/networkd-dispatcher/routable.d/iptables

With this in place, I can put my iptables rules in the usual place (/etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules) and use the handy iptables-apply and ip6tables-apply commands to test any changes to my firewall rules.

Looking at /var/log/iptables.log confirms that it is being called correctly for each network interface as they are started.

13 August, 2023 10:00PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Terrain base for 3D castle

terrain base for the castle

I designed and printed a "terrain" base for my 3D castle in OpenSCAD. The castle was the first thing I designed and printed on our (then new) office 3D printer. I use it as a test bed if I want to try something new, and this time I wanted to try procedurally generating a model.

I've released the OpenSCAD source for the terrain generator under the name Zarchscape.

mid 90s terrain generation

Lots of mid-90s games had very boxy floors

Terrain generation, 90s-style. From this article: https://web.archive.org/web/19990822085321/http://www.gamedesign.net/tutorials/pavlock/cool-ass-terrain/

Back in the 90s I spent some time designing maps/levels/arenas for Quake and its sibling games (like Half-Life), mostly in the tool Worldcraft. A lot of beginner maps (including my own) ended up looking pretty boxy. I once stumbled across a blog post that taught me a useful trick for making more natural-looking terrain. In brief: tessellate the floor region with triangle polygons, then randomly add some jitter to the z-dimension for their vertices. A really simple technique with fairly dramatic results.
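The trick is small enough to fit in a few lines; here is a Python sketch of the idea (an illustration only, not Zarchscape's OpenSCAD code):

    import random

    # Tessellate a grid into triangles, then jitter each vertex's z.
    N, JITTER = 10, 2.0
    z = [[random.uniform(-JITTER, JITTER) for _ in range(N)] for _ in range(N)]
    triangles = []
    for y in range(N - 1):
        for x in range(N - 1):
            a = (x,     y,     z[y][x])
            b = (x + 1, y,     z[y][x + 1])
            c = (x,     y + 1, z[y + 1][x])
            d = (x + 1, y + 1, z[y + 1][x + 1])
            triangles += [(a, b, c), (b, d, c)]  # two triangles per cell
    print(len(triangles), "triangles")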

OpenSCAD

Doing the same in OpenSCAD stretched me, and I think stretched OpenSCAD. It left me with some opinions which I'll try to write up in a future blog post.

Final results

multicolour

I've generated and printed the result a couple of times, including an attempt at a multicolour print.

At home, I have a large spool of brown-coloured recycled PLA, and many small lengths of samples in various colours (that I picked up at Maker Faire Czech Republic last year), including some short lengths of green.

My home printer is a Prusa Mini, and I cheaped out and didn't buy the filament runout sensor, which would detect when the current filament ran out and let me handle the situation gracefully. Instead, I added several colour change instructions to the g-code at various heights, hoping that whatever plastic I loaded for each layer was enough to get the print to the next colour change instruction.

The results are a little mixed I think. I didn't catch the final layer running out in time (forgetting that the Bowden tube also means I need to catch it running out before the loading gear, a few inches earlier than the nozzle), so the final lush green colour ends prematurely. I've also got a fair bit of stringing to clean up.

Finally, all these non-flat planes really show up some of the limitations of regular Slicing. It would be interesting to try this with a non-planar Slicer.

13 August, 2023 07:30PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#41: Using r2u in Codespaces

Welcome to the 41st post in the R^4 series. This post draws on joint experiments first started by Grant, building on the lovely work of Eitsupi as part of our Rocker Project. In short, r2u is an ideal match for Codespaces, a Microsoft/GitHub service to run code ‘locally but in the cloud’ via browser or Visual Studio Code. This post co-serves as the README.md in the .devcontainer directory as well as a vignette for r2u.

So let us get into it. Starting from the r2u repository, the .devcontainer directory provides a small self-contained file devcontainer.json to launch an executable R environment using r2u. It is based on the example in Grant McDermott’s codespaces-r2u repo and reuses its documentation. It is driven by the Rocker Project’s Devcontainer Features repo, creating a fully functioning R environment for cloud use in a few minutes. And thanks to r2u you can easily add to this environment by installing new R packages in a fast and failsafe way.

Try it out

To get started, simply click on the green “Code” button at the top right. Then select the “Codespaces” tab and click the “+” symbol to start a new Codespace.

The first time you do this, it will open up a new browser tab where your Codespace is being instantiated. This first-time instantiation will take a few minutes (feel free to click “View logs” to see how things are progressing) so please be patient. Once built, your Codespace will deploy almost immediately when you use it again in the future.

After the VS Code editor opens up in your browser, feel free to open up the examples/sfExample.R file. It demonstrates how r2u enables us to install packages and their system dependencies with ease, here installing the packages sf (including all its geospatial dependencies) and ggplot2 (including all its dependencies). You can run the code easily in the browser environment: Highlight or hover over line(s) and execute them by hitting Cmd+Return (Mac) / Ctrl+Return (Linux / Windows).

(Both example screenshots reflect the initial codespaces-r2u repo as well as a personal scratchspace one which we started with; both of course work here too.)

Do not forget to close your Codespace once you have finished using it. Click the “Codespaces” tab at the very bottom left of your code editor / browser and select “Close Current Codespace” in the resulting pop-up box. You can restart it at any time, for example by going to https://github.com/codespaces and clicking on your instance.

Extend r2u with r-universe

r2u offers “fast, easy, reliable” access to all of CRAN via binaries for Ubuntu focal and jammy. When using the latter (as is the default), it can be combined with r-universe and its Ubuntu jammy binaries. We demonstrate this in a second example file examples/censusExample.R, which installs both the cellxgene-census and tiledbsoma R packages as binaries from r-universe (along with about 100 dependencies), downloads single-cell data from Census and uses Seurat to create PCA and UMAP decomposition plots. Note that in order to run this you have to change the Codespaces default instance from ‘small’ (4gb ram) to ‘large’ (16gb ram).

Local DevContainer build

Codespaces are DevContainers running in the cloud (where DevContainers are themselves just Docker images running with some VS Code sugar on top). This gives you the very powerful ability to ‘edit locally’ but ‘run remotely’ in the hosted codespace. To test this setup locally, simply clone the repo and open it up in VS Code. You will need to have Docker installed and running on your system (see here). You will also need the Remote Development extension (you will probably be prompted to install it automatically if you do not have it yet). Select “Reopen in Container” when prompted. Otherwise, click the >< tab at the very bottom left of your VS Code editor and select this option. To shut down the container, simply click the same button and choose “Reopen Folder Locally”. You can always search for these commands via the command palette too (Cmd+Shift+p / Ctrl+Shift+p).

Use in Your Repo

To add this ability of launching Codespaces in the browser (or editor) to a repo of yours, create a directory .devcontainer in your selected repo, and add the file .devcontainer/devcontainer.json. You can customize it by enabling other features, or use the postCreateCommand field to install packages (while taking full advantage of r2u).
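A minimal file might look something like the following; the feature reference is an assumption based on the Rocker Project’s Devcontainer Features repo, so check Grant’s codespaces-r2u repo for the canonical version:

    {
        "image": "mcr.microsoft.com/devcontainers/base:jammy",
        "features": {
            "ghcr.io/rocker-org/devcontainer-features/r-apt:latest": {}
        },
        "postCreateCommand": "R -q -e 'install.packages(\"sf\")'"
    }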

Acknowledgments

There are a few key “plumbing” pieces that make everything work here. Thanks to:

Colophon

More information about r2u is at its site, and we answered some questions in issues and at stackoverflow. More questions are always welcome!

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 August, 2023 03:11PM

August 11, 2023

Ian Jackson

Private posts

I have started to make private posts, accessible only to my Dreamwidth access list.

If you’re a friend of mine and would like to be on that list, please contact me with your Dreamwidth username (or your OpenID).



comment count unavailable comments

11 August, 2023 06:57PM

Scarlett Gately Moore

KDE: Post Akademy Snap Wrap Up and Future

Skrooge snapKDE Skrooge snap

It has been a very busy couple of weeks in the KDE snap world! Here is a rundown of what has been done:

  • Solved issues with an updated mesa in Jammy causing some apps to seg fault by rebuilding our content pack. Please do a snap refresh if this happens to you.
  • Resolved our scanner apps not finding any scanners. Skanlite and Skanpage now work as expected and find your scanners, even network scanners!
  • Fixed an issue with neochat/ruqola where relaunching the application would just hang https://forum.snapcraft.io/t/neochat-autoconnect-requests/36331 and https://bugs.kde.org/show_bug.cgi?id=473003 by allowing access to the system password manager. Still working out online accounts (specifically ubuntu-sso).
  • Fixed issues with QML and styles not being found or set in many snaps.
  • Helped FreeCAD update their snap to core22 ( while not KDE, they do use the kde-neon ext ) https://github.com/FreeCAD/FreeCAD-snap/pull/92
  • New applications completed – Skrooge – Qrca – massif-visualizer
  • Updating applications to 23.04.3 – half way through – unfortunately our priority for launchpad builders is last so it is a bottleneck until I sort out how to get them to bump that up.
  • Updated our content pack to latest in snapcraft upstream for the kde-neon extension.
  • Various fixes to ease updating our snapcraft files with new releases ( to ease CI automated releases )

An update to the “Very exciting news coming soon”: While everything went well, it is not (yet!) happening. I do not have the management experience the stakeholders are looking for to run the project. I understand completely! I have the passion and project experience, just not management experience in this type of project. So with that said, are you a KDE/C++ developer with a management background and a history of bringing projects to the finish line? Are you interested in an exciting new project with new technologies? Talk to me! I can be reached via sgmoore on the various chat channels, sgmoore at kde dot org, or connect via linkedin and message: https://www.linkedin.com/in/scarlettgatelymoore If you know anyone that might be interested, please point them here!

As this project gets further delayed, it leaves me without an income still. If you or someone you know has any short term contract work let me know. Pesky bills and life expenses don’t pay themselves 🙁 If you can spare some change ( anything helps ) please consider a donation. https://gofund.me/5d0691bc

A big thank you to the community for making my work possible thus far!

11 August, 2023 04:24PM by sgmoore

Birger Schacht

Another round of rust

A couple of weeks ago I had to undergo surgery, because one of my kidneys malfunctioned. Everything went well and I’m on my way to recovery. Luckily the most recent local heat wave was over just shortly after I got home, which made being stuck at home a little easier (not sure yet when I’ll be allowed to do sports again, I miss my climbing gym…).

At first I did not have that much energy to do computer stuff, but after a week or so I was able to sit in front of the screen for short amounts of time and I started to get into writing Rust code again.

carl

The first thing I did was updating carl. I updated all the dependencies and switched the dependency that does coloring from ansi_term, which is unmaintained, to nu-ansi-term. When I then updated the clap dependency to version 4, I realized that clap now depends on the anstyle crate for text styling - so I updated carl's coloring code once again so it now uses anstyle, which led to fewer dependencies overall. Implementing this change I also did some refactoring of the code.

carl now also has its own website as well as a subdomain1.

I also added a couple of new date properties to carl, namely all weekdays as well as odd and even - this means it is now possible to choose a separate color for every weekday and have a rainbow calendar:

screenshot carl

This is included in version 0.1.0 of carl, which I published on crates.io.

typelerate

Then I started writing my first game - typelerate. It is a copy of the great typespeed, without the multiplayer support.

To describe the idea behind the game, I quote the typespeed website:

Typespeed’s idea is ripped from ztspeed (a DOS game made by Zorlim). The Idea behind the game is rather easy: type words that are flying by from left to right as fast as you can. If you miss 10 or more words, game is over.

Instead of the multiplayer support, typelerate works with UTF-8 strings and it also has another game mode: in typespeed you only type what's scrolling across the screen. In typelerate I added the option to have one or more answer strings. One of those has to be typed instead of the word flying across the screen. This lets you implement a kind of question/answer game. To be backwards compatible with the existing wordfiles from typespeed2, the wordfiles for the question/answer games contain comma separated values. The typelerate repository contains wordfiles with Python and Rust keywords as well as wordfiles where you are shown an Emoji and you have to type the corresponding Github shortcode; see the sketch below for what an entry could look like. I'm happy to add additional wordfiles (there could be for example math questions…).
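For illustration only (I am guessing at the column order here, so check the repository for real examples), an entry in the Emoji wordfile might pair the string that scrolls across the screen with its accepted answer:

    😄,:smile:
    🚀,:rocket: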

screenshot typelerate

marsrover

Another commandline game I really like, because I am fascinated by the animated ASCII graphics, is the venerable moon-buggy. In this game you have to drive a vehicle across the moon’s surface and deal with obstacles like craters or aliens.

I reimplemented the game in rust and called it marsrover:

screenshot marsrover

I published it on crates.io, and you can find the repository on github. The game uses a configuration file in $XDG_CONFIG_HOME/marsrover/config.toml - you can configure the colors of the elements as well as the levels. The game comes with four levels predefined, but you can use the configuration file to override that list of levels with levels of your own. The level properties define the probabilities of obstacles occurring on your way across the mars surface, and a points setting that defines how many points the user can get in that level (the game switches to the next level once the user reaches that many points).

[[levels]]
prob_ditch_one = 0.2
prob_ditch_two = 0.0
prob_ditch_three = 0.0
prob_alien = 0.5
points = 100

After the last level, the game generates new ones on the fly.


  1. thanks to the service from https://cli.rs. ↩︎

  2. actually, typelerate is not backwards compatible with the typespeed wordfiles, because those are not UTF-8 encoded ↩︎

11 August, 2023 08:52AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.12.6.1.0 on CRAN: New Upstream

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1092 other packages on CRAN, downloaded 30.1 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 545 times according to Google Scholar.

This release brings bugfix upstream release 12.6.1. Conrad released 12.6.0 when CRAN went on summer break; I rolled it up and ran the full reverse-dependency check against the now more than 1000 packages. Usage from one of those revealed a corner-case bug (of not always ‘flattening’ memory for sparse matrices to zero values), so 12.6.1 followed. This is what was uploaded today. As I prepared it earlier in the week when CRAN reopened, Conrad released a new 12.6.2. However, its changes are only concerned with settings for Armadillo-internal use of its random number generators (RNGs). And as RcppArmadillo connects Armadillo to the RNGs provided by R, the upgrade does not affect R users at all. It is, however, available in the github repo, in the Rcpp drat repo and at r-universe.

The set of changes for this RcppArmadillo release follows.

Changes in RcppArmadillo version 0.12.6.1.0 (2023-07-26)

  • Upgraded to Armadillo release 12.6.1 (Cortisol Retox)

    • faster multiplication of dense vectors by sparse matrices (and vice versa)

    • faster eigs_sym() and eigs_gen()

    • faster conv() and conv2() when using OpenMP

    • added diags() and spdiags() for generating band matrices from set of vectors

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 August, 2023 12:06AM

August 10, 2023

Sandro Tosi

Mastodon hook for dput-ng

If you use dput-ng, you may be familiar with the Twitter hook that tweets a message when uploading a package.

A similar hook is now available for Mastodon too; if interested, give it a try and comment on the MR

10 August, 2023 06:50PM by Sandro Tosi (noreply@blogger.com)

Petter Reinholdtsen

Invidious add-on for Kodi 20

I still enjoy Kodi and LibreELEC as my multimedia center at home. Sadly two of the services I really would like to use from within Kodi are not easily available. The most wanted add-on would be one making The Internet Archive available, and it has not been working for many years. The second most wanted add-on is one using the Invidious privacy-enhanced YouTube frontend. A plugin for this has been partly working, but has not been kept up to date in the Kodi add-on repository, and its upstream seems to have given it up in April this year, when the git repository was closed. A few days ago I got tired of this sad state of affairs and decided to have a go at improving the Invidious add-on. As Google has already attacked the Invidious concept, it needs all the support it can get. My small contribution here is to improve the service status on Kodi.

I added support to the Invidious add-on for automatically picking a working Invidious instance, instead of requiring the user to specify the URL to a specific instance after installation. I also had a look at the set of patches floating around in the various forks on github, and decided to clean up at least some of the features I liked and integrate them into my new release branch. Now the plugin can handle channel and short video items in search results. Earlier it could only handle single video instances in the search response. I also brushed up the set of metadata displayed a bit, but hope I can figure out how to get more relevant metadata displayed.

Because I only use Kodi 20 myself, I only test on version 20 and am only motivated to ensure version 20 is working. Because of API changes between version 19 and 20, I suspect it will fail with earlier Kodi versions.

I already asked to have the add-on added to the official Kodi 20 repository, and am waiting to hear back from the repo maintainers.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

10 August, 2023 05:50PM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: PTS tracker, DebConf23 Bursary, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

tracker.debian.org work by Raphaël Hertzog

Raphaël spent some time during his vacation to update distro-tracker to be fully compatible with Django 3.2 so that the codebase and the whole testsuite can run on Debian 12. There’s one exception though with the functional test suite that still needs further work to cope with the latest version of Selenium.

By dropping support of Django 2.2, Raphaël could also start to work toward support of Django 4.2 since Django helpfully emits deprecation warnings of things that will break in future versions. All the warnings have been fixed but the codebase still fails its testsuite in Django 4.2 because we have to get rid of the python3-django-jsonfield dependency (which is rightfully dead upstream since Django has native support nowadays). All the JSONField uses have been converted to Django’s native field, but the migration system still requires that dependency at this point.

This will require either some fresh reboot of the migration history, or some other trickery to erase the jsonfield dependency from the history of migrations. If you have experience with that, don’t hesitate to share it (mail at hertzog@debian.org, or reach out to buxy on IRC).

At this point, tracker.debian.org runs with Django 3.2 on Debian 11 since Debian System Administrators are not yet ready to upgrade debian.org hosts to Debian 12.

DebConf 23 bursary work by Utkarsh Gupta

Utkarsh led the bursary team this year. The bursary team got a ton of requests this time. Rolling out the results in 4 batches, the bursary team catered over 165 bursary requests - which is superb!

The team managed to address all the requests and answered a bit over 120 emails in the process. With that, the bursaries are officially closed for DebConf 2023. The team also intends to roll out some of the statistics closer to DebConf.

Miscellaneous contributions

  • Stefano implemented meson support in pybuild, the tool for building Python packages in Debian against multiple Python versions.
  • Santiago did some work on Salsa CI to enhance the ARM support on the autopkgtest job and make Josch’s branch work. MR to come soon.
  • Helmut sent patches for 6 cross build failures.
  • Stefano has been preparing for DebConf 23: Working on the website, and assisting the local teams.
  • Stefano attended the DebConf Video team sprint in Paris, mostly looking at new hardware and software options for video capture and live-mixing. Full sprint report.

10 August, 2023 12:00AM by Utkarsh Gupta

August 09, 2023

Antoine Beaupré

OpenPGP key transition

This is a short announcement to say that I have changed my main OpenPGP key. A signed statement is available with the cryptographic details but, in short, the reason is that I stopped using my old YubiKey NEO that I have worn on my keyring since 2015.

I now have a YubiKey 5 which supports ED25519, which features much shorter keys and faster decryption. It allowed me to move all my secret subkeys onto the key (including encryption keys) while retaining reasonable performance.

I have written extensive documentation on how to do that OpenPGP key rotation and also YubiKey OpenPGP operations.
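The very short version of moving one subkey looks like this (a bare sketch using the key ID from this post; the full documentation covers the backups and caveats you want to deal with first):

    gpg --edit-key anarcat@torproject.org
    gpg> key 1
    gpg> keytocard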

Warning on storing encryption keys on a YubiKey

People wishing to move their private encryption keys to such a security token should be very careful as there are special precautions to take for disaster recovery.

I am toying with the idea of writing an article specifically about disaster recovery for secrets and backups, dealing specifically with cases of death or disabilities.

Autocrypt changes

One nice change is the impact on Autocrypt headers, which are considerably shorter.

Before, the header didn't even fit on a single line in an email, it overflowed to five lines:

Autocrypt: addr=anarcat@torproject.org; prefer-encrypt=nopreference;
 keydata=xsFNBEogKJ4BEADHRk8dXcT3VmnEZQQdiAaNw8pmnoRG2QkoAvv42q9Ua+DRVe/yAEUd03EOXbMJl++YKWpVuzSFr7IlZ+/lJHOCqDeSsBD6LKBSx/7uH2EOIDizGwfZNF3u7X+gVBMy2V7rTClDJM1eT9QuLMfMakpZkIe2PpGE4g5zbGZixn9er+wEmzk2mt20RImMeLK3jyd6vPb1/Ph9+bTEuEXi6/WDxJ6+b5peWydKOdY1tSbkWZgdi+Bup72DLUGZATE3+Ju5+rFXtb/1/po5dZirhaSRZjZA6sQhyFM/ZhIj92mUM8JJrhkeAC0iJejn4SW8ps2NoPm0kAfVu6apgVACaNmFb4nBAb2k1KWru+UMQnV+VxDVdxhpV628Tn9+8oDg6c+dO3RCCmw+nUUPjeGU0k19S6fNIbNPRlElS31QGL4H0IazZqnE+kw6ojn4Q44h8u7iOfpeanVumtp0lJs6dE2nRw0EdAlt535iQbxHIOy2x5m9IdJ6q1wWFFQDskG+ybN2Qy7SZMQtjjOqM+CmdeAnQGVwxowSDPbHfFpYeCEb+Wzya337Jy9yJwkfa+V7e7Lkv9/OysEsV4hJrOh8YXu9a4qBWZvZHnIO7zRbz7cqVBKmdrL2iGqpEUv/x5onjNQwpjSVX5S+ZRBZTzah0w186IpXVxsU8dSk0yeQskblrwARAQABzSlBbnRvaW5lIEJlYXVwcsOpIDxhbmFyY2F0QHRvcnByb2plY3Qub3JnPsLBlAQTAQgAPgIbAwULCQgHAwUVCgkICwUWAgMBAAIeAQIXgBYhBI3JAc5kFGwEitUPu3khUlJ7dZIeBQJihnFIBQkacFLiAAoJEHkhUlJ7dZIeXNAP/RsX+27l9K5uGspEaMH6jabAFTQVWD8Ch1om9YvrBgfYtq2k/m4WlkMh9IpT89Ahmlf0eq+V1Vph4wwXBS5McK0dzoFuHXJa1WHThNMaexgHhqJOs
 S60bWyLH4QnGxNaOoQvuAXiCYV4amKl7hSuDVZEn/9etDgm/UhGn2KS3yg0XFsqI7V/3RopHiDT+k7+zpAKd3st2V74w6ht+EFp2Gj0sNTBoCdbmIkRhiLyH9S4B+0Z5dUCUEopGIKKOSbQwyD5jILXEi7VTZhN0CrwIcCuqNo7OXI6e8gJd8McymqK4JrVoCipJbLzyOLxZMxGz8Ki0b9O844/DTzwcYcg9I1qogCsGmZfgVze2XtGxY+9zwSpeCLeef6QOPQ0uxsEYSfVgS+onCesSRCgwAPmppPiva+UlGuIMun87gPpQpV2fqFg/V8zBxRvs6YTGcfcQjfMoBHmZTGb+jk1//QAgnXMO7fGG38YH7iQSSzkmodrH2s27ZKgUTHVxpBL85ptftuRqbR7MzIKXZsKdA88kjIKKXwMmez9L1VbJkM4k+1Kzc5KdVydwi+ujpNegF6ZU8KDNFiN9TbDOlRxK5R+AjwdS8ZOIa4nci77KbNF9OZuO3l/FZwiKp8IFJ1nK7uiKUjmCukL0od/6X2rJtAzJmO5Co93ZVrd5r48oqUvjklzzsBNBFmeC3oBCADEV28RKzbv3dEbOocOsJQWr1R0EHUcbS270CrQZfb9VCZWkFlQ/1ypqFFQSjmmUGbNX2CG5mivVsW6Vgm7gg8HEnVCqzL02BPY4OmylskYMFI5Bra2wRNNQBgjg39L9XU4866q3BQzJp3r0fLRVH8gHM54Jf0FVmTyHotR/Xiw5YavNy2qaQXesqqUv8HBIha0rFblbuYI/cFwOtJ47gu0QmgrU0ytDjlnmDNx4rfsNylwTIHS0Oc7Pezp7MzLmZxnTM9b5VMprAXnQr4rewXCOUKBSto+j4rD5/77DzXw96bbueNruaupb2Iy2OHXNGkB0vKFD3xHsXE2x75NBovtABEBAAHCwqwEGAEIACAWIQSNyQHOZBRsBIrVD7t5IVJSe3WSHgUCWZ4LegIbAgFACRB5IV
 JSe3WSHsB0IAQZAQgAHRYhBHsWQgTQlnI7AZY1qz6h3d2yYdl7BQJZngt6AAoJED6h3d2yYdl7CowH/Rp7GHEoPZTSUK8Ss7crwRmuAIDGBbSPkZbGmm4bOTaNs/gealc2tsVYpoMx7aYgqUW+t+84XciKHT+bjRv8uBnHescKZgDaomDuDKc2JVyx6samGFYuYPcGFReRcdmH0FOoPCn7bMW5mTPztV/wIA80LZD9kPKIXanfUyI3HLP0BPwZG4WTpKzJaalR1BNwu2oF6kEK0ymH3LfDiJ5Sr6emI2jrm4gH+/19ux/x+ST4tvm2PmH3BSQOPzgiqDiFd7RZoAIhmwr3FW4epsK9LtSxsi9gZ2vATBKO1oKtb6olW/keQT6uQCjqPSGojwzGRT2thEANH+5t6Vh0oDPZhrKUXRAAxHMBNHEaoo/M0sjZo+5OF3Ig1rMnI6XbKskLv6hu13cCymW0w/5E4XuYnyQ1cNC3pLvqDQbDx5mAPfBVHuqxJdRLQ3yDM/D2QIsxnkzQwi0FsJuni4vuJzWK/NHHDCvxMCh0YmSgbptUtgW8/niatd2Y6MbfRGxUHoctKtzqzivC8hKMTFrj4AbZhg/e9QVCsh5zSXtpWP0qFDJsxRMx0/432n9d4XUiy4U672r9Q09SsynB3QN6nTaCTWCIxGxjIb+8kJrRqTGwy/PElHX6kF0vQUWZNf2ITV1sd6LK/s/7sH+x4rzgUEHrsKr/qPvY3rUY/dQLd+owXesY83ANOu6oMWhSJnPMksbNa4tIKKbjmw3CFIOfoYHOWf3FtnydHNXoXfj4nBX8oSnkfhLILTJgf6JDFXfw6mTsv/jMzIfDs7PO1LK2oMK0+prSvSoM8bP9dmVEGIurzsTGjhTOBcb0zgyCmYVD3S48vZlTgHszAes1zwaCyt3/tOwrzU5JsRJVns+B/TUYaR/u3oIDMDygvE5ObWxXaFVnCC59r+zl0FazZ0ouyk2AYIR
 zHf+n1n98HCngRO4FRel2yzGDYO2rLPkXRm+NHCRvUA/i4zGkJs2AV0hsKK9/x8uMkBjHAdAheXhY+CsizGzsKjjfwvgqf84LwAzSDdZqLVE2yGTOwU0ESiArJwEQAJhtnC6pScWjzvvQ6rCTGAai6hrRiN6VLVVFLIMaMnlUp92EtgVSNpw6kANtRTpKXUB5fIPZVUrVdfEN06t96/6LE42tgifDAFyFTZY5FdHHri1GG/Cr39MpW2VqCDCtTTPVWHTUlU1ZG631BJ+9NB+ce58TmLr6wBTQrT+W367eRFBC54EsLNb7zQAspCn9pw1xf1XNHOGnrAQ4r9BXhOW5B8CzRd4nLRQwVgtw/c5M/bjemAOoq2WkwN+0mfJe4TSfHwFUozXuN274X+0Gr10fhp8xEDYuQM0qu6W3aDXMBBwIu0jTNudEELsTzhKUbqpsBc9WjwNMCZoCuSw/RTpFBV35mXbqQoQgbcU7uWZslLl9Wvv/C6rjXgd+GeX8SGBjTqq1ZkTv5UXLHTNQzPnbkNEExzqToi/QdSjFMIACnakeOSxc0ckfnsd9pfGv1PUyPyiwrHiqWFzBijzGIZEHxhNGFxAkXwTJR7Pd40a7RDxwbO6p/TSIIum41JtteehLHwTRDdQNMoyfLxuNLEtNYS0uR2jYI1EPQfCNWXCdT2ZK/l6GVP6jyB/olHBIOr+oVXqJh+48ki8cATPczhq3fUr7UivmguGwD67/4omZ4PCKtz1hNndnyYFS9QldEGo+AsB3AoUpVIA0XfQVkxD9IZr+Zu6aJ6nWq4M2bsoxABEBAAHCwXYEGAEIACACGwwWIQSNyQHOZBRsBIrVD7t5IVJSe3WSHgUCWPerZAAKCRB5IVJSe3WSHkIgEACTpxdn/FKrwH0/LDpZDTKWEWm4416l13RjhSt9CUhZ/Gm2GNfXcVTfoF/jKXXgjHcV1DHjfLUPmPVwMdqlf5ACOiFqIUM2ag/OEARh356w
 YG7YEobMjX0CThKe6AV2118XNzRBw/S2IO1LWnL5qaGYPZONUa9Pj0OaErdKIk/V1wge8Zoav2fQPautBcRLW5VA33PH1ggoqKQ4ES1hc9HC6SYKzTCGixu97mu/vjOa8DYgM+33TosLyNy+bCzw62zJkMf89X0tTSdaJSj5Op0SrRvfgjbC2YpJOnXxHr9qaXFbBZQhLjemZi6zRzUNeJ6A3Nzs+gIc4H7s/bYBtcd4ugPEhDeCGffdS3TppH9PnvRXfoa5zj5bsKFgjqjWolCyAmEvd15tXz5yNXtvrpgDhjF5ozPiNp/1EeWX4DxbH2i17drVu4fXwauFZ6lcsAcJxnvCA28RlQlmEQu/gFOx1axVXf6GIuXnQSjQN6qJbByUYrdc/cFCxPO2/lGuUxnufN9Tvb51Qh54laPgGLrlD2huQeSD9Sxa0MNUjNY0qLqaReT99Ygb2LPYGSLoFVx9iZz6sZNt07LqCx9qNgsJwsdmwYsNpMuFbc7nkWjtlEqzsXZHTvYN654p43S+hcAhmmOzQZcew6h71fAJLciiqsPBnCEdgCGFAWhZZdPkMA==

After the change, the entire key fits on a single line, neat!

Autocrypt: addr=anarcat@torproject.org; prefer-encrypt=nopreference;
 keydata=xjMEZHZPzhYJKwYBBAHaRw8BAQdAWdVzOFRW6FYVpeVaDo3sC4aJ2kUW4ukdEZ36UJLAHd7NKUFudG9pbmUgQmVhdXByw6kgPGFuYXJjYXRAdG9ycHJvamVjdC5vcmc+wpUEExYIAD4WIQS7ts1MmNdOE1inUqYCKTpvpOU0cwUCZHZgvwIbAwUJAeEzgAULCQgHAwUVCgkICwUWAgMBAAIeAQIXgAAKCRACKTpvpOU0c47SAPdEqfeHtFDx9UPhElZf7nSM69KyvPWXMocu9Kcu/sw1AQD5QkPzK5oxierims6/KUkIKDHdt8UcNp234V+UdD/ZB844BGR2UM4SCisGAQQBl1UBBQEBB0CYZha2IMY54WFXMG4S9/Smef54Pgon99LJ/hJ885p0ZAMBCAfCdwQYFggAIBYhBLu2zUyY104TWKdSpgIpOm+k5TRzBQJkdlDOAhsMAAoJEAIpOm+k5TRzBg0A+IbcsZhLx6FRIqBJCdfYMo7qovEo+vX0HZsUPRlq4HkBAIctCzmH3WyfOD/aUTeOF3tY+tIGUxxjQLGsNQZeGrQI

Note that I have implemented my own kind of ridiculous Autocrypt support for the Notmuch Emacs email client I use, see this elisp code. To import keys, I pipe the message into this script which is basically just:

sq autocrypt decode | gpg --import

... thanks to Sequoia best-of-class Autocrypt support.

Note on OpenPGP usage

While some have claimed OpenPGP's death, I believe those are overstated. Maybe it's just me, but I still use OpenPGP for my password management, to authenticate users and messages, and it's the interface to my YubiKey for authenticating with SSH servers.

I understand people feel that OpenPGP is possibly insecure, counter-intuitive and full of problems, but I think most of those problems should instead be attributed to its current flagship implementation, GnuPG. I have tried to work with GnuPG for years, and it keeps surprising me with evilness and oddities.

I have high hopes that the Sequoia project can bring some sanity into this space, and I also hope that RFC4880bis can eventually get somewhere so we have a more solid specification with more robust crypto. It's kind of a shame that this has dragged on for so long (update: there's a separate draft called openpgp-crypto-refresh that might actually be adopted as the "OpenPGP RFC" soon!), but it doesn't keep real work from happening in Sequoia and other implementations. Thunderbird rewrote their OpenPGP implementation with RNP (which was, granted, a bumpy road because it lost compatibility with GnuPG) and Sequoia now has a certificate store with trust management (but still no secret storage), preliminary OpenPGP card support and even a basic GnuPG compatibility layer. I'm also curious to try out the OpenPGP CA capabilities.

So maybe it's just because I'm becoming an old fart that doesn't want to change tools, but so far I haven't seen a good incentive in switching away from OpenPGP, and haven't found a good set of tools that completely replace it. Maybe OpenSSH's keys and CA can eventually replace it, but I suspect they will end up rebuilding most of OpenPGP anyway, just more slowly. If they do, let's hope they avoid the mistakes our community has done in the past at least...

09 August, 2023 06:18PM

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

How to setup DMARC policy for subdomain on debian.net

For setting up a subdomain on debian.net, we usually use the LDAP Gateway. [1]

db.debian.org

[1] https://db.debian.org/doc-mail.html

By changing dnsZoneEntry, we can set up each subdomain of debian.net.

For example, you can customize SPF TXT record for example.debian.net.

example IN TXT v=spf1 a:example.debian.net ~all

But when you set up a DMARC policy via dnsZoneEntry, it may cause trouble. The LDAP Gateway returns the following error:

Command is not understood. Halted - no changes committed

This is caused by changes@db.debian.org not supporting v=DMARC1 records.

Even though the LDAP Gateway doesn't support v=DMARC1 records, there is a workaround (using example.debian.net as an example):

  • Step 1. If you own your own domain, set a v=DMARC1 record on it (e.g. _dmarc.example.example.org).

TXT record of _dmarc.example.example.org is something like this:

v=DMARC1; p=quarantine; fo=s; aspf=s; rua=dmarc-reports@example.debian.net; ruf=dmarc-reports@example.debian.net

  • Step 2: Set dnsZoneEntry on debian.net

_dmarc.example IN CNAME _dmarc.example.example.org.

It means that _dmarc.example.debian.net is provided by _dmarc.example.example.org's TXT record.

Now you are ready to verify it.
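For example, with dig; the +short output should show both the CNAME target and the final TXT record:

    dig +short TXT _dmarc.example.debian.net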

09 August, 2023 10:18AM

August 08, 2023

James Valleroy

FreedomBox backport automatic update issue

When run on Debian stable, FreedomBox has an optional feature called “Frequent Feature Updates”. If this feature is enabled, it has 2 effects:

  1. The stable-backports repository is added to the system.
  2. Apt pinning is configured so that FreedomBox itself, and a small number of other carefully selected packages, will be kept updated to the latest version available in the backports repository.

However, there was a small change in the bookworm-backports repository which meant that our approach to apt pinning was no longer correct. The change is a difference between the repository’s “Suite” and “Codename”. For bullseye-backports, these were the same, but for bookworm-backports, they are now different (stable-backports vs bookworm-backports). The issue is described in [1].

The result is that a FreedomBox on the current Debian stable release, bookworm, will not automatically upgrade to new versions in bookworm-backports, even if the “Frequent Feature Updates” option is selected. The fix for the issue is a very small change to two configuration files (for apt and unattended-upgrades), so that they refer to the Codename instead of the Suite. (See [2] for details.)
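In apt_preferences(5) terms, the change amounts to pinning on the codename (n=) rather than the archive/suite (a=); a sketch of such a pin, not FreedomBox’s exact configuration, looks like:

    Package: freedombox
    Pin: release n=bookworm-backports
    Pin-Priority: 500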

So far, the fixed version of the freedombox package, 23.14~bpo12+1, is available in bookworm-backports. For FreedomBox users who would like to get the newer versions right away, you can get the latest package installed by running the following command through SSH or Cockpit’s terminal:

$ sudo apt install -t bookworm-backports freedombox

We are also planning to update the version of FreedomBox in bookworm to have the fix. Once this update is available, then running the above command won’t be necessary to receive FreedomBox feature updates.

[1] https://salsa.debian.org/freedombox-team/freedombox/-/issues/2368

[2] https://salsa.debian.org/freedombox-team/freedombox/-/merge_requests/2409

08 August, 2023 11:36PM by James Valleroy

hackergotchi for Matthew Garrett

Matthew Garrett

Updating Fedora the unsupported way

I dug out a computer running Fedora 28, which was released 2018-04-01 - over 5 years ago. Backing up the data and re-installing seemed tedious, but the current version of Fedora is 38, and while Fedora supports updates from N to N+2 that was still going to be 5 separate upgrades. That seemed tedious, so I figured I'd just try to do an update from 28 directly to 38. This is, obviously, extremely unsupported, but what could possibly go wrong?

Running sudo dnf system-upgrade download --releasever=38 didn't successfully resolve dependencies, but sudo dnf system-upgrade download --releasever=38 --allowerasing passed and dnf started downloading 6GB of packages. And then promptly failed, since I didn't have any of the relevant signing keys. So I downloaded the fedora-gpg-keys package from F38 by hand and tried to install it, and got a signature hdr data: BAD, no. of bytes(88084) out of range error. It turns out that rpm doesn't handle cases where the signature header is larger than a few K, which RPMs from modern versions of Fedora have. The obvious fix would be to install a newer version of rpm, but that wouldn't be easy without upgrading the rest of the system as well - or, alternatively, downloading a bunch of build depends and building it. Given that I'm already doing all of this in the worst way possible, let's do something different.

The relevant code in the hdrblobRead function of rpm's lib/header.c is:

int32_t il_max = HEADER_TAGS_MAX;
int32_t dl_max = HEADER_DATA_MAX;

if (regionTag == RPMTAG_HEADERSIGNATURES) {
    il_max = 32;
    dl_max = 8192;
}

which indicates that if the header in question is RPMTAG_HEADERSIGNATURES, it sets more restrictive limits on the size (no, I don't know why). So I installed rpm-libs-debuginfo, ran gdb against librpm.so.8, loaded the symbol file, and then did disassemble hdrblobRead. The relevant chunk ends up being:

0x000000000001bc81 <+81>: cmp $0x3e,%ebx
0x000000000001bc84 <+84>: mov $0xfffffff,%ecx
0x000000000001bc89 <+89>: mov $0x2000,%eax
0x000000000001bc8e <+94>: mov %r12,%rdi
0x000000000001bc91 <+97>: cmovne %ecx,%eax

which is basically "If ebx is not 0x3e, set eax to 0xfffffff - otherwise, set it to 0x2000". RPMTAG_HEADERSIGNATURES is 62, which is 0x3e, so I just opened librpm.so.8 in hexedit, went to byte 0x1bc81, and replaced 0x3e with 0xfe (an arbitrary invalid value). This has the effect of skipping the if (regionTag == RPMTAG_HEADERSIGNATURES) code and so using the default limits even if the header section in question is the signatures. And with that one byte modification, rpm from F28 would suddenly install the fedora-gpg-keys package from F38. Success!
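
The same one-byte edit could also be scripted; here's a rough Python sketch of the hexedit step, assuming the identical librpm.so.8 build (and therefore the identical offset):

# Patch the RPMTAG_HEADERSIGNATURES comparison described above.
# The 0x1bc81 offset is specific to this exact librpm.so.8 build.
with open("librpm.so.8", "r+b") as f:
    f.seek(0x1bc81)
    insn = bytearray(f.read(3))    # "cmp $0x3e,%ebx" encodes as 83 fb 3e
    insn[insn.index(0x3e)] = 0xfe  # swap the tag for an invalid value
    f.seek(0x1bc81)
    f.write(insn)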

But short-lived. dnf now believed packages had valid signatures, but sadly there were still issues. A bunch of packages in F38 had files that conflicted with packages in F28. These were largely Python 3 packages that conflicted with Python 2 packages from F28 - jumping this many releases meant that a bunch of explicit replaces and the like no longer existed. The easiest way to solve this was simply to uninstall Python 2 before upgrading, avoiding the entire transition. Another issue was that some data files had moved from libxcrypt-common to libxcrypt, and removing libxcrypt-common would remove libxcrypt and a bunch of important things that depended on it (like, for instance, systemd). So I built a fake empty package that provided libxcrypt-common and removed the actual package. Surely everything would work now?
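
A fake package like that needs little more than an almost empty spec file. Here's a minimal sketch of the idea - the package name and wording are my own, not necessarily what was actually used:

Name:      fake-libxcrypt-common
Version:   1
Release:   1
Summary:   Empty package that provides libxcrypt-common
License:   Public Domain
BuildArch: noarch
Provides:  libxcrypt-common

%description
Placeholder so the real libxcrypt-common can be removed without taking
libxcrypt - and everything that depends on it - along.

%files

Building it with rpmbuild -bb and installing the result satisfies the dependency while owning no files.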

Ha no. The final obstacle was that several packages depended on rpmlib(CaretInVersions), and building another fake package that provided that didn't work. I shouted into the void and Bill Nottingham answered - rpmlib dependencies are synthesised by rpm itself, indicating that it has the ability to handle extensions that specific packages are making use of. This made things harder, since the list is hard-coded in the binary. But since I'm already committing crimes against humanity with a hex editor, why not go further? Back to editing librpm.so.8 and finding the list of rpmlib() dependencies it provides. There were a bunch, but I couldn't really extend the list. What I could do is overwrite existing entries. I tried this a few times but (unsurprisingly) broke other things since packages depended on the feature I'd overwritten. Finally, I rewrote rpmlib(ExplicitPackageProvide) to rpmlib(CaretInVersions) (adding an extra '\0' at the end of it to deal with it being shorter than the original string) and apparently nothing I wanted to install depended on rpmlib(ExplicitPackageProvide) because dnf finished its transaction checks and prompted me to reboot to perform the update. So, I did.
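
That string rewrite is also scriptable; a hedged Python sketch of the same edit (strings as given above, NUL-padded so the file layout is untouched):

# Overwrite one synthesised rpmlib() feature name with another,
# padding with NULs so no offsets in the binary shift.
data = open("librpm.so.8", "rb").read()
old = b"rpmlib(ExplicitPackageProvide)\0"
new = b"rpmlib(CaretInVersions)\0".ljust(len(old), b"\0")
open("librpm.so.8", "wb").write(data.replace(old, new, 1))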

And about an hour later, it rebooted and gave me a whole bunch of errors due to the fact that dbus never got started. A bit of digging revealed that I had no /etc/systemd/system/dbus.service, a symlink that was presumably introduced at some point between F28 and F38 but which didn't get automatically added in my case because well who knows. That was literally the only thing I needed to fix up after the upgrade, and on the next reboot I was presented with a gdm prompt and had a fully functional F38 machine.

You should not do this. I should not do this. This was a terrible idea. Any situation where you're binary patching your package manager to get it to let you do something is obviously a bad situation. And with hindsight performing 5 independent upgrades might have been faster. But that would have just involved me typing the same thing 5 times, while this way I learned something. And what I learned is "Terrible ideas sometimes work and so you should definitely act upon them rather than doing the sensible thing", so like I said, you should not do this in case you learn the same lesson.

comment count unavailable comments

08 August, 2023 05:54AM

August 07, 2023

Thorsten Alteholz

My Debian Activities in July 2023

FTP master

This month I accepted 408 and rejected 40 packages. The overall number of packages that got accepted was 412.

Debian LTS

This was my one hundred and ninth month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 14h.

During that time I uploaded:

  • [DLA 3505-1] gst-plugins-good1.0 security update for one CVE
  • [DLA 3503-1] gst-plugins-bad1.0 security update for one CVE
  • [DLA 3504-1] gst-plugins-base1.0 security update for one CVE
  • [#1039026] the pu upload of cups was finally accepted
  • [#1039862] the pu upload of cpdb-libs was finally accepted

I also continued my work on ring and did some work on security-master.

Last but not least I did some days of frontdesk duties and took part in the LTS meeting.

Debian ELTS

This month was the sixtieth ELTS month.

  • [ELA-887-1] cups security update in Jessie and Stretch for one CVE
  • [ELA-898-1] gst-plugins-bad1.0 update in Jessie and Stretch for one CVE
  • [ELA-899-1] gst-plugins-base1.0 update in Jessie and Stretch for one CVE
  • [ELA-900-1] gst-plugins-good1.0 update in Jessie and Stretch for one CVE

Finally I found the problem with the openssl package. When I started to work on the package, it built fine without my patches. After applying some patches, the build suddenly failed, so I thought I had done something wrong with the patches. At some point I found out that it wasn’t my patches at all: a certificate used for testing had expired. It was valid for 10 years and expired just as I was working on the package. Now I just have to find out how to replace it…

Last but not least I did some days on frontdesk duties.

Debian Astro

This month I uploaded new upstream versions of packages, did source uploads for transitions, or uploaded packages to fix one or the other issue.

Other stuff

This month I did uploads of new packages.

07 August, 2023 05:10PM by alteholz

August 06, 2023

Sam Hartman

AI Tools

I wrote about how I’m exploring the role of AI in human connection and intimacy. The first part of that journey has been all about learning the software and tools for approaching large language models.

The biggest thing I wish I had known going in was not to focus on the traditional cloud providers. I was struggling until I found runpod.io. I kind of assumed that if you were willing to pay for it and had the money, you could go to Amazon or Google or whatever and get the compute resources you needed. Not so much. Google completely rejected my request to have the maximum number of GPUs I could run raised above a limit of 0. “Go talk to your sales representative.” And of course no sales representative was willing to waste their time on me. But I did eventually find some of the smaller AI-specific clouds.

I intentionally wanted to run software myself. Everyone has various fine-tuning and training APIs as well as APIs for inference. I thought I’d gain a much better understanding if I wrote my own code. That definitely ended up being true. I started by understanding PyTorch and the role of optimizers, gradient descent and what a model is. Then I focused on Transformers and that ecosystem, including Accelerate, tokenizers, generation and training.
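
To make that concrete, here is a toy version of the optimizer/gradient-descent loop I mean - a minimal sketch of my own, fitting y = 2x with a single linear layer:

import torch

model = torch.nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(64, 1)
y = 2 * x

for _ in range(200):
    opt.zero_grad()                                   # clear old gradients
    loss = torch.nn.functional.mse_loss(model(x), y)  # forward pass
    loss.backward()                                   # backpropagation
    opt.step()                                        # gradient-descent update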

I’m really impressed with the Hugging Face ecosystem. A lot of academic software is very purpose-built and is hard to reuse and customize. But the hub strikes an amazing balance: it provides abstractions for common interfaces, like consuming a model or a dataset, without getting in the way of hacking on or evolving the models.

I had a great time, and after a number of false starts, succeeded in customizing Llama2 to explore some of the questions on my mind. I’ll talk about what I accomplished and learned in the next post.
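
For a flavor of what the Transformers side of that looks like, here is a minimal sketch of loading and prompting Llama2 (the checkpoint name and prompt are illustrative assumptions, and the fine-tuning itself is omitted):

from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; gated on the hub
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("What makes a conversation feel intimate?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))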



comment count unavailable comments

06 August, 2023 10:25PM

August 05, 2023

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

RIP Bram Moolenaar

It was with surprise that I learned that Bram Moolenaar, author of the Vim editor, had died two days ago. I cannot say we were ever friends, but I sat next to Bram for a year or so in the Google Zurich office and learned to recognize his ways and his work in Google (he worked on, among others, autocorrecting searches in Google Apps). He didn't talk much about Vim (not to me, anyway), but even at work, he kept up his life-long advocacy of the ICCF charity, helping children in Uganda.

I am fairly certain that without Bram, we would not see a situation where generation after generation of programmers, even new ones, are still learning, using and loving a descendant of the venerable vi(1) editor (which today in practice means Vim or its fork Neovim). Of course, there's no longer world dominance, with VSCode being the biggest player in this landscape, but there's a ubiquity there; if you log in somewhere and it has bash, it most likely has Vim as well.

:wq, and all the best wishes to his family and friends left behind. You will never be forgotten, Bram.

05 August, 2023 04:36PM

hackergotchi for Bits from Debian

Bits from Debian

Debian Project Bits Volume 1, Issue 1


Debian Project Bits Volume 1, Issue 1 August 05, 2023

Welcome to the inaugural issue of Debian Project Bits!

Those remembering the Debian Weekly News (DwN) will recognize some of the sections here which served as our inspiration.

Debian Project Bits posts will allow for a faster turnaround of some project news on a monthly basis. The Debian Micronews service will continue to share shorter news items, while the Debian Project News remains our official newsletter, which may move to a biannual archive format.

News

Debian Day

The Debian Project was officially founded by Ian Murdock on August 16, 1993. Since then we have celebrated our Anniversary of that date each year with events around the world. We would love it if you could join our revels this very special year as we have the honor of turning 30!

Attend or organize a local Debian Day celebration. You're invited to plan your own event: from Bug Squashing parties to Key Signing parties, Meet-Ups, or any type of social event whether large or small. And be sure to check our Debian reimbursement How To if you need such resources.

You can share your days, events, thoughts, or notes with us and the rest of the community with the #debianday tag that will be used across most social media platforms. See you then!

Events: Upcoming and Reports

Upcoming

Debian 30 anos

The Debian Brasil Community is organizing the event Debian 30 anos to celebrate the 30th anniversary of the Debian Project.

From August 14 to 18, between 7pm and 10pm (UTC-3), contributors will talk online in Portuguese, and we will live stream on the Debian Brasil YouTube channel.

DebConf23: Debian Developers Camp and Conference

The 2023 Debian Developers Camp (DebCamp) and Conference (DebConf23) will be hosted this year in Infopark, Kochi, India.

DebCamp is slated to run from September 3 through 9, immediately followed by the larger DebConf, September 10 through 17.

If you are planning on attending the conference this year, now is the time to ensure your travel documentation, visa information, bursary submissions, papers and relevant equipment are prepared. For more information contact: debconf@debconf.org.

MiniDebConf Cambridge 2023

There will be a MiniDebConf held in Cambridge, UK, hosted by ARM for 4 days in November: 2 days for a mini-DebCamp (Thu 23 - Fri 24), with space for dedicated development / sprint / team meetings, then two days for a more regular MiniDebConf (Sat 25 - Sun 26) with space for more general talks, up to 80 people.

Reports

During the last months, the Debian Community has organized some Bug Squashing Parties:

Tilburg, Netherlands. October 2022.

St-Cergue, Switzerland. January 2023.

Montreal, Canada. February 2023.

In January, Debian India hosted the MiniDebConf Tamil Nadu in Viluppuram, Tamil Nadu, India (Sat 28 - Sun 29).

The following month, the MiniDebConf Portugal 2023 was held in Lisbon (12 - 16 February 2023).

These events, seen as a stunning success by some of their attendees, demonstrate the vitality of our community.

Debian Brasil Community at Campus Party Brazil 2023

Another edition of Campus Party Brazil took place in the city of São Paulo between July 25th and 30th, and once again the Debian Brasil Community was present. During those days, in the space available to us, we carried out activities such as:

  • Gifts for attendees (stickers, cups, lanyards);
  • Workshop on how to contribute to the translation team;
  • Workshop on packaging;
  • Key signing party;
  • Information about the project;

For more info and a few photos, check out the organizers' report.

MiniDebConf Brasília 2023

From May 25 to 27, Brasília hosted the MiniDebConf Brasília 2023. This gathering was composed of various activities such as talks, workshops, sprints, BSPs (Bug Squashing Party), key signings, social events, and hacking, aimed to bring the community together and celebrate the world's largest Free Software project: Debian.

For more information please see the full report written by the organizers.

Debian Reunion Hamburg 2023

This year the annual Debian Reunion Hamburg was held from Tuesday 23 to 30 May starting with four days of hacking followed by two days of talks, and then two more days of hacking. As usual, people - more than forty-five attendees from Germany, Czechia, France, Slovakia, and Switzerland - were happy to meet in person, to hack and chat together, and much more. If you missed the live streams, the video recordings are available.

Translation workshops from the pt_BR team

The Brazilian translation team, debian-l10n-portuguese, had their first workshop of 2023 in February with great results. The workshop was aimed at beginners, working in DDTP/DDTSS.

For more information please see the full report written by the organizers.

And on June 13 another workshop took place to translate The Debian Administrator's Handbook. The main goal was to show beginners how to collaborate in the translation of this important material, which has existed since 2004. The manual's translations are hosted on Weblate.

Releases

Stable Release

Debian 12 bookworm was released on June 10, 2023. This new version becomes the stable release of Debian and moves the prior Debian 11 bullseye release to oldstable status. The Debian community celebrated the release with 23 Release Parties all around the world.

Bookworm's first point release, 12.1, addressing miscellaneous bug fixes affecting 88 packages along with documentation and installer updates, was made available on July 22, 2023.

RISC-V support

riscv64 has recently been added to the official Debian architectures for support of 64-bit little-endian RISC-V hardware running the Linux kernel. We expect to have full riscv64 support in Debian 13 trixie. Updates on bootstrap, build daemon, porterbox, and development progress were recently shared by the team in a Bits from the Debian riscv64 porters post.

non-free-firmware

The Debian 12 bookworm archive now includes non-free-firmware; please be sure to update your apt sources.list if your system requires such components for operation. If your previous sources.list included non-free for this purpose, it may safely be removed.

apt sources.list

The Debian archive holds several components:

  • main: Contains DFSG-compliant packages, which do not rely on software outside this area to operate.
  • contrib: Contains packages that contain DFSG-compliant software, but have dependencies not in main.
  • non-free: Contains software that does not comply with the DFSG.
  • non-free-firmware: Firmware that is otherwise not part of the Debian system to enable use of Debian with hardware that requires such firmware.
Example of the sources.list file
deb http://deb.debian.org/debian bookworm main
deb-src http://deb.debian.org/debian bookworm main

deb http://deb.debian.org/debian-security/ bookworm-security main
deb-src http://deb.debian.org/debian-security/ bookworm-security main

deb http://deb.debian.org/debian bookworm-updates main
deb-src http://deb.debian.org/debian bookworm-updates main
Example using the components:
deb http://deb.debian.org/debian bookworm main non-free-firmware
deb-src http://deb.debian.org/debian bookworm main non-free-firmware

deb http://deb.debian.org/debian-security/ bookworm-security main non-free-firmware
deb-src http://deb.debian.org/debian-security/ bookworm-security main non-free-firmware

deb http://deb.debian.org/debian bookworm-updates main non-free-firmware
deb-src http://deb.debian.org/debian bookworm-updates main non-free-firmware

For more information and guidelines on proper configuration of the apt sources.list file please see the Configuring Apt Sources - Wiki page.

Inside Debian

New Debian Members

Please welcome the following newest Debian Project Members:

  • Marius Gripsgard (mariogrip)
  • Mohammed Bilal (rmb)
  • Emmanuel Arias (amanu)
  • Robin Gustafsson (rgson)
  • Lukas Märdian (slyon)
  • David da Silva Polverari (polverari)

To find out more about our newest members or any Debian Developer, look for them on the Debian People list.

Security

Debian's Security Team releases current advisories on a daily basis. Some recently released advisories concern these packages:

trafficserver Several vulnerabilities were discovered in Apache Traffic Server, a reverse and forward proxy server, which could result in information disclosure or denial of service.

asterisk A flaw was found in Asterisk, an Open Source Private Branch Exchange. A buffer overflow vulnerability affects users that use PJSIP DNS resolver. This vulnerability is related to CVE-2022-24793. The difference is that this issue is in parsing the query record parse_query(), while the issue in CVE-2022-24793 is in parse_rr(). A workaround is to disable DNS resolution in PJSIP config (by setting nameserver_count to zero) or use an external resolver implementation instead.

flask It was discovered that in some conditions the Flask web framework may disclose a session cookie.

chromium Multiple security issues were discovered in Chromium, which could result in the execution of arbitrary code, denial of service or information disclosure.

Other

Popular packages

gpgv - GNU privacy guard signature verification tool. 99,053 installations.     gpgv is actually a stripped-down version of gpg which is only able to check signatures. It is somewhat smaller than the fully-blown gpg and uses a different (and simpler) way to check that the public keys used to make the signature are valid. There are no configuration files and only a few options are implemented.

dmsetup - Linux Kernel Device Mapper userspace library. 77,769 installations.     The Linux Kernel Device Mapper is the LVM (Linux Logical Volume Management) Team's implementation of a minimalistic kernel-space driver that handles volume management, while keeping knowledge of the underlying device layout in user-space. This makes it useful for not only LVM, but software raid, and other drivers that create "virtual" block devices.

sensible-utils - Utilities for sensible alternative selection. 96,001 daily users.     This package provides a number of small utilities which are used by programs to sensibly select and spawn an appropriate browser, editor, or pager. The specific utilities included are: sensible-browser, sensible-editor, and sensible-pager.

popularity-contest - The popularity-contest package. 90,758 daily users.     The popularity-contest package sets up a cron job that will periodically anonymously submit to the Debian developers statistics about the most used Debian packages on the system. This information helps Debian make decisions such as which packages should go on the first CD. It also lets Debian improve future versions of the distribution so that the most popular packages are the ones which are installed automatically for new users.

New and noteworthy packages in unstable

Toolkit for scalable simulation of distributed applications      SimGrid is a toolkit that provides core functionalities for the simulation of distributed applications in heterogeneous distributed environments. SimGrid can be used as a Grid simulator, a P2P simulator, a Cloud simulator, a MPI simulator, or a mix of all of them. The typical use-cases of SimGrid include heuristic evaluation, application prototyping, and real application development and tuning. This package contains the dynamic libraries and runtime.

LDraw mklist program     3D CAD programs and rendering programs using the LDraw parts library of LEGO parts rely on a file called parts.lst containing a list of all available parts. The program ldraw-mklist is used to generate this list from a directory of LDraw parts.

Open Lighting Architecture - RDM Responder Tests      The DMX512 standard for Digital MultipleX is used for digital communication networks commonly used to control stage lighting and effects. The Remote Device Management protocol is an extension to DMX512, allowing bi-directional communication between RDM-compliant devices without disturbing other devices on the same connection. The Open Lighting Architecture (OLA) provides a plugin framework for distributing DMX512 control signals. The ola-rdm-tests package provides an automated way to check protocol compliance in RDM devices.

parsec-service      Parsec is an abstraction layer that can be used to interact with hardware-backed security facilities such as the Hardware Security Module (HSM), the Trusted Platform Module (TPM), as well as firmware-backed and isolated software services. The core component of Parsec is the security service, provided by this package. The service is a background process that runs on the host platform and provides connectivity with the secure facilities of that host, exposing a platform-neutral API that can be consumed into different programming languages using a client library. For a client library implemented in Rust see the package librust-parsec-interface-dev.

Simple network calculator and lookup tool     Process and lookup network addresses from the command line or CSV with ripalc. Output has a variety of customisable formats.

High performance, open source CPU/GPU miner and RandomX benchmark     XMRig is a high performance, open source, cross platform RandomX, KawPow, CryptoNight, and GhostRider unified CPU/GPU miner and RandomX benchmark.

Ping, but with a graph - Rust source code     This package contains the source for the Rust gping crate, packaged by debcargo for use with cargo and dh-cargo.

Once upon a time in Debian:

2014-07-31 The Technical committee choose libjpeg-turbo as the default JPEG decoder.

2010-08-01 DebConf10 starts in New York City, USA

2007-08-05 Debian Maintainers approved by vote

2009-08-05 Jeff Chimene files bug #540000 against live-initramfs.

Calls for help

The Publicity team calls for volunteers and help!

Your Publicity team is asking for help from you, our readers, developers, and interested parties, to contribute to the Debian news effort. We implore you to submit items that may be of interest to our community, and we also ask for your assistance with translations of the news into (your!) other languages, along with the needed second or third set of eyes to assist in editing our work before publishing. If you can share a small amount of your time to aid our team, which strives to keep all of us informed, we need you. Please reach out to us via IRC on #debian-publicity on OFTC.net, on our public mailing list, or via email at press@debian.org for sensitive or private inquiries.

05 August, 2023 10:30AM by Jean-Pierre Giraud, Joost van Baal-Ilić, Carlos Henrique Lima Melara, Donald Norwood, Paulo Henrique de Lima Santana

August 04, 2023

John Goerzen

Try the Last Internet Kermit Server

$ grep kermit /etc/services
kermit          1649/tcp

What is this mysterious protocol? Who uses it and what is its story?

This story is a winding one, beginning in 1981. Kermit is, to the best of my knowledge, the oldest actively-maintained software package with an original developer still participating. It is also a scripting language, an Internet server, a (scriptable!) SSH client, and a file transfer protocol.

And my first use of it was talking to my HP-48GX calculator over a 9600bps serial link. Yes, that calculator had a Kermit server built in.

But let’s back up and talk about serial ports and Modems.

Serial Ports and Modems

In my piece The PC & Internet Revolution in Rural America, I recently talked about getting a modem – what an excitement it was to get one! I realize that many people today have never used a serial line or a modem, so let’s briefly discuss.

Before Ethernet and Wifi took off in a big way, in the 1990s-2000s, two computers would talk to each other over a serial line and a modem. By modern standards, these were slow; 300bps was a common early speed. They also (at least in the beginning) had no kind of error checking. Characters could be dropped or changed. Sometimes even those speeds were faster than the receiving device could handle. Some serial links were 7-bit, and wouldn’t even pass all 7-bit characters; for instance, sending a Ctrl-S could lock up a remote until you sent Ctrl-Q.

And computers back in the 1970s and 1980s weren’t as uniform as they are now. They used different character sets, different line endings, and even had different notions of what a file is. Today’s notion of a file as whatever set of binary bytes an application wants it to be was by no means universal; some systems treated a file as a set of fixed-length records, for instance.

So there were a lot of challenges in reliably moving files between systems. Kermit was introduced to reliably move files between systems using serial lines, automatically working around the varieties of serial lines, detecting errors and retransmitting, managing transmit speeds, and adapting between architectures as appropriate. Quite a task! And perhaps this explains why it was supported on a calculator with a primitive CPU by today’s standards.

Serial communication, by the way, is still commonplace, though now it isn’t prominent in everyone’s home PC setup. It’s used a lot in industrial equipment, avionics, embedded systems, and so forth.

The key point about serial lines is that they aren’t inherently multiplexed or packetized. Whereas an Ethernet network is designed to let many dozens of applications use it at once, a serial line typically runs only one (unless it is something like PPP, which is designed to do multiplexing over the serial line).

So it became useful to be able to both log in to a machine and transfer files with it. That is, incidentally, still useful today.

Kermit and XModem/ZModem

I wondered: why did we end up with two diverging sets of protocols, created at about the same time? The Kermit website has the answer: essentially, BBSs could assume 8-bit clean connections, so XModem and ZModem had much less complexity to worry about. Kermit, on the other hand, was highly flexible. Although ZModem came out a few years before Kermit had its performance optimizations, by about 1993 Kermit was on par with or faster than ZModem.

Beyond serial ports

As LANs and the Internet came to be popular, people started to use telnet (and later ssh) to connect to remote systems, rather than serial lines and modems. FTP was an early way to transfer files across the Internet, but it had its challenges. Kermit added telnet support, as well as later support for ssh (as a wrapper around the ssh command you already know). Now you could easily log in to a machine and exchange files with it without missing a beat.

And so it was that the Internet Kermit Service Daemon (IKSD) came into existence. It allows a person to set up a Kermit server, which can authenticate against local accounts or present anonymous access akin to FTP.

And so I established the quux.org Kermit Server, which runs the Unix IKSD (part of the Debian ckermit package).

Trying Out the quux.org Kermit Server

There are more instructions on the quux.org Kermit Server page! You can connect to it using either telnet or the kermit program. I won’t duplicate all of the information here, but here’s what it looks like to connect:

$ kermit
C-Kermit 10.0 Beta.08, 15 Dec 2022, for Linux+SSL (64-bit)
 Copyright (C) 1985, 2022,
  Trustees of Columbia University in the City of New York.
  Open Source 3-clause BSD license since 2011.
Type ? or HELP for help.
(/tmp/t/) C-Kermit>iksd /user:anonymous kermit.quux.org
 DNS Lookup...  Trying 135.148.101.37...  Reverse DNS Lookup... (OK)
Connecting to host glockenspiel.complete.org:1649
 Escape character: Ctrl-\ (ASCII 28, FS): enabled
Type the escape character followed by C to get back,
or followed by ? to see other options.
----------------------------------------------------

 >>> Welcome to the Internet Kermit Service at kermit.quux.org <<<

To log in, use 'anonymous' as the username, and any non-empty password

Internet Kermit Service ready at Fri Aug  4 22:32:17 2023
C-Kermit 10.0 Beta.08, 15 Dec 2022
kermit

Enter e-mail address as Password: [redacted]

Anonymous login.

You are now connected to the quux kermit server.

Try commands like HELP, cd gopher, dir, and the like.  Use INTRO
for a nice introduction.

(~/) IKSD>
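
(If you don't have the kermit program handy, a plain telnet kermit.quux.org 1649 - the kermit port from /etc/services above - should reach the same service, though without the file-transfer integration.)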

You can even recursively download the entire Kermit mirror: over 1GB of files!

Conclusions

So, have fun. Enjoy this experience from the 1980s.

And note that Kermit also makes a better ssh client than ssh in a lot of ways; see ideas on my Kermit page.

This page also has a permanent home on my website, where it may be periodically updated.

04 August, 2023 10:51PM by John Goerzen

hackergotchi for Shirish Agarwal

Shirish Agarwal

License Raj 2.0, 2023

About a week back Jio launched a laptop called JioBook that will be manufactured in China –

The most interesting thing is that the whole thing will be produced in Hunan, China. Then, 3 days later, India mandated a licensing requirement for Apple, Dell and other laptop/tablet manufacturers, all in the guise of ‘Make in India’. It is similar to how India has exempted Adani and the Tatas from buying as many solar cells as are needed and then selling the same in India. Reliance will basically be monopolizing the laptop business. And if people think that projects like Raspberry Pi, Arduino etc. will be exempted, they have another think coming.

History of License Raj

After India became free, the Congress in the 1980s wanted to open its markets to the world just like China did. But at that time the BJP, though small (via the Jan Sangh), made the argument that we were not ready for the world; the Indian businessman needed a bit more time. And hence a compromise was made. The compromise was simple: Indian industry, and people who wanted to get anything from the west, needed a license. This was very much in line with how the Russian economy was evolving. All three nations - India, China and Russia - were on similar paths. China broke away when it opened up limited markets for competition and gave state support to its firms. Russia and Japan, on the other hand, kept their markets relatively closed. What happened in Russia and elsewhere happened in India too: the businessman got what he wanted, and he simply corrupted the system. Reliance, the conglomerate of today, abused that same system as much as it could. Its defence was to be seen as the small guy. I won't go into that, as it would be a big story in itself. Whatever was sold in India was sold with huge commissions, and just like in Russia, scarcity became the order of the day. Monopolies flourished and competition was nowhere to be seen. This remained the case till 1991, when Manmohan Singh (then Finance Minister, later Prime Minister) was forced to liberalize and open up the markets. Even at that time, the RSS through its Swadeshi Jagran Manch was sharing end-of-the-world prophecies for the Indian businessman.

2014 – Current Regime

In 2010, the Conservative party came to power in the U.K. under the leadership of David Cameron, who was influenced by the policies of Margaret Thatcher - policies that arguably ditched manufacturing in the UK. From 2010 onwards David Cameron and his party did the same to public services under the name of austerity. India has been doing the same. Inequality has gone up while people's purchasing power has gone drastically down. CMIE figures are much more drastic, and education is a joke.

Add to that, since 2016 funding for scientists has gone to the dogs, and now they are even playing with doctors' careers. I do not have to remind people that a woman scientist took almost a quarter of a century to find a drug delivery system that others said was impossible. And she did it using public finance. Science is hard. I have already shared in a previous blog post how it took the Chinese 20 years to reach where they are, and somehow we think we will be able to match both China and Japan. Of the 160-odd countries on planet Earth, only a handful have both the means and the knowledge to use and expand on that.

While I was not part of the Taiwan DebConf, I later came to know that even Taiwan is in many ways similar to Japan, in the sense that a majority of its population is stuck in low-paid jobs (apart from those employed by TSMC), much like under the keiretsu of Japan or the chaebol of South Korea. In all these cases, only a small percentage of the economy is going forward while the rest is stagnating or even going backwards. Similar is the case in India as well 😦

Unlike the Americans, who chose the path of more competition, we have chosen the path of more monopolies. So even though I very much liked Louis's project, sooner or later finding the devices themselves would be hard. While the recent notification is for laptops, what stops them from doing the same with mobiles or even desktop systems? As it is, sales of both smartphones and desktop systems have been contracting since last year as food inflation has gone up.

Add to that, availability of products has been made scarce (whether by design or not is unknown). The end result: the latest processor launched overseas becomes the new thing here 3-4 years later. And that was before this notification. This will only decrease competition and make the Ambanis rich at the cost of everyone else. So much for ‘ease of doing business’. Also, the backlash has been pretty tepid, so what I shared will probably happen again sooner or later.

The only interesting thing is that it's based on Android, probably in part due to the issues people are seeing in Windows 10, 11 and whatnot.

Till later.

Update :- The Print tried a decluttering but instead cluttered the topic. While all that was shared there is true, and certainly this is a step backwards, they didn't need to show how most Indians had to go to the RBI for the same. I remember my Mamaji doing just that, and sharing afterwards that all he had was $100 for a day - which, while being a big sum, was paltry if you were staying in a hotel and were there on company business. He survived on bananas and whatever cheap vegetables he could find. This was almost 35-40 odd years ago. As shared, the Govt. has been making missteps for quite some time now. The Print does try to take a balanced line so it doesn't run counter to the Government, but even it knows that this is a bad take. The whole thing about security is just laughable; did they wake up after 9 years? And now, in its own wisdom, the Government has apparently shifted the ban from taking effect now to 3 months later. Of course, most people on the right are just applauding without understanding the complexities and implications of the same. Vendors like Samsung and Apple, who have set up assembly operations here, would think twice and shift to Taiwan, Vietnam, Mexico, anywhere. Global money follows global trends. And such missteps do not help 😦

Implications in A.I. products

One of the things that has not been thought about is how companies making A.I. products in India, including MNCs, will suffer. Most of them right now are in stealth mode but are building for Intel or AMD or ARM, depending upon what works for them. There is no way to tell whether the companies made their plea, and whether it was heard or unheard. If the Government doesn't revert the notification, then sooner or later they will either have to go abroad or cash out/sell to somebody else. Some people on the right also know this but, for whatever reason, have chosen to remain silent.

Till later 😦

04 August, 2023 08:56PM by shirishag75