Holger says
sq network keyserver search $id ; sq cert export --cert=$id > $id.asc
This is also known as: "ifconfig is not installed by default
anymore, how do I do this only with the ip command?"
I have been slowly training my brain to use the new commands but I
sometimes forget some. So, here are a couple of equivalences from the
old net-tools package to the new iproute2, about 10 years late:
| net-tools | iproute2 | shorter form | what it does |
|---|---|---|---|
| arp -an | ip neighbor | ip n | show the ARP table (neighbor cache) |
| ifconfig | ip address | ip a | show current IP address |
| ifconfig | ip link | ip l | show link stats (up/down/packet counts) |
| route | ip route | ip r | show or modify the routing table |
| route add default GATEWAY | ip route add default via GATEWAY | ip r a default via GATEWAY | add default route to GATEWAY |
| route del ROUTE | ip route del ROUTE | ip r d ROUTE | remove ROUTE (e.g. default) |
| netstat -anpe | ss --all --numeric --processes --extended | ss -anpe | list listening processes, less pretty |
Also note that I often alias ip to ip -br -c as it provides a
much prettier output.
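That alias, for reference, is just a one-liner in your shell startup file (assuming a Bourne-style shell such as bash or zsh):

alias ip='ip -br -c'   # -br: brief output, -c: colour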
Compare, before:
anarcat@angela:~> ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff permaddr xx:xx:xx:xx:xx:xx
altname wlp166s0
altname wlx8cf8c57333c7
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
20: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 192.168.0.108/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
valid_lft 40699sec preferred_lft 40699sec
After:
anarcat@angela:~> ip -br -c a
lo UNKNOWN 127.0.0.1/8 ::1/128
wlan0 DOWN
virbr0 DOWN 192.168.122.1/24
eth0 UP 192.168.0.108/24
I don't even need to redact MAC addresses! It also affects the display of the other commands, which look similarly neat.
Also imagine pretty colors above.
Finally, I don't have a cheat sheet for iw vs iwconfig (from
wireless-tools) yet. I just use NetworkManager now and rarely have
to mess with wireless interfaces directly.
For context, there are traditionally two ways of configuring the network in Linux:

- the ifconfig, arp, route and netstat commands, which are part of the net-tools package
- the ip command, which is part of the iproute2 package

It seems like the latter was made "important" in Debian in 2008,
which means every release since Debian 5 "lenny" has featured the
ip command.
The former net-tools package was demoted in December 2016 which
means every release since Debian 9 "stretch" ships without an
ifconfig command unless explicitly requested. Note that this was
mentioned in the release notes in a similar (but, IMHO, less
useful) table.
(Technically, the net-tools Debian package source still indicates it
is Priority: important but that's a bug I have just filed.)
Finally, and perhaps more importantly, the name iproute is hilarious
if you are a bilingual french speaker: it can be read as "I proute"
which can be interpreted as "I fart" as "prout!" is the sound a fart
makes. The fact that it's called iproute2 makes it only more
hilarious.
The FAI.me service has reached another milestone:
The 42.000th job was submitted via the web interface since the beginning of this service in 2017.
The idea was to provide a simple web interface for end users for creating the configs for the fully automatic installation with only minimal questions and without knowing the syntax of the configuration files. Thanks a lot for using this service and for all your feedback.
The next job can be yours!
P.S.: I would like to get more feedback for the FAI.me service. What do you like most? What's missing? Do you have any success story about how you use the customized ISO for your deployment? Please fill out the FAI questionnaire or send feedback via email to fai.me@fai-project.org
FAI.me is the service for building your own customized images via a web interface. You can create an installation or live ISO or a cloud image. For Debian, multiple release versions can be chosen, as well as installations for Ubuntu Server, Ubuntu Desktop, or Linux Mint.
Multiple options are available, like selecting different desktop environments, the language and keyboard, and adding a user with a password. Optional settings include adding your own package list, choosing a backports kernel, adding a postinst script and an ssh public key, choosing a partition layout, and some more.
The eighteenth release of the qlcal package arrived at CRAN today. There have been no calendar updates in QuantLib 1.41 or 1.42, so it has been relatively quiet since the last release last summer, but we have now added a nice new feature (more below), leading to a new minor release version.
qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.
This release makes it (much) easier to work with multiple calendars. The previous setup remains: the package keeps one ‘global’ (and hidden) calendar object which can be set, queried, altered, etc. But now we added the ability to hold instantiated calendar objects in R. These are external pointer objects, and we can pass them to functions requiring a calendar. If no such optional argument is given, we fall back to the global default as before. Similarly, for functions operating on one or more dates, we now simply default to the current date if none is given. That means we can now say
> sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"),
\(x) qlcal::isBusinessDay(xp=qlcal::getCalendar(x)))
UnitedStates/NYSE Canada/TSX Australia/ASX
TRUE TRUE TRUE
to query today (February 18) in several markets, or compare to two days ago when Canada and the US both observed a holiday:
> sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"),
\(x) qlcal::isBusinessDay(as.Date("2026-02-16"), xp=qlcal::getCalendar(x)))
UnitedStates/NYSE Canada/TSX Australia/ASX
FALSE FALSE TRUE
The full details from NEWS.Rd follow.
Changes in version 0.1.0 (2026-02-18)
Invalid calendars return id ‘TARGET’ now
Calendar object can be created on the fly and passed to the date-calculating functions; if missing global one used
For several functions a missing date object now implies computation on the current date, e.g.
isBusinessDay()
Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
This disturbing and amusing article describes how an OpenAI investor appears to be having psychological problems related to SCP-based text generated by ChatGPT [2]. Definitely going to be a recursive problem as people who believe in it invest in it.
interesting analysis of dbus and design for a more secure replacement [3].
Ploum wrote an insightful article about the problems caused by the Github monopoly [5]. Radicale sounds interesting.
Niki Tonsky wrote an interesting article about the UI problems with Tahoe (the latest MacOS release) due to trying to make an icon for everything [6]. They have a really good writing style as well as being well researched.
This video about designing a C64 laptop is a masterclass in computer design [9].
Ron Garrett wrote an insightful blog post about abortion [11].
Bruce Schneier and Nathan E. Sanders wrote an insightful article about the potential of LLM systems for advertising and enshittification [12]. We need serious legislation about this ASAP!
17 February, 2026 08:09AM by etbe
The Debian LTS Team, funded by Freexian’s Debian LTS offering, is pleased to report its activities for January.
During the month of January, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).
The team released 33 DLAs fixing 216 CVEs.
The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 (“bullseye”), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 (“bookworm”) and Debian 13 (“trixie”)), including Debian unstable. We highlight several notable security updates below.
Notable security updates:
Moreover, Sylvain Beucler studied the security support status of p7zip, a fork of 7zip that has become unmaintained upstream. To avoid letting users continue using an unsupported package, Sylvain has investigated a path forward in collaboration with the security team and the 7zip maintainer, looking to replace p7zip with 7zip. Note, however, that the 7zip developers don’t reveal information about which patches fix CVEs, making it difficult to backport individual patches to fix vulnerabilities in released Debian versions.
Contributions from outside the LTS Team:
Thunderbird, prepared by maintainer Christoph Goehre. The DLA (DLA-4442-1) was published by Emilio.
The LTS Team has also contributed with updates to the latest Debian releases:
Other than the work related to updates, Sylvain made several improvements to the documentation and tooling used by the team.
Sponsors that joined recently are in bold.
17 February, 2026 12:00AM by Santiago Ruano Rincón
In the Tor Project system Administrator's team (colloquially known as TPA), we've recently changed how we take decisions, which means you'll get clearer communications from us about upcoming changes or targeted questions about a proposal.
Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams inside and outside Tor to evaluate this process to see if it can improve their own processes and documentation.
We had traditionally been using a "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record").
The ADR process is, for us, pretty simple. It consists of three things:
As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation of the template is similarly simple, as it has only 5 headings, and is worth quoting in full:
Context: What is the issue that we're seeing that is motivating this decision or change?
Decision: What is the change that we're proposing and/or doing?
Consequences: What becomes easier or more difficult to do because of this change?
More Information (optional): What else should we know? For larger projects, consider including a timeline and cost estimate, along with the impact on affected users (perhaps including existing Personas). Generally, this includes a short evaluation of alternatives considered.
Metadata: status, decision date, decision makers, consulted, informed users, and link to a discussion forum
The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at one glance.
An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping in a document all sorts of details like pricing or in-depth alternatives comparison, we record those in the discussion issue, keeping the document shorter.
The whole process is simple enough that it's worth quoting in full as well:
Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.
Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.
A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision and the new template (and process) clarifies decision makers, for each decision.
Conversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of triviality, also known as the "Bike Shed syndrome".
The new process better identifies stakeholders.
Picking those stakeholders is still tricky, but our definitions are more explicit and aligned to the classic RACI matrix (Responsible, Accountable, Consulted, Informed).
Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.
Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, to keep things simple.
The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was in the process of reviewing the RFC process, following Jacob Kaplan-Moss's criticism of the RFC process. Essentially, he argues that:
And, indeed, I have been guilty of a lot of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because not looping in the right stakeholders.
Our discussion issue on the topic has more details on the issues I found with our RFC process. But to give credit to the old process, it did serve us well while it was there: it's better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) made over the course of 6 years of work.
We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.
Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but, hopefully, worth it, as the result is more digestible.
We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in people that are already using a similar process, or that will adopt one after reading this.
Note: this article was also published on the Tor Blog.
You might see a verification screen pop up on more and more Debian web properties. Unfortunately the AI world of today is meeting web hosts that use Perl CGIs and are not built as multi-tiered scalable serving systems. The issues have been at three layers:
Ideally we would go and solve some scalability issues with the services; however, there is also a question of how much we want to be able to serve, as AI scraper demand is just a steady stream of requests that are not shown to humans.
DSA has now stood up some VMs with Varnish for proxying. Incoming TLS is provided by hitch, and TLS "on-loading" is done using haproxy. That way TLS goes in and TLS goes out. While Varnish does cache, if the content is cachable (e.g. does not depend on cookies) - that is not the primary reason for using it: It can be used for flexible query and response rewriting.
If no cookie with a proof of work is provided, the user is redirected to a challenge page that does some webcrypto in Javascript - because that looked similar to what other projects do (e.g. haphash that originally inspired the solution). However so far it looks like scrapers generally do not run with Javascript enabled, so this whole crypto proof of work business could probably be replaced with just a Javascript-based redirect. The existing solution also has big (security) holes in it. And, as we found out, Firefox is slower at webcrypto than Chrome. I have recently reduced the complexity, so you should notice it blocking you significantly less.
Once you have the cookie, you can keep accessing the site for as long as the cookie is valid. Please do not make any assumptions about the cookies, or you will be broken in the future.
For legitimate scrapers that obey robots.txt, there is now an automatically generated IP allowlist in place (thanks, Marco d'Itri). Turns out that the search engines do not actually run Javascript either and then loudly complain about the redirect to the challenge page. Other bots are generally exempt.
I hope that right now we found sort of the sweet spot where the admins can stop spending human time on updating firewall rules and the services are generally available, reasonably fast, and still indexed. In case you see problems or run into a block with your own (legitimate) bots, please let me know.
16 February, 2026 07:55PM by Philipp Kern (noreply@blogger.com)
What if I told you there is a way to configure the network on any Linux server that requires no network configuration software at all (no systemd-networkd, ifupdown, NetworkManager, nothing)?

It has literally 8 different caveats on top of that, but is still totally worth your time.
People following Debian development might have noticed there are now four ways of configuring the network on a Debian system. At least that is what the Debian wiki claims, namely:
ifupdown (/etc/network/interfaces): traditional static
configuration system, mostly for workstations and servers that has
been there forever in Debian (since at least 2000), documented
in the Debian wiki
NetworkManager: self-proclaimed "standard Linux network configuration", mostly used on desktops but technically supports servers as well, see the Debian wiki page (introduced in 2004)
systemd-networkd: used more for servers, see Debian reference Doc
Chapter 5 (introduced some time around Debian 8 "jessie", in
2015)
Netplan: latest entry (2018), YAML-based configuration abstraction layer on top of the above two, see also Debian reference Doc Chapter 5 and the Debian wiki
At this point, I feel ifupdown is on its way out, possibly replaced
by systemd-networkd. NetworkManager already manages most desktop
configurations.
The method is this: ip= on the Linux kernel command line. For servers with a single IPv4 or IPv6 address, no software is required other than the kernel and a boot loader (since 2002 or older).

So by "new" I mean "new to me". This option is really old. The nfsroot.txt file where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2. That's already 20+ years old. The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might go back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.
The trick is to add an ip= parameter to the kernel's
command-line. The syntax, as mentioned above, is in nfsroot.txt
and looks like this:
ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>
Most settings are pretty self-explanatory, if you ignore the useless ones:

- <client-ip>: IP address of the server
- <gw-ip>: address of the gateway
- <netmask>: netmask, in quad notation
- <device>: interface name, if multiple are available
- <autoconf>: how to configure the interface, namely:
  - off or none: no autoconfiguration (static)
  - on or any: use any protocol (default)
  - dhcp: essentially like on for all intents and purposes
- <dns0-ip>, <dns1-ip>: IP addresses of the primary and secondary name servers, exported to /proc/net/pnp, which can be symlinked to /etc/resolv.conf

We're ignoring the options:

- <server-ip>: IP address of the NFS server, exported to /proc/net/pnp
- <hostname>: name of the client, typically sent over the DHCP requests, which may lead to a DNS record being created in some networks
- <ntp0-ip>: exported to /proc/net/ipconfig/ntp_servers, unused by the kernel

Note that the Red Hat manual has a different opinion:
ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:interface:[dhcp|dhcp6|auto6|on|any|none|off]
It's essentially the same (although server-id is weird), and the
autoconf variable has other settings, so that's a bit odd.
For example, this command-line setting:
ip=192.0.2.42::192.0.2.1:255.255.255.0:::off
... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. This will properly guess the network interface if there's a single one.
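For completeness, here is a hypothetical static configuration that also pins the hostname, the interface, and two name servers, following the field order shown above (all addresses and names are placeholders):

ip=192.0.2.42::192.0.2.1:255.255.255.0:myserver:eth0:off:192.0.2.53:192.0.2.54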
A DHCP only configuration will look like this:
ip=::::::dhcp
Of course, you don't want to type this by hand every time you boot the machine. That wouldn't work. You need to configure the kernel commandline, and that depends on your boot loader.
With GRUB, you need to edit (on Debian) the file /etc/default/grub
(ugh) and find a line like:
GRUB_CMDLINE_LINUX=
and change it to:
GRUB_CMDLINE_LINUX=ip=::::::dhcp
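If you prefer to script that edit, something like this should work on a default Debian install where that line is otherwise empty (run as root; update-grub then regenerates the boot configuration):

sed -i 's/^GRUB_CMDLINE_LINUX=.*/GRUB_CMDLINE_LINUX="ip=::::::dhcp"/' /etc/default/grub
update-grub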
For systemd-boot UKI setups, it's simpler: just add the setting to
the /etc/kernel/cmdline file. Don't forget to include anything
that's non-default from /proc/cmdline.
This assumes that Cmdline=@/etc/kernel/cmdline is set in
/etc/kernel/uki.conf. See 2025-08-20-luks-ukify-conversion for
my minimal documentation on this.
This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:

- Arch Linux: /etc/default/grub, /boot/loader/entries/arch.conf for systemd-boot or /etc/kernel/cmdline for UKI
- Fedora/RHEL: /etc/default/grub, maybe more; RHEL mentions grubby, possibly some systemd-boot things here as well
- Gentoo: /etc/default/grub, /efi/loader/entries/gentoo-sources-kernel.conf for systemd-boot, or /etc/kernel/install.d/95-uki-with-custom-opts.install

It's interesting that /etc/default/grub is consistent across all
distributions above, while the systemd-boot setups are all over the
place (except for the UKI case), when I would have expected those to be
more standard than GRUB.
If dropbear-initramfs is set up, it already requires you to have
such a configuration, and it might not work out of the box.
This is because, by default, it disables the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks).
To fix this, you need to disable that "feature":
IFDOWN="none"
This will keep dropbear-initramfs from disabling the configured
interface.
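For reference, on current Debian releases that setting typically lives in /etc/dropbear/initramfs/dropbear.conf (older releases used /etc/dropbear-initramfs/config, so check your version), and the initramfs must be regenerated for the change to take effect:

# run as root; the configuration path is an assumption, adjust for your release
echo 'IFDOWN="none"' >> /etc/dropbear/initramfs/dropbear.conf
update-initramfs -u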
Traditionally, I've always set up ifupdown on servers
and NetworkManager on laptops, because that's essentially the
default. But on some machines, I've started using systemd-networkd
because ifupdown has ... issues, particularly with reloading network
configurations. ifupdown is an old hack, feels like legacy, and is
Debian-specific.
Not excited about configuring another service, I figured I would try something else: just configure the network at boot, through the kernel command-line.
I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network to be up for unlocking the full-disk encryption keys.
So in a sense, this is a "Don't Repeat Yourself" solution.
Also known as: "wait, that works?" Yes, it does! That said...
This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.
This only works for configuring a single, simple, interface. You can't configure multiple interfaces, WiFi, bridges, VLAN, bonding, etc.
It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.
It likely does not work with a dual-stack IPv4/IPv6 static configuration. It might work with a dynamic dual stack configuration, but I doubt it.
I don't know what happens when a DHCP lease expires. No daemon seems to be running so I assume leases are not renewed, so this is more useful for static configurations, which includes server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)
It will not automatically reconfigure the interface on link
changes, but ifupdown does not either.
It will not write /etc/resolv.conf for you but the dns0-ip
and dns1-ip do end up in /proc/net/pnp which has a compatible
syntax, so a common configuration is:
ln -s /proc/net/pnp /etc/resolv.conf
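To see what the kernel actually recorded there (and therefore what the symlink exposes), just read the file:

cat /proc/net/pnp   # typically contains resolv.conf-style "nameserver" lines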
I have not really tested this at scale: only a single, test server at home.
Yes, that's a lot of caveats, but it happens to cover a lot of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.
Once you have this configuration, you don't need any "user" level network system, so you can get rid of everything:
apt purge systemd-networkd ifupdown network-manager netplan.io
Note that ifupdown (and probably others) leave stray files in (e.g.)
/etc/network which you might want to clean up, or keep in case all
this fails and I have put you in utter misery. Configuration files for
other packages might also be left behind; I haven't tested this, no
warranty.
This whole idea came from the A/I folks (not to be confused with AI) who have been doing this forever, thanks!
Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects.
It seems natural to think of online communities competing for the time and attention of their participants. Over the last few years, I’ve worked with a team of collaborators—led by Nathan TeBlunthuis—to use mathematical and statistical techniques from ecology to understand these dynamics. What we’ve found surprised us: competition between online communities is rare and typically short-lived.
When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling: why were the same groups of people talking to each other about the same things in different places? And why don’t they appear to be in competition with each other for their users’ time and activity?
We didn’t know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio).
We found that the answer to the puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they participated in.
Critically, we also found that these three things represented a “trilemma,” and that no single community can meet all three needs. You might find two of the three in a single community, but you could never have all three.
Figure from “No Community Can Do Everything: Why People Participate in Similar Online Communities” depicts three key benefits that people seek from online communities and how individual communities tend not to optimally provide all three. For example, large communities tend not to afford a tight-knit homophilous community.

The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.
This work was published as a paper at CSCW: TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. “No Community Can Do Everything: Why People Participate in Similar Online Communities.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.
This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper.
16 February, 2026 03:13AM by Benjamin Mako Hill
tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian’s gitlab instance, Salsa.
We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders.
tag2upload, as part of Debian’s git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it’s relatively unopinionated, wherever that’s possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations.
This post is an attempt to give some more opinionated answers, and guide you through modernising your workflow.
(This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.)
git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs and patches representation to a normal, git-first, representation, makes everything simpler.
dgit and tag2upload automatically do many things that have to be done manually, or with separate commands, in dput-based upload workflows.
They will also save you from a variety of common mistakes. For example, you cannot accidentally overwrite an NMU, with tag2upload or dgit. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it’s part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user.
tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds.
See the Day-to-day work section below to see how simple your life could be.
Most Debian contributors have spent months or years learning how to work with Debian’s tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn.
We promise (and our users tell us) that’s not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable.
The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code. You too can forget all those confusing details, like our users have! After using our systems for a while you won’t look back.
And, you shouldn’t fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly it isn’t always trivial to get your first push to succeed.
One of Debian’s foundational principles is that we publish the source code.
Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier.
But, without tag2upload or dgit, we aren’t properly publishing our work! Yes, we typically put our git branch on Salsa, and point Vcs-Git at it. However:

- The branch there might contain only a debian/ directory, or something even stranger.
- Nothing guarantees that the debian/1.2.3-7 tag on salsa corresponds precisely to what was actually uploaded. dput-based tooling (such as gbp buildpackage) doesn’t cross-check the .dsc against git.
This means that the git repositories on Salsa cannot be used by anyone who needs things that are systematic and always correct. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use Vcs-Git and Salsa to build a Debian derivative? You could not.
tag2upload and dgit do solve this problem. When you upload, they:

- push an archive/debian/1.2.3-7 tag to a single central git repository, *.dgit.debian.org;
- add a Dgit field to the .dsc so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.
This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system which achieves this.
(The client is dgit clone, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.)
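For example, anyone can fetch exactly what was uploaded (the real git history for git-based uploads, a source package import otherwise) with something like this, where "hello" is just a placeholder package name:

dgit clone hello sid
cd hello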
tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package.
So, you can just adopt it without completely overhauling your packaging practices. You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package.
Start with the wiki page and git-debpush(1) (ideally from forky aka testing).
You don’t need to do any of the other things recommended in this article.
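For a taste of how small the upload step becomes: once your tree and changelog are ready, a tag2upload upload is roughly just the following (a sketch; the first time you may need a --quilt= option matching your branch format, see git-debpush(1)):

git-debpush   # makes a signed tag and pushes it; the tag2upload service builds and uploads the source package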
The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging.
Your current approach uses the “patches-unapplied” git branch format used with gbp pq and/or quilt, and often used with git-buildpackage. You previously used gbp import-orig.
You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your origin remote set to Salsa.
Your main Debian branch name on Salsa is master. Personally I think we should use main but changing your main branch name is outside the scope of this article.
You have enough familiarity with Debian packaging including concepts like source and binary packages, and NEW review.
Your co-maintainers are also adopting the new approach.
tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing.
This article will guide you in adopting the workflow described in the following sections.
In Debian we need to be able to modify the upstream-provided source code. Those modifications are the Debian delta. We need to somehow represent it in git.
We recommend storing the delta as git commits to those upstream files, by picking one of the following two approaches.
rationale
Much traditional Debian tooling like quilt and gbp pq uses the “patches-unapplied” branch format, which stores the delta as patch files in debian/patches/, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.
Option 1: simply use git, directly, including git merge.
Just make changes directly to upstream files on your Debian branch, when necessary. Use plain git merge when merging from upstream.
This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within debian/.
This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).
Option 2: Adopt git-debrebase.
git-debrebase helps maintain your delta as linear series of commits (very like a “topic branch” in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series.
The older versions of the Debian delta are preserved in the history. git-debrebase makes extra merges to make a fast-forwarding history out of the successive versions of the delta queue branch.
This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7).
Examples of complex packages using this approach include src:xen and src:sbcl.
We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.
rationale
Many maintainers have been importing upstream tarballs into git, for example by using gbp import-orig. But in reality the upstream tarball is an intermediate build product, not (just) source code. Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball!

git offers better traceability than so-called “pristine” upstream tarballs. (The word “pristine” is even a joke by the author of pristine-tar!)
First, establish which upstream git tag corresponds to the version currently in Debian. For the sake of readability, I’m going to pretend that the upstream version is 1.2.3, and that upstream tagged it v1.2.3.
Edit debian/watch to contain something like this:
version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)You may need to adjust the regexp, depending on your upstream’s tag name convention. If debian/watch had a files-excluded, you’ll need to make a filtered version of upstream git.
From now on we’ll generate our own .orig tarballs directly from git.
rationale
We need some “upstream tarball” for the 3.0 (quilt) source format to work with. It needs to correspond to the git commit we’re using as our upstream. We don’t need or want to use a tarball from upstream for this. The .orig is just needed so a nice legacy Debian source package (.dsc) can be generated.
Probably, the current .orig in the Debian archive, is an upstream tarball, which may be different to the output of git-archive and possibly even have different contents to what’s in git. The legacy archive has trouble with differing .origs for the “same upstream version”.
So we must — until the next upstream release — change our idea of the upstream version number. We’re going to add +git to Debian’s idea of the upstream version. Manually make a tag with that name:
git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
git push origin v1.2.3+git

If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.
Prepare a new branch on top of upstream git, containing what we want:
git branch -f old-master # make a note of the old git representation
git reset --hard v1.2.3 # go back to the real upstream git tag
git checkout old-master :debian # take debian/* from old-master
git commit -m "Re-import Debian packaging on top of upstream git"
git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
git branch -d old-master # it's incorporated in our history now

If there are any patches, manually apply them to your main branch with git am, and delete the patch files (git rm -r debian/patches, and commit). (If you’ve chosen this workflow, there should be hardly any patches.)
rationale
These are some pretty nasty git runes, indeed. They’re needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.
Convert the branch to git-debrebase format and rebase onto the upstream git:
git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git

If you had patches which patched generated files which are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.
rationale
The force option -fupstream-not-ff will be needed this one time because your existing Debian packaging history is (probably) not based directly on the upstream history. -fdiverged may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.
Manually make your history fast forward from the git import of your previous upload.
dgit fetch
git show dgit/dgit/sid:debian/changelog
# check that you have the same version number
git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid

Delete any existing debian/source/options and/or debian/source/local-options.
If you chose Option 1 (the plain git merge workflow): change debian/source/format to 1.0. Add debian/source/options containing -sn.
rationale
We are using the “1.0 native” source format. This is the simplest possible source format - just a tarball. We would prefer “3.0 (native)”, which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration.
You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.
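Concretely, that Option 1 change is just the following (a sketch, to be committed like any other packaging change):

echo '1.0' > debian/source/format
echo '-sn' > debian/source/options
git add debian/source
git commit -m 'Switch to 1.0 native source format'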
If you chose Option 2 (git-debrebase): ensure that debian/source/format contains 3.0 (quilt).
Now you are ready to do a local test build.
Edit README.source to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in debian/patches/. Consider saying that uploads should be done via dgit or tag2upload.
Check that your Vcs-Git is correct in debian/control. Consider deleting or pruning debian/gbp.conf, since it isn’t used by dgit, tag2upload, or git-debrebase.
Add a note to debian/changelog about the git packaging change.
git-debrebase new-upstream will have added a “new upstream version” stanza to debian/changelog. Edit that so that it instead describes the packaging change. (Don’t remove the +git from the upstream version number there!)
In “Settings” / “Merge requests”, change “Squash commits when merging” to “Do not allow”.
rationale
Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase’s git branch structure.
gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It’s very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA).
However, I have found that despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing “Retry”. But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on. Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and helping uphold quality norms within a team. They’re a great boon for the lazy solo programmer.
The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it — deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it.
Create debian/salsa-ci.yml containing
include:
- https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

In your Salsa repository, under “Settings” / “CI/CD”, expand “General Pipelines” and set “CI/CD configuration file” to debian/salsa-ci.yml.
rationale
Your project may have an upstream CI config in .gitlab-ci.yml. But you probably want to run the Debian Salsa CI jobs.

You can add various extra configuration to debian/salsa-ci.yml to customise it. Consult the Salsa CI docs.
Add to debian/salsa-ci.yml:
.git-debrebase-prepare: &git-debrebase-prepare
# install the tools we'll need
- apt-get update
- apt-get --yes install git-debrebase git-debpush
# git-debrebase needs git user setup
- git config user.email "salsa-ci@invalid.invalid"
- git config user.name "salsa-ci"
# run git-debrebase make-patches
# https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
- git-debrebase --force
- git-debrebase make-patches
# make an orig tarball using the upstream tag, not a gbp upstream/ tag
# https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
- git-deborig
.build-definition: &build-definition
extends: .build-definition-common
before_script: *git-debrebase-prepare
build source:
extends: .build-source-only
before_script: *git-debrebase-prepare
variables:
# disable shallow cloning of git repository. This is needed for git-debrebase
GIT_DEPTH: 0rationale
Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (salsa-ci#541).
These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.
Push this to salsa and make the CI pass.
If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That’s in “Pipelines”: press “New pipeline” in the top right. The defaults will very probably be correct.
In your project on Salsa, go into “Settings” / “Repository”. In the section “Branch rules”, use “Add branch rule”. Select the branch master. Set “Allowed to merge” to “Maintainers”. Set “Allowed to push and merge” to “No one”. Leave “Allow force push” disabled.
This means that the only way to land anything on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer “Set to auto-merge”. Use that.
gitlab won’t normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to.
(Sometimes, immediately after creating a merge request in gitlab, you will see a plain “Merge” button. This is a bug. Don’t press that. Reload the page so that “Set to auto-merge” appears.)
Ideally, your package would have meaningful autopkgtests (DEP-8 tests). This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies.
The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article.
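As a bare starting point, a trivial smoke test can look something like this (a sketch; the test name and the command to run are placeholders for whatever makes sense for your package):

mkdir -p debian/tests
cat > debian/tests/control <<'EOF'
Tests: smoke
Depends: @
EOF
cat > debian/tests/smoke <<'EOF'
#!/bin/sh
set -e
mypackage --version
EOF
chmod +x debian/tests/smoke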
With this capable tooling, most tasks are much easier.
Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch.
On your MR branch you can freely edit every file. This includes upstream files, and files in debian/.
For example, you can:
git cherry-pick an upstream commit.
git am a patch from a mailing list or from the Debian Bug System.
git revert an earlier commit, even an upstream one.
When you have a working state of things, tidy up your git branch:
Use git-rebase to squash/edit/combine/reorder commits (in the plain git merge workflow).

Use git-debrebase -i to squash/edit/combine/reorder commits (in the git-debrebase workflow). When you are happy, run git-debrebase conclude.
Do not edit debian/patches/. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use git-debrebase -i to edit the actual commits.
Push the MR branch (topic branch) to Salsa and make a Merge Request.
Set the MR to “auto-merge when all checks pass”. (Or, depending on your team policy, you could ask for an MR Review of course.)
If CI fails, fix up the MR branch, squash/tidy it again, force push the MR branch, and once again set it to auto-merge.
An informal test build can be done like this:
apt-get build-dep .
dpkg-buildpackage -uc -b

Ideally this will leave git status clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to .gitignore or debian/.gitignore as applicable.
If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you’ll need to be disciplined about always committing, using git clean and git reset, and so on.
For formal binary builds, including for testing, use dgit sbuild as described below for uploading to NEW.
Start an MR branch for the administrative changes for the release.
Document all the changes you’re going to release, in the debian/changelog.
gbp dch can help write the changelog for you:
dgit fetch sid
gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main

rationale
--ignore-branch is needed because gbp dch wrongly thinks you ought to be running this on master, but of course you’re running it on your MR branch.

The --git-log=^upstream/main excludes all upstream commits from the listing used to generate the changelog. (I’m assuming you have an upstream remote and that you’re basing your work on their main branch.) If there was a new upstream version, you’ll usually want to write a single line about that, and perhaps summarise anything really important.
(For the first upload after switching to using tag2upload or dgit you need --since=debian/1.2.3-1, where 1.2.3-1 is your previous DEP-14 tag, because dgit/dgit/sid will be a dsc import, not your actual history.)
Change UNRELEASED to the target suite, and finalise the changelog. (Note that dch will insist that you at least save the file in your editor.)
dch -r
git commit -m 'Finalise for upload' debian/changelog

Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you’re in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to “Merge unverified changes”.)
Now you can perform the actual upload:
git checkout master
git pull --ff-only # bring the gitlab-made MR merge commit into your local tree
git-debpush

or, for the very first upload with this tooling:

git-debpush --quilt=linear

--quilt=linear is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.
If your package is NEW (completely new source, or has new binary packages) you can’t do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts.
Happily, given the same git branch you’d tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, dgit can help take care of the build and upload for you:
Prepare the changelog update and merge it, as above. Then:
Create the orig tarball and launder the git-debrebase branch:

git-deborig
git-debrebase quick

rationale
Source package format 3.0 (quilt), which is what I’m recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.
Build the source and binary packages, locally:
dgit sbuild
dgit push-built

rationale
You don’t have to use
dgit sbuild, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around a gitignore-related defect in dpkg-source.
Find the new upstream version number and corresponding tag. (Let’s suppose it’s 1.2.4.) Check the provenance:
git verify-tag v1.2.4

rationale
Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.
Simply merge the new upstream version and update the changelog:
git merge v1.2.4
dch -v1.2.4-1 'New upstream release.'

Rebase your delta queue onto the new upstream version (if you are using git-debrebase):

git debrebase new-upstream 1.2.4

If there are conflicts between your Debian delta for 1.2.3, and the upstream changes in 1.2.4, this is when you need to resolve them, as part of git merge or git (deb)rebase.
After you’ve completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above.
git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations.
When the time comes to upload, the sponsee notifies the sponsor that it’s time. The sponsor fetches and checks out the git branch from Salsa, does their checks, as they judge appropriate, and when satisfied runs git-debpush.
As part of the sponsor’s checks, they might want to see all changes since the last upload to Debian:
dgit fetch sid
git diff dgit/dgit/sid..HEAD

Or to see the Debian delta of the proposed upload:
git verify-tag v1.2.3
git diff v1.2.3..HEAD ':!debian'

Or to show all the delta as a series of commits:
git log -p v1.2.3..HEAD ':!debian'

Don’t look at debian/patches/. It can be absent or out of date.
Fetch the NMU into your local git, and see what it contains:
dgit fetch sid
git diff master...dgit/dgit/sid

If the NMUer used dgit, then git log dgit/dgit/sid will show you the commits they made.
Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:
git merge dgit/dgit/sid

You should git-debrebase quick at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.
Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was 1.2.3-7, you can go back and see the NMU diff again with:
git diff debian/1.2.3-7...dgit/dgit/sid

The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to debian/patches/. Normally it’s best to filter them out with git diff ... ':!debian/patches'
If you’d prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like
git checkout debian/1.2.3-7
git-debrebase --force make-patches
git diff HEAD...dgit/dgit/sid -- :debian/patches

to diff against a version with debian/patches/ up to date. (The NMU, in dgit/dgit/sid, will necessarily have the patches already up to date.)
Some upstreams ship non-free files of one kind of another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream’s git trees, you need to filter them out.
This advice is not for (legally or otherwise) dangerous files. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons.
Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.
rationale
Yes, this will end up including the non-free files in the git history, on official Debian servers. That’s OK. What’s forbidden is non-free material in the Debianised git tree, or in the source packages.
git checkout -b upstream-dfsg v1.2.3
git rm nonfree.exe
git commit -m "upstream version 1.2.3 DFSG-cleaned"
git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
git push origin upstream-dfsg

And now, use 1.2.3+ds1, and the filtered branch upstream-dfsg, as the upstream version, instead of 1.2.3 and upstream/main. Follow the steps for Convert the git branch or New upstream version, as applicable, adding +ds1 into debian/changelog.
If you missed something and need to filter out more non-free files, re-use the same upstream-dfsg branch and bump the ds version, e.g. v1.2.3+ds2.
git checkout upstream-dfsg
git merge v1.2.4
git rm additional-nonfree.exe # if any
git commit -m "upstream version 1.2.4 DFSG-cleaned"
git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
git push origin upstream-dfsg
If the files you need to remove keep changing, you could automate things with a small shell script debian/rm-nonfree containing appropriate git rm commands. If you use git rm -f it will succeed even if the git merge from real upstream has conflicts due to changes to non-free files.
rationale
Ideally uscan, which has a way of representing DFSG filtering patterns in debian/watch, would be able to do this, but sadly the relevant functionality is entangled with uscan’s tarball generation.
Tarball contents: If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different.
It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate built products, but in Debian we should be building from the real source! Fixing this may involve some work, though.
gitattributes:
For Reasons, the dgit and tag2upload systems disregard and disable the use of .gitattributes to modify files as they are checked out.
Normally this doesn’t cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload or git-deborig). But if the package or build system relies on them, you may need to institute some workarounds, or replicate the effect of the gitattributes as commits in git.
git submodules: git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them.
If you’re lucky, the code in the submodule isn’t used in which case you can git rm the submodule.
I’ve tried to cover the most common situations. But software is complicated and there are many exceptions that this article can’t cover without becoming much harder to read.
You may want to look at:
dgit workflow manpages: As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They’re centered around use of dgit, but also discuss tag2upload where applicable.
These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated.
Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using gbp pq and/or quilt with a patches-unapplied branch.
NMUs are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.)
You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow. See dgit-nmu-simple(7).
Native packages (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).
tag2upload documentation: The tag2upload wiki page is a good starting point. There’s the git-debpush(1) manpage of course.
dgit reference documentation:
There is a comprehensive command-line manual in dgit(1). A description of the dgit data model and Principles of Operation is in dgit(7), including coverage of out-of-course situations.
dgit is a complex and powerful program, so this reference material can be overwhelming. We recommend starting with a guide like this one, or the dgit-…(7) workflow tutorials.
Design and implementation documentation for tag2upload is linked to from the wiki.
Debian’s git transition blog post from December.
tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches.
git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It’s a git workflow and delta management tool, competing with gbp pq, manual use of quilt, git-dpm and so on.
git-debrebase reference documentation:
Of course there’s a comprehensive command-line manual in git-debrebase(1).
git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).
The following 42 15-bit values form a 2-disjunctive matrix (that is, no union of two values contains or equals a third value), or equivalently, a superimposed code:
000000000011111 000000011100011 000000101101100 000001010110100 000001101010001 000001110001010 000010011011000 000100100110010 000110010000110 000110100001001 000111001100000 001000110000101 001010000110001 001010101000010 001011000001100 001100001010100 001100010101000 001101000000011 010001000101001 010010001000101 010010110100000 010011000010010 010100001001010 010100010010001 010101100000100 011000000100110 011000100011000 011001011000000 100001001000110 100010000101010 100010100010100 100011010000001 100100000100101 100100111000000 100101000011000 101000001001001 101000010010010 101001100100000 110000001110000 110000010001100 110000100000011 111110000000000
This shows that A286874 a(15) >= 42.
If I had to make a guess, I'd say the equality holds, but I have nowhere near the computing resources to actually find the answer for sure. Stay tuned for news about a(14), though.
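For readers who want to double-check the claim, here is a small verification sketch (mine, not part of the original post): it reads whitespace-separated binary codewords from stdin and tests the 2-disjunctive property, i.e. that no codeword is covered by the bitwise OR of two others.
#!/usr/bin/env python3
# Verification sketch (not from the original post): read binary codewords
# from stdin and check that no codeword is contained in the union
# (bitwise OR) of any two other codewords.
import sys
from itertools import combinations

codes = [int(word, 2) for word in sys.stdin.read().split()]

def is_2_disjunct(codes: list[int]) -> bool:
    for i, j in combinations(range(len(codes)), 2):
        union = codes[i] | codes[j]
        for k, c in enumerate(codes):
            # c is covered by the union iff it has no bit outside it
            if k not in (i, j) and c & ~union == 0:
                return False
    return True

print(len(codes), "codewords; 2-disjunct:", is_2_disjunct(codes))
Piping the 42 values above into this script should print "42 codewords; 2-disjunct: True".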
Registration and the Call for Proposals for DebConf 26 are now open. The 27th edition of the Debian annual conference will be held from July 20th to July 25th, 2026, in Santa Fe, Argentina.
The conference days will be preceded by DebCamp, which will take place from July 13th to July 19th, 2026.
The registration form can be accessed on the DebConf 26 website. After creating an account, click "register" in the profile section.
As always, basic registration for DebConf is free of charge for attendees. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories to help cover the costs of organizing the conference and to support subsidizing other community members.
The last day to register with guaranteed swag is June 14th.
We also encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are also available. More details can be found on the bursary info page.
The last day to apply for a bursary is April 1st. Applicants should receive feedback on their bursary application by May 1st.
The call for proposals for talks, discussions and other activities is also open. To submit a proposal you need to create an account on the website, and then use the "Submit Talk" button in the profile section.
The last day to submit and have your proposal be considered for the main conference schedule, with video coverage guaranteed, is April 1st.
DebConf 26 is also accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org or visit the DebConf 26 website.
See you in Santa Fe,
The DebConf 26 Team
14 February, 2026 12:15PM by Carlos Henrique Lima Melara, Santiago Ruano Rincón
Current AI companies ignore licenses such as the GPL, and often train on anything they can scrape. This is not acceptable.
The AI companies ignore web conventions, e.g., they deep link images from your web sites (even adding ?utm_source=chatgpt.com to image URIs, I suggest that you return 403 on these requests), but do not direct visitors to your site.
You do not get a reliable way of opting out from generative AI training or use. For example, the only way to prevent your contents from being used in “Google AI Overviews” is to use data-nosnippet and cripple the snippet preview in Google.
The “AI” browsers, such as Comet or Atlas, do not identify as such, but rather pretend they are standard Chromium.
There is no way to ban such AI use on your web site.
Generative AI overall is flooding the internet with garbage. It has been estimated that a third of the content uploaded to YouTube is by now AI-generated. This includes the same “veteran stories” crap in thousands of variants as well as brainrot content (which at least does not pretend to be authentic), some of which is among the most viewed recent uploads. Hence, these platforms even benefit from the AI slop. And don’t blame the “creators” – because you can currently earn a decent amount of money from such content, people will generate brainrot content.
If you have recently tried to find honest reviews of products you considered buying, you will have noticed thousands of sites with AI-generated fake product reviews, all financed by Amazon PartnerNet commissions. Often with hilarious nonsense such as recommending “sewing thread with German instructions” as a tool for repairing a sewing machine. And on Amazon, there are plenty of AI-generated product reviews – the use of emoji is a strong hint. And if you leave a negative product review, there is a chance they offer you a refund to get rid of it… And the majority of SPAM that gets through my filters is by now sent via Gmail and Amazon SES.
Partially because of GenAI, StackOverflow, which used to be one of the most valuable programming resources, is pretty much dead. (While a lot of people complain about moderation, famous moderator Shog9 from the early SO days suggested that a change in Google’s ranking is also to blame, as it began favoring “new” content over the existing answered questions, causing more and more duplicates to be posted because people no longer found the existing good answers.) In January 2026, there were around 3400 questions and 6000 answers posted, less than in the first month of SO in August 2008 (before the official launch).
Many open-source projects are suffering in many ways, e.g., false bug reports that caused curl to stop its bug bounty program. Wikipedia is also suffering badly from GenAI.
Science is also flooded with poor AI-generated papers, often reviewed with help from AI. This is largely due to bad incentives – to graduate, you are expected to write many papers at certain “A” conferences, such as NeurIPS. At these conferences the number of submissions is growing insanely, and the review quality plummets. All too often, the references in these papers are hallucinated, too; and libraries complain that they receive more and more requests to locate literature that does not appear to exist.
However, the worst effect (at least to me as an educator) is the noskilling effect (a rather novel term derived from deskilling, I have only seen it in this article by Weßels and Maibaum).
Instead of acquiring skills (writing, reading, summarizing, programming) by practising, too many people now outsource all this to AI, leading to them not learning the basics necessary to advance to a higher skill level. In my impression, this effect is dramatic. It is even worse than deskilling, as it does not mean losing an advanced skill that you apparently can replace, but often means not acquiring basic skills in the first place. And the earlier pupils start using generative AI, the fewer skills they acquire.
Let’s dogfood the AI. Here’s an outline:
Here is an example prompt that you can use:
You are a university educator, preparing homework assignments in debugging.
The programming language used is {lang}.
The students are tasked to find bugs in given code.
Do not just call existing implementations from libraries, but implement the algorithm from scratch.
Make sure there are two mistakes in the code that need to be discovered by the students.
Do NOT repeat instructions. Do NOT add small-talk. Do NOT provide a solution.
The code may have (misleading) comments, but must NOT mention the bugs.
If you do not know how to implement the algorithm, output an empty response.
Output only the code for the assignment! Do not use markdown.
Begin with a code comment that indicates the algorithm name and idea.
If you indicate a bug, always use a comment with the keyword BUG
Generate a {lang} implementation (with bugs) of: {n} ({desc})
Remember to remove the BUG comments! If you pick some slightly less common programming languages (by quantity of available code, say Go or Rust) you have higher chances that this gets into the training data.
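If it helps, here is a tiny sketch (my own, not from the post) of how the {lang}, {n} and {desc} placeholders could be filled in before handing the prompt to whatever model or API you use; prompt.txt is a hypothetical file holding the template above.
# Sketch (mine, not from the post): fill the prompt template's placeholders.
# "prompt.txt" is a hypothetical file containing the template shown above.
from pathlib import Path

template = Path("prompt.txt").read_text()

assignments = [
    ("Go", "Dijkstra's algorithm", "single-source shortest paths"),
    ("Rust", "Knuth-Morris-Pratt", "substring search"),
]

for lang, name, desc in assignments:
    prompt = template.format(lang=lang, n=name, desc=desc)
    print(prompt)  # send this to the model of your choice, then review the output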
If many of us do this, we can feed GenAI its own garbage. If we generate thousands of bad code examples, this will poison their training data, and may eventually lead to an effect known as “model collapse”.
In the long run, we need to get back to an internet for people, not an internet for bots. Some kind of “internet 2.0”, but I do not have a clear vision on how to keep AI out – if AI can train on it, they will. And someone will copy and paste the AI-generated crap back into whatever system we build. Hence I don’t think technology is the answer here, but human networks of trust.
13 February, 2026 10:29AM by Erich Schubert
Version 0.0.27 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site will be refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.
Brian Ripley has now turned C++20 on as a default for R-devel (aka R 4.6.0 ‘to be’), and this turned up misbehavior in packages using RcppSpdlog such as our spdl wrapper (offering a nicer interface from both R and C++) when relying on std::format. So for now, we turned this off and remain with fmt::format from the fmt library while we investigate further.
The NEWS entry for this release follows.
Changes in RcppSpdlog version 0.0.27 (2026-02-11)
- Under C++20 or later, keep relying on fmt::format until issues experienced using std::format can be identified and resolved
Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.
In version 1.10.1, Meson merged a patch to make it call the correct
g-ir-scanner by default thanks to Eli Schwarz. This problem affected more than
130 source packages. Helmut retried building them all and filed 69 patches as a
result. A significant portion of those packages require another Meson
change to call the correct
vapigen. Another notable change is converting gnu-efi to multiarch,
which ended up requiring changes to a number of other packages. Since Aurelien
dropped the libcrypt-dev dependency from libc6-dev, this transition now is
mostly complete and has resulted in most of the Perl ecosystem correctly
expressing perl-xs-dev dependencies needed for cross building. It is these
infrastructure changes affecting several client packages that this work targets.
As a result of this continued work, about 66% of Debian’s source packages now
have satisfiable cross Build-Depends in unstable and about 10000 (55%) actually
can be cross built. There are now more than 500 open
bug reports
affecting more than 2000 packages most of which carry patches.
Maintaining architecture cross-bootstrap requires continued effort for adapting
to archive changes such as glib2.0 dropping a build profile or an e2fsprogs
FTBFS. Beyond those generic problems,
architecture-specific problems with e.g. musl-linux-any or sparc may arise.
While all these changes move things forward on the surface, the bootstrap
tooling has become a growing pile of patches. Helmut managed to upstream two
changes to glibc for reducing its Build-Depends in the stage2 build profile, and thanks Aurelien Jarno for that.
Debian Enhancement Proposal #3 (DEP-3) is named “Patch Tagging Guidelines” and standardizes meta-information that Debian contributors can put in patches included in Debian source packages. With the feedback received over the years, and with the change in the package management landscape, the need to refresh those guidelines became evident. As the initial driver of that DEP, I spent a good day reviewing all the feedback (that I kept in a folder) and producing a new version of the document. The changes aim to give more weight to the syntax that is compatible with git format-patch’s output, and also to clarify the expected uses and meanings of a couple of fields, including some algorithm that parsers should follow to define the state of the patch. After the announcement of the new draft on debian-devel, the revised DEP-3 received a significant number of comments that I still have to process.
- debvm, making it work with unstable as a target distribution again.
- rocblas package to forky.
- publican, and did some maintenance for tracker.debian.org.
- festival Debian package for systemd socket activation and systemd service and socket units. Adapted the patch for upstream and created a merge request (also fixed a MacOS X build system error while working on it). Updated Orca Wiki documentation regarding festival. Discussed a 2007 bug/feature in festival which allowed having a local shell, and that the new systemd socket activation has the same code path.
- “abcde” package.
- python3-defaults and dh-python in support of Python 3.14-as-default in Ubuntu. Also investigated the risk of ignoring byte-compilation failures by default, and started down the road of implementing this.
- python-virtualenv and python-flexmock.
- knot-dns and knot-resolver are also less complex software, which results in advantages in terms of security: only three CVEs have been reported for knot-dns since 2011.
- pkg_resources module.
- groff 1.24.0 (the first upstream release since mid-2023, so a very large set of changes) into experimental.
- ruby-rbpdf, jekyll, origami-pdf, ruby-kdl, ruby-twitter, ruby-twitter-text, ruby-globalid.
12 February, 2026 12:00AM by Anupa Ann Joseph
Debusine is a tool designed for Debian developers and Operating System developers in general. You can try out Debusine on debusine.debian.net, and follow its development on salsa.debian.org.
This post describes how to write a new worker task for Debusine. It can be used to add tasks to a self-hosted Debusine instance, or to submit to the Debusine project new tasks to add new capabilities to Debusine.
Tasks are the lower-level pieces of Debusine workflows. Examples of tasks are Sbuild, Lintian, Debdiff (see the available tasks).
This post will document the steps to write a new basic worker task.
The example will add a worker task that runs
reprotest and creates an artifact of the
new type ReprotestArtifact with the reprotest log.
Tasks are usually used by workflows. Workflows solve high-level goals by creating and orchestrating different tasks (e.g. a Sbuild workflow would create different Sbuild tasks, one for each architecture).
A task usually does the following:
- fetches its input artifacts
- runs a command (e.g. lintian, debdiff, etc.). In this blog post, it will run reprotest
- reports a result of Success or Failure
- uploads the resulting artifacts
If you want to follow the tutorial and add the Reprotest task, your
Debusine development instance should have at least one worker, one user,
a debusine client set up, and permissions for the client to create tasks.
All of this can be set up by following the steps in the
Contribute section
of the documentation.
This blog post shows a functional Reprotest task. This task is not
currently part of Debusine. The Reprotest task implementation is simplified
(no error handling, unit tests, specific view, docs, some shortcuts in
the environment preparation, etc.). At some point,
in Debusine, we might add
a debrebuild task which is based on buildinfo files and uses
snapshot.debian.org to recreate the binary packages.
The input of the reprotest task will be a source artifact (a Debian source package). We model the input with pydantic in debusine/tasks/models.py:
class ReprotestData(BaseTaskDataWithExecutor):
"""Data for Reprotest task."""
source_artifact: LookupSingle
class ReprotestDynamicData(BaseDynamicTaskDataWithExecutor):
"""Reprotest dynamic data."""
source_artifact_id: int | None = None
The ReprotestData is what the user will input. A LookupSingle is a
lookup
that resolves to a single artifact.
We would also have configuration for the desired variations to test,
but we have left that out of this example for simplicity. Configuring variations
is left as an exercise for the reader.
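As a purely hypothetical illustration (not part of the code in this post), the task data could grow an extra field for the variations, which _cmdline() could then turn into a --vary= argument:
# Hypothetical extension of the example model shown above, not part of
# the code in this post: let the user choose which variations to apply.
class ReprotestData(BaseTaskDataWithExecutor):
    """Data for Reprotest task."""

    source_artifact: LookupSingle
    # e.g. ["-time", "-user_group"]; _cmdline() could join these into
    # a "--vary=..." argument instead of the hard-coded one used below
    variations: list[str] = []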
Since ReprotestData is a subclass of BaseTaskDataWithExecutor it
also contains environment where the user can specify in which environment
the task will run. The environment is an artifact with a Debian image.
The ReprotestDynamicData holds the resolution of all lookups. These
can be seen in the “Internals” tab of the work request view.
Reprotest artifact data class
In order for the reprotest task to create a new Artifact of the type
DebianReprotest with the log and output metadata: add the new category to
ArtifactCategory in debusine/artifacts/models.py:
REPROTEST = "debian:reprotest"
In the same file add the DebianReprotest class:
class DebianReprotest(ArtifactData):
"""Data for debian:reprotest artifacts."""
reproducible: bool | None = None
def get_label(self) -> str:
"""Return a short human-readable label for the artifact."""
return "reprotest analysis"
It could also include the package name or version.
In order to have the category listed in the work request output artifacts
table, edit the file debusine/db/models/artifacts.py: In
ARTIFACT_CATEGORY_ICON_NAMES add ArtifactCategory.REPROTEST: "folder",
and in ARTIFACT_CATEGORY_SHORT_NAMES add ArtifactCategory.REPROTEST: "reprotest",.
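Spelled out, the two additions would look something like this (a sketch; the existing entries in both dictionaries are elided):
# In debusine/db/models/artifacts.py (sketch; existing entries elided):
ARTIFACT_CATEGORY_ICON_NAMES = {
    # ... existing categories ...
    ArtifactCategory.REPROTEST: "folder",
}

ARTIFACT_CATEGORY_SHORT_NAMES = {
    # ... existing categories ...
    ArtifactCategory.REPROTEST: "reprotest",
}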
In debusine/tasks/ create a new file reprotest.py.
# Copyright © The Debusine Developers
# See the AUTHORS file at the top-level directory of this distribution
#
# This file is part of Debusine. It is subject to the license terms
# in the LICENSE file found in the top-level directory of this
# distribution. No part of Debusine, including this file, may be copied,
# modified, propagated, or distributed except according to the terms
# contained in the LICENSE file.
"""Task to use reprotest in debusine."""
from pathlib import Path
from typing import Any
from debusine import utils
from debusine.artifacts.local_artifact import ReprotestArtifact
from debusine.artifacts.models import (
ArtifactCategory,
CollectionCategory,
DebianSourcePackage,
DebianUpload,
WorkRequestResults,
get_source_package_name,
get_source_package_version,
)
from debusine.client.models import RelationType
from debusine.tasks import BaseTaskWithExecutor, RunCommandTask
from debusine.tasks.models import ReprotestData, ReprotestDynamicData
from debusine.tasks.server import TaskDatabaseInterface
class Reprotest(
RunCommandTask[ReprotestData, ReprotestDynamicData],
BaseTaskWithExecutor[ReprotestData, ReprotestDynamicData],
):
"""Task to use reprotest in debusine."""
TASK_VERSION = 1
CAPTURE_OUTPUT_FILENAME = "reprotest.log"
def __init__(
self,
task_data: dict[str, Any],
dynamic_task_data: dict[str, Any] | None = None,
) -> None:
"""Initialize object."""
super().__init__(task_data, dynamic_task_data)
self._reprotest_target: Path | None = None
def build_dynamic_data(
self, task_database: TaskDatabaseInterface
) -> ReprotestDynamicData:
"""Compute and return ReprotestDynamicData."""
input_source_artifact = task_database.lookup_single_artifact(
self.data.source_artifact
)
assert input_source_artifact is not None
self.ensure_artifact_categories(
configuration_key="input.source_artifact",
category=input_source_artifact.category,
expected=(
ArtifactCategory.SOURCE_PACKAGE,
ArtifactCategory.UPLOAD,
),
)
assert isinstance(
input_source_artifact.data, (DebianSourcePackage, DebianUpload)
)
subject = get_source_package_name(input_source_artifact.data)
version = get_source_package_version(input_source_artifact.data)
assert self.data.environment is not None
environment = self.get_environment(
task_database,
self.data.environment,
default_category=CollectionCategory.ENVIRONMENTS,
)
return ReprotestDynamicData(
source_artifact_id=input_source_artifact.id,
subject=subject,
parameter_summary=f"{subject}_{version}",
environment_id=environment.id,
)
def get_input_artifacts_ids(self) -> list[int]:
"""Return the list of input artifact IDs used by this task."""
if not self.dynamic_data:
return []
return [
self.dynamic_data.source_artifact_id,
self.dynamic_data.environment_id,
]
def fetch_input(self, destination: Path) -> bool:
"""Download the required artifacts."""
assert self.dynamic_data
artifact_id = self.dynamic_data.source_artifact_id
assert artifact_id is not None
self.fetch_artifact(artifact_id, destination)
return True
def configure_for_execution(self, download_directory: Path) -> bool:
"""
Find a .dsc in download_directory.
Install reprotest and other utilities used in _cmdline.
Set self._reprotest_target to it.
:param download_directory: where to search the files
:return: True if valid files were found
"""
self._prepare_executor_instance()
if self.executor_instance is None:
raise AssertionError("self.executor_instance cannot be None")
self.run_executor_command(
["apt-get", "update"],
log_filename="install.log",
run_as_root=True,
check=True,
)
self.run_executor_command(
[
"apt-get",
"--yes",
"--no-install-recommends",
"install",
"reprotest",
"dpkg-dev",
"devscripts",
"equivs",
"sudo",
],
log_filename="install.log",
run_as_root=True,
)
self._reprotest_target = utils.find_file_suffixes(
download_directory, [".dsc"]
)
return True
def _cmdline(self) -> list[str]:
"""
Build the reprotest command line.
Use configuration of self.data and self._reprotest_target.
"""
target = self._reprotest_target
assert target is not None
cmd = [
"bash",
"-c",
f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
"cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
"rm *.deb ; "
"reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
]
return cmd
@staticmethod
def _cmdline_as_root() -> bool:
r"""apt-get install --yes ./\*.deb must be run as root."""
return True
def task_result(
self,
returncode: int | None,
execute_directory: Path, # noqa: U100
) -> WorkRequestResults:
"""
Evaluate task output and return success.
For a successful run of reprotest:
-must have the output file
-exit code is 0
:return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
"""
reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME
if reprotest_file.exists() and returncode == 0:
return WorkRequestResults.SUCCESS
return WorkRequestResults.FAILURE
def upload_artifacts(
self, exec_directory: Path, *, execution_result: WorkRequestResults
) -> None:
"""Upload the ReprotestArtifact with the files and relationships."""
if not self.debusine:
raise AssertionError("self.debusine not set")
assert self.dynamic_data is not None
assert self.dynamic_data.parameter_summary is not None
reprotest_artifact = ReprotestArtifact.create(
reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
reproducible=execution_result == WorkRequestResults.SUCCESS,
package=self.dynamic_data.parameter_summary,
)
uploaded = self.debusine.upload_artifact(
reprotest_artifact,
workspace=self.workspace_name,
work_request=self.work_request_id,
)
assert self.dynamic_data is not None
assert self.dynamic_data.source_artifact_id is not None
self.debusine.relation_create(
uploaded.id,
self.dynamic_data.source_artifact_id,
RelationType.RELATES_TO,
)
Below are the main methods with some basic explanation.
In order for Debusine to discover the task, add "Reprotest"
in the file debusine/tasks/__init__.py in the __all__ list.
Let’s explain the different methods of the Reprotest class:
build_dynamic_data method
The worker has no access to Debusine’s database. Lookups are all resolved before the task gets dispatched to a worker, so all it has to do is download the specified input artifacts.
The build_dynamic_data method looks up the artifact, asserts that it has a valid category, extracts the package name and version, and gets the environment in which the task will be executed.
The environment is needed to run the task (reprotest will run
in a container using unshare, incus…).
def build_dynamic_data(
self, task_database: TaskDatabaseInterface
) -> ReprotestDynamicData:
"""Compute and return ReprotestDynamicData."""
input_source_artifact = task_database.lookup_single_artifact(
self.data.source_artifact
)
assert input_source_artifact is not None
self.ensure_artifact_categories(
configuration_key="input.source_artifact",
category=input_source_artifact.category,
expected=(
ArtifactCategory.SOURCE_PACKAGE,
ArtifactCategory.UPLOAD,
),
)
assert isinstance(
input_source_artifact.data, (DebianSourcePackage, DebianUpload)
)
subject = get_source_package_name(input_source_artifact.data)
version = get_source_package_version(input_source_artifact.data)
assert self.data.environment is not None
environment = self.get_environment(
task_database,
self.data.environment,
default_category=CollectionCategory.ENVIRONMENTS,
)
return ReprotestDynamicData(
source_artifact_id=input_source_artifact.id,
subject=subject,
parameter_summary=f"{subject}_{version}",
environment_id=environment.id,
)
get_input_artifacts_ids method
Used to list the task’s input artifacts in the web UI.
def get_input_artifacts_ids(self) -> list[int]:
"""Return the list of input artifact IDs used by this task."""
if not self.dynamic_data:
return []
assert self.dynamic_data.source_artifact_id is not None
return [self.dynamic_data.source_artifact_id]
fetch_input method
Download the required artifacts on the worker.
def fetch_input(self, destination: Path) -> bool:
"""Download the required artifacts."""
assert self.dynamic_data
artifact_id = self.dynamic_data.source_artifact_id
assert artifact_id is not None
self.fetch_artifact(artifact_id, destination)
return True
configure_for_execution method
Install the packages needed by the task and set _reprotest_target, which
is used to build the task’s command line.
def configure_for_execution(self, download_directory: Path) -> bool:
"""
Find a .dsc in download_directory.
Install reprotest and other utilities used in _cmdline.
Set self._reprotest_target to it.
:param download_directory: where to search the files
:return: True if valid files were found
"""
self._prepare_executor_instance()
if self.executor_instance is None:
raise AssertionError("self.executor_instance cannot be None")
self.run_executor_command(
["apt-get", "update"],
log_filename="install.log",
run_as_root=True,
check=True,
)
self.run_executor_command(
[
"apt-get",
"--yes",
"--no-install-recommends",
"install",
"reprotest",
"dpkg-dev",
"devscripts",
"equivs",
"sudo",
],
log_filename="install.log",
run_as_root=True,
)
self._reprotest_target = utils.find_file_suffixes(
download_directory, [".dsc"]
)
return True
_cmdline method
Return the command line to run the task.
In this case, and to keep the example simple, we will run reprotest
directly in the worker’s executor VM/container, without giving it an
isolated virtual server.
So, this command installs the build dependencies required by the package
(so reprotest can build it) and runs reprotest itself.
def _cmdline(self) -> list[str]:
"""
Build the reprotest command line.
Use configuration of self.data and self._reprotest_target.
"""
target = self._reprotest_target
assert target is not None
cmd = [
"bash",
"-c",
f"TMPDIR=/tmp ; cd /tmp ; dpkg-source -x {target} package/; "
"cd package/ ; mk-build-deps ; apt-get install --yes ./*.deb ; "
"rm *.deb ; "
"reprotest --vary=-time,-user_group,-fileordering,-domain_host .",
]
return cmd
Some reprotest variations are disabled. This is to keep the example simple with the set of packages to install and reprotest features.
_cmdline_as_root method
Since the task needs to install packages during execution, the command runs as root (in the container):
@staticmethod
def _cmdline_as_root() -> bool:
r"""apt-get install --yes ./\*.deb must be run as root."""
return True
task_result method
The task succeeds if a log is generated and the return code is 0.
def task_result(
self,
returncode: int | None,
execute_directory: Path, # noqa: U100
) -> WorkRequestResults:
"""
Evaluate task output and return success.
For a successful run of reprotest:
-must have the output file
-exit code is 0
:return: WorkRequestResults.SUCCESS or WorkRequestResults.FAILURE.
"""
reprotest_file = execute_directory / self.CAPTURE_OUTPUT_FILENAME
if reprotest_file.exists() and returncode == 0:
return WorkRequestResults.SUCCESS
return WorkRequestResults.FAILURE
upload_artifacts method
Create the ReprotestArtifact with the log and the reproducible boolean,
upload it, and then add a relation between the ReprotestArtifact
and the source package:
def upload_artifacts(
self, exec_directory: Path, *, execution_result: WorkRequestResults
) -> None:
"""Upload the ReprotestArtifact with the files and relationships."""
if not self.debusine:
raise AssertionError("self.debusine not set")
assert self.dynamic_data is not None
assert self.dynamic_data.parameter_summary is not None
reprotest_artifact = ReprotestArtifact.create(
reprotest_output=exec_directory / self.CAPTURE_OUTPUT_FILENAME,
reproducible=execution_result == WorkRequestResults.SUCCESS,
package=self.dynamic_data.parameter_summary,
)
uploaded = self.debusine.upload_artifact(
reprotest_artifact,
workspace=self.workspace_name,
work_request=self.work_request_id,
)
assert self.dynamic_data is not None
assert self.dynamic_data.source_artifact_id is not None
self.debusine.relation_create(
uploaded.id,
self.dynamic_data.source_artifact_id,
RelationType.RELATES_TO,
)
To run this task in a local Debusine (see steps to have it ready with an environment, permissions and users created) you can do:
$ python3 -m debusine.client artifact import-debian -w System http://deb.debian.org/debian/pool/main/h/hello/hello_2.10-5.dsc
(get the artifact ID from the output of that command)
The artifact can be seen in
http://$DEBUSINE/debusine/System/artifact/$ARTIFACTID/.
Then create a reprotest.yaml:
$ cat <<EOF > reprotest.yaml
source_artifact: $ARTIFACT_ID
environment: "debian/match:codename=bookworm"
EOF
Instead of debian/match:codename=bookworm it could use the artifact ID.
Finally, create the work request to run the task:
$ python3 -m debusine.client create-work-request -w System reprotest --data reprotest.yaml
Using the Debusine web UI you can see the work request, which should go to Running status, then Completed with Success or Failure (depending on whether reprotest could reproduce the package or not). The Output tab will show an artifact of type debian:reprotest with one file: the log. In the Metadata tab of the artifact you will find its Data: the package name and reproducible (true or false).
This was a simple example of creating a task. Other things that could be done:
- adding configuration for the reprotest variations
- running reprotest directly on the worker host, using the executor environment as a reprotest “virtual server”
- improving the environment preparation (prepare_environment)
- using the task from a workflow (e.g. QaWorkflow)
10 February, 2026 12:00AM by Carles Pina i Estany
About 80% of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via GitHub Sponsors (thanks!). If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian‘s services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people.
You can also support my work directly via Liberapay or GitHub Sponsors.
New upstream versions:
(three of these relate to pkg_resources)
Fixes for Python 3.14:
Fixes for pytest 9:
Porting away from the deprecated pkg_resources:
Other build/test failures:
global logged_msgs is unused: name is never assigned in scope (NMU)
I investigated several more build failures and suggested removing the packages in question:
Other bugs:
Alejandro Colomar reported that man(1) ignored the MANWIDTH environment variable in some circumstances. I investigated this and fixed it upstream.
I contributed an ubuntu-dev-tools patch to stop recommending sudo.
I added forky support to the images used in Salsa CI pipelines.
I began working on getting a release candidate of groff 1.24.0 into experimental, though haven’t finished that yet.
I worked on some lower-priority security updates for OpenSSH.
08 February, 2026 07:30PM by Colin Watson
Both R and Python make it reasonably easy to work with compiled extensions. But how to access objects in one environment from the other and share state or (non-trivial) objects remains trickier. Recently (and while r-forge was ‘resting’ so we opened GitHub Discussions) a question was asked concerning R and Python object pointer exchange.
This led to a pretty decent discussion including arrow interchange demos (pretty ideal if dealing with data.frame-alike objects), but once the focus is on more ‘library-specific’ objects from a given (C or C++, say) library it is less clear what to do, or how involved it may get.
R has external pointers, and these make it feasible to instantiate
the same object in Python. To demonstrate, I created a pair of
(minimal) packages wrapping a lovely (small) class from the excellent spdlog library by Gabi Melman, and more specifically
in an adapted-for-R version (to avoid some R CMD check
nags) in my RcppSpdlog
package. It is essentially a nicer/fancier C++ version of the
tic() and toc() timing scheme. When an object
is instantiated, it ‘starts the clock’, and when we access it later it
prints the time elapsed in microsecond resolution. In Modern C++ this
takes little more than keeping an internal chrono
object.
Which makes for a nice, small, yet specific object to pass to Python. So the R side of the package pair instantiates such an object, and accesses its address. For different reasons, sending a ‘raw’ pointer across does not work so well, but a string with the address printed works fabulously (and is a paradigm used around other packages so we did not invent this). Over on the Python side of the package pair, we then take this string representation and pass it to a little bit of pybind11 code to instantiate a new object. This can of course also expose functionality such as the ‘show time elapsed’ feature, either formatted or just numerically, of interest here.
And that is all that there is! Now this can be done from R as well
thanks to reticulate
as the demo() (also shown on the package README.md)
shows:
> library(chronometre)
> demo("chronometre", ask=FALSE)
demo(chronometre)
---- ~~~~~~~~~~~
> #!/usr/bin/env r
>
> stopifnot("Demo requires 'reticulate'" = requireNamespace("reticulate", quietly=TRUE))
> stopifnot("Demo requires 'RcppSpdlog'" = requireNamespace("RcppSpdlog", quietly=TRUE))
> stopifnot("Demo requires 'xptr'" = requireNamespace("xptr", quietly=TRUE))
> library(reticulate)
> ## reticulate and Python in general these days really want a venv so we will use one,
> ## the default value is a location used locally; if needed create one
> ## check for existing virtualenv to use, or else set one up
> venvdir <- Sys.getenv("CHRONOMETRE_VENV", "/opt/venv/chronometre")
> if (dir.exists(venvdir)) {
+ > use_virtualenv(venvdir, required = TRUE)
+ > } else {
+ > ## create a virtual environment, but make it temporary
+ > Sys.setenv(RETICULATE_VIRTUALENV_ROOT=tempdir())
+ > virtualenv_create("r-reticulate-env")
+ > virtualenv_install("r-reticulate-env", packages = c("chronometre"))
+ > use_virtualenv("r-reticulate-env", required = TRUE)
+ > }
> sw <- RcppSpdlog::get_stopwatch() # we use a C++ struct as example
> Sys.sleep(0.5) # imagine doing some code here
> print(sw) # stopwatch shows elapsed time
0.501220
> xptr::is_xptr(sw) # this is an external pointer in R
[1] TRUE
> xptr::xptr_address(sw) # get address, format is "0x...."
[1] "0x58adb5918510"
> sw2 <- xptr::new_xptr(xptr::xptr_address(sw)) # cloned (!!) but unclassed
> attr(sw2, "class") <- c("stopwatch", "externalptr") # class it .. and then use it!
> print(sw2) # `xptr` allows us close and use
0.501597
> sw3 <- ch$Stopwatch( xptr::xptr_address(sw) ) # new Python object via string ctor
> print(sw3$elapsed()) # shows output via Python I/O
datetime.timedelta(microseconds=502013)
> cat(sw3$count(), "\n") # shows double
0.502657
> print(sw) # object still works in R
0.502721
>
The same object, instantiated in R, is used in Python and thereafter again in R. While this object here is minimal in features, the concept of passing a pointer is universal. We could use it for any interesting object that R can access and Python too can instantiate. Obviously, there be dragons as we pass pointers, so one may want to ascertain that headers from corresponding compatible versions are used, etc., but the principle is unaffected and should just work.
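For readers coming from the Python side, the reticulate calls in the demo map to something like the following plain-Python session (assuming the PyPI package exposes a chronometre module with the Stopwatch class used above; the address string is whatever R reported via xptr::xptr_address()):
# Hypothetical plain-Python counterpart of the reticulate calls in the demo
# above; assumes the PyPI package installs a module named `chronometre`.
import chronometre

addr = "0x58adb5918510"            # address string exported from R via xptr::xptr_address()
sw = chronometre.Stopwatch(addr)   # re-instantiate the C++ stopwatch from its address
print(sw.elapsed())                # a datetime.timedelta, as in the demo output
print(sw.count())                  # elapsed time as a plain double (seconds)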
Both parts of this pair of packages are now at the corresponding repositories: PyPI and CRAN. As I commonly do here on package (change) announcements, I include the (minimal so far) set of high-level changes for the R package.
Changes in version 0.0.2 (2026-02-05)
- Removed replaced unconditional virtualenv use in demo given preceding conditional block
- Updated README.md with badges and an updated demo
Changes in version 0.0.1 (2026-01-25)
- Initial version and CRAN upload
Questions, suggestions, bug reports, … are welcome at either the (now awoken from the R-Forge slumber) Rcpp mailing list or the newer Rcpp Discussions.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
I have unearthed a few old articles typed during my adolescence, between 1996 and 1998. Unremarkable at the time, these pages now compose, three decades later, the chronicle of a vanished era.1
The word “blog” does not exist yet. Wikipedia remains to come. Google has not been born. AltaVista reigns over searches, while already struggling to embrace the nascent immensity of the web2. To meet someone, you had to agree in advance and prepare your route on paper maps. 🗺️
The web is taking off. The CSS specification has just emerged, HTML tables still serve for page layout. Cookies and advertising banners are making their appearance. Pages are adorned with music and videos, forcing browsers to arm themselves with plugins. Netscape Navigator sits on 86% of the territory, but Windows 95 now bundles Internet Explorer to quickly catch up. Facing this offensive, Netscape open-sources its browser.
France falls behind. Outside universities, Internet access remains expensive and laborious. Minitel still reigns, offering phone directory, train tickets, remote shopping. This was not yet possible with the Internet: buying a CD online was a pipe dream. Encryption suffers from inappropriate regulation: the DES algorithm is capped at 40 bits and cracked in a few seconds.
These pages bear the trace of the web’s adolescence. Thirty years have passed. The same battles continue: data selling, advertising, monopolies.
Most articles linked here are not translated from French to English. ↩︎
I recently noticed that Google no longer fully indexes my blog. For example, it is no longer possible to find the article on lanĉo. I assume this is a consequence of the explosion of AI-generated content or a change in priorities for Google. ↩︎
08 February, 2026 02:51PM by Vincent Bernat
This was my hundred-thirty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian (as the LTS- and ELTS-teams have been merged now, there is only one paragraph left for both activities).
During my allocated time I uploaded or worked on:
I also attended the monthly LTS/ELTS meeting. While working on updates, I stumbled upon packages whose CVEs have been postponed for a long time and whose CVSS score was rather high. I wonder whether one should pay more attention to postponed issues; otherwise one could have already marked them as ignored.
Unfortunately I didn’t find any time to work on this topic.
This month I worked on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independently of the platform used.
This work is generously funded by Fre(i)e Software GmbH!
This month I uploaded a new upstream version or a bugfix version of:
Unfortunately I didn’t find any time to work on this topic.
Unfortunately I didn’t find any time to work on this topic.
This month I uploaded a new upstream version or a bugfix version of:
Unfortunately this month I was distracted from my normal Debian work by other unpleasant things, so the paragraphs above are mostly empty. I now have to think about how much of my spare time I am able to dedicate to Debian in the future.
08 February, 2026 01:25PM by alteholz
Another year of data from Société de Transport de Montréal, Montreal's transit agency!
A few highlights this year:
Although the Saint-Michel station closed for emergency repairs in November 2024, traffic never bounced back to its pre-closure levels and is still stuck somewhere around 2022 Q2 levels. I wonder if this could be caused by the roadwork on Jean-Talon for the new Blue Line stations making it harder for folks in Montreal-Nord to reach the station by bus.
The opening of the Royalmount shopping center has had a durable impact on the traffic at the De la Savane station. I reported on this last year, but it seems this wasn't just a fad.
With the completion of the Deux-Montagnes branch of the Réseau express métropolitain (REM, a light-rail, above the surface transit network still in construction), the transfer stations to the Montreal subway have seen major traffic increases. The Édouard-Montpetit station has nearly reached its previous all-time record of 2015 and the McGill station has recovered from the general slump all the other stations have had in 2025.
The Assomption station, which used to have one of the lowest ridership numbers of the subway network, has seen tremendous growth in the past few years. This is mostly explained by the many high-rise projects that were built around the station since the end of the COVID-19 pandemic.
Although still affected by very high seasonality, the Jean-Drapeau station broke its previous record of 2019, a testament to the continued drawing power of the various summer festivals taking place on the Sainte-Hélène and Notre-Dame islands.
More generally, it seems the Montreal subway has had a pretty bad year. Traffic had been slowly climbing back since the COVID-19 pandemic, but this is the first year since 2020 such a sharp decline can be witnessed. Even major stations like Jean-Talon or Lionel-Groulx are on a downward trend and it is pretty worrisome.
As for causes, a few things come to mind. First of all, as the number of Montrealers commuting to work by bike continues to rise1, a modal shift from public transit to active mobility is to be expected. As local experts put it, this is not uncommon and has been seen in other cities before.
Another important factor that certainly turned people away from the subway this year has been the impacts of the continued housing crisis in Montreal. As more and more people get kicked out of their apartments, many have been seeking refuge in the subway stations to find shelter.
Sadly, this also brought an unprecedented wave of incivilities. As riders' sense of security sharply decreased, the STM eventually resorted to banning unhoused people from sheltering in the subway. This decision did bring back some peace to the network, but one can posit the damage had already been done and many casual riders are still avoiding the subway for this reason.
Finally, the weeks-long STM workers' strike in Q4 had an important impact on overall traffic, as it severely reduced the opening hours of the subway. As with the previous point, once people find alternative ways to get around, it's always harder to bring them back.
Hopefully, my 2026 report will be a more cheerful one...
By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.