KDE Needs You!
* KDE Randa Meetings and make a donation!
I know that my contributions to KDE are minimal at this stage, but hey, I’m doing my part this time for sure!
19 December, 2025 01:44PM by કાર્તિક

In January 2024 I wrote about the insanity of the magnificent seven dominating the MSCI World Index, and I wondered how long the number could continue to go up. It has continued to surge upward at an accelerating pace, which makes me worry that a crash is likely closer. As a software professional, I decided to analyze whether using stop-loss orders could be a reliable way to automate avoiding deep drawdowns.
As everyone with some savings in the stock market (hopefully) knows, the stock market eventually experiences crashes. It is just a matter of when and how deep the crash will be. Staying on the sidelines for years is not a good investment strategy, as inflation will erode the value of your savings. Assuming the current true inflation rate is around 7%, a restaurant dinner that costs 20 euros today will cost about 24.50 euros in three years. Savings of 1000 euros today would drop in purchasing power from 50 dinners to only about 40 dinners in three years.
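A quick back-of-the-envelope check of those numbers in Python (the 7% rate, the 20-euro dinner and the 1000 euros of savings are the figures from above):

inflation = 0.07     # assumed "true" inflation rate from the text
dinner_today = 20.0  # euros
savings = 1000.0     # euros

dinner_in_3y = dinner_today * (1 + inflation) ** 3
print(round(dinner_in_3y, 2))             # ~24.50 euros per dinner
print(savings / dinner_today)             # 50.0 dinners today
print(round(savings / dinner_in_3y, 1))   # ~40.8 dinners in three years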
Hence, if you intend to retain the value of your dear savings, they need to be invested in something that grows in value. Most people try to beat inflation by buying shares in stable companies, directly or via broad market ETFs. These historically grow faster than inflation during normal years, but likely drop in value during recessions.
What if you could buy stocks to benefit from their value increasing without having to worry about a potential crash? All modern online stock brokers have a feature called stop-loss, where you can enter a price at which your stocks automatically get sold if they drop down to that price. A trailing stop-loss order is similar, but instead of a fixed price, you enter a margin (e.g. 10%). If the stock price rises, the stop-loss price will trail upwards by that margin.
For example, if you buy a share at 100 euros and it has risen to 110 euros, you can set a 10% trailing stop-loss order which automatically sells it if the price drops 10% from the peak of 110 euros, i.e. at 99 euros. Thus no matter what happens, you lose only 1 euro. And if the stock price continues to rise to 150 euros, the trailing stop-loss would automatically readjust to 150 euros minus 10%, which is 135 euros (150-15=135). If the price then dropped to 135 euros, you would lock in a gain of 35 euros, which is not the peak price of 150 euros, but still better than whatever the price falls to as a result of a large crash.
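To make the mechanics concrete, here is a minimal Python sketch of the trailing logic (only an illustration using the numbers from the example above; the 10% margin is the single parameter):

def trailing_stop(prices, margin=0.10):
    """Return the final stop level and whether it was hit."""
    peak = prices[0]
    stop = peak * (1 - margin)
    for price in prices:
        peak = max(peak, price)       # the stop trails the highest price seen so far
        stop = peak * (1 - margin)
        if price <= stop:
            return stop, True         # the order triggers and the position is sold
    return stop, False

print(trailing_stop([100, 110, 99]))    # sold at ~99 euros, a loss of about 1 euro
print(trailing_stop([100, 150, 135]))   # sold at ~135 euros, locking in a gain of ~35 euros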
In the simple case above it obviously makes sense in theory, but it might not make sense in practice. Prices constantly oscillate, so you don’t want a margin that is too small, otherwise you exit too early. Conversely, having a large margin may result in too large of a drawdown before exiting. If markets crash rapidly, it might be that nobody buys your stocks at the stop-loss price and the shares have to be sold at an even lower price. Also, what will you do once the position is sold? The reason you invested in the stock market was to avoid holding cash, so would you buy the same stock back when the crash bottoms? But how will you know when the bottom has been reached?
I am not a professional investor, and nobody should take investment advice from me. However, I know what backtesting is and how to leverage open source software. So, I wrote a Python script to test if the trading strategy of using trailing stop-loss orders with specific margin values would have worked for a particular stock.
First you need to have data. yfinance is a handy Python library that can be used to download the historical price data for any stock ticker on Yahoo Finance. Then you need to manipulate the data. Pandas is the Python data analysis library with advanced data structures for working with relational or labeled data. Finally, to visualize the results, I used Lightweight Charts, which is a fast, interactive library for rendering financial charts, allowing you to plot the stock price, the trailing stop-loss line, and the points where trades would have occurred. I really like how the zoom is implemented in Lightweight Charts, which makes drilling into the datapoints feel effortless.
The full solution is not polished enough to be published for others to use, but you can piece together your own by reusing some of the key snippets. To avoid re-downloading the same data repeatedly, I implemented a small caching wrapper that saves the data locally (as Parquet files):
import pathlib
from datetime import datetime

import pandas
import yfinance

# Example configuration; the original script defines these elsewhere
TICKER = "BNP.PA"
START_DATE = "2014-01-01"
CACHE_DIR = pathlib.Path("cache")

CACHE_DIR.mkdir(parents=True, exist_ok=True)
end_date = datetime.today().strftime("%Y-%m-%d")
cache_file = CACHE_DIR / f"{TICKER}-{START_DATE}--{end_date}.parquet"

if cache_file.is_file():
    dataframe = pandas.read_parquet(cache_file)
    print(f"Loaded price data from cache: {cache_file}")
else:
    dataframe = yfinance.download(
        TICKER,
        start=START_DATE,
        end=end_date,
        progress=False,
        auto_adjust=False
    )
    dataframe.to_parquet(cache_file)
    print(f"Fetched new price data from Yahoo Finance and cached to: {cache_file}")

The dataframe is a Pandas object with a powerful API. For example, to print a snippet from the beginning and the end of the dataframe to see what the data looks like, you can use:
print("First 5 rows of the raw data:")
print(df.head())
print("Last 5 rows of the raw data:")
print(df.tail())

Example output:
First 5 rows of the raw data:
Price        Adj Close      Close       High        Low       Open   Volume
Ticker          BNP.PA     BNP.PA     BNP.PA     BNP.PA     BNP.PA   BNP.PA
Date
2014-01-02   29.956285  55.540001  56.910000  55.349998  56.700001   316552
2014-01-03   30.031801  55.680000  55.990002  55.290001  55.580002   210044
2014-01-06   30.080338  55.770000  56.230000  55.529999  55.560001   185142
2014-01-07   30.943321  57.369999  57.619999  55.790001  55.880001   370397
2014-01-08   31.385597  58.189999  59.209999  57.750000  57.790001   489940

Last 5 rows of the raw data:
Price        Adj Close      Close       High        Low       Open   Volume
Ticker          BNP.PA     BNP.PA     BNP.PA     BNP.PA     BNP.PA   BNP.PA
Date
2025-12-11   78.669998  78.669998  78.919998  76.900002  76.919998   357918
2025-12-12   78.089996  78.089996  80.269997  78.089996  79.470001   280477
2025-12-15   79.080002  79.080002  79.449997  78.559998  78.559998   233852
2025-12-16   78.860001  78.860001  79.980003  78.809998  79.430000   283057
2025-12-17   80.080002  80.080002  80.150002  79.080002  79.199997   262818

Adding new columns to the dataframe is easy. For example, I used a custom function to calculate the relative strength index (RSI). To add a new “RSI” column with a value for every row, based on the price from that row, only one line of code is needed, without custom loops:
df["RSI"] = compute_rsi(df["price"], period=14)

After manipulating the data, the series can be converted into an array structure and printed as JSON into a placeholder in an HTML template:
baseline_series = [
    {"time": ts, "value": val}
    for ts, val in df_plot[["timestamp", BASELINE_LABEL]].itertuples(index=False)
]
baseline_json = json.dumps(baseline_series)

# Load the Jinja2 template from disk and render it with the JSON payloads
with open("template.html", encoding="utf-8") as f:
    template = jinja2.Template(f.read())
rendered_html = template.render(
    title=title,
    heading=heading,
    description=description_html,
    ...
    baseline_json=baseline_json,
    ...
)
with open("report.html", "w", encoding="utf-8") as f:
    f.write(rendered_html)
print("Report generated!")

In the HTML template, the marker {{ variable }} in Jinja syntax gets replaced with the actual JSON:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>{{ title }}</title>
    ...
</head>
<body>
    <h1>{{ heading }}</h1>
    <div id="chart"></div>
    <script>
        // Ensure the DOM is ready before we initialise the chart
        document.addEventListener('DOMContentLoaded', () => {
            // Parse the JSON data passed from Python
            const baselineData = {{ baseline_json | safe }};
            const strategyData = {{ strategy_json | safe }};
            const markersData = {{ markers_json | safe }};

            // Create the chart – use a unique variable name to avoid any clash with the DOM element ID
            const chart = LightweightCharts.createChart(document.getElementById('chart'), {
                width: document.getElementById('chart').clientWidth,
                height: 500,
                layout: {
                    background: { color: "#222" },
                    textColor: "#ccc"
                },
                grid: {
                    vertLines: { color: "#555" },
                    horzLines: { color: "#555" }
                }
            });

            // Add baseline series
            const baselineSeries = chart.addLineSeries({
                title: '{{ baseline_label }}',
                lastValueVisible: false,
                priceLineVisible: false,
                priceLineWidth: 1
            });
            baselineSeries.setData(baselineData);
            baselineSeries.priceScale().applyOptions({
                entireTextOnly: true
            });

            // Add strategy series
            const strategySeries = chart.addLineSeries({
                title: '{{ strategy_label }}',
                lastValueVisible: false,
                priceLineVisible: false,
                color: '#FF6D00'
            });
            strategySeries.setData(strategyData);

            // Add buy/sell markers to the strategy series
            strategySeries.setMarkers(markersData);

            // Fit the chart to show the full data range (full zoom out)
            chart.timeScale().fitContent();
        });
    </script>
</body>
</html>

There are also Python libraries built specifically for backtesting investment strategies, such as Backtrader and Zipline, but they do not seem to be actively maintained, and they probably have too many features and too much complexity compared to what I needed for this simple test.
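For reference, the core of such a simple test is little more than a loop over the daily closing prices. The sketch below is only a minimal illustration of the idea, not the full script used for the results discussed here; in particular the re-entry rule (buying back once the price has rebounded a fixed percentage from its low) is an assumption chosen for illustration:

def backtest_trailing_stop(closes, margin=0.10, reentry=0.05):
    """Portfolio value of a trailing stop-loss strategy, marked to market daily.

    closes: a list of daily closing prices.
    Assumes all-in/all-out trades, no fees, and fills exactly at the closing price.
    """
    shares, cash = 1.0, 0.0        # start fully invested with one share
    peak = closes[0]               # highest price seen while invested
    low = None                     # lowest price seen while waiting in cash
    values = []
    for price in closes:
        if shares > 0:
            peak = max(peak, price)
            if price <= peak * (1 - margin):   # trailing stop hit: sell everything
                cash, shares, low = shares * price, 0.0, price
        else:
            low = min(low, price)
            if price >= low * (1 + reentry):   # price has rebounded: buy back in
                shares, cash, peak = cash / price, 0.0, price
        values.append(cash + shares * price)   # daily portfolio value
    return values

Sweeping the margin (and whatever re-entry rule you prefer) over a range of values and comparing the resulting curve against a buy-and-hold baseline is essentially what the experiments below boil down to.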
The screenshot below shows an example of backtesting a strategy on the Waste Management Inc stock from January 2015 to December 2025. The baseline “Buy and hold” scenario is shown as the blue line and it fully tracks the stock price, while the orange line shows how the strategy would have performed, with markers for the sells and buys along the way.
I experimented with multiple strategies and tested them with various parameters, but I don’t think I found a strategy that was consistently and clearly better than just buy-and-hold.
It basically boils down to the fact that I was not able to find any way to calculate when the crash has bottomed based on historical data. You can only know in hindsight that the price has stopped dropping and is on a steady path to recovery, but at that point it is already too late to buy in. In my testing, most strategies underperformed buy-and-hold because they sold when the crash started, but bought back after it recovered at a slightly higher price.
In particular, when using narrow margins and selling on a 3-6% drawdown, the strategy performed very badly, as those small dips tend to recover in a few days. Essentially, the strategy kept repeating the pattern of selling 100 shares at a 6% discount, then being able to buy back only 94 shares the next day, then again selling 94 shares at a 6% discount and only being able to buy back maybe 90 shares after recovery, and so forth, never catching up to buy-and-hold.
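The erosion compounds quickly; here is a rough sketch of the share-count arithmetic from the paragraph above (assuming, for simplicity, that each round trip sells 6% below the price at which the shares are later bought back):

shares = 100.0
for round_trip in range(1, 6):
    shares *= 0.94               # sell 6% below the eventual buy-back price
    print(round_trip, round(shares, 1))
# prints roughly 94.0, 88.4, 83.1, 78.1, 73.4: the gap to buy-and-hold only widens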
The strategy worked better in large market crashes as they tended to last longer, and there were higher chances of buying back the shares while the price was still low. For example, in the 2020 crash selling at a 20% drawdown was a good strategy, as the stock I tested dropped nearly 50% and remained low for several weeks, so the strategy bought back the stocks while the price was still low and had not yet started to climb significantly. But that was just a lucky coincidence, as the delta between the trailing stop-loss margin of 20% and the total crash of 50% was large enough. If the crash had been only 25%, the strategy would have missed the rebound and ended up buying back the stocks at a slightly higher price.
Also, note that the simulation assumes that the trade itself is too small to affect the price formation. We should keep in mind that in reality, if a lot of people have stop-loss orders in place, a large price drop would trigger all of them, and create a flood of sales orders, which in turn would affect the price and drive it lower even faster and deeper. Luckily, it seems that stop-loss orders are generally not a good strategy, and we don’t need to fear that too many people would be using them.
Even though using a trailing stop-loss strategy does not seem to help in getting consistently higher returns based on my backtesting, I would still say it is useful in protecting from the downside of stock investing. It can act as a kind of “insurance policy” that considerably decreases the chances of losing big while increasing the chances of losing a little bit. If you are risk-averse, which I think I probably am, this tradeoff can make sense. I’d rather avoid an initial 50% loss, even at the cost of ending up about 3% behind after the recovery, than have to sit through weeks or months with a 50% loss before the price recovers to prior levels.
Most notably, the trailing stop-loss strategy works best if used only once. If it is repeated multiple times, the small losses in gains will compound into big losses overall.
Thus, I think I might actually put this automation in place at least on the stocks in my portfolio that have had the highest gains. If they keep going up, I will ride along, but once the crash happens, I will be out of those particular stocks permanently.
Do you have a favorite open source investment tool or are you aware of any strategy that actually works? Comment below!

A new release of my mixed collection of things package dang arrived at CRAN earlier today. The dang package regroups a few functions of mine that had no other home, as for example lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor blogged about as well as the checkCRANStatus() function tweeted about by Tim Taylor. And more, so take a look.
This release retires two functions: the social media site nobody ever visits anymore shut down its API too, so there is no longer a way to mute posts by a given handle. Similarly, the (never official) ability by Google to supply financial data is no more, so the function to access data this way is gone too. But we also have two new ones: one that helps with CRAN entries for ORCiD ids, and another little helper to re-order microbenchmark results by a summary column (defaulting to the median). There are also the usual updates to continuous integration, a switch to Authors@R (which will result in CRAN nagging me less about this), and another argument update.
The detailed NEWS entry follows.
Changes in version 0.0.17 (2025-12-18)
- Added new function reorderMicrobenchmarkResults with alias rmr
- Use tolower on email argument to checkCRANStatus
- Added new function cranORCIDs bootstrapped from two emails by Kurt Hornik
- Switched to using Authors@R in DESCRIPTION and added ORCIDs where available
- Switched to r-ci action with included bootstrap step; updated the checkout action (twice); added (commented-out) log accessor
- Removed googleFinanceData as the (unofficial) API access point no longer works
- Removed muteTweeters because the API was turned off
Via my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
We announced a public beta of Debusine repositories recently (Freexian blog, debian-devel-announce). One thing I’m very keen on is being able to use these to prepare “transitions”: changes to multiple packages that need to be prepared together in order to land in testing. As I said in my DebConf25 talk:
We have distribution-wide CI in unstable, but there’s only one of it and it’s shared between all of us. As a result it’s very possible to get into tangles when multiple people are working on related things at the same time, and we only avoid that as much as we do by careful coordination such as transition bugs. Experimental helps, but again, there’s only one of it and setting up another one is far from trivial.
So, what we want is a system where you can run experiments on possible Debian changes at a large scale without a high setup cost and without fear of breaking things for other people. And then, if it all works, push the whole lot into Debian.
Time to practice what I preach.
The setup process is documented on the Debian
wiki. You need to
decide whether you’re working on a short-lived experiment, in which case
you’ll run the create-experiment workflow and your workspace will expire
after 60 days of inactivity, or something that you expect to keep around for
longer, in which case you’ll run the create-repository workflow. Either
one of those will create a new workspace for you. Then, in that workspace,
you run debusine archive suite create for whichever suites you want to
use. For the case of a transition that you plan to land in unstable, you’ll
most likely use create-experiment and then create a single suite with the
pattern sid-<name>.
The situation I was dealing with here was moving to
Pylint 4. Tests showed that we
needed this as part of adding Python 3.14 as a supported Python version, and
I knew that I was going to need newer upstream versions of the astroid and
pylint packages. However, I wasn’t quite sure what the fallout of a new
major version of pylint was going to be. Fortunately, the Debian Python
ecosystem has pretty good autopkgtest coverage, so I thought I’d see what
Debusine said about it. I created an experiment called cjwatson-pylint
(resulting in
https://debusine.debian.net/debian/developers-cjwatson-pylint/ - I’m not
making that a proper link since it will expire in a couple of months) and a
sid-pylint suite in it.
From this starting point, the basic cycle involved uploading each package I’d prepared like this:
$ dput -O debusine_workspace=developers-cjwatson-pylint \
-O debusine_workflow=publish-to-sid-pylint \
debusine.debian.net foo.changes
I could have made a new dput-ng profile to cut down on typing, but it
wasn’t worth it here.
Then I looked at the workflow results, figured out which other packages I needed to fix based on those, and repeated until the whole set looked coherent. Debusine automatically built each upload against whatever else was currently in the repository, as you’d expect.
I should probably have used version numbers with tilde suffixes (e.g.
4.0.2-1~test1) in case I needed to correct anything, but fortunately that
was mostly unnecessary. I did at least run initial test-builds locally of
just the individual packages I was directly changing to make sure that they
weren’t too egregiously broken, just because I usually find it quicker to
iterate that way.
I didn’t take screenshots as I was going along, but here’s what the list of top-level workflows in my workspace looked like by the end:

You can see that not all of the workflows are successful. This is because we currently just show everything in every workflow; we don’t consider whether a task was retried and succeeded on the second try, or whether there’s now a newer version of a reverse-dependency so tests of the older version should be disregarded, and so on. More fundamentally, you have to look through each individual workflow, which is a bit of a pain: we plan to add a dashboard that shows you the current state of a suite as a whole rather than the current workflow-oriented view, but we haven’t started on that yet.
Drilling down into one of these workflows, it looks something like this:

This was the first package I uploaded. The first pass of failures told me
about pylint (expected), pylint-flask (an obvious consequence), and
python-sphinx-autodoc2 and sphinx-autoapi (surprises). The slightly odd
pattern of failures and errors is because I retried a few things, and we
sometimes report retries in a slightly strange way, especially when there
are workflows involved that might not be able to resolve their input
parameters any more.
The next level was:

Again, there were some retries involved here, and also some cases where packages were already failing in unstable so the failures weren’t the fault of my change; for now I had to go through and analyze these by hand, but we’ll soon have regression tracking to compare with reference runs and show you where things have got better or worse.
After excluding those, that left pytest-pylint (not caused by my changes,
but I fixed it anyway in unstable to clear out some noise) and spyder.
I’d seen people talking about spyder on #debian-python recently, so after
a bit of conversation there I sponsored a rope upload by Aeliton Silva,
upgraded python-lsp-server, and patched spyder. All those went into my
repository too, exposing a couple more tests I’d forgotten in spyder.
Once I was satisfied with the results, I uploaded everything to unstable. The next day, I looked through the tracker as usual starting from astroid, and while there are some test failures showing up right now it looks as though they should all clear out as pieces migrate to testing. Success!
We still have some way to go before this is a completely smooth experience that I’d be prepared to say that every developer can and should be using; there are all sorts of fit-and-finish issues that I can easily see here. Still, I do think we’re at the point where a tolerant developer can use this to deal with the common case of a mid-sized transition, and get more out of it than they put in.
Without Debusine, either I’d have had to put much more effort into searching for and testing reverse-dependencies myself, or (more likely, let’s face it) I’d have just dumped things into unstable and sorted them out afterwards, resulting in potentially delaying other people’s work. This way, everything was done with as little disruption as possible.
This works best when the packages likely to be involved have reasonably good autopkgtest coverage (even if the tests themselves are relatively basic). This is an increasingly good bet in Debian, but we have plans to add installability comparisons (similar to how Debian’s testing suite works) as well as optional rebuild testing.
If this has got you interested, please try it out for yourself and let us know how it goes!
18 December, 2025 01:21PM by Colin Watson
21 years ago today I wrote my first blog post. Did I think I’d still be writing all this time later? I’ve no idea to be honest. I’ve always had the impression my readership is small, and people who mostly know me in some manner, and I post to let them know what I’m up to in more detail than snippets of IRC conversation can capture. Or I write to make notes for myself (I frequently refer back to things I’ve documented here). I write less about my personal life than I used to, but I still occasionally feel the need to mark some event.
From a software PoV I started out with Blosxom, migrated to MovableType in 2008, ditched that, when the Open Source variant disappeared, for Jekyll in 2015 (when I also started putting it all in git). And have stuck there since. The static generator format works well for me, and I outsource comments to Disqus - I don’t get a lot, I can’t be bothered with the effort of trying to protect against spammers, and folk who don’t want to use it can easily email or poke me on the Fediverse. If I ever feel the need to move from Jekyll I’ll probably take a look at Hugo, but thankfully at present there’s no push factor to switch.
It’s interesting to look at my writing patterns over time. I obviously started keen, and peaked with 81 posts in 2006 (I’ve no idea how on earth that happened), while 2013 had only 2. Generally I write less when I’m busy, or stressed, or unhappy, so it’s kinda interesting to see how that lines up with various life events.

During that period I’ve lived in 10 different places (well, 10 different houses/flats, I think it’s only 6 different towns/cities), on 2 different continents, working at 6 different employers, as well as a period where I was doing my Masters in law. I’ve travelled around the world, made new friends, lost contact with folk, started a family. In short, I have lived, even if lots of it hasn’t made it to these pages.
At this point, do I see myself stopping? No, not really. I plan to still be around, like Flameeyes, to the end. Even if my posts are unlikely to hit the frequency from back when I started out.
exfatprogs 1.3.0 added a new defrag.exfat utility which
turned out to be unreliable
and can cause data loss.
exfatprogs 1.3.1 disabled the utility,
and I followed that decision with the upload to Debian/unstable yesterday. But as usual
it will take some time until it migrates to testing. Thus, if you use testing, do not try defrag.exfat!
At least not without a vetted and current backup.
Besides that, there is a compatibility issue with the way mkfs.exfat, as
shipped in trixie (exfatprogs 1.2.9), handles drives which have a physical sector
size of 4096 bytes but emulate a logical size of 512 bytes. With exfatprogs 1.2.6 a
change was implemented
to prefer the physical sector size on those devices. That turned out to be incompatible
with Windows, and was reverted
in exfatprogs 1.3.0. Sadly John Ogness ran into the issue
and spent some time debugging it. I have to admit that I missed the relevance of that change.
Huge kudos to John for the bug report. Based on that I prepared an update for the next
trixie point release.
If you hit that issue on trixie with exfatprogs 1.2.9-1 you can work around it by formatting
with mkfs.exfat -s 512 /dev/sdX to get Windows compatibility. If you use
exfatprogs 1.2.9-1+deb13u1 or later, want the performance gain back, and do not need
Windows compatibility, you can format with mkfs.exfat -s 4096 /dev/sdX.

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1272 other packages on CRAN, downloaded 43.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 661 times according to Google Scholar.
This version updates to the 15.2.3 upstream Armadillo release from yesterday. It brings minor changes over the RcppArmadillo 15.2.2 release made last month (and described in this post). As noted previously, and due to both the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 ‘legacy’ Armadillo yet offering the current version as the default. If and when CRAN will have nudged (nearly) all maintainers away from C++11 (and now also C++14 !!) we can remove the fallback. Our offer to help with the C++ modernization still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups all the resources for the C++11 transition.
There were no R-side changes in this release. The detailed changes since the last release follow.
Changes in RcppArmadillo version 15.2.3-1 (2025-12-16)
- Upgraded to Armadillo release 15.2.3 (Medium Roast Deluxe)
- Faster .resize() for vectors
- Faster repcube()
Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
unsubstantiated character assassination, and consequently awarded me significant damages. That's not what this post is about, as such. It's about the sole meaningful claim made that tied me to the abuse:
"The facts linking the Claimant to the sock puppet accounts include, on the IRC network: simultaneous dropped connections to the mjg59_ and elusive_woman accounts. This is so unlikely to be coincidental that the natural inference is that the same person posted under both names."
"elusive_woman" here is an account linked to the harassment, and "mjg59_" is me. This is actually a surprisingly interesting claim to make, and it's worth going into in some more detail.
The IRC logs show a line reading *elusive_woman has quit (Ping timeout: 2m30s), followed by one reading *mjg59_ has quit (Ping timeout: 2m30s). The timestamp listed for the first is 09:52, and for the second 09:53. Is that actually simultaneous? We can actually gain some more information - if you hover over the timestamp links on the right hand side you can see that the link is actually accurate to the second even if that's not displayed. The first event took place at 09:52:52, and the second at 09:53:03. That's 11 seconds apart, which is clearly not simultaneous, but maybe it's close enough. Figuring out more requires knowing what a "ping timeout" actually means here.
Today, the Debusine developers launched Debusine repositories, a beta implementation of PPAs. In the announcement, Colin remarks that "[d]iscussions about this have been happening for long enough that people started referring to PPAs for Debian as 'bikesheds'"; a characterization that I'm sure most will agree with.
So it is with great amusement that on this same day, I launch a second PPA implementation for Debian: Simple-PPA.
Simple-PPA was never meant to compete with Debusine, though. In fact, it's entirely the opposite: from discussions at DebConf, I knew that it was only a matter of time until Debusine gained a PPA-like feature, but I needed a stop-gap solution earlier, and with some polish, what was once my Python script already doing APT processing for apt.ai.debian.net recently became Simple-PPA.
Consequently, Simple-PPA lacks (and will always lack) all of the features that Debusine offers: there is no auto-building, no CI, nor any other type of QA. It's the simplest possible type of APT repository: you just upload packages, they get imported into an archive, and the archive is exposed via a web server. Under the hood, reprepro does all the heavy lifting.
However, this also means it's trivial to set up. The following is the entire configuration that simple-ppa.debian.net started with:
# simple-ppa.conf
[CORE]
SignWith = 2906D748B7551BC8
ExportDir = /srv/www/simple-ppa
MailFrom: Simple-PPA <admin@simple-ppa.debian.net>
Codenames = sid forky trixie trixie-backports bookworm bookworm-backports
AlsoAllow = forky: unstable
trixie: unstable
bookworm: unstable
[simple-ppa-dev]
Label = Simple-PPA's self-hosted development repository
# ckk's key
Uploaders = allow * by key E76004C5CEF0C94C+
[ckk]
Label = Christian Kastner at Simple-PPA
Uploaders = allow * by key E76004C5CEF0C94C+
The CORE section just sets some defaults and sensible rules. Two PPAs are
defined, simple-ppa-dev and ckk, which accept packages signed by the
key with the ID E76004C5CEF0C94C. These PPAs use the global defaults, but
individual PPAs can override Architectures, Suites, and Components,
and of course allow an arbitrary number of users.
Users upload to this archive using SFTP (e.g.: with dput-ng). Every 15 minutes, uploads get processed, with ACCEPTED or REJECTED mails sent to the Maintainer address. The APT archive of all PPAs is signed with a single global key.
I myself intend to use Debusine repositories soon, as the autobuilding and
the QA tasks Debusine offers are something I need. However, I do still see a
niche use case for Simple-PPA: when you need an APT archive, but don't want to
do a deep dive into reprepro (which is extremely powerful).
If you'd like to give Simple-PPA a try, head over to simple-ppa.debian.net and follow the instructions for users.
16 December, 2025 09:15PM by Christian Kastner
I wish more pages on the Internet were like Lichess. It's fast. It feels like it only does one thing (even though it's really more like seven or eight)—well, perhaps except for the weird blogs. It does not feel like it's trying to sell me anything; in fact, it feels like it hardly even wants my money. (I've bought two T-shirts from their Spreadshirt, to support them.) It's super-efficient; I've seen their (public) balance sheets, and it feels like it runs off of a shoestring budget. (Take note, Wikimedia Foundation!) And, perhaps most relieving in this day and age, it does not try to grift any AI.
Yes, I know, chess.com is the juggernaut, and has probably done more for chess' popularity than FIDE ever did. But I still go to Lichess every now and then and just click that 2+1 button. (Generally without even logging in, so that I don't feel angry about it when I lose.) Be more like Lichess.
We’re happy to announce that Debusine can now be used to maintain APT-compatible add-on package repositories for Debian. This facility is available in public beta to Debian developers and maintainers.
Debian developers typically put most of their effort towards maintaining the main Debian archive. However, it’s often useful to have other places to work, for various reasons.
The Ubuntu ecosystem has had PPAs for a long time to meet these sorts of needs, but people working directly on Debian have had to make do with putting things together themselves using something like reprepro or aptly. Discussions about this have been happening for long enough that people started referring to PPAs for Debian as “bikesheds”, and users often find themselves trying to use Ubuntu PPAs on Debian systems and hoping that dependencies will be compatible enough for things to more or less work. This clearly isn’t ideal, and solving it is one of Freexian’s objectives for Debusine.
Developers publishing packages to Debusine repositories can take advantage of all Debusine’s existing facilities, including a battery of QA tests and regression tracking (coming soon). Repositories are signed using per-repository keys held in Debusine’s signing service, and uploads to repositories are built against the current contents of that repository as well as the corresponding base Debian release. All repositories include automatic built-in snapshot capabilities.
We’ve set up debusine.debian.net to allow using repositories. All Debian Developers and Debian Maintainers can log in there and publish packages to it. The resulting repositories are public by default.
debusine.debian.net only allows packages with licences that allow distribution by Debian, and it is intended primarily for work that could reasonably end up in Debian; Freexian reserves the right to remove repositories from it.
If you are a Debian contributor, we’d be very excited to have you try this out, especially if you give us feedback. We have published instructions for developers on using this. Since this is a beta service, you can expect things to change, but we’ll maintain compatibility where we can.
If you’re interested in using this in a commercial setting, please contact Freexian to discuss what we can do for you.
16 December, 2025 12:00AM by Colin Watson
The Debian LTS Team, funded by [Freexian’s Debian LTS offering](https://www.freexian.com/lts/debian/), is pleased to report its activities for November.
During the month of November, 18 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).
The team released 33 DLAs fixing 219 CVEs.
The LTS Team kept going with the usual cadence of preparing security updates for Debian 11 “bullseye”, but also for Debian 12 “bookworm”, Debian 13 “trixie” and even Debian unstable. As in previous months, we are pleased to say that there have been multiple contributions of LTS uploads by Debian Fellows outside the regular LTS Team.
Notable security updates:
Contributions from fellows outside the LTS Team:
Other than the regular LTS updates for bullseye, the LTS Team has also contributed updates to the latest Debian releases:
Beyond security updates, there has been a significant effort in revamping our documentation, aiming to make the processes more clear and consistent for all the members of the team. This work was mainly carried out by Sylvain, Jochen and Roberto.
We would like to express our gratitude to the sponsors for making the Debian LTS project possible. Also, special thanks to the fellows outside the LTS team for their valuable help.
Sponsors that joined recently are in bold.
16 December, 2025 12:00AM by Santiago Ruano Rincón
This post is an unpublished review for Unique security and privacy threats of large language models — a comprehensive survey
Much has been written about large language models (LLMs) being a risk to user security and privacy, including the issue that, being trained with datasets whose provenance and licensing are not always clear, they can be tricked into producing bits of data that should not be divulged. I took on reading this article as a means to gain a better understanding of this area. The article completely fulfilled my expectations.
This is a review article, which is not a common format for me to follow: instead of digging deep into a given topic, including an experiment or some way of proving the authors’ claims, a review article contains a brief explanation and taxonomy of the issues at hand, and a large number of references covering the field. And, at 36 pages and 151 references, that’s exactly what we get.
The article is roughly split in two parts: the first three sections present the issue of security and privacy threats as seen by the authors, as well as the taxonomy within which the review will be performed, while sections 4 through 7 cover the different moments in the life cycle of an LLM (at pre-training, during fine-tuning, when deploying systems that will interact with end-users, and when deploying LLM-based agents), detailing the relevant publications for each. For each of said moments, the authors first explore the nature of the relevant risks, then present relevant attacks, and finally close by outlining countermeasures to said attacks.
The text is accompanied all throughout its development with tables, pipeline diagrams and attack examples that visually guide the reader. While the examples presented are sometimes a bit simplistic, they are a welcome guide and aid to follow the explanations; the explanations for each of the attack models are necessarily not very deep, and I was often left wondering whether I had correctly understood a given topic, or wanting to dig deeper – but this being a review article, that is absolutely understandable.
The authors’ prose is easy to read, and this article covers an important spot in understanding this large, important, and emerging area of LLM-related study.
Review: Brigands & Breadknives, by Travis Baldree
| Series: | Legends & Lattes #3 |
| Publisher: | Tor |
| Copyright: | 2025 |
| ISBN: | 1-250-33489-6 |
| Format: | Kindle |
| Pages: | 325 |
Brigands & Breadknives is a secondary-world sword-and-sorcery fantasy and a sequel to both Legends & Lattes and Bookshops & Bonedust. It takes place shortly after Legends & Lattes chronologically, but Fern, the protagonist, was introduced in the Bookshops & Bonedust prequel.
You may have noticed I didn't describe this as cozy fantasy. That is intentional.
When we left Fern at the end of Bookshops & Bonedust, the rattkin was running a bookshop in the town of Murk. As Brigands & Breadknives opens, Fern is moving, for complicated and hard-to-describe personal reasons, to Thune where Viv has her coffee shop. Her plan is to open a new bookstore next door to Legends and Lattes. This is exactly the sort of plot one might expect from this series, and the first few chapters feel like yet another version of the first two novels. Then Fern makes an impulsive and rather inexplicable (even to herself) decision and the plot goes delightfully sideways.
Brigands & Breadknives is not, as Baldree puts it in the afterword, a book about fantasy small-business ownership as the answer to all of life's woes. It is, instead, a sword and sorcery story about a possibly immortal elven bounty hunter, her utterly baffling goblin prisoner, and a rattkin bookseller who becomes their unexpected travel companion for reasons she can't explain. It's a story about a mid-life crisis in a world and with supporting characters that I can only describe as inspired by a T. Kingfisher novel.
Baldree is not Ursula Vernon, of course. This book does not contain paladins or a romance, possibly to the relief of some readers. It's slower, a bit more introspective, and doesn't have as sharp of edges or the casual eerie unsettlingness. But there is a religious order that worships a tentacled space horror for entirely unexpected reasons, pompous and oleaginous talking swords with verbose opinions about everything, a mischievously chaotic orange-haired goblin who quickly became one of my favorite fantasy characters and then kept getting better, and a whole lot of heart. You may see why Kingfisher was my first thought for a comparison point.
Unlike Baldree's previous novels, there is a lot of combat and injury. I think some people will still describe this book as cozy, and I'm not going to argue too strongly because the conflicts are a bit lighter than the sort of rape and murder one would see in a Mercedes Lackey novel. But to me this felt like sword and sorcery in a Dungeons and Dragons universe made more interesting by letting the world-building go feral and a little bit sarcastic. Most of the book is spent traveling, there are a lot of random encounters that build into a connected plot, and some scenes (particularly the defense of the forest village) felt like they could have sold to the Swords and Sorceress anthology series.
Also, this was really good! I liked both Legends & Lattes and Bookshops & Bonedust, maybe a bit more than the prevailing opinion among reviewers since the anachronisms never bothered me, but I wasn't sure whether to dive directly into this book because I was expecting more of the same. This is not more of the same. I think it's clearly better writing and world-building than either of the previous books. It helps that Fern is the protagonist; as much as I like Viv, I think Fern is a more interesting character, and I am glad she got a book of her own.
Baldree takes a big risk on the emotional arc of this book. Fern starts the story in a bad state and makes some decisions to kick off the plot that are difficult to defend. She beats herself up for those decisions for most of the book, deservedly, and parts of that emotional turmoil are difficult to read. Baldree resists the urge to smooth everything over and instead provides a rather raw sense of depression, avoidance, and social anxiety that some readers are going to have to brace themselves for.
I respect the decision to not write the easy series book people probably expected, but I'm not sure Fern's emotional arc quite worked. Baldree is hinting at something that's hard to describe logically, and I'm not sure he was able to draw a clear enough map of Fern's thought process for the reader to understand her catharsis. The "follow your passion" self-help mindset has formed a gravitational singularity in the vicinity of this book's theme, it takes some skillful piloting to avoid being sucked into its event horizon, and I don't think Baldree quite managed to escape it. He made a valiant attempt, though, and it created a far more interesting book than one about safer emotions.
I wanted more of an emotional payoff than I got, but the journey, even with the moments of guilt and anxiety, was so worth it. The world-building is funnier and more interesting than the previous books of the series, and the supporting cast is fantastic. If you bailed on the series but you like sword and sorcery and T. Kingfisher novels, consider returning. You do probably need to read Bookshops & Bonedust first, if you haven't already, since it helps to know the start of Fern's story.
Recommended, and shortcomings aside, much better than I had expected.
Content notes: Bloody sword fights, major injury, some very raw emotions about letting down friends and destroying friendships.
Rating: 8 out of 10
We recently bought some Govee Glide Hexa Light Panels, because they have a local LAN API that is well integrated into Home Assistant. Or so we thought.
Our network is not that complicated, but there is a dedicated VLAN for IOT devices.
Home Assistant runs in a container (with network=host) on a box in the basement, and that box has a NIC in the IOT VLAN so it can reach devices there easily.
So far, this has never been a problem.
Enter the Govee LAN API. Or maybe its Python implementation. Not exactly sure who's to blame here.
The API involves sending JSON over multicast, which the Govee device will answer to.
No devices found on the network
After turning logging for homeassistant.components.govee_light_local to 11, erm debug, we see:
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] Starting discovery with IP 192.168.42.2
DEBUG (MainThread) [homeassistant.components.govee_light_local.config_flow] No devices found with IP 192.168.42.2
That's not the IP address in the IOT VLAN!
Turns out the integration recently got support for multiple NICs, but Home Assistant doesn't just use all the interfaces it sees by default.
You need to go to Settings → Network → Network adapter and deselect "Autoconfigure", which will allow you to select individual interfaces.
Once you've done that, you'll see Starting discovery with IP messages for all selected interfaces, and adding Govee Lights Local will work.
14 December, 2025 03:48PM by evgeni

Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over one hundred individual libraries. The BH package provides a sizeable subset of header-only libraries for (easier, no linking required) use by R. It is fairly widely used: the (partial) CRAN mirror logs (aggregated from the cloud mirrors) show over 41.5 million package downloads.
Version 1.90.0 of Boost was released a few days ago following the regular Boost release schedule of April, August and December releases. As before, we packaged it almost immediately and started testing following our annual update cycle which strives to balance being close enough to upstream and not stressing CRAN and the user base too much. The reverse-depends check revealed only one really minor issue among the over three hundred direct reverse dependencies. And that issue was addressed yesterday within hours by a truly responsive maintainer (and it helped that a related issue had been addressed months earlier with version 1.89.0). So big thanks to Jean-Romain Roussel for the prompt fix, and to Andrew Johnson for the earlier test with 1.89.0.
As last year with 1.87.0, no new Boost libraries were added to BH so the (considerable) size is more or less unchanged. It led to CRAN doing a manual inspection, but as there were no other issues it sailed through and is now in the CRAN repository.
The short NEWS entry follows.
Changes in version 1.90.0-1 (2025-12-13)
- Upgrade to Boost 1.90.0, patched as usual to comment-out diagnostic suppression messages per the request of CRAN
- Minor upgrades to continuous integration
Via my CRANberries, there
is a diffstat report relative to the previous
release. Comments and suggestions about BH are welcome via the issue
tracker at the GitHub
repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
13 December, 2025 01:41AM by Junichi Uekawa
Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.
The DebConf Video Team records, streams, and publishes talks from DebConf and many miniDebConfs. A lot of the infrastructure development happens during setup for these events, but we also try to organize a sprint once a year to work on infrastructure, when there isn’t a DebConf about to happen. Stefano attended the sprint in Herefordshire this year and wrote up a report.
A number of jobs were stuck in architecture-specific failures. gcc-15 and
dpkg still occasionally disagree about whether PIE is enabled, and big-endian
mipsen needed fixes in systemd. Beyond this, regular uploads of libxml2 and
gcc-15 required fixes and rebasing of pending patches.
Earlier, Loongson used rebootstrap to create the initial package set for
loong64 and Miao Wang now submitted their changes. Therefore, there is now
initial support for suites other than unstable and use with derivatives.
Vendors of Debian-based products may (or should) be paying attention to the evolution of different jurisdictions (such as the CRA or updates to CISA’s Minimum Elements for a Software Bill of Materials) that require making available a Software Bill of Materials (SBOM) for their products. It is important, then, to have tools in Debian that make it easier to produce such SBOMs.
In this context, Santiago continued the work on packaging libraries related to SBOMs. This includes the packaging of the SPDX python library (python-spdx-tools), and its dependencies rdflib and mkdocs-include-markdown-plugin. System Package Data Exchange (SPDX), defined by ISO/IEC 5962:2021, is an open standard capable of representing systems with software components as SBOMs and other data and security references. SPDX and CycloneDX (whose python library python3-cyclonedx-lib was packaged by prior efforts this year), encompass the two main SBOM standards available today.
- python-debianbts: changed some command line options naming or output based on user feedback; finished refactoring user interaction to rich; codebase is now flake8-compliant; added type safety with mypy.
- po-debconf-manager: created 19 bug reports for translations where the merge requests were pending; reviewed and created merge requests for 4 packages.
- python-unidiff2 (adapted from the original pull request to python-unidiff). He also started preparing a qnetload update.
- mkdocs-macros-plugin, python-confuse, python-pip, python-mitogen.
- libcrypt-dev out of build-essential and bumped the remaining bugs to rc-severity in coordination with the release team.
- mmdebstrap deals with start-stop-daemon may result in broken output and sent a patch.
- armel being removed from “sid”, but not from “forky”, the multiarch hinter broke. Helmut fixed it.
- systemd.
- lprng, cpdb-backend-cups, cpdb-libs and ippsample to fix some RC bugs as well as other bugs that accumulated over time. He also uploaded cups-filters to all Debian releases to fix three CVEs.

12 December, 2025 12:00AM by Anupa Ann Joseph
Welcome to post 56 in the R4 series.
The recent post #54 reviewed a number of earlier posts on r-ci, our small (but very versatile) runner for continuous integration (CI) with R. The post also introduced the notion of using a container in the ‘matrix’ of jobs defined and running in parallel. The initial motivation was the (still ongoing, and still puzzling) variation in run-times of GitHub Actions. So when running CI and relying on r2u for the ‘fast, easy, reliable: pick all three!’ provision of CRAN packages as Ubuntu binaries, a small amount of time is spent prepping a basic Ubuntu instance with the necessary setup. This can be as fast as maybe 20 to 30 seconds, but it can also stretch to almost two minutes when GitHub is busier or out of sorts for other reasons. When the CI job itself is short, that is a nuisance. We presented relying on a pre-made r2u4ci container that adds just a few commands to the standard r2u container to be complete for CI. And with that setup CI runs tend to be reliably faster.
This situation is still evolving. I have not converted any of my
existing CI scripts (apart from a test instance or two), but I keep
monitoring the situation. However, this also offered another
perspective: why not rely on a different container for a
different CI aspect? When discussing the CI approach with Jeff the other day (and helping add CI to his
mmap repo), it occurred to me we could also use one of
the Rocker containers for
R-devel. A minimal change to the underlying run.sh script
later, this was accomplished. An example is provided as both a test and
an illustration in the repo for package
RcppInt64 in its script ci.yaml:
strategy:
  matrix:
    include:
      - { name: container, os: ubuntu-latest, container: rocker/r2u4ci }
      - { name: r-devel,   os: ubuntu-latest, container: rocker/drd }
      - { name: macos,     os: macos-latest }
      - { name: ubuntu,    os: ubuntu-latest }
runs-on: ${{ matrix.os }}
container: ${{ matrix.container }}

This runs both a standard Ubuntu setup (fourth entry) and the
alternate just described relying on the container (first entry) along
with the (usually commented-out) optional macOS setup (third entry). And
line two brings the drd container from Rocker. The CI runner script now
checks for a possible Rdevel binary as provided inside
drd (along with alias RD) and uses it when
present. And that is all that there is: no other change on the user
side; tests now run under R-devel. You can see some of the initial runs
at the rcppint64
repo actions log. Another example is now also at Jeff’s
mmap repo.
It should be noted that this relies on R-devel running packages made with R-release. Every few years this breaks when R needs to break its binary API. If and when that happens this option will be costlier as the R-devel instance will then have to (re-)install its R package dependencies. This can be accommodated easily as a step in the yaml file. And under ‘normal’ circumstances it is not needed.
Having easy access to recent builds of R-devel (the container refreshes weekly on a schedule) with the convenience of r2u gives another option for package testing. I may continue to test locally with R-devel as my primary option, and most likely keep my CI small and lean (usually just one R-release run on Ubuntu), but having another option at GitHub Actions is also a good thing.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
This was my hundred-thirty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian and my eighty-eighth ELTS month. As the LTS- and ELTS-teams have been merged now, there is only one paragraph left for both activities.
During my allocated time I uploaded or worked on:
I also attended the monthly LTS/ELTS meeting and did a week of LTS/ELTS frontdesk duties. I also stumbled upon a bug in python3-paramiko, where the parsing of include statements in the ssh_config does not work. Rather annoying but already fixed in the newest version, that only needs to find its way to my old VM.
This month I uploaded a new upstream version or a bugfix version of:
I also uploaded cups to Trixie, to fix bug #1109471 related to a configuration problem with the admin panel.
This work is generously funded by Freexian!
This month I uploaded a new upstream version or a bugfix version of:
In my fight against outdated RFPs, I closed 30 of them in November.
I started with about 3500 open RFP bugs, and after working six months on this project, I have closed 183 bugs. Of course new bugs appeared, so the overall number of bugs is only down to about 3360.
Though I view this as a successful project, I also have to admit that it is a bit boring to work on daily. Therefore I am closing this diary again and will add the closed RFP bugs to my bug logbook now. I will also try to close some of these bugs by actually uploading some software, probably one package per month.
This month I accepted 236 and rejected 16 packages. The overall number of packages that got accepted was 247.
08 December, 2025 03:20PM by alteholz
Hello there. I am a software developer and tester. Some of my interests include bash scripting, website full-stack development and open source software contribution.
Outreachy is dedicated to the open source community. The OpenQA community had exactly the project I wanted to contribute to. It was a really great match for my interests and skills. The project is Debian Images testing with OpenQA.
The contribution period was very intense. It was a learning phase at first. I made adjustments to my computer to ensure it would handle the tasks. I also had many trials, failures and problems to solve. There were a lot of questions asked. My mentors were really helpful.
What worked for me in the end was:
Every week is a learning phase for me. I keep encountering new issues; for example, my latest issue was connecting the virtual machine to a new Wi-Fi network. It took a whole day to get a solution, but I eventually solved it. I regularly share my issues and write down the solutions so that they will be helpful to anyone in the future.
By the end of the internship period, I hope to have contributed to the Debian OpenQA open source community by working on the tasks and working with the broader openSUSE community on any issues. I want to build a network with my mentors (Philip, Tassia, Roland and other mentors in the community) in order to create future opportunities for contributions, mentoring and just general communication.
08 December, 2025 07:27AM by hellen chemtai tay
I started learning Go this year. First, I picked a Perl project I wanted to rewrite, got a good book and ignored AI tools since I thought they would do nothing but interfere with learning. Eventually though, I decided to experiment a bit and ended up finding a few ways to use AI assistants effectively even when learning something new.
The first use case that worked for me was search. Instead of searching on a traditional search engine and then ending up on Stack Overflow, I could get the answer I was looking for directly in an AI side-window in my editor. Of course, that's bad news for Stack Overflow.
I was, however, skeptical from the beginning since LLMs make mistakes, sometimes making up function signatures or APIs that don't exist. Therefore I got into the habit of going to the official standard library documentation to double-check suggestions. For example, if the LLM suggests using strings.SplitN, I verify the function signature and behaviour carefully before using it. Basically, "don't trust and do verify."
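For example, checking a suggested standard-library function against the installed toolchain takes only a moment:

# Print the real signature and documentation straight from the Go toolchain.
go doc strings.SplitN
# func SplitN(s, sep string, n int) []string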
I stuck to the standard library in my project, but if an LLM recommends third-party dependencies for you, make sure they exist and that Socket doesn't flag them as malicious. Research has found that 5-20% of packages suggested by LLMs don't actually exist, making this a real attack vector (dubbed "slopsquatting").
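One quick way to sanity-check a suggested dependency before adding it is to ask the Go tooling or the public module proxy whether the module actually resolves; the module path below is only a placeholder.

# From inside a Go module: list known versions of a suggested dependency.
# A non-existent (possibly hallucinated) module path will simply fail here.
go list -m -versions github.com/example/suggested-pkg

# Or query the public module proxy directly (upper-case letters in module
# paths need escaping according to the proxy protocol).
curl -sf https://proxy.golang.org/github.com/example/suggested-pkg/@v/list || echo "module not found"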
A step I took early on was to disable AI autocomplete in my editor. When learning a new language, you need to develop muscle memory for the syntax. Also, Go is no Java. There's not that much boilerplate to write in general.
I found it quite distracting to see some almost correct code replace my thinking about the next step. I can see how one could go faster with these suggestions, but being a developer is not just about cranking out lines of code as fast as possible, it's also about constantly learning new things (and retaining them).
One of the most useful prompts when learning a new language is "Is this the most idiomatic way to do this in Go?". Large language models are good at recognizing patterns and can point out when you're writing code that works but doesn't follow the conventions of the language. This is especially valuable early on when you don't yet have a feel for what "good" code looks like in that language.
It's usually pretty easy (at least for an experienced developer) to tell when the LLM suggestion is actually counterproductive or wrong. If it increases complexity or is harder to read or decode, it's probably not a good idea to follow it.
One way a new dev gets better is through code review. If you have access to a friend who's an expert in the language you're learning, then you can definitely gain a lot by asking for feedback on your code.
If you don't have access to such a valuable resource, or as a first step before you consult your friend, I found that AI-assisted code reviews can be useful:
The value is in the other 50%: the suggestions that make you think about your code differently or catch genuine problems.
Similarly for security reviews:
But always keep in mind that AI chatbots are trained to be people-pleasers and often feel the need to suggest something when nothing was needed.
One side effect of using AI assistants was that having them write the scaffolding for unit tests motivated me to increase my code coverage. Trimming unnecessary test cases and adding missing ones is pretty quick when the grunt work is already done, and I ended up testing more of my code (being a personal project written in my own time) than I might have otherwise.
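Checking how much of the code the trimmed-down test suite actually covers is then a one-liner:

# Report per-package statement coverage for every package in the module.
go test -cover ./...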
In the end, I continue to believe in the value of learning from quality books (I find reading paper-based most effective). In addition, I like to create Anki questions for common mistakes or things I find I have to look up often. Remembering something will always be faster than asking an AI tool.
So my experience this year tells me that LLMs can supplement traditional, time-tested learning techniques, but I don't believe they make those techniques obsolete.
P.S. I experimented with getting an LLM to ghost-write this post for me from an outline (+ a detailed style guide) and I ended up having to rewrite at least 75% of it. It was largely a waste of time.
By now, the /usr-merge is an old transition.
Effectively, it turns top-level directories such as /bin into symbolic links pointing below /usr.
That way the entire operating system can be contained below the /usr hierarchy, enabling e.g. image-based update mechanisms.
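On a merged-/usr Debian system this is directly visible; the sketch below shows the kind of output to expect (owner, dates and link sizes elided).

# The aliased top-level directories point below /usr on a merged-/usr system.
ls -ld /bin /sbin /lib
# lrwxrwxrwx ... /bin -> usr/bin
# lrwxrwxrwx ... /sbin -> usr/sbin
# lrwxrwxrwx ... /lib -> usr/lib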
It was first supported in Debian 9, which is no longer in active use at this point (except for users of Freexian’s ELTS offer).
When it became mandatory in Debian 12, it wasn’t really done though, because Debian’s package manager was not prepared to handle file system objects being referred to via two different paths.
With nobody interested in handling the resulting issues, Freexian stepped in and funded a project led by Helmut Grohne to resolve the remaining issues.
While the initial idea was to enhance the package manager, Debian’s members disagreed. They preferred an approach where files were simply tracked with their physical location while handling the resulting misbehavior of the package manager using package-specific workarounds. This has been recorded in the DEP17 document. During the Debian 13 release cycle, the plan has been implemented. A tool for detecting possible problems was developed specifically for this transition. Since all files are now tracked with their physical location and necessary workarounds have been added, problematic behavior is no longer triggered. An upgrade from Debian 12 to Debian 13 is unlikely to run into aliasing problems as a result.
This whole project probably consumed more than 1500 hours of work from Debian contributors, of which 700 were sponsored by Freexian through the work of Helmut Grohne. What remains is eventually removing the workarounds.
08 December, 2025 12:00AM by Helmut Grohne
Yeah, again three months have passed since my last (trivial) post, and I really don’t know where the time has flown.
I suppose the biggest problem was the long summer vacation, which threw me off-track, and then craziness started. Work work work, no time for anything, which kept me fully busy in August, and then “you should travel”.
So mid-September I went on my first business trip since Covid, again to Kirkland, which in itself was awesome. Flew out Sunday, and as I was concerned I was going to lose too much fitness—had a half-marathon planned on the weekend after the return—I ran every morning of the four days I was there. And of course, on the last day, I woke up even earlier (05:30 AM), went out to run before sunrise, intending to do a very simple “run along the road that borders the lake for 2.5K, then back”. And right at the farthest point, a hundred metres before my goal of turning around, I tripped, started falling, and as I was falling, I hit—sideways—a metal pole. I was in a bus station, it was the pole that has the schedule at the top, and I hit it at relatively full speed, right across my left-side ribs. The crash took the entire air out of my lungs, and I don’t remember if I ever felt pain/sensation like that—I was seriously not able to breathe for 20 seconds or so, and I was wondering if I’m going to pass out at this rate.
Only 20 seconds, because my Garmin started howling like a police siren, and the screen was saying something along the lines of: “Incident detected; contacting emergency services in 40…35…” and I was fumbling to cancel that, since a) I wasn’t that bad, b) notifying my wife that I had a crash would have not been a smart idea.
My left leg was scraped in a few places, my left hand pretty badly, or more than just scraped, so my focus was on limping back, and finding a fountain to wash my injuries, which I did, so I kept running with blood dripping down my hand. Fun fun, everything was hurting, I took an Uber for the ~1Km to the office, had many meetings, took another Uber and flew back to Zurich. Seattle → San Francisco → Zürich, I think 14 hours, with my ribs hurting pretty badly. But I got home (Friday afternoon), and was wondering if I can run or not on Saturday.
Saturday comes, I feel pretty OK, so I said let's try, and I'll stop if the pain is too great. I pick up my number, I go to the start, of course in the last block and not my normal block, and I start running. After 50 metres, I knew this wouldn't be good enough, but I said, let's make it to the first kilometre. Then to the first fuelling point, then to the first aid point, at which moment I felt good enough to go to the second one.
Long story short, I ran the whole half marathon, with pain. Every stop for fuelling was mentally hard, as the pain stopped, and I knew I had to start running again, and the pain would resume. In the end, managed to finish: two and a half hours, instead of just two hours, but alive and very happy. Of course, I didn’t know what was waiting for me… Sunday I wake up in heavy pain, and despite painkillers, I was not feeling much better. The following night was terrible, Monday morning I went to the doctor, had X-rays, discussion with a radiologist. “Not really broken, but more than just bruised. See this angle here? Bones don’t have angles normally”. Painkillers, chest/abdomen wrapping, no running! So my attempts to “not lose fitness” put me off running for a couple of weeks.
Then October came, and I was getting better, but work was getting even more crazy. I don’t know where November passed, honestly, and now we’re already in December. I did manage to run, quite well, managed to bike a tiny bit and swim a little, but I’m not in a place where I can keep a regular and consistent schedule.
On the good side, I managed this year, for the first time since Covid, to not get sick. Hey, a sport injury is 100× better than a sickness, like I had in previous years, taking me out for two weeks. But life was crazy enough that I didn’t read some of my email accounts for months, and I’m just now starting to catch up to, well, baseline.
Of course, “the” rib—the lowest one on the left side—is long-healed, or so I thought. After some strength training early this week, I was very sore the next day, and I wanted to test whether my rib was still sore. I touched it at “the point”, and it hurt so badly I couldn't believe it. Two and a half months, and it's not done-done.
And now it’s just two weeks before Christmas and New Year’s, and that time off will ruin my rhythm again. At least ski vacation is booked, ski service is done, and slowly, work is getting in good enough shape to actually enjoy thinking about vacation.
So, in the end, a very adventurous last third of the year, and that wasn’t even all. As I’m writing this, my right wrist is bandaged and for the past 24 hours it hasn’t hurt too much, but that’s another, and not so interesting, story.
I’ll close with a yay for always being behind/backlogged, but alive and relatively well. My sport injuries are “elective injuries” so to speak, and I’m very thankful for that. See you in the next post!
Around a year ago I wrote about Guix Container Images for GitLab CI/CD and these images have served the community well. Besides continuous use in CI/CD, these Guix container images are used to confirm reproducibility of the source tarball artifacts in the releases of Libtasn1 v4.20, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, SASL v2.2.2, Guile-GnuTLS v5.0.1, and OATH Toolkit v2.6.13. See how all those release announcements mention a Guix commit? That’s the essential supply-chain information about the Guix build environment that allows the artifacts to be re-created. To make sure this is repeatable, the release tarball artifacts are re-created from source code every week in the verify-reproducible-artifacts project, which I wrote about earlier. Guix’s time-travelling feature makes this sustainable to maintain, and hopefully it will continue to be possible to reproduce the exact same tarball artifacts for years to come.
During the last year, Guix was unfortunately removed from Debian stable. My Guix container images were created from Debian with that Guix package. My setup continued to work since the old stage0 Debian+Guix containers were still available. Such a setup is not sustainable, as there will be bit-rot, and we don’t want to rely forever on old containers which (after the removal of Guix from Debian) could not be reproduced any more. Let this be a reminder of how user-empowering a feature like Guix time-travel is! I have reworked my Guix container image setup, and this post is an update on the current status of this effort.
The first step was to re-engineer Debian container images with Guix, and I realized these were useful on their own and warrant a separate project. A more narrowly scoped project will hopefully make it easier to keep them working. Now, instead of apt-get install guix, they use the official Guix guix-install.sh approach. Read more about that effort in the announcement of Debian with Guix.
The second step was to reconsider my approach to generating the Guix images. The earlier design had several stages. First, Debian+Guix containers were created. Then from those containers, a pure Guix container was created. Finally, using the pure Guix container another pure Guix container was created. The idea behind that GCC-like approach was to get to reproducible images that were created from an image that had no Debian left on it. However, I never managed to finish this, partially because I hadn’t realized that every time you build a Guix container image from Guix, you effectively go back in time. When using Guix version X to build a container with Guix on it, it will not put Guix version X into the container but whatever version of Guix is available in its package archive, which will be an earlier version, such as X-N. I had hoped to overcome this somehow (running a guix pull in newly generated images may work), but never finished this before Guix was removed from Debian.
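The “guix pull in the new image” idea could look roughly like the sketch below, run inside the freshly packed container; whether this fully closes the version gap is exactly the part that was never finished, and the commit hash is only an example.

# Hypothetical sketch: inside the newly generated container, advance the
# bundled Guix to a specific commit (e.g. the commit of the Guix that built
# the image), then confirm with guix describe.
guix pull --commit=21ce6b392ace4c4d22543abc41bd7c22596cd6d2
guix describe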
So what could a better design look like?
For efficiency, I had already started experimenting with generating the final images directly from the Debian+Guix images, and after reproducibility bugs were fixed I was able to get to reproducible images. However, I was still concerned that the Debian container could taint the process somehow, and was also concerned about the implied dependency on non-free software in Debian.
I’ve been using comparative rebuilds on “similar” distributions to confirm artifact reproducibility for my software projects, comparing builds on Trisquel 11 with Ubuntu 22.04, and AlmaLinux 9 with RockyLinux 9, for example. This works surprisingly well. Including one freedom-respecting distribution like Trisquel will detect if any non-free software has a bearing on the artifacts. Using different architectures, such as amd64 vs arm64, also helps with deeper supply-chain concerns.
My conclusion was that I wanted containers with the same Guix commit for both Trisquel and Ubuntu. Given the similarity with Debian, adapting and launching the Guix on Trisquel/Debian project was straightforward. So we now have Trisquel 11/12 and Ubuntu 22.04/24.04 images with the same Guix on them.
Do you see where the debian-with-guix and guix-on-dpkg projects are leading to?
We are now ready to look at the modernized Guix Container Images project. The tags are the same as before:
registry.gitlab.com/debdistutils/guix/container:latest
registry.gitlab.com/debdistutils/guix/container:slim
registry.gitlab.com/debdistutils/guix/container:extra
registry.gitlab.com/debdistutils/guix/container:gash
The method to create them is different. Now there is a “build” job that uses the earlier Guix+Trisquel container (for amd64) or Guix+Debian (for arm64, pending Trisquel arm64 containers). The build job creates the final containers directly. Next, an Ubuntu “reproduce” job is launched that runs the same commands, failing if it cannot generate a bit-by-bit identical container. Then the single-arch images are tested (installing/building GNU hello and building libksba) and pushed to the GitLab registry, adding multi-arch images in the process. Then the final multi-arch containers are tested by building Guile-GnuTLS and, on success, uploaded to the Docker Hub.
How would you use them? A small way to start the container is like this:
jas@kaka:~$ podman run -it --privileged --entrypoint=/bin/sh registry.gitlab.com/debdistutils/guix/container:latest
sh-5.2# env HOME=/ guix describe # https://issues.guix.gnu.org/74949
guix 21ce6b3
repository URL: https://git.guix.gnu.org/guix.git
branch: master
commit: 21ce6b392ace4c4d22543abc41bd7c22596cd6d2
sh-5.2#
The need for --entrypoint=/bin/sh is because Guix’s pack command sets up the entry point differently than most other containers. This could probably be fixed if people want that, and there may be open bug reports about this.
The need for --privileged is more problematic, but is discussed upstream. The above example works fine without it, but running anything more elaborate with guix-daemon installing packages will trigger a fatal error. Speaking of that, here is a snippet of commands that allow you to install Guix packages in the container.
cp -rL /gnu/store/*profile/etc/* /etc/
echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
echo 'root:x:0:' > /etc/group
groupadd --system guixbuild
for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
guix install hello
GUIX_PROFILE="/var/guix/profiles/per-user/root/guix-profile"
. "$GUIX_PROFILE/etc/profile"
hello
This could be simplified, but we chose not to hard-code these steps into our containers because some of them probably shouldn’t be papered over but fixed properly somehow. In some execution environments, you may need to pass --disable-chroot to guix-daemon.
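For example, in such an environment the daemon line from the snippet above becomes:

# Same guix-daemon invocation as above, with chroot builds disabled for
# restricted execution environments.
env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild --disable-chroot &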
To use the containers to build something in a GitLab pipeline, here is an example snippet:
test-amd64-latest-wget-configure-make-libksba:
image: registry.gitlab.com/debdistutils/guix/container:latest
before_script:
- cp -rL /gnu/store/*profile/etc/* /etc/
- echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
- echo 'root:x:0:' > /etc/group
- groupadd --system guixbuild
- for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
- export HOME=/
- env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
- guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
- guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
- guix describe
- guix install libgpg-error
- GUIX_PROFILE="//.guix-profile"
- . "$GUIX_PROFILE/etc/profile"
script:
- wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
- tar xfa libksba-1.6.7.tar.bz2
- cd libksba-1.6.7
- ./configure
- make V=1
- make check VERBOSE=t V=1
More help is available on the project page for the Guix Container Images.
That’s it for tonight folks, and remember, Happy Hacking!
06 December, 2025 10:22PM by simon
It's done! It's over! I've graduated, I have the scroll, I'm staring at the eye-watering prices for the official photographer snap, I'm adjusting to post-thesis life.
My PhD thesis revisions have been accepted and my thesis is now available from Newcastle University Library's eThesis repository.
As part of submitting my corrections, I wrote a brief report detailing the changes I made from my thesis at the time of the viva. I also produced a latexdiff marked-up copy of the thesis to visualise the exact changes. In order to shed some light on the post-viva corrections process, at least at my institution, and in the hope that they are some use to someone, I'm sharing those documents:
I created the latest Wikipedia language edition, the Toki Pona Wikipedia, last month. Unlike most other wikis which start their lives in the Wikimedia Incubator before the full wiki is created, in this case the community had been using a completely external MediaWiki site to build the wiki before it was approved as a "proper" Wikipedia wiki,1 and now that external wiki needed to be imported to the newly created Wikimedia-hosted wiki. (As far as I'm aware, the last and previously only time an external wiki has been imported to a Wikimedia project was in 2013 when Wikitravel was forked as Wikivoyage.)
Creating a Wikimedia wiki these days is actually pretty straightforward, at least when compared to what it was like a couple of years ago. Today the process mostly involves using a script to generate two configuration changes, one to add the basic configuration for a wiki to operate and another to add the wiki to the list of all wikis that exist, and then running a script to create the wiki database in between deploying those two configuration changes. And then you wait half an hour while the script that tells all Wikidata client wikis about the new wiki runs, one wiki at a time.
The primary technical challenge in importing a third-party wiki is that there's no SUL making sure that a single username maps to the same account on both wikis. This means that the usual strategy of using the functionality I wrote in CentralAuth to manually create local accounts can't be used as is, and so we needed to come up with a new way of matching everyone's contributions to their existing Wikimedia accounts.
(Side note: While the user-facing interface tries to present a single "global" user account that can be used on all public Wikimedia wikis, in reality the account management layer in CentralAuth is mostly just a glue layer to link together individual "local" accounts on each wiki that the user has ever visited. These local accounts have independent user ID numbers — for example I am user #35938993 on the English Wikipedia but #4 on the new Toki Pona Wikipedia — and are what most of MediaWiki code interacts with except for a few features specifically designed with cross-wiki usage in mind. This distinction is also still very much present and visible in the various administrative and anti-abuse workflows.)
The approach we ended up choosing was to re-write the dump file before importing, so that a hypothetical account called $Name would be turned into $Name~wikipesija.org after the import.2 We also created empty user accounts to take ownership of the edits to be imported, so that we could use the standard account management tools on them later on. MediaWiki supports importing contributions without a local account to attribute them to, but it doesn't seem to be possible to convert an imported actor3 to a regular user later on, which we wanted to keep as a possibility, even with the minor downside of creating a few hundred users that'll likely never get touched again.
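Purely as an illustration of the kind of rewrite (this is not the actual tooling used for the import, and the file names are hypothetical), suffixing every contributor name in a MediaWiki XML export could be sketched as:

# Hypothetical sketch: append the ~wikipesija.org suffix to every
# <username> element in a MediaWiki XML export before importing it.
sed -E 's|<username>([^<]+)</username>|<username>\1~wikipesija.org</username>|g' \
    dump-original.xml > dump-rewritten.xml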
We also made specific decisions to add the username suffix to everyone, not just to those names that conflicted with existing SUL accounts, and to deal with renaming users who wanted their contributions linked to an existing SUL account only after the import. This both reduced the complexity, and thus the risk, of the import phase, which already had many more unknowns than the rest of the process, and was also the better option ethically: suffixing all names meant we would not imply that those people chose to be Wikimedians with those specific usernames (when in reality it was us choosing to import those edits into the Wikimedia universe), and doing renames using the standard MediaWiki account management tooling meant that it produced the normal public log entries that all other MediaWiki administrative actions create.
With all of the edits imported, the only major thing remaining was doing the merges I mentioned earlier to attribute imported edits to people's existing SUL accounts. Thankfully, the local-account-based system makes this actually pretty simple. Usually CentralAuth prevents renaming individual local accounts that are attached to a global account, but that check can be bypassed with a maintenance script or a privileged enough account. Renaming the user automatically detached it from the previous global account, after which another maintenance script could be used to attach the user to the correct global account.
That external site was a fork of a fork of the original Toki Pona Wikipedia that was closed in 2005. And because cool URIs don't change, we made the URLs that the old Wikipedia was using work again. Try it: https://art-tokipona.wikipedia.org. ↩︎
wikipesija.org was the domain where the old third-party wiki was hosted, and ~ was used as a separator character in usernames during the SUL finalization in the early 2010s, so using it here felt appropriate as well. ↩︎
An actor is a MediaWiki term and a database table referring to anything that can do edits or logged actions. Usually an actor is a user account or an IP address, but an imported user name in a specific format can also be represented as an actor. ↩︎
06 December, 2025 12:00AM by Taavi Väänänen (taavi@majava.org)
My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year.
You can also support my work directly via Liberapay or GitHub Sponsors.
I began preparing for the second stage of the GSS-API key exchange package split (some details have changed since that message). It seems that we’ll need to wait until Ubuntu 26.04 LTS has been released, but that’s close enough that it’s worth making sure we’re ready. This month I just did some packaging cleanups that would otherwise have been annoying to copy, such as removing support for direct upgrades from pre-bookworm. I’m considering some other package rearrangements to make the split easier to manage, but haven’t made any decisions here yet.
This also led me to start on a long-overdue bug triage pass, mainly consisting of applying usertags to lots of our open bugs to sort them by which program they apply to, and also closing a few that have been fixed, since some bugs will eventually need to be reassigned to GSS-API packages and it would be helpful to make them easier to find. At the time of writing, about 30% of the bug list remains to be categorized this way.
I upgraded these packages to new upstream versions:
I packaged django-pgtransaction and backported it to trixie, since we plan to use it in Debusine; and I adopted python-certifi for the Python team.
I fixed or helped to fix several other build/test failures:
I fixed a couple of other bugs:
04 December, 2025 05:55PM by Colin Watson
04 December, 2025 02:59PM by Ben Hutchings
Welcome to the report for November 2025 from the Reproducible Builds project!
These monthly reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As always, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
In this report:
On Friday 8th November, Chris Lamb gave a talk called 10 years of Reproducible Builds at SeaGL in Seattle, WA.
Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free/libre/open source software, hardware and culture. Chris’ talk:
[…] introduces the concept of reproducible builds, its technical underpinnings and its potentially transformative impact on software security and transparency. It is aimed at developers, security professionals and policy-makers who are concerned with enhancing trust and accountability in our software. It also provides a history of the Reproducible Builds project, which is approximately ten years old. How are we getting on? What have we got left to do? Aren’t all the builds reproducible now?
In Debian this month, Jochen Sprickerhof created a merge request to replace the use of reprotest in Debian’s Salsa Continuous Integration (CI) pipeline with debrebuild. Jochen cites the advantages as being threefold: firstly, that “only one extra build needed”; it “uses the same sbuild and ccache tooling as the normal build”; and it “works for any Debian release”. The merge request was merged by Emmanuel Arias and is now active.
kpcyrd posted to our mailing list announcing the initial release of repro-threshold, which implements an APT transport that “defines a threshold of at least X of my N trusted rebuilders need to confirm they reproduced the binary” before installing Debian packages. “Configuration can be done through a config file, or through a curses-like user interface.”
Holger then merged two commits by Jochen Sprickerhof in order to address a fakeroot-related reproducibility issue in the debian-installer, and Jörg Jaspert deployed a patch by Ivo De Decker for a bug originally filed by Holger in February 2025 related to some Debian packages not being archived on snapshot.debian.org.
Elsewhere, Roland Clobus performed some analysis on the “live” Debian trixie images, which he determined were not reproducible. However, in a follow-up post, Roland happily reports that the issues have been handled. In addition, 145 reviews of Debian packages were added, 12 were updated and 15 were removed this month adding to our knowledge about identified issues.
Lastly, Jochen Sprickerhof filed a bug announcing their intention to “binary NMU” a very large number of packages of the R programming language after a reproducibility-related toolchain bug was fixed.
Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
Julien Malka and Arnout Engelen launched the new hash collection server for NixOS. Aside from improved reporting to help focus reproducible builds efforts within NixOS, it collects build hashes as individually-signed attestations from independent builders, laying the groundwork for further tooling.
diffoscope version 307 was uploaded to Debian unstable (as well as version 309). These changes included further attempts to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (with this experimental feature). […][…][…]
In addition, reprotest versions 0.7.31 and 0.7.32 were uploaded to Debian unstable by Holger Levsen, who also made the following changes:
* Dropped the debian/watch file, as Lintian now flags this as an error for ‘native’ Debian packages. […][…]
* Bumped Standards-Version to 4.7.2, with no changes needed. […]
* Dropped the Rules-Requires-Root header as it is no longer required. […]

In addition, Vagrant Cascadian fixed a build failure by removing some extra whitespace from an older changelog entry. […]
Once again, there were a number of improvements made to our website this month including:
Bernhard M. Wiedemann updated the SOURCE_DATE_EPOCH page to fix the Lisp example syntax. […]
Holger Levsen updated a number of pages on our website related to our recent summit in Vienna […][…][…][…][…], and added a link to the YouTube video of his recent talk at Transparency.dev in Gothenburg, Sweden […].
James Addison replaced a broken link on the Reproducibility Troubleshooting page with one using Archive.org. […]
kpcyrd also updated the Vienna summit page in order to update group picture […] as well as to expand the project list […].
Robert Stupp added a new Helm page […][…], and fleshed out some Gradle specifics, etc. on the JVM page […].
It was noticed that the Comparison of Linux distributions Wikipedia page now has a “Reproducible builds” column.
The popular Ruby on Rails web development framework had a reproducibility-related test failure due to daylight savings time changes.
Debian Developer Otto Kekäläinen appeared on the Open Source Security podcast, relating to their blog post about the XZ backdoor. The video, audio, as well as a full transcript of the show are available on the Open Source Security podcast page for this episode.
Thomas Weißschuh posted to our mailing list in order to look for feedback on their CONFIG_MODULE_HASHES patchset for the Linux kernel, “which aims to enable reproducible kernel packages for Linux distributions”.
kpcyrd also posted our list with a post entitled “Github Actions and the hashFiles incident”.
Simon Mudd posted to the list as well “looking for reproducible RPM building / rebuilding tooling”. Simon had watched a recent talk by Holger Levsen and was trying to ensure that he could rebuild various MySQL .rpms.
Lastly, there was a thread related to the hosting of the website powering this very report.
Via our mailing list, Martin Monperrus let us know about their recently-published page on the Software Supply Chain Security of Web3. The abstract of their paper is as follows:
Web3 applications, built on blockchain technology, manage billions of dollars in digital assets through decentralized applications (dApps) and smart contracts. These systems rely on complex software supply chains that introduce significant security vulnerabilities. This paper examines the software supply chain security challenges unique to the Web3 ecosystem, where traditional Web2 software supply chain problems intersect with the immutable and high-stakes nature of blockchain technology. We analyze the threat landscape and propose mitigation strategies to strengthen the security posture of Web3 systems.
Their paper lists reproducible builds as one of the mitigating strategies. A PDF of the full text is available to download.
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Bernhard M. Wiedemann:
* SARndbox (race)
* clamav (rust toolchain)
* contrast/identity/loupe/mousai (need glib-macros update)
* cosmic (cosmic* HashMap)
* dealers-choice (nocheck)
* falcon (python-falcon date)
* FreeDoko (date)
* gnutls (FTBFS-CPU)
* gods-deluxe (jar mtimes)
* Kinect (date)
* libplasma6 (qmlcachegen race)
* llvm (rocm-omp date)
* rnp (FTBFS-2041)
* rocsolver (FTBFS-j1)
* switcheroo (FTBFS-j1)
* vdrift (date)

Arnout Engelen:
* ibus (parallelism)
* qmlcachegen (with Ulf Hermann)

Chris Lamb:
python-gffutils, python-biom-format, python-requests-cache, python-tld, smart-open, vanguards, pycifrw, golang-github-apptainer-container-library-client, python-ofxhome, python-lupa, mu-editor, python-spdx-tools, python-django-waffle, biosquid, dateparser, parsinsert, rdf2rml, python-et-xmlfile, deblur, ytcc, pgpainless, trillian, pywavelets, jsonpath-ng, presto, python-pyutil, python-os-apply-config, pydata-sphinx-theme, python-ciso8601, python-pymummer, qcat, tkgate, tkgate, ruby-gnuplot, python-nixio, python-altair, python-graphene, python-phabricator, python-slimmer, python-kafka, python-sshsig, python-babelgladeextractor, python-genson, flawfinder, crasm, insilicoseq, pychopper, pycparser, whipper, vt, pyxnat, golang-github-kshedden-statmodel, nim-hts, golang-github-emicklei-dot, golang-gonum-v1-plot, beangulp, virulencefinder, ansible-lint, entropybroker, namecheap, spopt, pyasn, python-pyvcf and python-pysaml2.

Jochen Sprickerhof:
Vagrant Cascadian:
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
IRC: #reproducible-builds on irc.oftc.net.
Mastodon: @reproducible_builds@fosstodon.org
Mailing list: rb-general@lists.reproducible-builds.org
With libvirt 11.10, a new flag for the backup operation has been introduced: VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN.
According to the documentation “It instructs libvirt to avoid termination of the VM if the guest OS shuts down while the backup is still running. The VM is in that scenario reset and paused instead of terminated allowing the backup to finish. Once the backup finishes the VM process is terminated.”
Added support for this in virtnbdbackup 2.40.
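For context, a plain full backup with virtnbdbackup looks roughly like the sketch below; the domain name and output path are examples, and the exact option (if any) that exposes the new flag in 2.40 is not shown here.

# Example invocation: full backup of domain 'vm1' into a backup set directory.
virtnbdbackup -d vm1 -l full -o /backup/vm1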
Last week I published Guix on Debian container images that prepared for today’s announcement of Guix on Trisquel/Ubuntu container images.
I have published images with reasonably modern Guix for Trisquel 11 aramo, Trisquel 12 ecne, Ubuntu 22.04 and Ubuntu 24.04. The Ubuntu images are available for both amd64 and arm64, but unfortunately Trisquel arm64 containers aren’t available yet so they are only for amd64. Images for ppc64el and riscv64 are work in progress. The currently supported container names:
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
Or, if you prefer, guix-on-dpkg on Docker Hub:
docker.io/jas4711/guix-on-dpkg:trisquel11-guix
docker.io/jas4711/guix-on-dpkg:trisquel12-guix
docker.io/jas4711/guix-on-dpkg:ubuntu22.04-guix
docker.io/jas4711/guix-on-dpkg:ubuntu24.04-guix
You may use them as follows. See the guix-on-dpkg README for how to start guix-daemon and installing packages.
jas@kaka:~$ podman run -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
root@guix:/# head -1 /etc/os-release
NAME="Trisquel GNU/Linux"
root@guix:/# guix describe
guix 136fc8b
repository URL: https://gitlab.com/debdistutils/guix/mirror.git
branch: master
commit: 136fc8bfe91a64d28b6c54cf8f5930ffe787c16e
root@guix:/#
You may now be asking yourself: why? Fear not, gentle reader, because having two container images with roughly similar software is a great tool for attempting to build software artifacts reproducibly and comparing the results to spot differences. Obviously.
I have been using this pattern to get reproducible tarball artifacts for several software releases for around a year and a half, since libntlm 1.8.
Let’s walk through how to setup a CI/CD pipeline that will build a piece of software, in four different jobs for Trisquel 11/12 and Ubuntu 22.04/24.04. I am in the process of learning Codeberg/Forgejo CI/CD, so I am still using GitLab CI/CD here, but the concepts should be the same regardless of platform. Let’s start by defining a job skeleton:
.guile-gnutls: &guile-gnutls
before_script:
- /root/.config/guix/current/bin/guix-daemon --version
- env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARGS &
- GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
- type guix
- guix --version
- guix describe
- time guix install --verbosity=0 wget gcc-toolchain autoconf automake libtool gnutls guile pkg-config
- time apt-get update
- time apt-get install -y make git texinfo
- GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
script:
- git clone https://codeberg.org/guile-gnutls/guile-gnutls.git
- cd guile-gnutls
- git checkout v5.0.1
- ./bootstrap
- ./configure
- make V=1
- make V=1 check VERBOSE=t
- make V=1 dist
after_script:
- mkdir -pv out/$CI_JOB_NAME_SLUG/src
- mv -v guile-gnutls/*-src.tar.* out/$CI_JOB_NAME_SLUG/src/
- mv -v guile-gnutls/*.tar.* out/$CI_JOB_NAME_SLUG/
artifacts:
paths:
- out/**
This installs some packages, clones guile-gnutls (it could be any project, this is just an example), builds it and returns tarball artifacts. The artifacts are the git-archive and make dist tarballs.
Let’s instantiate the skeleton into four jobs, running the Trisquel 11/12 jobs on amd64 and the Ubuntu 22.04/24.04 jobs on arm64 for fun.
guile-gnutls-trisquel11-amd64:
tags: [ saas-linux-medium-amd64 ]
image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
extends: .guile-gnutls
guile-gnutls-ubuntu22.04-arm64:
tags: [ saas-linux-medium-arm64 ]
image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
extends: .guile-gnutls
guile-gnutls-trisquel12-amd64:
tags: [ saas-linux-medium-amd64 ]
image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
extends: .guile-gnutls
guile-gnutls-ubuntu24.04-arm64:
tags: [ saas-linux-medium-arm64 ]
image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
extends: .guile-gnutls
Running this pipeline will result in artifacts that you want to confirm for reproducibility. Let’s add a pipeline job to do the comparison:
guile-gnutls-compare:
image: alpine:latest
needs: [ guile-gnutls-trisquel11-amd64,
guile-gnutls-trisquel12-amd64,
guile-gnutls-ubuntu22.04-arm64,
guile-gnutls-ubuntu24.04-arm64 ]
script:
- cd out
- sha256sum */*.tar.* */*/*.tar.* | sort | grep -- -src.tar.
- sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
- sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
- sha256sum */*.tar.* */*/*.tar.* | grep -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
- sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
# Confirm modern git-archive tarball reproducibility
- cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
- cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
- cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
- cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
artifacts:
when: always
paths:
- ./out/**
Look how beautiful, almost like ASCII art! The commands print SHA256 checksums of the artifacts, sorted in a couple of ways, and then proceeds to compare relevant artifacts. What would the output of such a run be, you may wonder? You can look for yourself in the guix-on-dpkg pipeline but here is the gist of it:
$ cd out
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -- -src.tar.
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca guile-gnutls-ubuntu22-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f guile-gnutls-ubuntu24-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a guile-gnutls-ubuntu22-04-arm64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84 guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84 guile-gnutls-ubuntu24-04-arm64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84 guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^ 1 '
2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84 guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
That’s it for today, but stay tuned for more updates on using Guix in containers, and remember; Happy Hacking!
02 December, 2025 10:01PM by simon
A new release of the still-recent duckdb extension for mlpack, the C++ header-only library for machine learning, was merged into the duckdb community extensions repo today, and has been updated at its duckdb ‘mlpack’ extension page.
This release 0.0.5 adds one new method: kmeans clustering. We also added two version accessors for both mlpack and armadillo. We found during the work on random forests (added in 0.0.4) that the multithreaded random number generation was not quite right in the respective upstream codes. This has by now been corrected in armadillo 15.2.2 as well as the trunk version of mlpack, so if you build with those and set a seed, then your forests and classification will be stable across reruns. We added a second state variable mlpack_silent that can be used to suppress even the minimal prediction quality summary some methods show, and expanded the documentation.
For more details, see the repo for code, issues and more, and the extension page for more about this duckdb community extension.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
I started this month with a week of vacation which was followed by a small planned surgery and two weeks of sick leave. Nonetheless, I packaged and uploaded new releases of a couple of packages:
Besides that, I reactivated a project I started in summer 2024: debiverse.org. The idea was to have interfaces to Debian bugs and packages that are usable on mobile devices (I know, ludicrous!). Back then I started with Flask and SQLAlchemy, but that soon got out of hand. I have now switched the whole stack to FastAPI and SQLModel, which makes it a lot easier to manage. The upside is that it comes with an API and OpenAPI docs. For the rendered HTML pages I use Jinja2 with Tailwind as the CSS framework. I am currently using udd-mirror as the database backend, which works pretty well (for this single-user project). It would be nice to have some of the data in a faster index, like Typesense or Meilisearch. This way it would be possible to have faceted search or more performant full-text search. But I haven’t found any software packaged in Debian that could provide this.


The recent Turris OS update from 7.2.3 to 9.0.0 took down my WiFi entirely. The wired network still works fine, but wireless is completely broken.
It turns out the Omnia has an extensive (and fast) factory reset / recovery mode via the hardware reset button.
Unfortunately, the factory image didn't work for me, possibly because I don't use the stock WiFi radios anymore.
Thanks to the fact that the Omnia uses a btrfs root filesystem, and the liberal use of snapshots around updates, I was able to roll back to the pre-9.0.0 state.
First, I connected to the router using ssh:
ssh root@192.168.1.1
Then I listed the available snapshots:
$ schnapps list
# | Type | Size | Date | Description
------+-----------+-------------+-----------------------------+------------------------------------
500 | post | 15.98MiB | 2025-08-09 11:27:48 -0700 | Automatic post-update snapshot (TurrisOS 7.2.2 - hbs)
506 | pre | 17.92MiB | 2025-09-12 03:44:32 -0700 | Automatic pre-update snapshot (TurrisOS 7.2.2 - hbs)
507 | post | 17.88MiB | 2025-09-12 03:45:14 -0700 | Automatic post-update snapshot (TurrisOS 7.2.3 - hbs)
515 | time | 20.03MiB | 2025-11-02 01:05:01 -0700 | Snapshot created by cron
516 | time | 20.05MiB | 2025-11-09 01:05:01 -0800 | Snapshot created by cron
517 | time | 20.29MiB | 2025-11-16 01:05:00 -0800 | Snapshot created by cron
518 | time | 20.64MiB | 2025-11-23 01:05:01 -0800 | Snapshot created by cron
519 | time | 20.83MiB | 2025-11-30 01:05:00 -0800 | Snapshot created by cron
520 | pre | 87.91MiB | 2025-11-30 07:41:10 -0800 | Automatic pre-update snapshot (TurrisOS 7.2.3 - hbs)
521 | post | 196.32MiB | 2025-11-30 07:48:11 -0800 | Automatic post-update snapshot (TurrisOS 9.0.0 - hbs)
523 | pre | 4.44MiB | 2025-11-30 20:47:31 -0800 | Automatic pre-update snapshot
524 | post | 224.00KiB | 2025-11-30 20:47:43 -0800 | Automatic post-update snapshot
525 | rollback | 224.00KiB | 2025-12-01 04:56:32 +0000 | Rollback to snapshot factory
526 | pre | 4.44MiB | 2025-11-30 21:04:19 -0800 | Automatic pre-update snapshot
527 | post | 272.00KiB | 2025-11-30 21:04:31 -0800 | Automatic post-update snapshot
528 | rollback | 272.00KiB | 2025-12-01 05:13:38 +0000 | Rollback to snapshot factory
529 | pre | 4.52MiB | 2025-11-30 21:28:44 -0800 | Automatic pre-update snapshot
530 | single | 208.00KiB | |
531 | rollback | 224.00KiB | 2025-12-01 05:29:47 +0000 | Rollback to snapshot factory
Finally, I rolled back to the exact state I was on before the 9.0.0 update:
$ schnapps rollback 520
Current state saved as snapshot number 532
Rolled back to snapshot 520
As an aside, it turns out that the factory reset functionality is implemented as a btrfs rollback to a special factory snapshot. This is why it is so fast, but it also means that doing a simple factory reset doesn't wipe the data on your router. If you are planning to sell your device or otherwise dispose of it, you also need to delete all btrfs snapshots.
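Assuming schnapps exposes a delete subcommand for this (check schnapps help on the device before relying on it), clearing out old snapshots could be sketched as:

# Hypothetical sketch: remove a snapshot by number before disposing of the
# device; repeat for every number shown by 'schnapps list'.
schnapps delete 500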
While this update was very disappointing, especially since it's never happened before with major updates on Turris OS, it made me discover just how great the recovery tools are. It would be pretty tricky to fully brick one of these devices.
Another short status update of what happened on my side last month. Hand holding the release machinery for Phosh 0.51.0 but there's more:
See below for details on the above and more:
* DebugControl interface (MR)
* org.freedesktop.FileManager1 in the demo (MR, MR, MR)
* make run invocation (MR)
* None for parent in adw_dialog_choose (MR)

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!
If you want to support my work see donations.
Join the Fediverse thread
Review: Forever and a Day, by Haley Cass
| Series: | Those Who Wait #1.5 |
| Publisher: | Haley Cass |
| Copyright: | 2020 |
| ISBN: | 979-8-5902-5966-3 |
| Format: | Kindle |
| Pages: | 101 |
Forever and a Day is a coda to Haley Cass's self-published sapphic romance novel Those Who Wait. There is no point in reading it unless you have already read and enjoyed the full book and wanted more of a denouement.
Given that Those Who Wait is a romance novel, it is definitionally not a spoiler to reveal that Sutton and Charlotte ended up together. This novella is seven scenes sketching out the next few years of their lives, interspersed with press clippings and social media commentary. These tie up loose ends, give the characters a bit more time together, throw in one more conflict and resolution, add one more sex scene, and stick a few exclamation points after the happily ever after.
I am the sort of person who likes long denouements in stories, so I'm the target audience for this sort of sequel that's essentially additional chapters to the book. (The funniest version of this I've read is Jacqueline Carey's Saints Astray.) They are usually not great literature, since there are good reasons for not including these chapters in the book. That is exactly what this is: a few more chapters of the characters being happy, entirely forgettable, and of interest only to people who want that.
Cass does try to introduce a bit of a plot via some light family conflict, which was sweet and mostly worked, and some conflict over having children, which was very stereotyped and which I did not enjoy as much. I thought the earlier chapters of this novella were the stronger ones, although I do have to give the characters credit in the later chapters for working through conflict in a mature and fairly reasonable way. It does help, though, when the conflict is entirely resolved by one character being right and the other character being happily wrong. That's character conflict on easy mode.
I was happy to see that Sutton got a career, although as in the novel I wish Cass had put some more effort into describing Sutton's efforts in building that career. The details are maddeningly vague, which admittedly matches the maddeningly vague description of Charlotte's politics but which left me unsatisfied.
Charlotte's political career continues to be pure wish fulfillment in the most utterly superficial and trivialized way, and it bothered me even more in the novella than it did in the novel. We still have absolutely no idea what she stands for, what she wants to accomplish, and why anyone would vote for her, and yet we get endless soft-focus paeans to how wonderful she will be for the country. Her opponents are similarly vague to the point that the stereotypes Cass uses to signal their inferiority to Charlotte are a little suspect.
I'm more critical of this in 2025 than I would have been in 2015 because the last ten years have made clear the amount of damage an absolute refusal to stand for anything except hazy bromides causes, and I probably shouldn't be this annoyed that Cass chose to vaguely gesture towards progressive liberalism without muddying her romance denouement with a concrete political debate. But, just, gah. I found the last chapter intensely annoying, in part because the narrative of that chapter was too cliched and trite to sufficiently distract me from the bad taste of the cotton-candy politics.
Other than that, this was minor, sweet, and forgettable. If you want another few chapters of an already long novel, this delivers exactly what you would expect. If the novel was plenty, nothing about this novella is going to change your mind and you can safely skip it. I really liked the scene between Charlotte and Sutton's mom, though, and I'm glad I read the novella just for that.
Rating: 6 out of 10
01 December, 2025 04:06AM by Junichi Uekawa
Here’s my monthly but brief update about the activities I’ve done in the FOSS world.
Whilst I didn’t get a chance to do much, here are still a few things that I worked on:
I joined Canonical to work on Ubuntu full-time back in February 2021.
Whilst I can’t give a full, detailed list of things I did, here’s a quick TL;DR:
This month I have worked 22 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:
wordpress: There were multiple vulnerabilities reported in Wordpress, leading to Sent Data & Cross-site Scripting.
ruby-rack: There were multiple vulnerabilities reported in Rack, leading to DoS (memory exhaustion) and proxy bypass.
libwebsockets: Multiple issues were reported in LWS causing denial of service and stack-based buffer overflow.
mako: It was found that Mako, a Python template library, was vulnerable to a denial of service attack via crafted regular expressions.
ceph: Affected by CVE-2024-47866, using the argument x-amz-copy-source to put an object and specifying an empty string as its content leads to the RGW daemon crashing, resulting in a DoS attack.
[LTS] Attended the monthly LTS meeting on IRC. Summary here.
[E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates. Thanks, Sylvain, for a great documentation summary.
Until next time.
:wq for today.
Review: The Last Soul Among Wolves, by Melissa Caruso
| Series: | The Echo Archives #2 |
| Publisher: | Orbit |
| Copyright: | August 2025 |
| ISBN: | 0-316-30404-2 |
| Format: | Kindle |
| Pages: | 355 |
The Last Soul Among Wolves is urban high fantasy with strong mystery vibes. It is a direct sequel to The Last Hour Between Worlds. You need the previous book for some character setup (and this book would spoil it badly), but you don't have to remember the first book in detail. Only the main plot outcomes are directly relevant and the characters will remind you of those.
Kembrel Thorne is a Hound, the equivalent of a police detective in the medieval-inspired city setting of this series, but this book does not open with an official assignment. Instead, she has been dragged by her childhood friend Jaycel Morningrey as company for a reading of the will of old lady Lovegrace, reclusive owner of a gothic mansion on an island connected to the city by an intermittent sandbar. A surprise reunion with her gang of childhood friends ensues, followed by the revelation that they are all in serious trouble.
Shortly after Kem left the group to become a Hound, the remaining four, plus several other apparently random people, got entangled with a powerful Echo artifact. Now that Lovegrace has died, one of them will inherit the artifact and the ability to make a wish, but only one. The rest will be killed at decreasing intervals until only the winner is left alive.
The Last Hour Between Worlds was fae fantasy built around a problem that was more of a puzzle than a mystery. The Last Soul Among Wolves is closer to a classic mystery: A cast of characters are brought together and semi-isolated in a rural house, they start dying, and it's up to the detective to solve the mystery of their death before it's too late. In this case, the initial mechanism of death is supernatural and not in doubt — the challenge instead is how to stop it from happening again — but Kem's problems quickly become more complicated.
As mystery plots go, this is more thriller than classical despite the setting. There are a few scenes of analyzing clues, but Kem is more likely to use the time-honored protagonist technique of throwing herself into danger and learning what's going on via the villain monologues. As readers of the previous book would expect, Rika Nonesuch is here too, hired by another of Kem's old friends, and the two navigate their personal feelings and the rivalry between their guilds in much the way that they did in the Last Hour Between Worlds. As in the first book, there is a sapphic romance subplot, but it's a very slow burn asexual romance.
The best part of this series continues to be the world-building. The previous book introduced the idea of the Echoes and sent the characters exploring into stranger and stranger depths. This book fleshes out the rules in more detail, creating something that feels partly like a fae realm and partly like high fantasy involving gods, but diverges from both into a logic of its own. The ending satisfyingly passes my test of fantasy mysteries: Resolving the mystery requires understanding and applying the rules of the setting, which are sufficiently strange to create interesting outcomes but coherent enough that the reader doesn't feel like the author is cheating.
There are some hissable villains here, but my favorite part of this book was the way Caruso added a lot of nuance and poignancy to the Echoes rather than showing them only as an uncanny threat. That choice made the world feel deeper and richer. It's not yet clear whether that element is setup for a longer-term series plot, but I hope Caruso will develop the story in that direction.
It felt to me like Caruso is aiming for an ongoing series rather than a multi-volume story with a definite ending. She avoids a full episodic reset — Rika, in particular, gets considerable character development and new complications that bode well for future volumes — but it doesn't feel like the series is building towards an imminent climax. This is not a complaint. I enjoy these characters and this world and will happily keep devouring each new series entry.
If you liked The Last Hour Between Worlds, I think you will like this. It doesn't have the same delight of initial discovery of the great world-building, but the plot is satisfying and a bit more complex and the supporting characters are even better than those in the first book. Once again, Caruso kept me turning the pages, and I'm now looking forward to a third volume. Recommended.
The third book in the series has not yet been announced, but there are indications on social media that it is coming.
Rating: 7 out of 10

I am a huge fan of Git, as I have witnessed how it has made software development so much more productive compared to the pre-2010s era. I wish all Debian source code were in Git to reap the full benefits.
Git is not perfect, as it requires significant effort to learn properly, and the ecosystem is complex with even more things to learn ranging from cryptographic signatures and commit hooks to Git-assisted code review best practices, ‘forge’ websites and CI systems.
Sure, there is still room to optimize its use, but Git certainly has proven itself and is now the industry standard. Thus, some readers might be surprised to learn that Debian development in 2025 is not actually based on Git. In Debian, the version control is done by the Debian archive itself. Each ‘commit’ is a new upload to the archive, and the ‘commit message’ is the debian/changelog entry. The ‘commit log’ is available at snapshots.debian.org.
In practice, most Debian Developers (people who have the credentials to upload to the Debian archive) do use Git and host their packaging source code on salsa.debian.org – the GitLab instance of Debian. This is, however, based on each DD’s personal preferences. The Debian project does not have any policy requiring that packages be hosted on salsa.debian.org or be in version control at all.
Debian, however, has some peculiarities that may be surprising to people who have grown accustomed to GitHub, GitLab or various company-internal code review systems.
In Debian:
unstable area is equivalent to the main development branch).

This system has served Debian for three decades. It is not broken, but using the package archive just feels… well, archaic.
There is a more efficient way, and indeed the majority of Debian packages have a metadata field, Vcs-Git, that advertises which version control repository the maintainer uses. However, newcomers to Debian are surprised to notice that not all packages are hosted on salsa.debian.org; some live at various other places with their own accounts and code submission systems, and nothing enforces, or even warns, when the code there is out of sync with what was uploaded to Debian. Any Debian Developer can at any time upload a new package with whatever changes, bypassing the Git repository, even when the package advertises one. All PGP-signed commits, Git tags and other information in the Git repository are currently just extras, as the Debian archive does not enforce or validate anything about them.
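For illustration, the Vcs-* fields live in debian/control and look roughly like this (the package name and repository path here are just an example, not taken from any particular package):

$ grep -E '^Vcs-' debian/control
Vcs-Browser: https://salsa.debian.org/debian/hello
Vcs-Git: https://salsa.debian.org/debian/hello.git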
This also makes contributing to multiple packages in parallel hard. One can’t just go on salsa.debian.org and fork a bunch of repositories and submit Merge Requests. Currently, the only reliable way is to download source packages from Debian unstable, develop patches on top of them, and send the final version as a plain patch file by email to the Debian bug tracker. To my knowledge, no system exists to facilitate working with the patches in the bug tracker, such as rebasing patches 6 months later to detect if they or equivalent changes were applied or if sending refreshed versions is needed.
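As a rough sketch, that patch-by-email workflow looks something like the following (package name, versions and the exact commands are illustrative, not prescribed by Debian):

$ apt-get source hello                        # fetch the current source package from unstable
$ cd hello-2.10/
$ # ...edit files, record the change in debian/changelog with dch...
$ debuild -S -us -uc                          # build an unsigned source package
$ cd ..
$ debdiff hello_2.10-1.dsc hello_2.10-1.1.dsc > fix.debdiff
$ reportbug --attach=fix.debdiff hello        # mail the patch to the Debian bug tracker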
To newcomers in Debian, it is even more surprising that there are packages that are on salsa.debian.org but have the Merge Requests feature disabled. This is often because the maintainer does not want to receive notification emails about new Merge Requests, but rather just emails from bugs.debian.org. This may sound arrogant, but keep in mind that these developers put in the effort to set up their Mutt/Emacs workflow for the existing Debian process, and extending it to work with GitLab notifications is not trivial. There are also purists who want to do everything via the command-line (without having to open a browser, run JavaScript and maintain a live Internet connection), and tools like glab are not convenient enough for the full workflow.
I would claim, based on my personal experiences from the past 10+ years as a Debian Developer, that the lack of high-quality and productive tooling is seriously harming Debian. The current methods of collaboration are cumbersome for aspiring contributors to learn, and suboptimal to use both for new and seasoned contributors.
There are no exit interviews for contributors who left Debian, no comprehensive data on reasons to contribute or stop contributing, nor are there any metrics tracking how many people tried but failed to contribute to Debian. Some data points to support my concerns do exist:
Debian is all about community and collaboration. One would assume that Debian prioritized above all making collaboration tools and processes simpler, faster and less error-prone, as it would help both current and future package maintainers. Yet, it isn’t so, due to some reasons unique to Debian.
There is no single company or entity running Debian, and it has managed to operate as a pure meritocracy and do-cracy for over 30 years. This is impressive and admirable. Unfortunately, some of the infrastructure and technical processes are also nearly 30 years old and very difficult to change due to the same reason: the nature of Debian’s distributed decision-making process.
As a software developer and manager with 25+ years of experience, I strongly feel that developing software collaboratively using Git is a major step forward that Debian needs to take, in one form or another, and I hope to see other DDs voice their support if they agree.
Following how consensus is achieved in Debian, I started drafting DEP-18 in 2024, and it is currently awaiting enough thumbs up at https://salsa.debian.org/dep-team/deps/-/merge_requests/21 to get into CANDIDATE status next.
In summary, DEP-18 proposes that everyone keen on collaborating should:
The principles above are not novel. According to stats at e.g. trends.debian.net and UDD, ~93% of all Debian source packages are already hosted on salsa.debian.org. As of June 1st, 2025, only 1640 source packages remain that are not hosted on Salsa. The purpose of DEP-18 is to state in writing what Debian is already doing for most packages, and thus spell out what new contributors, among others, should learn and do, so that basic collaboration is smooth and free from structural obstacles.
Most packages also already allow Merge Requests and use Salsa CI, but there hasn’t been any written recommendation anywhere in Debian to do so. The Debian Policy (v4.7.2) does not even mention the word “Salsa” a single time. The current process documentation on how to do non-maintainer uploads or how to salvage packages is all based on uploading packages to the archive, without any consideration of Git-based collaboration such as posting a Merge Request first. Personally, I feel posting a Merge Request would be a better approach, as it would invite collaborators to discuss and provide code reviews. If there are no responses, the submitter can proceed to merge, but compared to direct uploads to the Debian archive, the Merge Request practice at least tries to offer a time and place for discussions and reviews to happen.
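As a sketch, a Merge-Request-first contribution on Salsa could look roughly like this (repository, branch name and bug number are illustrative):

$ git clone https://salsa.debian.org/debian/hello.git    # or fork the project on Salsa and clone your fork
$ cd hello
$ git checkout -b fix-1098765
$ # ...commit the fix...
$ git push -u origin fix-1098765
# GitLab then prints a URL for opening a Merge Request against the original repository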
It could very well be that in the future somebody comes up with a new packaging format that makes upstream source package management easier, or a monorepo with all packages, or some other future structures or processes. Having a DEP to state how to do things now does not prevent people from experimenting and innovating if they intentionally want to do that. The DEP is merely an expression of the minimal common denominators in the packaging workflow that maintainers and contributors should follow, unless they know better.
Among the DEP-18 recommendations is:
The recommended first step in contributing to a package is to use the built-in “Fork” feature on Salsa. This serves two purposes. Primarily, it allows any contributor to publish their Git branches and submit them as Merge Requests. Additionally, the mere existence of a list of “Forks” enables contributors to discover each other, and in rare cases when the original package is not accepting improvements, collaboration could arise among the contributors and potentially lead to permanent forks in the general meaning. Forking is a fundamental part of the dynamics in open source that helps drive quality and agreement. The ability to fork ultimately serves as the last line of defense of users’ rights. Git supports this by making both temporary and permanent forks easy to create and maintain.
Further, it states:
Debian packaging work should be reasonably transparent and public to allow contributors to participate. A maintainer should push their pending changes to Salsa at regular intervals, so that a potential contributor can discover if a particular change has already been made or a bug has been fixed in version control, and thus avoid duplicate work.
Debian maintainers should make reasonable efforts to publish planned changes as Merge Requests on Salsa, and solicit feedback and reviews. While pushing changes directly on the main Git branch is the fastest workflow, second only to uploading all changes directly to Debian repositories, it is not an inclusive way to develop software. Even packages that are maintained by a single maintainer should at least occasionally publish Merge Requests to allow new contributors to step up and participate.
I think these are key aspects leading to transparency and true open source collaboration. Even though this talks about Salsa — which is based on GitLab — the concepts are universal and will also work on other forges, like Forgejo or GitHub. The point is that sharing work-in-progress on a real-time platform, with CI and other supporting features, empowers and motivates people to iterate on code collaboratively. As an example of an anti-pattern, Oracle MySQL publishes the source code for all its releases and is license-compliant, but as Oracle doesn’t publish its Git commits in real time, it does not feel like a real open source project. Non-Oracle employees are not motivated to participate as second-class developers who are kept in the dark. Debian should embrace Git and sharing work in real time, embodying a true open source spirit.
Note that the Debian Enhancement Proposals are not binding. Only the Debian Policy and Technical Committee decisions carry that weight. The nature of collaboration is voluntary anyway, so the DEP does not need to force anything on people who don’t want to use salsa.debian.org.
The DEP-18 is also not a guide for package maintainers. I have my own views and have written detailed guides in blog articles if you want to read more on, for example, how to do code reviews efficiently.
Within DEP-18, there is plenty of room to work in many different ways, and it does not try to force one single workflow. The goal here is to simply have agreed-upon minimal common denominators among those who are keen to collaborate using salsa.debian.org, not to dictate a complete code submission workflow.
Once we reach this, there will hopefully be less friction in the most basic and recurring collaboration tasks, giving DDs more energy to improve other processes or just invest in having more and newer packages for Debian users to enjoy.
In addition to lengthy online discussions on mailing lists and DEP reviews, I also presented on this topic at DebConf 2025 in Brest, France. Unfortunately the recording is not yet up on Peertube.
The feedback has been overwhelmingly positive. However, there are a few loud and very negative voices that cannot be ignored. Maintaining a Linux distribution at the scale and complexity of Debian requires extraordinary talent and dedication, and people doing this kind of work often have strong views, and most of the time for good reasons. We do not want to alienate existing key contributors with new processes, so maximum consensus is desirable.
We also need more data on what the 1000+ current Debian Developers view as a good process to avoid being skewed by a loud minority. If you are a current or aspiring Debian Developer, please add a thumbs up if you think I should continue with this effort (or a thumbs down if not) on the Merge Request that would make DEP-18 have candidate status.
There is also technical work to do. Increased Git use will obviously lead to growing adoption of the new tag2upload feature, which will need full git-buildpackage support so it can integrate into salsa.debian.org without turning off Debian packaging security features. The git-buildpackage tool itself also needs various improvements, such as making it less error-prone to contribute to multiple different packages whose debian/gbp.conf files are maintained with varying levels of diligence.
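For context, the per-package debian/gbp.conf that git-buildpackage reads is small; a typical example following DEP-14 branch naming might look like this (the exact settings vary per package):

$ cat debian/gbp.conf
[DEFAULT]
debian-branch = debian/latest
upstream-branch = upstream/latest
pristine-tar = True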
Eventually, if it starts looking like all Debian packages might get hosted on salsa.debian.org, I would also start building a review.debian.org website to facilitate code review aspects that are unique to Debian, such as tracking Merge Requests across GitLab projects in ways GitLab can’t do, highlighting which submissions need review most urgently, feeding code reviews and approvals into the contributors.debian.org database for better attribution and so forth.
Details on this vision will be in a later blog post, so subscribe to updates!
The Debian LTS Team, funded by Freexian’s Debian LTS offering, is pleased to report its activities for October.
During the month of October, 21 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).
The team released 37 DLAs fixing 893 CVEs.
The team has continued in its usual rhythm, preparing and uploading security updates targeting LTS and ELTS, as well as helping with updates to oldstable, stable, testing, and unstable. Additionally, the team received several contributions of LTS uploads from Debian Developers outside the standing LTS Team.
Notable security updates:
Notable non-security updates:
Contributions from outside the LTS Team:
Beyond the typical LTS updates, the team also helped the Debian community more broadly:
The LTS Team is grateful for the opportunity to contribute to making LTS a high-quality offering for sponsors and users. We are also particularly grateful for the collaboration from others outside the team; their contributions are important to the success of the LTS effort.
Sponsors that joined recently are in bold.
29 November, 2025 12:00AM by Roberto C. Sánchez
One of the servers to which I SSH ratcheted up its public key requirements and thus the Monkeysphere key I've been using for 15 years stopped working.
Unfortunately, monkeysphere gen-subkey hardcodes RSA keys and if I'm going to be forced to use a new subkey I want mine to be of the 25519 variety. Therefore, to add a subkey by hand:
gpg --expert --edit-key $KEYID
Follow roughly what's in /usr/share/monkeysphere/m/gen_subkey, but change the key type to 11 (ECC (set your own capabilities)), don't bother with Encrypt capability, and pick Curve25519.
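A condensed sketch of that interactive session (prompts abbreviated, choices shown as comments; exact menu wording depends on your GnuPG version):

gpg> addkey
# choose (11) ECC (set your own capabilities)
# toggle capabilities so only Authenticate is enabled (S to drop Sign, A to add Authenticate, Q to finish)
# choose (1) Curve 25519
# pick an expiry and confirm, then:
gpg> save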
monkeysphere subkey-to-ssh-agent and agent-transfer will be all happy with the "ed25519" subkey without any code modifications, and you won't need to rewrite monkeysphere from scratch to use Sequoia for the next 15 years.
The debian-with-guix-container project builds and publishes container images of Debian GNU/Linux stable with GNU Guix installed.
The images are like normal Debian stable containers but have the guix tool and a reasonably fresh guix pull.
Supported architectures include amd64 and arm64. The multi-arch container is called:
registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
It may also be accessed via debian-with-guix at Docker Hub as:
docker.io/jas4711/debian-with-guix:stable
The container images may be used like this:
$ podman run --privileged -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
root@guix:/# hello
bash: hello: command not found
root@guix:/# guix describe
guix c9eb69d
repository URL: https://gitlab.com/debdistutils/guix/mirror.git
branch: master
commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild &
[1] 21
root@guix:/# GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
root@guix:/# guix describe
Generation 2 Nov 28 2025 10:14:11 (current)
guix c9eb69d
repository URL: https://gitlab.com/debdistutils/guix/mirror.git
branch: master
commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# guix install --verbosity=0 hello
accepted connection from pid 55, user root
The following package will be installed:
hello 2.12.2
hint: Consider setting the necessary environment variables by running:
GUIX_PROFILE="/root/.guix-profile"
. "$GUIX_PROFILE/etc/profile"
Alternately, see `guix package --search-paths -p "/root/.guix-profile"'.
root@guix:/# GUIX_PROFILE="/root/.guix-profile"
root@guix:/# . "$GUIX_PROFILE/etc/profile"
root@guix:/# hello
Hello, world!
root@guix:/#
Below is an example GitLab pipeline job that demonstrates how to run guix install to install additional dependencies, and then download and build a package that picks up the installed package from the system.
test-wget-configure-make-libksba-amd64:
image: registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
before_script:
- env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARG &
- GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
- guix describe
- guix install libgpg-error
- GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
- apt-get install --update -y --no-install-recommends build-essential wget ca-certificates bzip2
script:
- wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
- tar xfa libksba-1.6.7.tar.bz2
- cd libksba-1.6.7
- ./configure
- make V=1
- make check VERBOSE=t V=1
The images were initially created for use in GitLab CI/CD Pipelines but should work for any use.
The images are built in a GitLab CI/CD pipeline, see .gitlab-ci.yml.
The containers are derived from official Debian stable images with Guix installed and a successful run of guix pull, built using buildah invoked from build.sh using image/Containerfile that runs image/setup.sh.
The pipeline also pushes images to the GitLab container registry, and then to Docker Hub.
Guix binaries are downloaded from the Guix binary tarballs project because of upstream download site availability and bandwidth concerns.
Enjoy these images! Hopefully they can help you overcome the loss of Guix in Debian, which previously made Guix a mere apt-get install guix away.
There are several things that may be improved further. An alternative to using podman --privileged is to use --security-opt seccomp=unconfined --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN which may be slightly more fine-grained.
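In other words, something along these lines (an untested sketch combining the flags above with the earlier invocation):

$ podman run --security-opt seccomp=unconfined \
    --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN \
    -it --hostname guix --rm \
    registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable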
For ppc64el support I ran into an error message that I wasn’t able to resolve:
guix pull: error: while setting up the build environment: cannot set host name: Operation not permitted
For riscv64, I can’t even find a Guix riscv64 binary tarball for download, is there one anywhere?
For arm64 containers, it seems that you need to start guix-daemon with --disable-chroot to get something to work, at least on GitLab.com’s shared runners, otherwise you will get this error message:
guix install: error: clone: Invalid argument
Building the images themselves also requires disabling some security functionality, and I was not able to build images with buildah without providing --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN, otherwise there were errors like this:
guix pull: error: cloning builder process: Operation not permitted
guix pull: error: clone: Operation not permitted
guix pull: error: while setting up the build environment: cannot set loopback interface flags: Operation not permitted
Finally on amd64 it seems --security-opt seccomp=unconfined is necessary, otherwise there is an error message like this, even if you use --disable-chroot:
guix pull: error: while setting up the child process: in phase setPersonality: cannot set personality: Function not implemented
This particular error is discussed upstream, but generally I think these errors suggest that guix-daemon could make more of its features optional: if some particular feature is not available, gracefully fall back to another mode of operation instead of exiting with an error. Of course, it should never fall back to an insecure mode of operation unless the user requests that.
Happy Hacking!
28 November, 2025 04:32PM by simon
Aliexpress has a 4 port 2.5gbit switch with 2*SFP+ sockets for $34.35 delivered [1]. 4 ports isn’t very good for the more common use cases (if daisy chaining them then it’s only 2 available for devices) so this is really a device for use with 10Gbit uplink.
Aliexpress has a pair of SFP+ 10Gbit devices with 1M of copper between them for $15.79 delivered [2]. That page also offers a pair of QSFP+ 40Gbit devices with 1M of copper between them for $27.79 delivered.
So you can get a 2.5gbit switch with two 10gbit uplink cables to nearby servers for $66.86 including postage. I don’t need this but it is tempting. I spent $93.78 to get 2.5gbit networking [4] so spending $66.86 to get part of my network to 10gbit isn’t much.
It is $99.81 including postage for a Mellanox 2*40Gbit QSFP+ card and two QSFP+ adaptors with 3M of copper between them [5]. It is $55.81 including postage for the Mellanox card without the cable. So that’s $155.62 for a point to point 40gbit link between systems that are less than 3M apart, that’s affordable for a home lab. As an aside the only NVMe I’ve tested which can deliver such speeds was in a Thinkpad and the Thinkpad entered a thermal throttling state after a few seconds of doing that.
The best price I could see for a 40Gbit switch is $1280 for a L3 Managed switch with 2*40G QSFP+ slot ports, 4*10G SFP+ ports, and 48*2.5G RJ45 ports [6]. That’s quite affordable for the SME market but a bit expensive for home users (although I’m sure that someone on r/homelab has one).
I’m not going to get 40Gbit, that’s well above what I need and while a point to point link is quite affordable I don’t have servers in that range. But I am seriously considering 10Gbit, I get paid to do enough networking stuff that having some hands on experience with 10Gbit could be useful.
For a laptop a 5gbit ethernet USB device is $29.48 including delivery which isn’t too expensive [7]. The faster ones seem to be all Thunderbolt and well over $100, which is disappointing as USB 3.2 can do up to 20Gbit. If I start doing 10gbit over ethernet I’ll get one of those USB devices for testing.
For a single server it’s cheaper and easier to get a 4 port 2.5Gbit ethernet for $55.61 [8].
28 November, 2025 08:13AM by etbe
Review: A Matter of Execution, by Nicholas & Olivia Atwater
| Series: | Tales of the Iron Rose #0 |
| Publisher: | Starwatch Press |
| Copyright: | 2024 |
| ISBN: | 1-998257-08-8 |
| Format: | Kindle |
| Pages: | 131 |
A Matter of Execution is the introductory novella that kicked off the Tales of the Iron Rose series. It is steampunk fantasy with airships. I previously read and reviewed the subsequent novel, Echoes of the Imperium.
As noted in that review, I read the novel first. That was a mistake; this is a much better place to start. A Matter of Execution was clearly intended as the introduction of all of these characters. More importantly, I think reading the novella first would have given me enough affinity with the characters to not mind the worst part of Echoes of the Imperium: the extremely slow first half that seemed filled with the protagonist's impostor syndrome.
A Matter of Execution opens, fittingly, with Captain William Blair, a goblin, former Imperial soldier, Oathbreaker, and series first-person protagonist being carted to his execution. He is not alone; in the same prison wagon is an arrogant (and racist) man named Strahl, the killer of one of the rulers of Lyonesse.
Strahl is rather contemptuous of Blair's claim to be a captain, given that he's both a goblin and an Oathbreaker. Strahl quickly revises that opinion when Blair's crew, somewhat predictably given that he is the series protagonist, creates a daring escape for both of them. The heat of action gives both a chance to gain some respect for the other, which explains why Blair is not only willing to invite Strahl to join his crew, but to go back for Strahl's companion.
Breaking out Strahl's companion will be a more difficult, and surprising, problem.
Nicholas Atwater is a role-playing game GM, something that you will learn in the "about the author" section at the end of this novella but probably will have guessed by then. Even more than Echoes of the Imperium, this novella feels like a (good) write-up of an RPG adventure. A wildly varied cast of characters come together and form a party with a well-defined objective that has some surrounding mysteries and surprises. Each of those characters gets their individual moments to show off their specific skills. Readers with a certain gaming background will know exactly where to insert the Borderlands-style title card with a slightly demented description of each character.
This is not a complaint. You may be able to see the bones of the setup adventure for a long-running campaign, but I like this style of character introduction and the story moves right along. There are a ton of varied characters, some interesting villains and maybe-villains, a rather satisfying heist setup, and some good chemistry and a bit of banter. This is not a deep story — it's clearly an introductory episode for both the characters and the world background — but it's a fun way to spend a few hours.
I think the best part of this series is the world-building. If you have read my review of Echoes of the Imperium, you have unfortunately been mildly spoiled for the revelation in this novella. I don't think it hurt the story that much; you will be able to predict what obvious gaps in the novel backstory the novella is going to fill in, but it's just as enjoyable to see how that happens. But the Atwaters aren't going to drop any of the big world-building bombs in the introductory novella, of course. Instead, you get a gradual introduction to the nature of magic in this world, some of the political setup of the recent war, and a quick introduction to the capabilities of Strahl's mysterious companion.
If you've not yet read this series, I recommend starting here. It's a quick investment to see if you'll be interested. The novel is heavier and slower, and the pacing of the first half isn't great, but the world-building is even better.
If you've already read the novel, this is still worth reading as long as you enjoyed it. You'll have a few moments of "oh, that's how that happened," and it's a fun and fast-moving way to spend a bit more time with the characters.
Followed by Echoes of the Imperium. The back matter of the novella says that The Winds of Fortune is supposedly forthcoming.
Rating: 7 out of 10
I’ve had a Pine Time for just over 2 years [1]. About a year ago I had a band break and replaced it from a spare PineTime and now I just had another break. Having the band only last one year isn’t that great, but it’s fortunate that the break only affects the inner layer of plastic so there is no risk of the watch suddenly falling off and being broken or lost. The Pine64 web site has a page about this with bad options, one broken link and a few Amazon items that have ridiculous postage [2].
I started writing this post while using the band from a Colmi P80 [3]. I bought one for a relative who wanted the metal band and the way the Aliexpress seller does it is to sell the package with the plastic band and include the metal band in the package so I had a spare band. It fits quite well and I saw none of the reported problems of the PineTime having insufficient space between the spring bar and the watch. The Colmi band in question is described as “rose gold” but is more like “pinkish beige” and doesn’t match the style of the black PineTime.
I ordered a couple of cheap bands from AliExpress which cost $9.77 and $13.55 including postage while the ones that Pine64 recommend have over $15 postage from Amazon!
The 20mm Silicone Magnetic Buckle Watch Strap Band For Huawei GT2 Smart Watch Connected Bracelet Black Watchband Man [4] cost $13.55 including postage. It has a magnetic unfold mechanism which I find a bit annoying and it doesn’t allow easily changing the length. I don’t think I’ll choose that again. But it basically works and is comfortable.
The 20mm Metal Strap for Huawei Watch GT2 3 Quick Release Stainless Steel Watch Band for Samsung Galaxy Watch Bracelet [5] cost $9.77 including postage. I found this unreasonably difficult to put on and not particularly comfortable. But opinion will vary on that, it is cheap and will appeal to some people’s style.
There are claims that getting a replacement band for a PineTime is difficult. My experience is that every band with a 20mm attachment works as long as it’s designed for a square watch, some of the bands are designed to partly go around a round face and wouldn’t fit. I expect that some bands won’t fit, but I don’t think that it’s enough of a problem to be worried about buying a random band from AliExpress. The incidence of bands not fitting will probably be lower than the incidence of other AliExpress products not doing quite what you want (while meeting the legal criteria of doing what they are claimed to do) and not being used.
I’m now wearing the PineTime with the “Magnetic Buckle Watch Strap Band” and plan to wear it for the next year or so.
27 November, 2025 12:37AM by etbe
A few years ago I wrote some planner generating code to make myself a custom planner; in November 2023 I generated a few, and posted them here on the blog, in case somebody was interested in using them.
In 2024 I tried to do the same, and ended up being even more late, to the point where I didn’t generate any (uooops).
I did, however, start to write a Makefile to automate the generation (and got stuck on the fact that there wasn’t an easy way to deduce the correct options needed from just the template name); this year, with the same promptness as in 2023 I got back to the Makefile and finished it, so maybe next year I will be able to post them early enough for people to print and bind them? maybe :)
Anyway, these are all of the variants I currently generate, for 2026.
The files with -book in the name have been imposed on A4 paper for a 16-page signature. All of the fonts have been converted to paths, for ease of printing (yes, this means that customizing the font requires running the script, but the alternative also had its drawbacks).
In English:
blank daily pages, 95 mm × 186 mm;
blank daily pages, A5;
blank daily pages, A6;
graph paper (4 mm) daily pages, A5;
pointed paper (4 mm), A5;
ruled paper daily pages, A5;
weekly planner, one week on two pages, A6;
weekly planner, one week per page, A6;
weekly planner, one week per page with 4 mm dots, A6;
weekly health tracker, one week per page with 4 mm dots, A6;
monthly planner, A6;
And the same planners, in Italian:
blank daily pages, 95 mm × 186 mm;
blank daily pages, A5;
blank daily pages, A6;
graph paper (4 mm) daily pages, A5;
pointed paper (4 mm), A5;
ruled paper daily pages, A5;
weekly planner, one week on two pages, A6;
weekly planner, one week per page, A6;
weekly planner, one week per page with 4 mm dots, A6;
weekly health tracker, one week per page with 4 mm dots, A6;
monthly planner, A6;
Some of the planners include ephemerids and moon phase data: these have been calculated for the town of Como, and specifically for geo:45.81478,9.07522?z=17, because that’s what everybody needs, right?
If you need the ephemerids for a different location and can’t run the script yourself (it depends on pdfjam, i.e. various GB of LaTeX, and a few python modules such as dateutil, pypdf and jinja2), feel free to ask: unless I receive too many requests to make this sustainable I’ll generate them and add them to this post.
I hereby release all the PDFs linked in this blog post under the CC0 license.
You may notice that I haven’t decided on a license for the code dump repository; again if you need it for something (that is compatible with its unsupported status) other than running it for personal use (for which afaik there is an implicit license) let me know and I’ll push “decide on a license” higher on the stack of things to do :D
Finishing the Makefile meant that I had to add a tiny feature to one of the scripts involved, which required me to add a dependency on pypdf: up to now I have been doing the page manipulations with pdfjam, which is pretty convenient to use, but also uses LaTeX, and apparently not every computer comes with texlive installed (shocking, I know).
If I’m not mistaken, pypdf can do all of the things I’m doing with pdfjam, so maybe for the next year I could convert my script to use that one instead.
But then the planners 2027 will be quick and easy, and I will be able to publish them promptly, right?
The following contributors got their Debian Developer accounts in the last two months:
The following contributors were added as Debian Maintainers in the last two months:
Congratulations!
26 November, 2025 04:00PM by Jean-Pierre Giraud
I previously blogged about buying a refurbished Hisense 65u80g 8K TV with the aim of making it a large monitor [1] and about searching for a suitable video card for 8k [2]. After writing the second post I bought an Intel Arc B580 which also did a maximum of 4096*2160 resolution.
This post covers many attempts to try and get the TV to work correctly and it doesn’t have good answers. The best answer might be to not buy Hisense devices but I still lack data.
I posted on Lemmy again about this [3] and got a single response, which is OK as it was a good response. They didn’t give me the answer on a silver platter but pointed me in the right direction of EDID [4].
I installed the Debian packages read-edid, wxedid, and edid-decode.
The command “get-edid > out.edid” saves the binary form of the EDID to a file. The command “wxedid out.edid” allows graphical analysis of the EDID data. The command “edid-decode out.edid” dumps a plain text representation of the output, and the command “edid-decode out.edid|grep VIC|cut -d: -f2|sort -n” shows an ordered list of video modes. In my case the highest resolution is 4096×2160, which is the highest that Linux had allowed me to set with two different video cards and a selection of different cables (both HDMI and DisplayPort).
xrandr --newmode 7680x4320 1042.63 7680 7984 7760 7824 4320 4353 4323 4328
xrandr --addmode HDMI-3 7680x4320
xrandr --output HDMI-3 --mode 7680x4320
I ran the above commands and got the below error:
xrandr: Configure crtc 0 failed
At this time I don’t know how much of this is due to the video card and how much is due to the TV. The parameters for xrandr came from an LLM because I couldn’t find any Google results on what 8K parameters to use. As an aside, if you have a working 8K TV or monitor connected to a computer, please publish the EDID data, xrandr, and everything else you can think of.
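One way to generate candidate timings without asking an LLM is the cvt tool with reduced blanking; whether the resulting modeline actually works still depends on what the GPU and TV will accept:

$ cvt -r 7680 4320 60
# prints a “Modeline” line whose numbers can be passed to xrandr --newmode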
I found a Github repository for EDID data [5] but that didn’t have an entry for my TV and didn’t appear to have any other entry for an 8K device I could use.
I installed a browser on the TV; Chrome and Firefox aren’t available for a TV, and the Play Store program tells you that (but without providing a reason) when you search for them. I tried the site CodeShack What is my Screen Resolution [6] which said that my laptop is 2460*1353 while the laptop display is actually 2560*1440. So apparently I have 100 pixels used for the KDE panel at the left of the screen and 87 pixels used by the Chrome tabs and URL bar – which seems about right. My Note 9 phone reports 384*661 out of its 2960*1440 display so it seems that Chrome on my phone is running web sites at 4/15 of the native resolution and about 16% of the height of the screen is used by the system notification bar, the back/home/tasklist buttons (I choose buttons instead of swipe for navigation in system settings), and the URL bar when I have “Screen zoom” in system settings at 1/4. When I changed “Screen zoom” to 0/4 the claimed resolution changed to 411*717 (2/7 of the native resolution). Font size changes didn’t change the claimed resolution. The claimed “Browser Viewport Size” by CodeShack is 1280*720, which is 1/6 of the real horizontal resolution and slightly more than 1/6 of the vertical resolution; it claims that the Pixel Density is 2* and a screen resolution of 970*540, which implies that the browser is only working at 1920*1080 resolution!
When I view Netflix shows using the Netflix app running on the TV, it reports “4K”, which doesn’t happen on Linux PCs (as they restrict 4K content to platforms with DRM), and in the “Device” setting it reports “Device Model” as “Hisense_SmartTV 8K FFM”, so the Netflix app knows all about 4K content and knows the text string “8K”.
When I view a YouTube video that’s described as being 8K I don’t get a request to pay for YouTube Premium, which is apparently what happens nowadays when you try to play actual 8K video. I turned on “Stats for Nerds” and one line has “Viewport / Frames 1920×1080*2.00” and another has “Current / Optimal Res 3840×2160@60 / 3840×2160@60”, so it seems that the YouTube app is seeing the screen as 4K but choosing to only display FullHD even when I have Quality set to “2160p60 HDR”. It declares the network speed to be over 100mbit most of the time and the lowest it gets is 60mbit, while 50mbit is allegedly what’s required for 8K.
I installed a few Android apps to report hardware capabilities and they reported the screen resolution to be 1920*1080.
It looks like I might have been ripped off by this. I can’t get any app other than Netflix to display 4K content. My PC will only connect to it at 4K. Android apps (including YouTube) regard it as 1920*1080.
The “AI Upscaling” isn’t really that great and in most ways it seems at best equivalent to a 4K TV and less than a 4K TV that runs Android apps with an actual 4K display buffer.
The next things I plan to do are to continue attempts to get the TV to do what it’s claimed to be capable of, either an Android app that can display 8K content or a HDMI input of 8K content will do. Running a VNC client on the TV would be an acceptable way of getting an 8K display from a Linux PC.
I need to get a somewhat portable device that can give 8K signal output. Maybe a mini PC with a powerful GPU or maybe one of those ARM boards that’s designed to drive an 8K sign. Then I can hunt for stores that have 8K TVs on display.
It would be nice if someone made a USB device that does 8K video output – NOT a USB-C DisplayPort alternative mode that uses the video hardware on the laptop. Then I could take a laptop to any place that has an 8K display to show and connect my laptop to it.
The one thing I haven’t done yet is testing 8K MP4 files on a USB stick. That’s mainly due to a lack of content and the fact that none of the phone cameras I have access to can do 8K video. I will try displaying 8K PNG and JPEG files from a USB stick.
Most people would give up about now. But I am determined to solve this and buying another large TV isn’t out of the question.
25 November, 2025 07:09AM by etbe
I spent far too much time recently trying to get a Signal Private Messenger account to transfer from one device to another.
What I eventually found worked was a very finicky path to enable functioning "Wi-Fi Direct", which I go into below.
I also offer some troubleshooting and recovery-from-failure guidance.
All of this blogpost uses "original device" to refer to the Android pocket supercomputer that already has Signal installed and set up, and "new device" to mean the Android device that doesn't yet have Signal on it.
Signal Private Messenger is designed with the expectation that the user has a "primary device", which is either an iPhone or an Android pocket supercomputer.
If you have an existing Signal account, and try to change your primary device by backing up and restoring from backup, it looks to me like Signal will cause your long-term identity keys to be changed. This in turn causes your peers to see a message like "Your safety number with Alice has changed."
These warning messages are the same messages that they would get if an adversary were to take over your account. So it's a good idea to minimize them when there isn't an account takeover — false alarms train people to ignore real alarms.
You can avoid "safety number changed" warnings by using signal's "account transfer" process during setup, at least if you're transferring between two Android devices.
However, my experience was that the transfer between two Android devices was very difficult to get to happen at all. I ran into many errors trying to do this, until I finally found a path that worked.
After each failed attempt at a transfer, my original device's Signal installation would need to be re-registered. Having set a PIN meant that i could re-register the device without needing to receive a text message or phone call.
Set a PIN before you transfer!
Also, after a failure, you need to re-link any "linked device" (i.e. any Signal Desktop or iPad installation). If any message came in during the aborted transfer, the linked device won't get a copy of that message.
Finally, after a failed transfer, i recommend completely uninstalling Signal from the new device, and starting over with a fresh install on the new device.
My understanding is that Signal on Android uses Wi-Fi Direct to accomplish the transfer. But to use Wi-Fi Direct, Signal needs to have the right permissions.
On each device:
Settings » Apps » Signal » Permissions

The transfer process depends on "Wi-Fi Direct", which is a bit of a disaster on its own.
I found that if i couldn't get Wi-Fi Direct to work between the two devices, then the Signal transfer was guaranteed to fail.
So, for clearer debugging, i first tried to establish a Wi-Fi Direct link on Android, without Signal being involved at all.
Setting up a Wi-Fi Direct connection directly failed, multiple times, until i found the following combination of steps, to be done on each device:
I found that this configuration is the most likely to enable a successful Wi-Fi Direct connection, where clicking "invite" on one device would pop up an alert on the other asking to accept the connection, and result in a "Connected" state between the two devices.
Start with both devices fully powered up and physically close to one another (on the same desk should be fine).
On the new device:
On the original device:
Now tap the "continue" choices on both devices until they both display a message that they are searching for each other. You might see the location indicator (a green dot) turn on during this process.
If you see an immediate warning of failure on either device, you probably don't have the permissions set up right.
You might see an alert (a "toast") on one of the devices that the other one is trying to connect. You should click OK on that alert.
In my experience, both devices are likely to get stuck "searching" for each other. Wait for both devices to show Signal's warning that the search has timed out.
At this point, leave Signal open on both devices, and go through all the steps described above to prepare for Wi-Fi Direct. Your Internet access will be disabled.
Now, tap "Try again" in Signal on both devices, pressing the buttons within a few seconds of each other. You should see another alert that one device is trying to connect to the other. Press OK there.
At this point, the transfer should start happening! The old device will indicate what percentage has been transferred, and the new device will indicate how many messages have been transferred.
When this is all done, re-connect to Wi-Fi on the new device.
Note that during this process, if new messages are arriving, they will be queuing up for you.
When you reconnect to wi-fi, the queued messages will flow to your new device. But the process of transferring automatically unlinks any linked devices. So if you want to keep your instance of Signal Desktop with as short a gap as possible, you should re-link that installation promptly after the transfer completes.
After all this is done successfully, you probably want to go into the Permissions settings and turn off the Location and Nearby Devices permissions for Signal on both devices.
I recommend also going into Wi-Fi Direct and removing any connected devices and forgetting any existing connections.
This is an abysmally clunky user experience, and I'm glad I don't have to do it often. It would have been much simpler to make a backup and restore from it, but I didn't want to freak out my contacts with a safety number change.
By contrast, when i wanted to extend a DeltaChat account across two devices, the transfer was prompt and entirely painless -- i just had to make sure the devices were on the same network, and then scanned a QR code from one to the other. And there was no temporal gap for any other devices. And i could use Delta on both devices simultaneously until i was convinced that it would work on the new device -- Delta doesn't have the concept of a primary account.
I wish Signal made it that easy! Until it's that easy, i hope the processes described here are useful to someone.
21 November, 2025 05:00AM by Daniel Kahn Gillmor
Well, you are right, you can’t. At least not directly. This is well documented in many projects relying on interposing binaries, like faketime.
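To illustrate the problem (the binary name below is just an example of a statically linked program that calls clock_gettime), LD_PRELOAD-based tools are simply ignored:

$ faketime '2008-12-24 08:15:42' ./test_static_clock_gettime
# prints the real current time: the static binary never loads libfaketime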
But what if we could write something that would take a static binary, replace at least the direct syscalls with ones going through libc and load it with the dynamic linker? We are in luck, because the excellent QEMU project has a user space emulator! It can be compiled as a dynamically linked executable, honors LD_PRELOAD and uses the host libc’s syscall – well, at least sometimes. Sometimes syscalls just bypass libc.
The missing piece was a way to make QEMU always take the interposable path and call the host libc instead of using an arch-specific assembly routine (`safe_syscall_base`) to construct the syscall and going directly to the kernel. Luckily, this turned out to be doable. A small patch later, QEMU gained a switch that forces all syscalls through libc. Suddenly, our static binaries started looking a lot more dynamic!
$ faketime '2008-12-24 08:15:42' qemu-x86_64 ./test_static_clock_gettime
2008-12-24 08:15:42.725404654
$ file test_static_clock_gettime
test_clock_gettime: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, ...
With this in place, Firebuild can finally wrap even those secretive statically linked tools. QEMU runs them, libc catches their syscalls, LD_PRELOAD injects libfirebuild.so, and from there the usual interposition magic happens. The result: previously uncachable build steps can now be traced, cached, and shortcut just like their dynamic friends.
There is one more problem though. Why would the static binaries deep in the build be run by QEMU? Firebuild also intercepts the `exec()` calls and now it rewrites them on the fly whenever the executed binary would be statically linked!
$ firebuild -d comm bash -c ./test_static
...
FIREBUILD: fd 9.1: ({ExecedProcess 161077.1, running, "bash -c ./test_static", fds=[0: {FileFD ofd={FileO
FD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOFD #3 type=FD_PIPE_OUT w} {Pipe #0} close_o
n_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=FD_PIPE_OUT w} {Pipe #1} close_on_popen=fal
se cloexec=false}, 3: {FileFD NULL} /* times 2 */]})
{
"[FBBCOMM_TAG]": "exec",
"file": "test_static",
"// fd": null,
"// dirfd": null,
"arg": [
"./test_static"
],
"env": [
"SHELL=/bin/bash",
...
"FB_SOCKET=/tmp/firebuild.cpMn75/socket",
"_=./test_static"
],
"with_p": false,
"// path": null,
"utime_u": 0,
"stime_u": 1017
}
FIREBUILD: -> proc_ic_msg() (message_processor.cc:782) proc={ExecedProcess 161077.1, running, "bash -c
./test_static", fds=[0: {FileFD ofd={FileOFD #0 type=FD_PIPE_IN r} cloexec=false}, 1: {FileFD ofd={FileOF
D #3 type=FD_PIPE_OUT w} {Pipe #0} close_on_popen=false cloexec=false}, 2: {FileFD ofd={FileOFD #4 type=F
D_PIPE_OUT w} {Pipe #1} close_on_popen=false cloexec=false}, 3: {FileFD NULL} /* times 2 */]}, fd_conn=9.
1, tag=exec, ack_num=0
FIREBUILD: -> send_fbb() (utils.cc:292) conn=9.1, ack_num=0 fd_count=0
Sending message with ancillary fds []:
{
"[FBBCOMM_TAG]": "rewritten_args",
"arg": [
"/usr/bin/qemu-user-interposable",
"-libc-syscalls",
"./test_static"
],
"path": "/usr/bin/qemu-user-interposable"
}
...
FIREBUILD: -> accept_ic_conn() (firebuild.cc:139) listener=6
...
FIREBUILD: fd 9.2: ({Process NULL})
{
"[FBBCOMM_TAG]": "scproc_query",
"pid": 161077,
"ppid": 161073,
"cwd": "/home/rbalint/projects/firebuild/test",
"arg": [
"/usr/bin/qemu-user-interposable",
"-libc-syscalls",
"./test_static"
],
"env_var": [
"CCACHE_DISABLE=1",
...
"SHELL=/bin/bash",
"SHLVL=0",
"_=./test_static"
],
"umask": "0002",
"jobserver_fds": [],
"// jobserver_fifo": null,
"executable": "/usr/bin/qemu-user-interposable",
"// executed_path": null,
"// original_executed_path": null,
"libs": [
"/lib/x86_64-linux-gnu/libatomic.so.1",
"/lib/x86_64-linux-gnu/libc.so.6",
"/lib/x86_64-linux-gnu/libglib-2.0.so.0",
"/lib/x86_64-linux-gnu/libm.so.6",
"/lib/x86_64-linux-gnu/libpcre2-8.so.0",
"/lib64/ld-linux-x86-64.so.2"
],
"version": "0.8.5.1"
}
The QEMU patch is forwarded to qemu-devel. If it lands, anyone using QEMU user-mode emulation could benefit — not just Firebuild.
For Firebuild users, though, the impact is immediate. Toolchains that mix dynamic and static helpers? Cross-builds that pull in odd little statically linked utilities? Previously “invisible” steps in your builds? All now fair game for caching.
Firebuild 0.8.5 ships this new capability out of the box. Just update, make sure you’re using a patched QEMU, and enjoy the feeling of watching even static binaries fall neatly into place in your cached build graph. Ubuntu users can get the prebuilt patched QEMU packages from the Firebuild PPA already.
Static binaries, welcome to the party!
20 November, 2025 08:56PM by Réczey Bálint
Last week, our university held a «Mega Vaccination Center». Things cannot be small or regular with my university, ever! According to the official information, during last week ≈31,000 people were given a total of ≈74,000 vaccine doses against influenza, COVID-19, pneumococcal disease and measles (specific vaccines for each person selected according to an age profile).
I was a tiny blip in said numbers. One person, three shots. Took me three hours, but am quite happy to have been among the huge crowd.
(↑ photo credit: La Jornada, 2025.11.14)
And why am I bringing this up? Because I have long been involved in organizing DebConf, the best conference ever, naturally devoted to improving Debian GNU/Linux. And last year, our COVID reaction procedures ended up hurting people we care about. We, as organizers, are taking it seriously to shape a humane COVID handling policy that is, at the same time, responsible and respectful for people who are (reasonably!) afraid to catch the infection. No, COVID did not disappear in 2022, and its effects are not something we can turn a blind eye to.
Next year, DebConf will take place in Santa Fe, Argentina, in July. This means, it will be a Winter DebConf. And while you can catch COVID (or influenza, or just a bad cold) at any time of year, odds are a bit higher.
I know not every country still administers free COVID or influenza vaccines to anybody who requests them. And I know that any protection I might have got now will be quite weaker by July. But I feel it necessary to ask of everyone who can get it to get a shot. Most Northern Hemisphere countries will have a vaccination campaign (or at least, higher vaccine availability) before Winter.
If you plan to attend DebConf (hell… If you plan to attend any massive gathering of people travelling from all over the world to sit at a crowded auditorium) during the next year, please… Act responsibly. For yourself and for those surrounding you. Get vaccinated. It won’t absolutely save you from catching it, but it will reduce the probability. And if you do catch it, you will probably have a much milder version. And thus, you will spread it less during the first days until (and if!) you start developing symptoms.
SLES 16 has been released. In the past, SUSE offered ready built vagrant images. Unfortunately that’s not the case anymore, as with more recent SLES15 releases the official images were gone.
In the past, it was possible to clone existing projects on the opensuse build service to build the images by yourself, but i couldn’t find any templates for SLES 16.
Naturally, there are several ways to build images, and the tooling involves kiwi-ng, the openSUSE Build Service, Packer recipes and so on (existing Packer recipes won’t work anymore, as YaST has been replaced by a new installer called Agama). All pretty complicated…
So my current take on creating a vagrant image for SLE16 has been the following:
Two guestfs-tools, virt-sysprep and virt-customize, can now be used to modify the created qcow2 image. First, run virt-sysprep on the image:
virt-sysprep -a sle16.qcow2
Then create a vagrant.sh script that sets up the vagrant user, SSH access and sudo rights:
#!/bin/bash
useradd vagrant
mkdir -p /home/vagrant/.ssh/
chmod 0700 /home/vagrant/.ssh/
echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIF
o9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9W
hQ== vagrant insecure public key" > /home/vagrant/.ssh/authorized_keys
chmod 0600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/
# apply recommended ssh settings for vagrant boxes
SSHD_CONFIG=/etc/ssh/sshd_config.d/99-vagrant.conf
if [[ ! -d "$(dirname ${SSHD_CONFIG})" ]]; then
    SSHD_CONFIG=/etc/ssh/sshd_config
    # prepend the settings, so that they take precedence
    echo -e "UseDNS no\nGSSAPIAuthentication no\n$(cat ${SSHD_CONFIG})" > ${SSHD_CONFIG}
else
    echo -e "UseDNS no\nGSSAPIAuthentication no" > ${SSHD_CONFIG}
fi
SUDOERS_LINE="vagrant ALL=(ALL) NOPASSWD: ALL"
if [ -d /etc/sudoers.d ]; then
    echo "$SUDOERS_LINE" >| /etc/sudoers.d/vagrant
    visudo -cf /etc/sudoers.d/vagrant
    chmod 0440 /etc/sudoers.d/vagrant
else
    echo "$SUDOERS_LINE" >> /etc/sudoers
    visudo -cf /etc/sudoers
fi
mkdir -p /vagrant
chown -R vagrant:vagrant /vagrant
systemctl enable sshd
Upload the script into the image and run it with virt-customize:
virt-customize -a sle16.qcow2 --upload vagrant.sh:/tmp/vagrant.sh
virt-customize -a sle16.qcow2 --run-command "/tmp/vagrant.sh"
After this, use the create_box.sh script from the vagrant-libvirt project to create a box image:
https://github.com/vagrant-libvirt/vagrant-libvirt/blob/main/tools/create_box.sh
and add the image to your environment:
create_box.sh sle16.qcow2 sle16.box
vagrant box add --name my/sles16 sle16.box
The resulting box is working well within my CI environment, as far as I can tell.
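Not part of the original workflow, but as a quick smoke test, here is a minimal sketch using standard Vagrant commands (my/sles16 is the box name added above):
# create a Vagrantfile for the new box and boot it with the libvirt provider
vagrant init my/sles16
vagrant up --provider=libvirt
# verify the guest is the expected release, then clean up
vagrant ssh -c 'cat /etc/os-release'
vagrant destroy -f
If vagrant up gets stuck waiting for SSH, the steps in vagrant.sh above (vagrant user, authorized_keys, enabled sshd) are the first things to re-check.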
Just like a ship needs an anchor to stabilize and hold it to port, humans too, I feel, have and require anchors to hold them in life. It could be an emotional anchor, a physical anchor, an anchor that stimulates your curiosity, a family member, a friend or a partner or a spiritual being.
An anchor holds you and helps you stabilize in stormy weather. An anchor can keep you going or stop you from going. An anchor orients you, helps you formulate your values and beliefs.
An anchor could be someone or something or oneself (thanks Saswata for the thought). Writing here is one of my anchors; what’s your anchor?
Over on the ACLU's Free Future blog, I just published an article titled Your Smartphone, Their Rules: How App Stores Enable Corporate-Government Censorship.
Free Software users and developers likely already understand the reasons why it matters who controls what tools you have access to. Hopefully this post can help clarify, even to people typically used to common non-free tooling, that there are real world risks to consolidated, proprietary control over computing and communication tools.
Big shout out to the projects out there doing good work in the "pocket supercomputer" space, providing an escape valve for many users and a counter-example to centralized corporate control, including F-Droid, GrapheneOS, and phosh.
The screws are tightening on user freedom in the very place where most computing happens today. The smartphone already resembles an ankle monitor far more than it should.
Please, publish your own suggestions on creative forms of mutual technical liberation. These are communications tools, so no person can fix the problems alone.
I would love to see a flourishing of non-Android, non-iOS systems in people's pockets, but I also know that, with the market the way it is, that is a long haul. Until that happens, we should also try to keep Android open; check out keepandroidopen.org for more suggestions.
18 November, 2025 05:00AM by Daniel Kahn Gillmor
It has been a long time since I published any update in this space. Since this was a year of colossal changes for me, maybe it is also time to do something different with this blog and publish something just for a change — why not start talking about XDC 2025?
This year, I attended XDC 2025 in Vienna as an Igalia developer. I was thrilled to see some familiar faces: people I worked with in the past and people I’m working with now. I had a chance to hang out with some folks I worked with at AMD (Harry, Alex, Leo, Christian, Shashank, and Pierre), many Igalians (Žan, Job, Ricardo, Paulo, Tvrtko, and many others), and finally some developers from Valve. In particular, I met Tímur in person for the first time, even though we have been talking for months about GPU recovery. Speaking of GPU recovery, we held a workshop on this topic together.
The workshop was packed with developers from different companies, which was nice because it added different angles to the topic. We began by focusing on job resubmission. Christian shared a brief history of how the AMDGPU driver started handling resubmission and the issues that came with it. After learning from past experience, amdgpu ended up adopting the following approach:
Below, you can see one crucial series associated with the amdgpu recovery implementation:
The next topic was a discussion around the replacement of drm_sched_resubmit_jobs(), since this function has been deprecated. A few drivers still use it and need a replacement; one idea floated around was to extract parts of the driver-specific implementations into a generic function. The next day, Philipp Stanner continued this discussion in his workshop, DRM GPU Scheduler.
Another crucial topic discussed was improving GPU reset debuggability, to narrow down which operations cause the hang (keep in mind that GPU recovery is a medicine, not a cure for the problem). Intel developers shared their strategy of obtaining hints from userspace, which helps them append a better set of information to the devcoredump. AMD could adopt this alongside dumping the IB data into the devcoredump (I am already investigating this).
Finally, we discussed strategies to avoid regressions of hang issues. In summary, we have two lines of defense:
This year, as always, XDC was super cool, packed with many engaging presentations that I highly recommend everyone check out. If you are interested, check the schedule and the presentation recordings available on the X.Org Foundation YouTube channel. Anyway, I hope this blog post marks the inauguration of a new era for this site, where I will start posting more content, ranging from updates to tutorials. See you soon.

After cartridge pleating and honeycombing, I was still somewhat in the mood for that kind of fabric manipulation, and directing my internet searches in that vague direction, and I stumbled on this: https://katafalk.wordpress.com/2012/06/26/patternmaking-for-the-kampfrau-hemd-chemise/
Now, do I ever want to make myself a 16th century German costume, especially a kampfrau one? No! I’m from Lake Como! Those are the enemies who come down the Alps pillaging and bringing the Black Death with them!
Although I have to admit that at times during my day job I have found the idea of leaving everything to go march with the Jägermonsters attractive. You know, the exciting prospect of long days of march spent knitting sturdy socks, punctuated by the excitement of settling down in camp and having a chance to do lots of laundry. Or something. Sometimes being a programmer will make you think odd things.
Anyway, going back to the topic, no, I didn’t need an historically accurate hemd. But I did need a couple more shirts for daily wear, I did want to try my hand at smocking, and this looked nice, and I was intrigued by the way the shaping of the neck and shoulder worked, and wondered how comfortable it would be.
And so, it had to be done.
I didn’t have any suitable linen, but I did have quite a bit of cotton voile, and since I wasn’t aiming at historical accuracy it looked like a good option for something where a lot of fabric had to go in a small space.
At first I considered making it with a bit less fabric than the one in the blog, but since the voile was quite thin I kept the original measurements as they were, only adapting the sleeve / side seams to my size.

With the pieces being rectangles the width of the fabric, I was able to have at least one side of selvedge on all seams, and took advantage of it by finishing the seams simply by folding the allowances to one side so that the selvedge was on top, and hemstitching them down as I would have done with a folded edge when felling.
Also, at first I wanted to make the smocking in white on white, but then I thought about a few hanks of electric blue floss I had in my stash, and decided to just go with it.
The initial seams were quickly made, then I started the smocking at the neck, and at that point the project went on hold while I got ready to go to DebConf. Then I came back and took some time to get back into a sewing mood, but finally the smocking on the neck was finished, and I could go on with the main sewing, which, as I expected, went decently fast for a handsewing project.

While doing the diagonal smocking on the collar I counted the stitches to make each side the same length, which didn’t completely work because the gathers weren’t that regular to start with, and I started each line from the two front openings going towards the center back, leaving a triangle of a different size right in the middle. I think overall it worked well enough.
Then there were a few more interruptions, but at last it was ready! Just as the weather turned cold-ish and puffy shirts were no longer in season, but it will be there for me next spring.
I did manage to wear it a few times and I have to say that the neck shaping is quite comfortable indeed: it doesn’t pull in odd ways like the classical historically accurate pirate shirt sometimes does, and the heavy gathering at the neck makes it feel padded and soft.

I’m not as happy with the cuffs: the way I did them, with just honeycombing, means that they don’t need a closure, and after washing and a bit of steaming they lie nicely, but then they tend to relax into a wider shape. Next time I think I’ll leave a slit in the sleeves, possibly use a different type of smocking (depending on whether I have enough fabric) and then line them like the neck so that they are stable.
Because, yes, I think there will be another time: I have a few more projects before that, and I want to spend maybe another year working from my stash, but then I think I’ll buy some soft linen and make at least another one, maybe with white-on-white smocking so that it will be easier to match with different garments.